Text Replacement
Tests deterministic text transformations: renaming characters/locations, expanding contractions, tense rewriting, POV shifts, gender swaps, combined transformations, and word avoidance. Scored by checking each expected change independently.
Combined: 3rd person past → 1st person present
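The scoring rule described above (each expected change checked independently, with the score as the fraction of checks that pass) can be sketched as follows. The check functions and the sample sentence are illustrative assumptions, not the benchmark's actual implementation:

```python
# Sketch of independent-check scoring: each expected change is verified on
# its own, and the score is the fraction of checks that pass. The specific
# checks below (a 3rd-person-past -> 1st-person-present rewrite) are
# hypothetical examples, not the benchmark's real test cases.

def score_output(output: str, checks: list) -> float:
    """Return the fraction of expected-change checks the output satisfies."""
    if not checks:
        return 0.0
    passed = sum(1 for check in checks if check(output))
    return passed / len(checks)

# Hypothetical checks for "3rd person past -> 1st person present":
checks = [
    lambda text: "She" not in text,     # 3rd-person pronoun removed
    lambda text: "I " in text,          # 1st-person pronoun present
    lambda text: "walked" not in text,  # past tense removed
    lambda text: "walk" in text,        # present tense used
]

sample = "I walk to the door."
print(score_output(sample, checks))  # 1.0 -- all four checks pass
```

Because each check is scored independently, a response that applies only some of the requested transformations still earns partial credit.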
Text Editing
Performance Score Distribution (Top 20)
Click a model name to view its detail page.
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Gemini 2.5 Flash Lite | 99% | $0.0003 | 1.8s |
| Grok 4 Fast | 98% | $0.0006 | 5.0s |
| Gemini 2.5 Flash | 94% | $0.0015 | 2.1s |
| GPT-4.1 Nano | 98% | $0.0003 | 3.6s |
| Gemma 3 12B | 94% | $0.0001 | 9.3s |
| Mistral Small 3.2 24B | 98% | $0.0002 | 4.4s |
| GPT-4.1 Mini | 99% | $0.0010 | 12.0s |
| Qwen 2.5 72B | 98% | $0.0003 | 9.5s |
| Gemma 3 27B | 98% | $0.0002 | 13.1s |
| Hermes 3 70B | 99% | $0.0003 | 13.7s |
| Qwen 3.5 Plus (2026-02-15) | 99% | $0.0014 | 6.7s |
| Gemini 3 Flash (Preview) | 98% | $0.0018 | 3.1s |
| Mistral Large 3 | 98% | $0.0010 | 7.1s |
| Claude Haiku 4.5 | 99% | $0.0033 | 2.8s |
| Mistral Medium 3.1 | 96% | $0.0012 | 5.8s |
| DeepSeek V3.1 | 99% | $0.0007 | 36.6s |
| Mistral Small Creative | 95% | $0.0002 | 3.4s |
| DeepSeek V3 (2024-12-26) | 97% | $0.0007 | 14.3s |
| Grok 4.1 Fast | 95% | $0.0008 | 11.0s |
| Hermes 3 405B | 99% | $0.0011 | 23.7s |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 99% | 100% | 99% |
| Claude Opus 4.6 | 99% | 100% | 99% |
| Claude Sonnet 4 | 99% | 100% | 99% |
| Aion 2.0 | 99% | 99% | 99% |
| Gemini 2.5 Pro | 99% | 99% | 99% |
| Gemini 3.1 Pro (Preview) | 99% | 100% | 99% |
| Qwen 3.5 397B A17B | 99% | 100% | 99% |
| Claude Sonnet 4.6 | 99% | 100% | 99% |
| Claude Opus 4.5 | 99% | 100% | 99% |
| Qwen 3.5 Plus (2026-02-15) | 99% | 100% | 99% |
| GPT-4o, May 13th (temp=0) | 99% | 100% | 99% |
| Claude 3.5 Sonnet | 99% | 100% | 99% |
| Claude 3.7 Sonnet | 99% | 100% | 99% |
| GPT-4o, Aug. 6th (temp=0) | 99% | 100% | 99% |
| DeepSeek V3.2 | 99% | 100% | 99% |
| Claude Haiku 4.5 | 99% | 99% | 99% |
| GPT-5.1 | 99% | 99% | 99% |
| Hermes 3 405B | 99% | 100% | 98% |
| Hermes 3 70B | 99% | 100% | 98% |
| DeepSeek V3.1 | 99% | 99% | 98% |
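The stability metric used for the ranking above (median × consistency) can be sketched as a one-liner. How consistency itself is computed is not stated on this page, so it is passed in as a given value here:

```python
# Minimal sketch of the stability ranking: stability is the median run
# score multiplied by consistency, both expressed as fractions in [0, 1].
# The definition of "consistency" is assumed to come from elsewhere in
# the benchmark; it is taken as an input here.
from statistics import median

def stability(run_scores: list, consistency: float) -> float:
    """Stability = median run score x consistency."""
    return median(run_scores) * consistency

# e.g. a model scoring ~99% across runs with 100% consistency:
print(round(stability([0.99, 0.99, 1.00, 0.98, 0.99], 1.00), 2))  # 0.99
```

Using the median rather than the mean keeps a single outlier run from dragging down an otherwise consistent model.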
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Gemini 2.5 Flash Lite | 99% | $0.0003 | 1.8s | 98% |
| Mistral Small 3.2 24B | 98% | $0.0002 | 4.4s | 97% |
| Qwen 3.5 Plus (2026-02-15) | 99% | $0.0014 | 6.7s | 99% |
| GPT-4.1 Nano | 98% | $0.0003 | 3.6s | 96% |
| Grok 4 Fast | 98% | $0.0006 | 5.0s | 96% |
| Qwen 2.5 72B | 98% | $0.0003 | 9.5s | 98% |
| Gemini 3 Flash (Preview) | 98% | $0.0018 | 3.1s | 97% |
| Claude Haiku 4.5 | 99% | $0.0033 | 2.8s | 99% |
| Hermes 3 70B | 99% | $0.0003 | 13.7s | 98% |
| Mistral Large 3 | 98% | $0.0010 | 7.1s | 98% |
| GPT-4.1 Mini | 99% | $0.0010 | 12.0s | 98% |
| Gemma 3 27B | 98% | $0.0002 | 13.1s | 95% |
| DeepSeek V3 (2024-12-26) | 97% | $0.0007 | 14.3s | 97% |
| Mistral Medium 3.1 | 96% | $0.0012 | 5.8s | 95% |
| GPT-4.1 | 99% | $0.0050 | 4.4s | 98% |
| GPT-4o Mini (temp=0) | 95% | $0.0004 | 9.6s | 95% |
| DeepSeek-V2 Chat | 97% | $0.0008 | 15.2s | 97% |
| GPT-4o, Aug. 6th (temp=0) | 99% | $0.0063 | 2.9s | 99% |
| Mistral Large | 98% | $0.0041 | 6.7s | 98% |
| Mistral Large 2 | 98% | $0.0041 | 7.3s | 98% |
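A composite ranking like the one above has to fold score, cost, speed, and stability into one number. The leaderboard's actual weights and normalization are not published on this page, so the sketch below assumes equal weights and simple linear normalization of cost and latency purely for illustration:

```python
# Illustrative composite score combining performance, cost, speed, and
# stability. The caps (max_cost, max_seconds) and the equal weighting are
# assumptions for this sketch, NOT the leaderboard's actual formula.

def composite(score: float, cost_usd: float, seconds: float,
              stability: float,
              max_cost: float = 0.01, max_seconds: float = 40.0) -> float:
    # Normalize cost and latency so cheaper/faster maps closer to 1.0.
    cost_term = 1.0 - min(cost_usd / max_cost, 1.0)
    speed_term = 1.0 - min(seconds / max_seconds, 1.0)
    # Equal weighting across the four components (assumed).
    return (score + cost_term + speed_term + stability) / 4.0

# e.g. plugging in Gemini 2.5 Flash Lite's row: 99%, $0.0003, 1.8s, 98%
print(round(composite(0.99, 0.0003, 1.8, 0.98), 3))  # 0.974
```

Under any such weighting, cheap and fast models with near-ceiling scores dominate the composite ranking, which matches the ordering shown above.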
| Median | Evaluator |
|---|---|
| 96.1% | Combined transformation accuracy |
| 100.0% | Dialogue content preserved |
| 100.0% | Setting and Gregor references preserved |