Text Replacement
Tests deterministic text transformations: renaming characters and locations, expanding contractions, rewriting tense, shifting point of view, swapping genders, combining several transformations, and avoiding specific words. Each expected change is scored independently, so partial credit is possible.
Avoid said/asked/replied/answered
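Independent per-change scoring can be sketched as follows. This is a hypothetical illustration, not the benchmark's actual checker: the function name, the substring-based checks, and the example task are all assumptions.

```python
def score_response(output: str, expected_changes: list[str], forbidden: list[str]) -> float:
    """Hypothetical scorer: each expected change and each forbidden
    word is checked independently; the score is the fraction of
    checks that pass (substring matching is an assumption)."""
    checks = [phrase in output for phrase in expected_changes]
    checks += [word not in output.lower() for word in forbidden]
    return sum(checks) / len(checks)

# Example: a gender swap combined with word avoidance.
text = '"Fine," she whispered. Maria closed the door.'
print(score_response(text, ["she", "Maria"], ["said", "asked"]))  # → 1.0
```

Because every check contributes equally, a model that applies most but not all transformations lands at an intermediate score rather than failing outright.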
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Gemini 2.5 Flash Lite | 100% | $0.0003 | 1.5s |
| Mistral Small 4 | 99% | $0.0004 | 3.1s |
| Inception Mercury | 95% | $0.0004 | 3.4s |
| Mistral Small 3.2 24B | 100% | $0.0002 | 6.2s |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0009 | 1.7s |
| Gemma 3 4B | 98% | $0.0001 | 5.5s |
| Gemma 3 12B | 98% | $0.0001 | 7.4s |
| Grok 4 Fast | 100% | $0.0008 | 6.0s |
| Inception Mercury 2 | 100% | $0.0012 | 1.7s |
| Claude 3 Haiku | 85% | $0.0008 | 4.3s |
| Stealth: Hunter Alpha | 95% | $0.0000 | 11.2s |
| GPT-4o Mini (temp=1) | 100% | $0.0004 | 9.5s |
| Gemini 2.5 Flash | 100% | $0.0014 | 2.0s |
| GPT-4o Mini (temp=0) | 100% | $0.0004 | 9.7s |
| Qwen 2.5 72B | 100% | $0.0003 | 10.4s |
| GPT-5.4 Nano (Reasoning) | 95% | $0.0011 | 4.2s |
| Mistral Medium 3.1 | 100% | $0.0012 | 5.1s |
| GPT-4.1 Mini | 100% | $0.0010 | 6.3s |
| Mistral Large 3 | 100% | $0.0011 | 7.2s |
| Gemini 3 Flash (Preview) | 100% | $0.0018 | 3.2s |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% |
| GPT-5 Mini | 100% | 100% | 100% |
| GPT-5.1 | 100% | 100% | 100% |
| Claude Opus 4.6 | 100% | 100% | 100% |
| GPT-5 | 100% | 100% | 100% |
| Qwen 3.5 397B A17B | 100% | 100% | 100% |
| Qwen 3.5 122B | 100% | 100% | 100% |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% |
| Z.AI GLM 5 | 100% | 100% | 100% |
| Claude Sonnet 4.6 | 100% | 100% | 100% |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% |
| Qwen 3.5 27B | 100% | 100% | 100% |
| ByteDance Seed 1.6 | 100% | 100% | 100% |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% |
| o4 Mini High | 100% | 100% | 100% |
| GPT-5.2 | 100% | 100% | 100% |
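The stability ranking is described as median × consistency. A minimal sketch of that combination, assuming both terms are fractions in [0, 1] and taking consistency as 1 minus the score spread (the consistency definition here is an assumption, not the benchmark's documented formula):

```python
from statistics import median

def stability(scores: list[float]) -> float:
    """Stability = median score x consistency. Consistency is
    modeled here as 1 - (max - min) across runs; the exact
    definition used by the benchmark is an assumption."""
    med = median(scores)
    consistency = 1.0 - (max(scores) - min(scores))
    return med * consistency

# Identical runs give perfect stability; any spread lowers it.
print(stability([1.0, 1.0, 1.0]))  # → 1.0
print(stability([1.0, 0.8, 1.0]))
```

Multiplying the two terms means a model must score well *and* score repeatably; a single bad run drags the product down even when the median stays high.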
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Gemini 2.5 Flash Lite | 100% | $0.0003 | 1.5s | 100% |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0009 | 1.7s | 100% |
| Inception Mercury 2 | 100% | $0.0012 | 1.7s | 100% |
| Mistral Small 3.2 24B | 100% | $0.0002 | 6.2s | 100% |
| Gemini 2.5 Flash | 100% | $0.0014 | 2.0s | 100% |
| Grok 4 Fast | 100% | $0.0008 | 6.0s | 100% |
| Mistral Medium 3.1 | 100% | $0.0012 | 5.1s | 100% |
| Gemini 3 Flash (Preview) | 100% | $0.0018 | 3.2s | 100% |
| GPT-4.1 Mini | 100% | $0.0010 | 6.3s | 100% |
| GPT-4o Mini (temp=1) | 100% | $0.0004 | 9.5s | 100% |
| Mistral Large 3 | 100% | $0.0011 | 7.2s | 100% |
| GPT-4o Mini (temp=0) | 100% | $0.0004 | 9.7s | 100% |
| Qwen 2.5 72B | 100% | $0.0003 | 10.4s | 100% |
| Qwen 3.5 Plus (2026-02-15) | 100% | $0.0015 | 6.9s | 100% |
| Grok 4.20 (Beta) | 100% | $0.0032 | 1.7s | 100% |
| Claude Haiku 4.5 | 100% | $0.0034 | 2.7s | 100% |
| Stealth: Healer Alpha | 100% | $0.0000 | 16.4s | 100% |
| Mistral Small 4 | 99% | $0.0004 | 3.1s | 96% |
| DeepSeek-V2 Chat | 100% | $0.0008 | 16.8s | 100% |
| Qwen3 235B A22B Instruct 2507 | 100% | $0.0004 | 18.2s | 100% |
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 100.0% | Forbidden words eliminated | | |
| 100.0% | Key facts and content preserved | | |
| 100.0% | Structural similarity to original | | |