Write N of X
Write exactly N words/sentences/paragraphs...
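The scoring criterion here is an exact count match. The benchmark's actual evaluator is not shown on this page; a minimal sketch of a count-then-compare check, with a hypothetical linear penalty for misses, might look like:

```python
import re

def count_units(text: str, unit: str) -> int:
    """Count words, sentences, or paragraphs with rough heuristics."""
    if unit == "words":
        return len(text.split())
    if unit == "sentences":
        # Naive split on sentence-ending punctuation; a real evaluator
        # might use a proper sentence tokenizer instead.
        return len([s for s in re.split(r"[.!?]+", text) if s.strip()])
    if unit == "paragraphs":
        # Paragraphs separated by at least one blank line.
        return len([p for p in re.split(r"\n\s*\n", text) if p.strip()])
    raise ValueError(f"unknown unit: {unit}")

def score(text: str, unit: str, target: int) -> float:
    """1.0 for an exact match, decaying linearly with relative error."""
    actual = count_units(text, unit)
    return max(0.0, 1.0 - abs(actual - target) / target)
```

The exact penalty curve is an assumption; the tables below only show that an exact match scores 100%.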
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Stealth: Aurora Alpha | 99% | — | 3.7s | |
| Inception Mercury | 99% | $0.0003 | 1.8s | |
| Ministral 3 3B | 72% | $0.0002 | 1.2s | |
| Gemini 2.5 Flash Lite | 77% | $0.0003 | 1.3s | |
| Inception Mercury 2 | 99% | $0.0008 | 1.2s | |
| Llama 3.1 8B | 74% | $0.0004 | 1.2s | |
| Gemini 3.1 Flash Lite (Preview) | 96% | $0.0007 | 1.6s | |
| GPT-5.4 Nano | 86% | $0.0006 | 2.3s | |
| GPT-5.4 Nano (Reasoning) | 98% | $0.0009 | 3.7s | |
| GPT-5.4 Nano (Reasoning, Low) | 92% | $0.0007 | 2.6s | |
| GPT-4.1 Nano | 82% | $0.0002 | 4.4s | |
| Mistral Small Creative | 75% | $0.0003 | 1.9s | |
| Ministral 3 14B | 83% | $0.0004 | 2.7s | |
| Mistral Small 3.2 24B | 81% | $0.0002 | 5.2s | |
| GPT-4.1 Mini | 84% | $0.0006 | 3.8s | |
| Grok 4 Fast | 88% | $0.0005 | 3.9s | |
| GPT-5.4 Mini | 91% | $0.0015 | 1.6s | |
| Stealth: Healer Alpha | 87% | $0.0000 | 10.8s | |
| GPT-5.4 Mini (Reasoning, Low) | 97% | $0.0019 | 2.2s | |
| Gemini 3 Flash (Preview) | 97% | $0.0015 | 2.6s | |
Cost vs Performance
Compares total cost for this test against the test score. Quadrant lines are drawn at the median values. Only models with available cost data are shown.
4 low-scoring outliers hidden: Ministral 8B (38.1%), Ministral 3B (37.0%), Mistral NeMO (36.0%), Rocinante 12B (31.0%).
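The quadrant construction described above is just a pair of medians over the plotted points. A sketch, where `models` is a hypothetical list of (cost, score) pairs standing in for the real data:

```python
from statistics import median

# Hypothetical (cost_usd, score) pairs for models with cost data.
models = [(0.0003, 0.99), (0.0002, 0.72), (0.0008, 0.99), (0.0004, 0.74)]

# Quadrant lines sit at the median cost and median score.
cost_median = median(c for c, _ in models)
score_median = median(s for _, s in models)

# Each model then falls into one of four quadrants, e.g.
# "cheap and strong" = below median cost, above median score.
cheap_and_strong = [
    m for m in models if m[0] < cost_median and m[1] > score_median
]
```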
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Qwen 3.5 35B | 100% | 100% | 100% | |
| Qwen 3.5 Flash | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
| Z.AI GLM 4.7 | 100% | 100% | 100% | |
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 99% | 99% | |
| GPT-5.4 (Reasoning) | 100% | 99% | 99% | |
| GPT-5.1 | 100% | 98% | 98% | |
| GPT-5.2 | 100% | 98% | 98% | |
| o4 Mini High | 100% | 96% | 96% | |
| GPT-5 Mini | 100% | 96% | 96% | |
| ByteDance Seed 1.6 | 100% | 96% | 96% | |
| GPT-5 | 99% | 94% | 94% | |
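The stability column above is described as median × consistency. Treating both as fractions in [0, 1], the combination is a plain product (note the displayed percentages are rounded, so products of the rounded values can differ from the shown stability by a point):

```python
def stability(median_score: float, consistency: float) -> float:
    """Stability as median score times consistency, both in [0, 1],
    so the result is also in [0, 1]."""
    return median_score * consistency

# Matches the table: a 100% median score with 96% consistency
# gives 96% stability.
gpt5_mini = stability(1.00, 0.96)
```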
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| GPT-5.4 Mini (Reasoning) | 100% | $0.0028 | 3.9s | 100% | |
| Inception Mercury 2 | 99% | $0.0008 | 1.2s | 88% | |
| GPT-5 Mini | 100% | $0.0022 | 10.4s | 96% | |
| Stealth: Aurora Alpha | 99% | — | 3.7s | 89% | |
| Inception Mercury | 99% | $0.0003 | 1.8s | 83% | |
| GPT-5.2 | 100% | $0.0076 | 8.5s | 98% | |
| GPT-5.1 | 100% | $0.0075 | 10.8s | 98% | |
| Nemotron 3 Super | 99% | $0.0000 | 12.8s | 83% | |
| Z.AI GLM 5 Turbo | 100% | $0.0071 | 17.7s | 100% | |
| GPT-5.4 Nano (Reasoning) | 98% | $0.0009 | 3.7s | 78% | |
| GPT-5.4 (Reasoning) | 100% | $0.010 | 10.5s | 99% | |
| Qwen 3.5 Flash | 100% | $0.0025 | 39.9s | 100% | |
| Gemini 3 Flash (Preview) | 97% | $0.0015 | 2.6s | 73% | |
| o4 Mini | 99% | $0.0058 | 13.2s | 87% | |
| GPT-5 Nano | 99% | $0.0008 | 21.9s | 83% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | $0.011 | 18.1s | 100% | |
| o4 Mini High | 100% | $0.0083 | 23.8s | 96% | |
| GPT-5 | 99% | $0.010 | 16.5s | 94% | |
| GPT-5.4 Mini (Reasoning, Low) | 97% | $0.0019 | 2.2s | 69% | |
| Gemini 3.1 Flash Lite (Preview) | 96% | $0.0007 | 1.6s | 66% | |
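The composite ranking blends performance, cost, speed, and stability, but the page does not publish the weighting. A hypothetical equal-weight sketch, normalizing cost and speed against assumed maxima, could look like:

```python
def composite(score: float, cost: float, speed_s: float, stab: float,
              max_cost: float = 0.01, max_speed: float = 40.0) -> float:
    """Hypothetical equal-weight composite in [0, 1]. Performance and
    stability count directly; cost and speed are inverted onto [0, 1]
    against assumed maxima. The leaderboard's real weights are unknown."""
    cost_term = 1.0 - min(cost / max_cost, 1.0)
    speed_term = 1.0 - min(speed_s / max_speed, 1.0)
    return (score + cost_term + speed_term + stab) / 4.0
```

Any monotone normalization would preserve the intuition above: a model only tops this table by being simultaneously accurate, cheap, fast, and stable.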
| Model | Total ▼ | 10 word summary | 20 word summary | 50 word summary | 100 word summary | 200 word summary | 1 sentence summary | 3 sentence summary | 10 sentence summary | 20 sentence summary | 50 sentence summary | 1 paragraph summary | 3 paragraph summary | 5 paragraph summary |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Z.AI GLM 5 | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Qwen 3.5 35B | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Qwen 3.5 Flash | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Z.AI GLM 4.7 | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Gemini 3 Pro (Preview) | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | 100% | 100% | 99% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | 100% | 100% | 99% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| GPT-5.1 | 100% | 100% | 100% | 100% | 100% | 99% | 100% | 100% | 100% | 99% | 100% | 100% | 100% | 100% |
paragraphs
1 paragraph summary
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 3.9s | |
| Arcee AI: Trinity Mini | 100% | $0.0001 | 880ms | |
| Ministral 3B | 100% | $0.0001 | 2.1s | |
| Inception Mercury | 100% | $0.0003 | 845ms | |
| Gemini 2.5 Flash Lite | 100% | $0.0002 | 688ms | |
| Nemotron 3 Super | 100% | $0.0000 | 3.6s | |
| Ministral 8B | 100% | $0.0002 | 988ms | |
| Ministral 3 3B | 100% | $0.0002 | 1.0s | |
| Gemma 3 4B | 100% | $0.0001 | 2.2s | |
| Arcee AI: Trinity Large (Preview) | 100% | $0.0000 | 5.5s | |
| GPT-4.1 Nano | 100% | $0.0001 | 2.3s | |
| LFM2 24B | 100% | $0.0001 | 2.7s | |
| Inception Mercury 2 | 100% | $0.0004 | 820ms | |
| Mistral Small Creative | 100% | $0.0002 | 1.4s | |
| Ministral 3 8B | 100% | $0.0003 | 1.3s | |
| Llama 3.1 8B | 100% | $0.0003 | 709ms | |
| Stealth: Healer Alpha | 100% | $0.0000 | 3.9s | |
| Mistral NeMO | 100% | $0.0003 | 1.8s | |
| Mistral Small 3.2 24B | 100% | $0.0002 | 2.5s | |
| Gemma 3 12B | 100% | $0.0001 | 3.8s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| GPT-5.1 | 100% | 100% | 100% | |
| Claude Opus 4.6 | 100% | 100% | 100% | |
| GPT-5 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Arcee AI: Trinity Mini | 100% | $0.0001 | 880ms | 100% | |
| Gemini 2.5 Flash Lite | 100% | $0.0002 | 688ms | 100% | |
| Ministral 8B | 100% | $0.0002 | 988ms | 100% | |
| Inception Mercury | 100% | $0.0003 | 845ms | 100% | |
| Llama 3.1 8B | 100% | $0.0003 | 709ms | 100% | |
| Ministral 3 3B | 100% | $0.0002 | 1.0s | 100% | |
| Inception Mercury 2 | 100% | $0.0004 | 820ms | 100% | |
| Ministral 3 8B | 100% | $0.0003 | 1.3s | 100% | |
| Mistral Small Creative | 100% | $0.0002 | 1.4s | 100% | |
| Ministral 3B | 100% | $0.0001 | 2.1s | 100% | |
| Mistral NeMO | 100% | $0.0003 | 1.8s | 100% | |
| Gemma 3 4B | 100% | $0.0001 | 2.2s | 100% | |
| Gemini 2.5 Flash | 100% | $0.0006 | 1.3s | 100% | |
| GPT-5.4 Nano | 100% | $0.0005 | 1.5s | 100% | |
| GPT-4.1 Nano | 100% | $0.0001 | 2.3s | 100% | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0005 | 1.5s | 100% | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0005 | 1.6s | 100% | |
| Mistral Small 4 | 100% | $0.0003 | 2.0s | 100% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0006 | 1.4s | 100% | |
| Ministral 3 14B | 100% | $0.0004 | 1.9s | 100% | |
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 100.0% | Matches paragraph count | | |
3 paragraph summary
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 1.4s | |
| Ministral 3 3B | 100% | $0.0002 | 1.3s | |
| Gemini 2.5 Flash Lite | 100% | $0.0003 | 1.4s | |
| Arcee AI: Trinity Mini | 100% | $0.0001 | 2.8s | |
| Llama 3.1 8B | 90% | $0.0004 | 1.4s | |
| Mistral Small Creative | 100% | $0.0003 | 2.3s | |
| Inception Mercury | 100% | $0.0004 | 1.9s | |
| Ministral 3 8B | 100% | $0.0003 | 2.7s | |
| Gemma 3 4B | 60% | $0.0001 | 3.7s | |
| Mistral Small 3.2 24B | 100% | $0.0002 | 4.4s | |
| Stealth: Healer Alpha | 100% | $0.0000 | 13.5s | |
| Arcee AI: Trinity Large (Preview) | 100% | $0.0000 | 6.9s | |
| Ministral 3 14B | 100% | $0.0004 | 4.1s | |
| Nemotron 3 Super | 100% | $0.0000 | 8.4s | |
| Inception Mercury 2 | 100% | $0.0007 | 1.1s | |
| Mistral Small 4 | 100% | $0.0004 | 3.4s | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0006 | 2.1s | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0007 | 2.0s | |
| GPT-4.1 Nano | 100% | $0.0002 | 4.8s | |
| GPT-5.4 Nano | 100% | $0.0007 | 1.9s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| GPT-5.1 | 100% | 100% | 100% | |
| Claude Opus 4.6 | 100% | 100% | 100% | |
| GPT-5 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 1.4s | 100% | |
| Ministral 3 3B | 100% | $0.0002 | 1.3s | 100% | |
| Gemini 2.5 Flash Lite | 100% | $0.0003 | 1.4s | 100% | |
| Inception Mercury 2 | 100% | $0.0007 | 1.1s | 100% | |
| Inception Mercury | 100% | $0.0004 | 1.9s | 100% | |
| Mistral Small Creative | 100% | $0.0003 | 2.3s | 100% | |
| GPT-5.4 Nano | 100% | $0.0007 | 1.9s | 100% | |
| Arcee AI: Trinity Mini | 100% | $0.0001 | 2.8s | 100% | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0007 | 2.0s | 100% | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0006 | 2.1s | 100% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0008 | 1.8s | 100% | |
| Ministral 3 8B | 100% | $0.0003 | 2.7s | 100% | |
| Gemini 2.5 Flash | 100% | $0.0010 | 2.0s | 100% | |
| GPT-5.4 Mini | 100% | $0.0014 | 1.8s | 100% | |
| Mistral Small 4 | 100% | $0.0004 | 3.4s | 100% | |
| GPT-4.1 Mini | 100% | $0.0007 | 3.4s | 100% | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0018 | 1.7s | 100% | |
| Ministral 3 14B | 100% | $0.0004 | 4.1s | 100% | |
| Mistral Small 3.2 24B | 100% | $0.0002 | 4.4s | 100% | |
| Grok 4.20 (Beta) | 100% | $0.0021 | 1.7s | 100% | |
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 100.0% | Matches paragraph count | | |
5 paragraph summary
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 5.5s | |
| Ministral 3 3B | 100% | $0.0002 | 1.8s | |
| Gemini 2.5 Flash Lite | 100% | $0.0003 | 2.0s | |
| Llama 3.1 8B | 80% | $0.0004 | 1.6s | |
| Inception Mercury | 100% | $0.0005 | 1.8s | |
| Ministral 3 8B | 100% | $0.0003 | 2.9s | |
| Inception Mercury 2 | 100% | $0.0006 | 1.2s | |
| Mistral Small Creative | 100% | $0.0003 | 3.5s | |
| Stealth: Healer Alpha | 80% | $0.0000 | 6.3s | |
| Nemotron 3 Super | 100% | $0.0000 | 7.6s | |
| GPT-4.1 Nano | 100% | $0.0002 | 5.6s | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0008 | 2.5s | |
| GPT-5.4 Nano | 100% | $0.0008 | 2.4s | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0008 | 2.5s | |
| Ministral 3 14B | 100% | $0.0004 | 4.7s | |
| Mistral Small 3.2 24B | 100% | $0.0003 | 5.8s | |
| Grok 4 Fast | 100% | $0.0005 | 4.6s | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0010 | 2.3s | |
| ByteDance Seed 1.6 Flash | 100% | $0.0004 | 5.9s | |
| LFM2 24B | 100% | $0.0001 | 9.6s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| GPT-5.1 | 100% | 100% | 100% | |
| Claude Opus 4.6 | 100% | 100% | 100% | |
| GPT-5 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Ministral 3 3B | 100% | $0.0002 | 1.8s | 100% | |
| Inception Mercury 2 | 100% | $0.0006 | 1.2s | 100% | |
| Gemini 2.5 Flash Lite | 100% | $0.0003 | 2.0s | 100% | |
| Inception Mercury | 100% | $0.0005 | 1.8s | 100% | |
| Ministral 3 8B | 100% | $0.0003 | 2.9s | 100% | |
| GPT-5.4 Nano | 100% | $0.0008 | 2.4s | 100% | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0008 | 2.5s | 100% | |
| Mistral Small Creative | 100% | $0.0003 | 3.5s | 100% | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0008 | 2.5s | 100% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0010 | 2.3s | 100% | |
| Ministral 3 14B | 100% | $0.0004 | 4.7s | 100% | |
| Grok 4 Fast | 100% | $0.0005 | 4.6s | 100% | |
| Gemini 2.5 Flash | 100% | $0.0013 | 2.9s | 100% | |
| GPT-4.1 Nano | 100% | $0.0002 | 5.6s | 100% | |
| GPT-5.4 Mini | 100% | $0.0019 | 2.2s | 100% | |
| Mistral Small 3.2 24B | 100% | $0.0003 | 5.8s | 100% | |
| Mistral Small 4 | 100% | $0.0005 | 5.3s | 100% | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0020 | 2.1s | 100% | |
| ByteDance Seed 1.6 Flash | 100% | $0.0004 | 5.9s | 100% | |
| Stealth: Aurora Alpha | 100% | — | 5.5s | 100% | |
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 100.0% | Matches paragraph count | | |
sentences
1 sentence summary
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 3.0s | |
| Ministral 3B | 100% | $0.0001 | 474ms | |
| Inception Mercury | 100% | $0.0001 | 547ms | |
| Gemma 3 4B | 100% | $0.0001 | 756ms | |
| LFM2 24B | 100% | $0.0001 | 1.9s | |
| Ministral 3 3B | 100% | $0.0002 | 530ms | |
| Arcee AI: Trinity Large (Preview) | 100% | $0.0000 | 2.2s | |
| Gemini 2.5 Flash Lite | 100% | $0.0002 | 536ms | |
| Ministral 8B | 90% | $0.0002 | 800ms | |
| Arcee AI: Trinity Mini | 100% | $0.0001 | 1.5s | |
| Gemma 3 12B | 100% | $0.0001 | 1.6s | |
| GPT-4.1 Nano | 100% | $0.0001 | 1.6s | |
| Mistral Small Creative | 100% | $0.0002 | 758ms | |
| Mistral Small 3.2 24B | 100% | $0.0002 | 1.2s | |
| Ministral 3 8B | 100% | $0.0003 | 648ms | |
| Stealth: Healer Alpha | 100% | $0.0000 | 3.1s | |
| Nemotron 3 Super | 100% | $0.0000 | 4.7s | |
| Mistral Small 4 | 100% | $0.0003 | 872ms | |
| Llama 3.1 8B | 100% | $0.0003 | 524ms | |
| Gemma 3 27B | 100% | $0.0002 | 2.5s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| GPT-5.1 | 100% | 100% | 100% | |
| Claude Opus 4.6 | 100% | 100% | 100% | |
| GPT-5 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Ministral 3B | 100% | $0.0001 | 474ms | 100% | |
| Inception Mercury | 100% | $0.0001 | 547ms | 100% | |
| Gemma 3 4B | 100% | $0.0001 | 756ms | 100% | |
| Ministral 3 3B | 100% | $0.0002 | 530ms | 100% | |
| Gemini 2.5 Flash Lite | 100% | $0.0002 | 536ms | 100% | |
| Mistral Small Creative | 100% | $0.0002 | 758ms | 100% | |
| Ministral 3 8B | 100% | $0.0003 | 648ms | 100% | |
| Llama 3.1 8B | 100% | $0.0003 | 524ms | 100% | |
| Mistral Small 4 | 100% | $0.0003 | 872ms | 100% | |
| Mistral Small 3.2 24B | 100% | $0.0002 | 1.2s | 100% | |
| GPT-4.1 Nano | 100% | $0.0001 | 1.6s | 100% | |
| Arcee AI: Trinity Mini | 100% | $0.0001 | 1.5s | 100% | |
| Inception Mercury 2 | 100% | $0.0004 | 561ms | 100% | |
| Gemma 3 12B | 100% | $0.0001 | 1.6s | 100% | |
| Gemini 2.5 Flash | 100% | $0.0004 | 796ms | 100% | |
| LFM2 24B | 100% | $0.0001 | 1.9s | 100% | |
| Arcee AI: Trinity Large (Preview) | 100% | $0.0000 | 2.2s | 100% | |
| Ministral 3 14B | 100% | $0.0003 | 1.2s | 100% | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0004 | 1.1s | 100% | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0004 | 1.1s | 100% | |
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 100.0% | Matches sentence count | | |
3 sentence summary
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Ministral 3B | 100% | $0.0001 | 741ms | |
| Stealth: Aurora Alpha | 100% | — | 3.0s | |
| Gemma 3 4B | 100% | $0.0001 | 1.5s | |
| Ministral 3 3B | 100% | $0.0002 | 737ms | |
| Gemini 2.5 Flash Lite | 100% | $0.0002 | 658ms | |
| Inception Mercury | 100% | $0.0003 | 769ms | |
| Ministral 8B | 100% | $0.0002 | 792ms | |
| Mistral Small Creative | 100% | $0.0002 | 964ms | |
| LFM2 24B | 100% | $0.0001 | 4.9s | |
| GPT-4.1 Nano | 100% | $0.0001 | 2.2s | |
| Arcee AI: Trinity Mini | 100% | $0.0001 | 2.2s | |
| Arcee AI: Trinity Large (Preview) | 100% | $0.0000 | 3.6s | |
| Ministral 3 8B | 100% | $0.0003 | 932ms | |
| Mistral Small 3.2 24B | 100% | $0.0002 | 2.7s | |
| Gemma 3 12B | 100% | $0.0001 | 2.8s | |
| Qwen3 235B A22B Instruct 2507 | 100% | $0.0002 | 3.0s | |
| Nemotron 3 Super | 100% | $0.0000 | 5.0s | |
| Llama 3.1 8B | 100% | $0.0003 | 1.1s | |
| Ministral 3 14B | 100% | $0.0003 | 1.2s | |
| Mistral Small 4 | 100% | $0.0003 | 1.3s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| Claude Opus 4.6 | 100% | 100% | 100% | |
| GPT-5 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
| o4 Mini High | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Gemini 2.5 Flash Lite | 100% | $0.0002 | 658ms | 100% | |
| Ministral 3 3B | 100% | $0.0002 | 737ms | 100% | |
| Inception Mercury | 100% | $0.0003 | 769ms | 100% | |
| Mistral Small Creative | 100% | $0.0002 | 964ms | 100% | |
| Ministral 3 8B | 100% | $0.0003 | 932ms | 100% | |
| Inception Mercury 2 | 100% | $0.0005 | 713ms | 100% | |
| Ministral 3 14B | 100% | $0.0003 | 1.2s | 100% | |
| Gemma 3 4B | 100% | $0.0001 | 1.5s | 100% | |
| Mistral Small 4 | 100% | $0.0003 | 1.3s | 100% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0005 | 1.1s | 100% | |
| Gemini 2.5 Flash | 100% | $0.0005 | 978ms | 100% | |
| GPT-4.1 Nano | 100% | $0.0001 | 2.2s | 100% | |
| GPT-5.4 Nano | 100% | $0.0005 | 1.4s | 100% | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0005 | 1.5s | 100% | |
| Arcee AI: Trinity Mini | 100% | $0.0001 | 2.2s | 100% | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0004 | 1.8s | 100% | |
| Gemma 3 12B | 100% | $0.0001 | 2.8s | 100% | |
| Ministral 3B | 100% | $0.0001 | 741ms | 99% | |
| Mistral Small 3.2 24B | 100% | $0.0002 | 2.7s | 100% | |
| GPT-5.4 Mini | 100% | $0.0010 | 1.0s | 100% | |
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 100.0% | Matches sentence count | | |
10 sentence summary
Performance Score Distribution (Top 20)
| Model | Score | |
|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | |
| Z.AI GLM 5 Turbo | 100% | |
| GPT-5.4 (Reasoning) | 100% | |
| GPT-5 | 100% | |
| Qwen 3.5 397B A17B | 100% | |
| Qwen 3.5 122B | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | |
| Z.AI GLM 5 | 100% | |
| Claude Sonnet 4.6 | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | |
| Qwen 3.5 27B | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | |
| o4 Mini High | 100% | |
| Claude Opus 4.5 | 100% | |
| Gemini 3 Pro (Preview) | 100% | |
| GPT-4.1 | 100% | |
| Grok 4 | 100% | |
| Qwen 3.5 35B | 100% | |
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Stealth: Aurora Alpha | 99% | — | 5.6s | |
| Ministral 3 3B | 100% | $0.0002 | 1.4s | |
| GPT-4.1 Nano | 100% | $0.0002 | 3.0s | |
| Mistral Small Creative | 100% | $0.0003 | 1.9s | |
| Inception Mercury | 100% | $0.0004 | 1.4s | |
| Gemma 3 4B | 100% | $0.0001 | 3.2s | |
| Llama 3.1 8B | 98% | $0.0004 | 1.4s | |
| LFM2 24B | 100% | $0.0001 | 4.8s | |
| Mistral Small 3.2 24B | 100% | $0.0002 | 3.4s | |
| Stealth: Healer Alpha | 100% | $0.0000 | 5.6s | |
| Ministral 3 14B | 100% | $0.0004 | 2.6s | |
| Gemini 2.5 Flash Lite | 84% | $0.0003 | 1.5s | |
| Grok 4.1 Fast | 100% | $0.0004 | 3.6s | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0006 | 1.8s | |
| Nemotron 3 Super | 100% | $0.0000 | 9.1s | |
| GPT-5.4 Nano | 100% | $0.0006 | 1.9s | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0007 | 1.6s | |
| GPT-4.1 Mini | 100% | $0.0006 | 3.8s | |
| Grok 4 Fast | 100% | $0.0005 | 3.8s | |
| Inception Mercury 2 | 99% | $0.0008 | 1.1s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| o4 Mini High | 100% | 100% | 100% | |
| Claude Opus 4.5 | 100% | 100% | 100% | |
| Gemini 3 Pro (Preview) | 100% | 100% | 100% | |
| GPT-4.1 | 100% | 100% | 100% | |
| Grok 4 | 100% | 100% | 100% | |
| Qwen 3.5 35B | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Mistral Small Creative | 100% | $0.0003 | 1.9s | 100% | |
| Ministral 3 3B | 100% | $0.0002 | 1.4s | 99% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0007 | 1.6s | 100% | |
| Inception Mercury | 100% | $0.0004 | 1.4s | 99% | |
| GPT-4.1 Nano | 100% | $0.0002 | 3.0s | 100% | |
| Mistral Small 3.2 24B | 100% | $0.0002 | 3.4s | 100% | |
| Grok 4.1 Fast | 100% | $0.0004 | 3.6s | 100% | |
| Gemma 3 4B | 100% | $0.0001 | 3.2s | 99% | |
| Grok 4 Fast | 100% | $0.0005 | 3.8s | 100% | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0006 | 1.8s | 99% | |
| GPT-4.1 Mini | 100% | $0.0006 | 3.8s | 100% | |
| Ministral 3 14B | 100% | $0.0004 | 2.6s | 99% | |
| GPT-5.4 Nano | 100% | $0.0006 | 1.9s | 99% | |
| Stealth: Healer Alpha | 100% | $0.0000 | 5.6s | 100% | |
| GPT-5.4 Mini | 100% | $0.0017 | 2.2s | 100% | |
| Grok 4.20 (Beta) | 100% | $0.0020 | 1.5s | 100% | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0007 | 2.8s | 99% | |
| LFM2 24B | 100% | $0.0001 | 4.8s | 99% | |
| Gemini 3 Flash (Preview) | 100% | $0.0015 | 2.6s | 100% | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0019 | 2.1s | 100% | |
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 100.0% | Matches sentence count | | |
20 sentence summary
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Mistral Small 3.2 24B | 100% | $0.0003 | 5.5s | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0010 | 2.3s | |
| GPT-5.4 Nano | 99% | $0.0009 | 2.8s | |
| Gemma 3 4B | 100% | $0.0001 | 6.6s | |
| Grok 4 Fast | 100% | $0.0005 | 5.3s | |
| Grok 4.20 (Beta) | 100% | $0.0025 | 1.6s | |
| GPT-5.4 Nano (Reasoning) | 99% | $0.0013 | 6.1s | |
| GPT-5.4 Mini | 100% | $0.0027 | 2.8s | |
| Llama 3.1 8B | 90% | $0.0004 | 2.3s | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0029 | 3.0s | |
| Grok 4.1 Fast | 88% | $0.0006 | 7.5s | |
| Gemini 3 Flash (Preview) | 100% | $0.0021 | 3.9s | |
| Stealth: Hunter Alpha | 90% | $0.0000 | 10.4s | |
| Stealth: Healer Alpha | 94% | $0.0000 | 9.4s | |
| DeepSeek V3 (2025-03-24) | 100% | $0.0007 | 17.4s | |
| Mistral Large 3 | 100% | $0.0014 | 7.7s | |
| DeepSeek V3.1 | 90% | $0.0010 | 8.1s | |
| ByteDance Seed 1.6 Flash | 89% | $0.0007 | 10.2s | |
| Inception Mercury | 96% | $0.0005 | 2.1s | |
| Mistral Small Creative | 84% | $0.0003 | 2.7s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| GPT-5 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Pro (Preview) | 100% | 100% | 100% | |
| Qwen 3.5 35B | 100% | 100% | 100% | |
| Qwen 3.5 Flash | 100% | 100% | 100% | |
| Qwen 3.5 9B | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0010 | 2.3s | 99% | |
| Mistral Small 3.2 24B | 100% | $0.0003 | 5.5s | 100% | |
| Grok 4.20 (Beta) | 100% | $0.0025 | 1.6s | 100% | |
| Gemma 3 4B | 100% | $0.0001 | 6.6s | 100% | |
| GPT-5.4 Mini | 100% | $0.0027 | 2.8s | 100% | |
| Gemini 3 Flash (Preview) | 100% | $0.0021 | 3.9s | 100% | |
| Grok 4 Fast | 100% | $0.0005 | 5.3s | 99% | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0029 | 3.0s | 100% | |
| Mistral Large 3 | 100% | $0.0014 | 7.7s | 100% | |
| GPT-5.4 Nano | 99% | $0.0009 | 2.8s | 95% | |
| GPT-5.4 Nano (Reasoning) | 99% | $0.0013 | 6.1s | 95% | |
| Llama 3.1 Nemotron 70B | 100% | $0.0007 | 14.8s | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | $0.0044 | 7.9s | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | $0.0065 | 4.9s | 100% | |
| Nemotron 3 Super | 100% | $0.0000 | 17.3s | 99% | |
| Gemini 2.5 Flash Lite (Reasoning) | 99% | $0.0009 | 10.2s | 95% | |
| DeepSeek V3 (2025-03-24) | 100% | $0.0007 | 17.4s | 100% | |
| Qwen 3.5 Plus (2026-02-15) | 100% | $0.0016 | 17.7s | 100% | |
| GPT-5 Mini | 100% | $0.0029 | 16.1s | 100% | |
| Inception Mercury | 96% | $0.0005 | 2.1s | 85% | |
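The composite ranking above combines performance, cost, speed, and stability, but the weighting is not published. The sketch below is purely hypothetical: it averages four equally weighted components, mapping cost and time onto [0, 1] against assumed ceilings (`max_cost`, `max_time` are invented parameters, not values from the site):

```python
def composite_score(performance, cost_usd, time_s, stability,
                    max_cost=0.05, max_time=120.0):
    """Hypothetical composite of performance, cost, speed and stability.

    Equal weights and the cost/time ceilings are assumptions; the
    leaderboard's real formula is not disclosed.
    """
    cost_component = max(0.0, 1.0 - cost_usd / max_cost)
    speed_component = max(0.0, 1.0 - time_s / max_time)
    return (performance + cost_component + speed_component + stability) / 4

# A perfect, free, instant, fully stable run scores 1.0
print(composite_score(1.0, 0.0, 0.0, 1.0))
```

Whatever the real weighting, the table's ordering shows that cheap, fast models with perfect scores (e.g. sub-cent costs and single-digit-second latencies) outrank equally accurate but slower or pricier ones.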
| Median | Evaluator |
|---|---|
| 96.0% | Matches sentence count |
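The "Matches sentence count" evaluator used in these tests can be sketched as below. The sentence splitter and the partial-credit curve are assumptions (a naive split on `.`, `!`, `?` with linear credit for near misses); the benchmark's actual scorer is not shown:

```python
import re

def sentence_count_score(text, target):
    """Hypothetical 'matches sentence count' evaluator.

    Splits naively on ., ! and ? and gives linear partial credit for
    near misses. Treat both choices purely as illustrations.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    miss = abs(len(sentences) - target)
    return max(0.0, 1.0 - miss / target)

print(sentence_count_score("One. Two. Three.", 3))  # exact match -> 1.0
```

An exact match scores 1.0, and the score decays toward 0 as the produced count drifts from the target, which is consistent with the spread of percentages seen in the tables above.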
50 sentence summary
Performance Score Distribution (Top 20)
| Model | Score | |
|---|---|---|
| Gemini 3.1 Pro (Preview) | 100% | |
| GPT-5 Mini | 100% | |
| GPT-5.1 | 100% | |
| Claude Opus 4.6 | 100% | |
| GPT-5 | 100% | |
| Qwen 3.5 397B A17B | 100% | |
| Qwen 3.5 122B | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | |
| Z.AI GLM 5 | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | |
| ByteDance Seed 1.6 | 100% | |
| GPT-5.2 | 100% | |
| Z.AI GLM 4.7 | 100% | |
| Qwen 3.5 35B | 100% | |
| Qwen 3.5 Flash | 100% | |
| Qwen 3.5 9B | 100% | |
| Qwen 3.5 Plus (2026-02-15) | 100% | |
| Mistral Large 3 | 100% | |
| Gemini 3 Flash (Preview) | 100% | |
| Z.AI GLM 4.7 Flash | 100% | |
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Inception Mercury | 88% | $0.0007 | 3.3s | |
| Mistral Small Creative | 100% | $0.0006 | 6.9s | |
| Ministral 3 14B | 100% | $0.0005 | 9.4s | |
| Gemini 3.1 Flash Lite (Preview) | 93% | $0.0015 | 3.9s | |
| Inception Mercury 2 | 90% | $0.0018 | 2.6s | |
| Stealth: Aurora Alpha | 94% | — | 4.8s | |
| Mistral Small 4 | 80% | $0.0009 | 10.1s | |
| Grok 4.1 Fast | 90% | $0.0008 | 10.3s | |
| Gemma 3 4B | 75% | $0.0002 | 13.1s | |
| Mistral Small 3.2 24B | 100% | $0.0004 | 30.4s | |
| GPT-5.4 Nano (Reasoning) | 80% | $0.0022 | 7.8s | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0038 | 4.0s | |
| Gemini 3 Flash (Preview) | 100% | $0.0036 | 7.1s | |
| Mistral Large 3 | 100% | $0.0020 | 13.8s | |
| DeepSeek V3.2 | 73% | $0.0006 | 39.6s | |
| Grok 4.20 (Beta) | 97% | $0.0045 | 2.5s | |
| GPT-5.4 Mini | 91% | $0.0039 | 4.0s | |
| GPT-5.4 Nano | 69% | $0.0014 | 9.4s | |
| Mistral Medium 3.1 | 100% | $0.0027 | 20.2s | |
| Llama 3.1 Nemotron 70B | 100% | $0.0009 | 32.0s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| GPT-5.1 | 100% | 100% | 100% | |
| Claude Opus 4.6 | 100% | 100% | 100% | |
| GPT-5 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.2 | 100% | 100% | 100% | |
| Z.AI GLM 4.7 | 100% | 100% | 100% | |
| Qwen 3.5 35B | 100% | 100% | 100% | |
| Qwen 3.5 Flash | 100% | 100% | 100% | |
| Qwen 3.5 9B | 100% | 100% | 100% | |
| Qwen 3.5 Plus (2026-02-15) | 100% | 100% | 100% | |
| Mistral Large 3 | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 4.7 Flash | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Mistral Small Creative | 100% | $0.0006 | 6.9s | 100% | |
| Ministral 3 14B | 100% | $0.0005 | 9.4s | 100% | |
| Gemini 3 Flash (Preview) | 100% | $0.0036 | 7.1s | 100% | |
| Mistral Large 3 | 100% | $0.0020 | 13.8s | 100% | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0038 | 4.0s | 99% | |
| Mistral Medium 3.1 | 100% | $0.0027 | 20.2s | 100% | |
| GPT-5 Mini | 100% | $0.0039 | 17.2s | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | $0.0074 | 8.8s | 100% | |
| Llama 3.1 Nemotron 70B | 100% | $0.0009 | 32.0s | 100% | |
| Mistral Small 3.2 24B | 100% | $0.0004 | 30.4s | 99% | |
| Qwen 3.5 Plus (2026-02-15) | 100% | $0.0026 | 34.3s | 100% | |
| Z.AI GLM 5 Turbo | 100% | $0.0081 | 21.5s | 100% | |
| GPT-5.4 | 100% | $0.014 | 15.2s | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | $0.015 | 14.3s | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | $0.018 | 11.0s | 100% | |
| Grok 4.20 (Beta) | 97% | $0.0045 | 2.5s | 86% | |
| Z.AI GLM 4.7 Flash | 100% | $0.0018 | 1.1m | 100% | |
| GPT-5 | 100% | $0.017 | 26.0s | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | $0.0078 | 57.9s | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | $0.018 | 29.5s | 100% | |
| Median | Evaluator |
|---|---|
| 80.9% | Matches sentence count |
words
10 word summary
Performance Score Distribution (Top 20)
| Model | Score | |
|---|---|---|
| Gemini 3.1 Pro (Preview) | 100% | |
| Z.AI GLM 5 Turbo | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | |
| GPT-5.4 (Reasoning) | 100% | |
| GPT-5 Mini | 100% | |
| GPT-5.1 | 100% | |
| GPT-5 | 100% | |
| Qwen 3.5 397B A17B | 100% | |
| Qwen 3.5 122B | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | |
| Z.AI GLM 5 | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | |
| Qwen 3.5 27B | 100% | |
| ByteDance Seed 1.6 | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | |
| o4 Mini High | 100% | |
| GPT-5.2 | 100% | |
| Gemini 3 Pro (Preview) | 100% | |
| Z.AI GLM 4.7 | 100% | |
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 697ms | |
| Inception Mercury | 100% | $0.0002 | 550ms | |
| Arcee AI: Trinity Large (Preview) | 85% | $0.0000 | 1.1s | |
| Mistral Small Creative | 100% | $0.0002 | 402ms | |
| Gemini 2.5 Flash Lite | 100% | $0.0002 | 432ms | |
| Gemma 3 12B | 100% | $0.0001 | 766ms | |
| GPT-4.1 Nano | 100% | $0.0001 | 1.5s | |
| Mistral Small 3.2 24B | 100% | $0.0002 | 759ms | |
| Gemma 3 27B | 100% | $0.0002 | 1.0s | |
| Mistral Small 4 | 100% | $0.0002 | 638ms | |
| GPT-5.4 Nano | 100% | $0.0003 | 1.0s | |
| GPT-4.1 Mini | 100% | $0.0002 | 1.7s | |
| Ministral 3 14B | 100% | $0.0003 | 523ms | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0004 | 1.4s | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0004 | 731ms | |
| Qwen3 235B A22B Instruct 2507 | 95% | $0.0002 | 2.2s | |
| Inception Mercury 2 | 100% | $0.0004 | 560ms | |
| GPT-5.4 Mini | 100% | $0.0007 | 741ms | |
| Grok 4.20 (Beta) | 100% | $0.0013 | 443ms | |
| DeepSeek-V2 Chat | 100% | $0.0002 | 2.2s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| GPT-5.1 | 100% | 100% | 100% | |
| GPT-5 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
| o4 Mini High | 100% | 100% | 100% | |
| GPT-5.2 | 100% | 100% | 100% | |
| Gemini 3 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 4.7 | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 697ms | 100% | |
| Inception Mercury | 100% | $0.0002 | 550ms | 100% | |
| Gemini 2.5 Flash Lite | 100% | $0.0002 | 432ms | 100% | |
| Inception Mercury 2 | 100% | $0.0004 | 560ms | 100% | |
| Gemma 3 12B | 100% | $0.0001 | 766ms | 100% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0004 | 731ms | 100% | |
| Ministral 3 14B | 100% | $0.0003 | 523ms | 100% | |
| Gemma 3 27B | 100% | $0.0002 | 1.0s | 100% | |
| GPT-5.4 Nano | 100% | $0.0003 | 1.0s | 100% | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0004 | 1.4s | 100% | |
| GPT-5.4 Mini | 100% | $0.0007 | 741ms | 100% | |
| GPT-4.1 Mini | 100% | $0.0002 | 1.7s | 100% | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0005 | 1.8s | 100% | |
| Gemini 3 Flash (Preview) | 100% | $0.0008 | 881ms | 100% | |
| Mistral Small 4 | 100% | $0.0002 | 638ms | 99% | |
| GPT-4o Mini (temp=0) | 100% | $0.0002 | 3.0s | 100% | |
| Grok 4 Fast | 100% | $0.0004 | 2.3s | 100% | |
| Grok 4.20 (Beta) | 100% | $0.0013 | 443ms | 100% | |
| Mistral Small Creative | 100% | $0.0002 | 402ms | 99% | |
| GPT-4.1 Nano | 100% | $0.0001 | 1.5s | 99% | |
| Median | Evaluator |
|---|---|
| 99.9% | Matches word count |
20 word summary
Performance Score Distribution (Top 20)
| Model | Score | |
|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | |
| Z.AI GLM 5 Turbo | 100% | |
| GPT-5.4 (Reasoning) | 100% | |
| GPT-5 Mini | 100% | |
| GPT-5.1 | 100% | |
| Qwen 3.5 397B A17B | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | |
| Z.AI GLM 5 | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | |
| Qwen 3.5 27B | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | |
| o4 Mini High | 100% | |
| GPT-5.2 | 100% | |
| MiniMax M2.7 | 100% | |
| Gemini 3 Pro (Preview) | 100% | |
| Z.AI GLM 4.7 | 100% | |
| o4 Mini | 100% | |
| Qwen 3.5 35B | 100% | |
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 757ms | |
| Inception Mercury | 100% | $0.0002 | 657ms | |
| Ministral 3 3B | 100% | $0.0002 | 393ms | |
| Inception Mercury 2 | 100% | $0.0004 | 637ms | |
| GPT-4.1 Nano | 100% | $0.0001 | 2.3s | |
| GPT-5.4 Nano | 99% | $0.0003 | 1.1s | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0004 | 866ms | |
| GPT-5.4 Mini | 99% | $0.0005 | 685ms | |
| GPT-4.1 Mini | 99% | $0.0002 | 1.7s | |
| Z.AI GLM 4.5 | 99% | $0.0004 | 2.1s | |
| GPT-4o Mini (temp=0) | 100% | $0.0002 | 3.0s | |
| DeepSeek V3 (2025-03-24) | 100% | $0.0004 | 2.4s | |
| DeepSeek-V2 Chat | 98% | $0.0002 | 2.8s | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0005 | 2.0s | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0005 | 1.9s | |
| Qwen 2.5 72B | 98% | $0.0007 | 1.3s | |
| DeepSeek V3 (2024-12-26) | 99% | $0.0006 | 1.4s | |
| Stealth: Healer Alpha | 99% | $0.0000 | 12.2s | |
| Gemini 2.5 Flash Lite | 98% | $0.0002 | 445ms | |
| Nemotron 3 Super | 100% | $0.0000 | 5.2s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| GPT-5.1 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
| o4 Mini High | 100% | 100% | 100% | |
| GPT-5.2 | 100% | 100% | 100% | |
| MiniMax M2.7 | 100% | 100% | 100% | |
| Gemini 3 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 4.7 | 100% | 100% | 100% | |
| o4 Mini | 100% | 100% | 100% | |
| Qwen 3.5 35B | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 757ms | 100% | |
| Inception Mercury | 100% | $0.0002 | 657ms | 100% | |
| Inception Mercury 2 | 100% | $0.0004 | 637ms | 100% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0004 | 866ms | 100% | |
| Ministral 3 3B | 100% | $0.0002 | 393ms | 99% | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0005 | 1.9s | 100% | |
| GPT-4o Mini (temp=0) | 100% | $0.0002 | 3.0s | 100% | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0005 | 2.0s | 100% | |
| Mistral Large 3 | 100% | $0.0008 | 998ms | 100% | |
| Gemini 3 Flash (Preview) | 100% | $0.0009 | 1.1s | 100% | |
| GPT-4.1 Nano | 100% | $0.0001 | 2.3s | 99% | |
| Nemotron 3 Super | 100% | $0.0000 | 5.2s | 100% | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0011 | 1.7s | 100% | |
| DeepSeek V3 (2025-03-24) | 100% | $0.0004 | 2.4s | 99% | |
| GPT-4.1 Mini | 99% | $0.0002 | 1.7s | 98% | |
| Z.AI GLM 4.5 | 99% | $0.0004 | 2.1s | 98% | |
| GPT-5.4 Mini (Reasoning) | 100% | $0.0016 | 2.1s | 100% | |
| Gemini 2.5 Flash Lite (Reasoning) | 100% | $0.0006 | 4.6s | 99% | |
| Grok 4.1 Fast | 100% | $0.0005 | 4.0s | 99% | |
| GPT-4.1 | 99% | $0.0012 | 1.7s | 98% | |
| Median | Evaluator |
|---|---|
| 99.5% | Matches word count |
50 word summary
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 1.5s | |
| Inception Mercury | 100% | $0.0002 | 1.2s | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0005 | 927ms | |
| GPT-4o Mini (temp=0) | 100% | $0.0003 | 3.4s | |
| Inception Mercury 2 | 100% | $0.0008 | 1.0s | |
| GPT-4.1 Mini | 98% | $0.0003 | 2.1s | |
| Gemini 3 Flash (Preview) | 100% | $0.0010 | 1.2s | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0008 | 3.4s | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0008 | 3.9s | |
| Stealth: Healer Alpha | 98% | $0.0000 | 9.6s | |
| GPT-4.1 | 97% | $0.0018 | 2.3s | |
| GPT-4.1 Nano | 93% | $0.0001 | 1.9s | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0019 | 2.7s | |
| GPT-5.4 Mini | 96% | $0.0008 | 888ms | |
| Gemma 3 27B | 85% | $0.0002 | 2.6s | |
| Mistral Medium 3.1 | 95% | $0.0008 | 1.9s | |
| GPT-5 Nano | 100% | $0.0005 | 11.9s | |
| Nemotron 3 Super | 100% | $0.0000 | 13.8s | |
| GPT-5.4 | 98% | $0.0024 | 2.1s | |
| Qwen 3.5 Plus (2026-02-15) | 81% | $0.0008 | 3.7s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
| o4 Mini High | 100% | 100% | 100% | |
| Gemini 3 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 4.7 | 100% | 100% | 100% | |
| Qwen 3.5 35B | 100% | 100% | 100% | |
| ByteDance Seed 2.0 Mini | 100% | 100% | 100% | |
| Qwen 3.5 Flash | 100% | 100% | 100% | |
| Qwen 3.5 9B | 100% | 100% | 100% | |
| Z.AI GLM 4.7 Flash | 100% | 100% | 100% | |
| ByteDance Seed 2.0 Lite | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 1.5s | 100% | |
| Inception Mercury | 100% | $0.0002 | 1.2s | 100% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0005 | 927ms | 100% | |
| Inception Mercury 2 | 100% | $0.0008 | 1.0s | 100% | |
| GPT-4o Mini (temp=0) | 100% | $0.0003 | 3.4s | 100% | |
| Gemini 3 Flash (Preview) | 100% | $0.0010 | 1.2s | 100% | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0008 | 3.9s | 100% | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0008 | 3.4s | 99% | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0019 | 2.7s | 100% | |
| Nemotron 3 Super | 100% | $0.0000 | 13.8s | 100% | |
| GPT-5 Nano | 100% | $0.0005 | 11.9s | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | $0.0025 | 2.8s | 99% | |
| GPT-5 Mini | 100% | $0.0021 | 9.6s | 100% | |
| GPT-4.1 Mini | 98% | $0.0003 | 2.1s | 94% | |
| GPT-4o, Aug. 6th (temp=0) | 100% | $0.0045 | 1.7s | 99% | |
| GPT-5.4 | 98% | $0.0024 | 2.1s | 94% | |
| o4 Mini | 100% | $0.0047 | 10.0s | 100% | |
| MiniMax M2.7 | 100% | $0.0018 | 23.1s | 99% | |
| GPT-5.2 | 100% | $0.0057 | 11.4s | 100% | |
| GPT-5.1 | 100% | $0.0065 | 7.6s | 99% | |
| Median | Evaluator |
|---|---|
| 92.1% | Matches word count |
100 word summary
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 2.8s | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0006 | 1.2s | |
| Inception Mercury | 99% | $0.0002 | 4.2s | |
| GPT-4o Mini (temp=0) | 100% | $0.0003 | 15.2s | |
| Inception Mercury 2 | 99% | $0.0013 | 1.6s | |
| Gemini 3 Flash (Preview) | 99% | $0.0012 | 1.8s | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0011 | 5.1s | |
| GPT-5.4 Nano (Reasoning, Low) | 90% | $0.0009 | 4.3s | |
| GPT-4.1 | 99% | $0.0023 | 3.1s | |
| Nemotron 3 Super | 100% | $0.0000 | 19.9s | |
| GPT-5.4 Mini | 82% | $0.0011 | 1.2s | |
| GPT-5.4 Mini (Reasoning, Low) | 95% | $0.0026 | 2.6s | |
| GPT-5.4 Mini (Reasoning) | 100% | $0.0040 | 4.9s | |
| Nemotron 3 Nano | 90% | $0.0003 | 20.6s | |
| GPT-4o, Aug. 6th (temp=1) | 100% | $0.0051 | 2.2s | |
| GPT-5 Mini | 100% | $0.0033 | 15.8s | |
| GPT-4o, Aug. 6th (temp=0) | 97% | $0.0051 | 2.4s | |
| GPT-5 Nano | 100% | $0.0013 | 31.1s | |
| GPT-5.4 (Reasoning, Low) | 95% | $0.0044 | 3.4s | |
| Claude 3.7 Sonnet | 83% | $0.0074 | 5.9s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 4.7 | 100% | 100% | 100% | |
| Qwen 3.5 35B | 100% | 100% | 100% | |
| Qwen 3.5 9B | 100% | 100% | 100% | |
| ByteDance Seed 2.0 Mini | 100% | 100% | 100% | |
| Qwen 3.5 Flash | 100% | 100% | 100% | |
| ByteDance Seed 2.0 Lite | 100% | 100% | 100% | |
| Stealth: Aurora Alpha | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| GPT-5.4 Nano (Reasoning) | 100% | 100% | 100% | |
| o4 Mini | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Stealth: Aurora Alpha | 100% | — | 2.8s | 100% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0006 | 1.2s | 99% | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0011 | 5.1s | 100% | |
| Gemini 3 Flash (Preview) | 99% | $0.0012 | 1.8s | 98% | |
| GPT-4o Mini (temp=0) | 100% | $0.0003 | 15.2s | 99% | |
| Inception Mercury | 99% | $0.0002 | 4.2s | 95% | |
| Inception Mercury 2 | 99% | $0.0013 | 1.6s | 95% | |
| Nemotron 3 Super | 100% | $0.0000 | 19.9s | 99% | |
| GPT-5.4 Mini (Reasoning) | 100% | $0.0040 | 4.9s | 99% | |
| GPT-4.1 | 99% | $0.0023 | 3.1s | 95% | |
| GPT-4o, Aug. 6th (temp=1) | 100% | $0.0051 | 2.2s | 99% | |
| GPT-5 Mini | 100% | $0.0033 | 15.8s | 99% | |
| GPT-5 Nano | 100% | $0.0013 | 31.1s | 99% | |
| GPT-4o, Aug. 6th (temp=0) | 97% | $0.0051 | 2.4s | 92% | |
| GPT-5.2 | 99% | $0.011 | 9.7s | 98% | |
| GPT-5.1 | 100% | $0.011 | 14.3s | 99% | |
| Z.AI GLM 5 Turbo | 100% | $0.011 | 24.4s | 100% | |
| o4 Mini | 100% | $0.011 | 25.6s | 100% | |
| GPT-4o Mini (temp=1) | 98% | $0.0003 | 46.8s | 93% | |
| GPT-5.4 (Reasoning) | 100% | $0.012 | 22.2s | 99% | |
| Median | Evaluator |
|---|---|
| 76.7% | Matches word count |
200 word summary
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Stealth: Aurora Alpha | 98% | — | 6.5s | |
| Inception Mercury | 100% | $0.0004 | 3.7s | |
| Inception Mercury 2 | 100% | $0.0017 | 2.3s | |
| GPT-5.4 Nano (Reasoning) | 99% | $0.0019 | 9.8s | |
| GPT-5.4 Mini (Reasoning) | 100% | $0.0063 | 7.4s | |
| Nemotron 3 Super | 100% | $0.0000 | 31.8s | |
| GPT-4o, Aug. 6th (temp=0) | 78% | $0.0062 | 3.8s | |
| GPT-5 Nano | 100% | $0.0015 | 35.2s | |
| Nemotron 3 Nano | 100% | $0.0008 | 51.1s | |
| GPT-5 Mini | 96% | $0.0038 | 20.4s | |
| GPT-4.1 | 78% | $0.0035 | 3.5s | |
| Mistral Large 3 | 65% | $0.0012 | 6.0s | |
| GPT-5.1 | 99% | $0.014 | 16.6s | |
| GPT-4o, Aug. 6th (temp=1) | 72% | $0.0063 | 3.7s | |
| o4 Mini | 97% | $0.012 | 25.3s | |
| GPT-5.2 | 99% | $0.019 | 19.9s | |
| Qwen 3.5 Flash | 100% | $0.0049 | 1.2m | |
| GPT-5.4 (Reasoning) | 99% | $0.018 | 16.6s | |
| GPT-5.4 Mini (Reasoning, Low) | 65% | $0.0026 | 2.8s | |
| Gemini 3 Flash (Preview) | 66% | $0.0015 | 2.5s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| Gemini 3 Pro (Preview) | 100% | 100% | 100% | |
| Qwen 3.5 35B | 100% | 100% | 100% | |
| Qwen 3.5 Flash | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
| Z.AI GLM 4.7 | 100% | 100% | 100% | |
| Inception Mercury 2 | 100% | 100% | 100% | |
| GPT-5 Nano | 100% | 100% | 100% | |
| Inception Mercury | 100% | 100% | 100% | |
| Nemotron 3 Nano | 100% | 100% | 100% | |
| Nemotron 3 Super | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 99% | 99% | |
| Z.AI GLM 5 Turbo | 100% | 99% | 99% | |
| GPT-5.1 | 99% | 99% | 98% | |
| GPT-5.4 Nano (Reasoning) | 99% | 99% | 98% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Inception Mercury | 100% | $0.0004 | 3.7s | 100% | |
| Inception Mercury 2 | 100% | $0.0017 | 2.3s | 100% | |
| GPT-5.4 Nano (Reasoning) | 99% | $0.0019 | 9.8s | 98% | |
| GPT-5.4 Mini (Reasoning) | 100% | $0.0063 | 7.4s | 99% | |
| Nemotron 3 Super | 100% | $0.0000 | 31.8s | 100% | |
| GPT-5 Nano | 100% | $0.0015 | 35.2s | 100% | |
| GPT-5.1 | 99% | $0.014 | 16.6s | 98% | |
| Nemotron 3 Nano | 100% | $0.0008 | 51.1s | 100% | |
| Stealth: Aurora Alpha | 98% | — | 6.5s | 86% | |
| GPT-5.4 (Reasoning) | 99% | $0.018 | 16.6s | 95% | |
| GPT-5.2 | 99% | $0.019 | 19.9s | 95% | |
| o4 Mini | 97% | $0.012 | 25.3s | 92% | |
| GPT-5 Mini | 96% | $0.0038 | 20.4s | 85% | |
| Qwen 3.5 Flash | 100% | $0.0049 | 1.2m | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | $0.026 | 40.2s | 100% | |
| Z.AI GLM 5 Turbo | 100% | $0.022 | 49.1s | 99% | |
| GPT-5 | 97% | $0.025 | 34.8s | 92% | |
| Qwen 3.5 35B | 100% | $0.027 | 1.2m | 100% | |
| o4 Mini High | 97% | $0.020 | 43.8s | 87% | |
| Qwen 3.5 27B | 100% | $0.026 | 1.8m | 100% | |
| Median | Evaluator |
|---|---|
| 24.1% | Matches word count |