Language Comprehension
Does the model understand more than just English?
Performance Score Distribution (Top 20)
Click a model name to view its detail page.
Price-Performance Score Distribution (Top 20)
Click a model name to view its detail page.
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Mistral NeMO | 95% | $0.0000 | 625ms | |
| Inception Mercury | 65% | $0.0000 | 562ms | |
| Ministral 3 3B | 100% | $0.0000 | 833ms | |
| Gemini 2.5 Flash Lite | 80% | $0.0001 | 648ms | |
| Ministral 8B | 55% | $0.0000 | 540ms | |
| GPT-5.4 Mini | 80% | $0.0002 | 747ms | |
| GPT-4o Mini (temp=1) | 55% | $0.0000 | 942ms | |
| Gemini 3.1 Flash Lite (Preview) | 95% | $0.0001 | 975ms | |
| GPT-5.4 Nano | 70% | $0.0001 | 990ms | |
| Inception Mercury 2 | 75% | $0.0003 | 771ms | |
| Gemini 2.5 Flash | 75% | $0.0002 | 817ms | |
| Stealth: Aurora Alpha | 85% | — | 1.3s | |
| Grok 4.20 (Beta) | 85% | $0.0005 | 726ms | |
| Mistral Small 4 | 55% | $0.0001 | 1.1s | |
| Gemini 3 Flash (Preview) | 90% | $0.0004 | 1.5s | |
| Claude 3 Haiku | 65% | $0.0001 | 1.3s | |
| Gemma 3 4B | 70% | $0.0000 | 1.7s | |
| Mistral Small 3.2 24B | 75% | $0.0000 | 2.4s | |
| Llama 3.1 8B | 55% | $0.0000 | 1.3s | |
| GPT-4.1 Nano | 65% | $0.0000 | 1.7s | |
Cost vs Performance
Plots each model's total cost on this test against its test score. Quadrant lines are drawn at the median values. Only models with available cost data are shown.
1 low-scoring outlier hidden: Ministral 3B (25.0%).
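The quadrant construction described above can be sketched in a few lines. The rows below are illustrative values taken from the table on this page; the exact classification logic (inclusive medians, label wording) is an assumption about how the chart splits models.

```python
from statistics import median

# Sample (model, cost_usd, score) rows; models without cost data
# (cost is None) are excluded, matching the chart's rule.
rows = [
    ("Mistral NeMO", 0.0000, 0.95),
    ("Gemini 2.5 Flash Lite", 0.0001, 0.80),
    ("Stealth: Aurora Alpha", None, 0.85),
    ("Grok 4.20 (Beta)", 0.0005, 0.85),
]

with_cost = [(m, c, s) for m, c, s in rows if c is not None]

# Quadrant lines are drawn at the median cost and median score.
cost_median = median(c for _, c, _ in with_cost)
score_median = median(s for _, _, s in with_cost)

for model, cost, score in with_cost:
    quadrant = (
        ("cheap" if cost <= cost_median else "pricey")
        + " / "
        + ("strong" if score >= score_median else "weak")
    )
    print(f"{model}: {quadrant}")
```

The low-cost, high-score quadrant is where the best price-performance models land.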
Most Stable Models (Top 20)
Ranked by stability (median × consistency). Click a model name to view its detail page.
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| Claude Opus 4.6 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Claude Opus 4.5 | 100% | 100% | 100% | |
| Aion 2.0 | 100% | 100% | 100% | |
| Z.AI GLM 4.6 | 100% | 100% | 100% | |
| MiniMax M2.7 | 100% | 100% | 100% | |
| Qwen 3.5 35B | 100% | 100% | 100% | |
| Claude Opus 4 | 100% | 100% | 100% | |
| Qwen 3.5 Plus (2026-02-15) | 100% | 100% | 100% | |
| Mistral Large 3 | 100% | 100% | 100% | |
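The ranking above defines stability as median × consistency, but this page does not spell out how consistency itself is computed. The sketch below assumes one plausible definition (1 minus the spread between the best and worst run) purely for illustration.

```python
from statistics import median

def consistency(scores):
    # Assumed definition (not confirmed by this page):
    # 1 minus the spread between the best and worst run score.
    return 1.0 - (max(scores) - min(scores))

def stability(scores):
    # Stability as the ranking describes it: median score x consistency.
    return median(scores) * consistency(scores)

perfect = stability([1.0, 1.0, 1.0])  # every run at 100%
shaky = stability([1.0, 0.5, 1.0])    # same median, one bad run
```

Under this definition, a model that hits 100% on every run scores a stability of 100%, while a model with the same median but one bad run is penalized.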
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability). Click a model name to view its detail page.
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Ministral 3 3B | 100% | $0.0000 | 833ms | 100% | |
| Mistral Large 3 | 100% | $0.0002 | 3.7s | 100% | |
| DeepSeek V3 (2024-12-26) | 100% | $0.0002 | 4.8s | 100% | |
| GPT-4o, May 13th (temp=0) | 100% | $0.0016 | 2.4s | 100% | |
| Qwen 3.5 Plus (2026-02-15) | 100% | $0.0003 | 4.8s | 100% | |
| Mistral Large 2 | 100% | $0.0009 | 3.8s | 100% | |
| DeepSeek-V2 Chat | 100% | $0.0000 | 6.6s | 100% | |
| Claude Sonnet 4.6 | 100% | $0.0021 | 3.1s | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | $0.0012 | 4.8s | 100% | |
| DeepSeek V3 (2025-03-24) | 100% | $0.0001 | 7.5s | 100% | |
| Claude Opus 4.5 | 100% | $0.0035 | 3.7s | 100% | |
| Claude Opus 4.6 | 100% | $0.0037 | 4.3s | 100% | |
| Hermes 3 405B | 100% | $0.0000 | 12.1s | 100% | |
| Z.AI GLM 5 Turbo | 100% | $0.0030 | 9.5s | 100% | |
| Aion 2.0 | 100% | $0.0011 | 15.9s | 100% | |
| ByteDance Seed 1.6 | 100% | $0.0013 | 15.8s | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | $0.0061 | 8.3s | 100% | |
| Claude Opus 4.6 (Reasoning) | 100% | $0.0085 | 7.3s | 100% | |
| Qwen 3.5 122B | 100% | $0.0053 | 13.6s | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | $0.011 | 7.1s | 100% | |
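The composite ranking above combines performance, cost, speed, and stability, but its actual formula and weights are not published on this page. The sketch below is one simple possibility, an equal-weight average of four components each scaled to [0, 1]; the `max_cost` and `max_latency` caps are invented normalizers.

```python
def composite(score, cost_usd, latency_s, stability,
              max_cost=0.02, max_latency=20.0):
    # Equal-weight average of four components in [0, 1].
    # The real ranking's weights and normalization are not published;
    # max_cost and max_latency are invented caps for this sketch.
    cost_term = 1.0 - min(cost_usd / max_cost, 1.0)
    speed_term = 1.0 - min(latency_s / max_latency, 1.0)
    return (score + cost_term + speed_term + stability) / 4.0

# A fast, free model should outrank an equally accurate but slower,
# pricier one, matching the ordering in the table above.
fast_cheap = composite(1.0, 0.0000, 0.833, 1.0)  # e.g. Ministral 3 3B
slow_pricey = composite(1.0, 0.011, 7.1, 1.0)    # e.g. Grok 4.20 (Beta, Reasoning)
```

Any formula of this shape reproduces the table's key property: with score and stability tied at 100%, cost and speed break the tie.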
| Model | Total ▼ | Friend got new kittens (Tagalog) | Friend got new kittens (German) | Asking for directions (German) | Asking for directions (Dutch) |
|---|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | 100% | 100% |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | 100% | 100% |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | 100% | 100% |
| Claude Opus 4.6 | 100% | 100% | 100% | 100% | 100% |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | 100% | 100% |
| Qwen 3.5 122B | 100% | 100% | 100% | 100% | 100% |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | 100% | 100% |
| Claude Sonnet 4.6 | 100% | 100% | 100% | 100% | 100% |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | 100% | 100% |
| Qwen 3.5 27B | 100% | 100% | 100% | 100% | 100% |
| ByteDance Seed 1.6 | 100% | 100% | 100% | 100% | 100% |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | 100% | 100% |
| Claude Opus 4.5 | 100% | 100% | 100% | 100% | 100% |
| Aion 2.0 | 100% | 100% | 100% | 100% | 100% |
| Z.AI GLM 4.6 | 100% | 100% | 100% | 100% | 100% |
Friend got new kittens (Tagalog)
Performance Score Distribution (Top 20)
Click a model name to view its detail page.
Price-Performance Score Distribution (Top 20)
Click a model name to view its detail page.
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Mistral NeMO | 100% | $0.0000 | 515ms | |
| Inception Mercury | 100% | $0.0000 | 418ms | |
| Gemini 2.5 Flash Lite | 100% | $0.0000 | 505ms | |
| Ministral 3 3B | 100% | $0.0000 | 715ms | |
| Gemma 3 4B | 100% | $0.0000 | 903ms | |
| Gemini 2.5 Flash | 100% | $0.0001 | 518ms | |
| GPT-4o Mini (temp=1) | 100% | $0.0000 | 883ms | |
| Stealth: Aurora Alpha | 100% | — | 1.5s | |
| GPT-5.4 Nano | 100% | $0.0001 | 759ms | |
| Ministral 3 8B | 100% | $0.0000 | 1.2s | |
| Mistral Small Creative | 100% | $0.0000 | 918ms | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0001 | 955ms | |
| GPT-4.1 Nano | 100% | $0.0000 | 1.4s | |
| GPT-4o Mini (temp=0) | 100% | $0.0000 | 1.2s | |
| Ministral 3 14B | 100% | $0.0000 | 1.3s | |
| Llama 3.1 8B | 80% | $0.0000 | 1.2s | |
| Mistral Small 4 | 100% | $0.0001 | 997ms | |
| Arcee AI: Trinity Large (Preview) | 100% | $0.0000 | 3.5s | |
| Claude 3 Haiku | 100% | $0.0001 | 1.0s | |
| Gemma 3 12B | 100% | $0.0000 | 2.0s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency). Click a model name to view its detail page.
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| GPT-5.1 | 100% | 100% | 100% | |
| Claude Opus 4.6 | 100% | 100% | 100% | |
| GPT-5 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability). Click a model name to view its detail page.
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Inception Mercury | 100% | $0.0000 | 418ms | 100% | |
| Mistral NeMO | 100% | $0.0000 | 515ms | 100% | |
| Gemini 2.5 Flash Lite | 100% | $0.0000 | 505ms | 100% | |
| Ministral 3 3B | 100% | $0.0000 | 715ms | 100% | |
| Gemini 2.5 Flash | 100% | $0.0001 | 518ms | 100% | |
| Gemma 3 4B | 100% | $0.0000 | 903ms | 100% | |
| GPT-5.4 Nano | 100% | $0.0001 | 759ms | 100% | |
| GPT-4o Mini (temp=1) | 100% | $0.0000 | 883ms | 100% | |
| Mistral Small Creative | 100% | $0.0000 | 918ms | 100% | |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0001 | 955ms | 100% | |
| Mistral Small 4 | 100% | $0.0001 | 997ms | 100% | |
| Ministral 3 8B | 100% | $0.0000 | 1.2s | 100% | |
| GPT-4o Mini (temp=0) | 100% | $0.0000 | 1.2s | 100% | |
| Ministral 3 14B | 100% | $0.0000 | 1.3s | 100% | |
| GPT-4.1 Nano | 100% | $0.0000 | 1.4s | 100% | |
| Inception Mercury 2 | 100% | $0.0002 | 614ms | 100% | |
| Claude 3 Haiku | 100% | $0.0001 | 1.0s | 100% | |
| GPT-5.4 Mini | 100% | $0.0002 | 597ms | 100% | |
| Stealth: Aurora Alpha | 100% | — | 1.5s | 100% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0002 | 1.1s | 100% | |
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 100.0% | Contains a count of nouns | | |
Friend got new kittens (German)
Performance Score Distribution (Top 20)
Click a model name to view its detail page.
| Model | Score | |
|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | |
| Z.AI GLM 5 Turbo | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | |
| GPT-5 Mini | 100% | |
| Claude Opus 4.6 | 100% | |
| Qwen 3.5 397B A17B | 100% | |
| Qwen 3.5 122B | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | |
| Claude Sonnet 4.6 | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | |
| Qwen 3.5 27B | 100% | |
| ByteDance Seed 1.6 | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | |
| Claude Opus 4.5 | 100% | |
| Grok 4.1 Fast | 100% | |
| Aion 2.0 | 100% | |
| Z.AI GLM 4.6 | 100% | |
| MiniMax M2.7 | 100% | |
| MiniMax M2.5 | 100% | |
| Grok 4 | 100% | |
Price-Performance Score Distribution (Top 20)
Click a model name to view its detail page.
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Inception Mercury | 100% | $0.0000 | 544ms | |
| Ministral 8B | 80% | $0.0000 | 733ms | |
| Ministral 3 3B | 100% | $0.0000 | 995ms | |
| Stealth: Aurora Alpha | 100% | — | 914ms | |
| Ministral 3 8B | 100% | $0.0000 | 1.0s | |
| Gemini 3.1 Flash Lite (Preview) | 80% | $0.0001 | 841ms | |
| Mistral NeMO | 100% | $0.0000 | 1.2s | |
| Mistral Small 4 | 100% | $0.0001 | 1.2s | |
| Llama 3.1 8B | 80% | $0.0000 | 1.1s | |
| Ministral 3 14B | 100% | $0.0000 | 1.6s | |
| Mistral Small Creative | 100% | $0.0001 | 1.2s | |
| GPT-5.4 Nano | 100% | $0.0001 | 989ms | |
| GPT-5.4 Mini | 80% | $0.0002 | 896ms | |
| Arcee AI: Trinity Large (Preview) | 100% | $0.0000 | 3.0s | |
| Mistral Small 3.2 24B | 100% | $0.0001 | 2.3s | |
| Inception Mercury 2 | 80% | $0.0002 | 712ms | |
| GPT-4.1 Mini | 100% | $0.0001 | 2.0s | |
| GPT-4.1 Nano | 100% | $0.0000 | 2.4s | |
| Stealth: Healer Alpha | 100% | $0.0000 | 3.8s | |
| Mistral Medium 3.1 | 100% | $0.0003 | 1.6s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency). Click a model name to view its detail page.
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| Claude Opus 4.6 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Claude Opus 4.5 | 100% | 100% | 100% | |
| Grok 4.1 Fast | 100% | 100% | 100% | |
| Aion 2.0 | 100% | 100% | 100% | |
| Z.AI GLM 4.6 | 100% | 100% | 100% | |
| MiniMax M2.7 | 100% | 100% | 100% | |
| MiniMax M2.5 | 100% | 100% | 100% | |
| Grok 4 | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability). Click a model name to view its detail page.
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Inception Mercury | 100% | $0.0000 | 544ms | 100% | |
| Stealth: Aurora Alpha | 100% | — | 914ms | 100% | |
| Ministral 3 3B | 100% | $0.0000 | 995ms | 100% | |
| Ministral 3 8B | 100% | $0.0000 | 1.0s | 100% | |
| Mistral NeMO | 100% | $0.0000 | 1.2s | 100% | |
| GPT-5.4 Nano | 100% | $0.0001 | 989ms | 100% | |
| Mistral Small 4 | 100% | $0.0001 | 1.2s | 100% | |
| Mistral Small Creative | 100% | $0.0001 | 1.2s | 100% | |
| Ministral 3 14B | 100% | $0.0000 | 1.6s | 100% | |
| Mistral Small 3.2 24B | 100% | $0.0001 | 2.3s | 100% | |
| GPT-4.1 Nano | 100% | $0.0000 | 2.4s | 100% | |
| GPT-4.1 Mini | 100% | $0.0001 | 2.0s | 100% | |
| Mistral Medium 3.1 | 100% | $0.0003 | 1.6s | 100% | |
| Arcee AI: Trinity Large (Preview) | 100% | $0.0000 | 3.0s | 100% | |
| Hermes 3 70B | 100% | $0.0001 | 3.1s | 100% | |
| Stealth: Healer Alpha | 100% | $0.0000 | 3.8s | 100% | |
| LFM2 24B | 100% | $0.0000 | 3.9s | 100% | |
| Mistral Large 3 | 100% | $0.0002 | 2.7s | 100% | |
| Claude 3.5 Haiku | 100% | $0.0005 | 1.6s | 100% | |
| Qwen 2.5 72B | 100% | $0.0001 | 3.9s | 100% | |
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 100.0% | Contains a count of nouns | | |
Asking for directions (German)
Performance Score Distribution (Top 20)
Click a model name to view its detail page.
Price-Performance Score Distribution (Top 20)
Click a model name to view its detail page.
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Mistral NeMO | 100% | $0.0000 | 321ms | |
| Ministral 8B | 60% | $0.0000 | 341ms | |
| Ministral 3 3B | 100% | $0.0000 | 691ms | |
| GPT-4o Mini (temp=0) | 100% | $0.0000 | 728ms | |
| Mistral Large 3 | 100% | $0.0001 | 7.5s | |
| GPT-4o Mini (temp=1) | 100% | $0.0000 | 800ms | |
| Gemini 2.5 Flash Lite | 100% | $0.0001 | 722ms | |
| GPT-5.4 Mini | 80% | $0.0002 | 812ms | |
| Claude 3 Haiku | 100% | $0.0001 | 1.1s | |
| Arcee AI: Trinity Large (Preview) | 80% | $0.0000 | 1.5s | |
| Inception Mercury 2 | 60% | $0.0003 | 709ms | |
| Mistral Small 3.2 24B | 100% | $0.0000 | 1.6s | |
| Grok 4.20 (Beta) | 100% | $0.0004 | 728ms | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0002 | 995ms | |
| Gemini 3 Flash (Preview) | 100% | $0.0003 | 1.3s | |
| Gemma 3 4B | 100% | $0.0000 | 2.4s | |
| Gemini 2.5 Flash | 80% | $0.0004 | 1.2s | |
| DeepSeek V3.1 | 100% | $0.0002 | 2.4s | |
| Z.AI GLM 4.5 | 80% | $0.0001 | 3.8s | |
| DeepSeek V3 (2025-03-24) | 100% | $0.0001 | 7.5s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency). Click a model name to view its detail page.
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5 Mini | 100% | 100% | 100% | |
| Claude Opus 4.6 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
| Claude Opus 4.5 | 100% | 100% | 100% | |
| Aion 2.0 | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability). Click a model name to view its detail page.
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Mistral NeMO | 100% | $0.0000 | 321ms | 100% | |
| Ministral 3 3B | 100% | $0.0000 | 691ms | 100% | |
| GPT-4o Mini (temp=0) | 100% | $0.0000 | 728ms | 100% | |
| Gemini 2.5 Flash Lite | 100% | $0.0001 | 722ms | 100% | |
| GPT-4o Mini (temp=1) | 100% | $0.0000 | 800ms | 100% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0002 | 995ms | 100% | |
| Claude 3 Haiku | 100% | $0.0001 | 1.1s | 100% | |
| Grok 4.20 (Beta) | 100% | $0.0004 | 728ms | 100% | |
| Mistral Small 3.2 24B | 100% | $0.0000 | 1.6s | 100% | |
| Gemini 3 Flash (Preview) | 100% | $0.0003 | 1.3s | 100% | |
| Gemma 3 4B | 100% | $0.0000 | 2.4s | 100% | |
| Claude 3.5 Haiku | 100% | $0.0005 | 1.6s | 100% | |
| DeepSeek V3.1 | 100% | $0.0002 | 2.4s | 100% | |
| DeepSeek V3 (2024-12-26) | 100% | $0.0001 | 3.2s | 100% | |
| Llama 3.1 70B | 100% | $0.0002 | 3.3s | 100% | |
| Claude Haiku 4.5 | 100% | $0.0010 | 2.3s | 100% | |
| WizardLM 2 8x22b | 100% | $0.0002 | 3.8s | 100% | |
| GPT-5.4 | 100% | $0.0016 | 2.0s | 100% | |
| DeepSeek-V2 Chat | 100% | $0.0000 | 4.6s | 100% | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0008 | 3.5s | 100% | |
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 100.0% | Matches Regex | | |
Asking for directions (Dutch)
Performance Score Distribution (Top 20)
Click a model name to view its detail page.
Price-Performance Score Distribution (Top 20)
Click a model name to view its detail page.
| Model | Score | Cost | Time | |
|---|---|---|---|---|
| Mistral NeMO | 80% | $0.0000 | 440ms | |
| Ministral 3 3B | 100% | $0.0000 | 930ms | |
| Gemini 2.5 Flash Lite | 80% | $0.0001 | 735ms | |
| GPT-5.4 Mini | 60% | $0.0001 | 685ms | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0001 | 954ms | |
| Mistral Large 3 | 100% | $0.0001 | 984ms | |
| Grok 4.20 (Beta) | 60% | $0.0004 | 571ms | |
| Gemini 2.5 Flash | 80% | $0.0002 | 796ms | |
| Stealth: Aurora Alpha | 80% | — | 1.2s | |
| Gemini 3 Flash (Preview) | 80% | $0.0003 | 1.2s | |
| Inception Mercury 2 | 60% | $0.0005 | 1.0s | |
| Gemma 3 4B | 60% | $0.0000 | 2.0s | |
| Z.AI GLM 4.5 | 100% | $0.0001 | 2.4s | |
| Llama 3.1 8B | 60% | $0.0000 | 1.7s | |
| Claude Haiku 4.5 | 80% | $0.0008 | 2.0s | |
| GPT-4.1 Mini | 80% | $0.0004 | 3.3s | |
| Claude 3.5 Haiku | 60% | $0.0005 | 1.9s | |
| GPT-5.4 Nano (Reasoning, Low) | 80% | $0.0004 | 4.0s | |
| Grok 4 Fast | 100% | $0.0003 | 3.8s | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0009 | 3.6s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency). Click a model name to view its detail page.
| Model | Score | Consistency | Stability | |
|---|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% | |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% | |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% | |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning) | 100% | 100% | 100% | |
| GPT-5.1 | 100% | 100% | 100% | |
| Claude Opus 4.6 | 100% | 100% | 100% | |
| GPT-5 | 100% | 100% | 100% | |
| Qwen 3.5 397B A17B | 100% | 100% | 100% | |
| Qwen 3.5 122B | 100% | 100% | 100% | |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | 100% | 100% | |
| Z.AI GLM 5 | 100% | 100% | 100% | |
| Claude Sonnet 4.6 | 100% | 100% | 100% | |
| MoonshotAI: Kimi K2.5 | 100% | 100% | 100% | |
| Qwen 3.5 27B | 100% | 100% | 100% | |
| ByteDance Seed 1.6 | 100% | 100% | 100% | |
| GPT-5.4 Mini (Reasoning) | 100% | 100% | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | 100% | 100% | |
| GPT-5.2 | 100% | 100% | 100% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability). Click a model name to view its detail page.
| Model | Score | Cost | Speed | Stability | |
|---|---|---|---|---|---|
| Ministral 3 3B | 100% | $0.0000 | 930ms | 100% | |
| Mistral Large 3 | 100% | $0.0001 | 984ms | 100% | |
| Gemini 3.1 Flash Lite (Preview) | 100% | $0.0001 | 954ms | 100% | |
| Z.AI GLM 4.5 | 100% | $0.0001 | 2.4s | 100% | |
| Grok 4 Fast | 100% | $0.0003 | 3.8s | 100% | |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0009 | 3.6s | 100% | |
| GPT-4.1 | 100% | $0.0014 | 3.1s | 100% | |
| GPT-4o, May 13th (temp=1) | 100% | $0.0017 | 2.6s | 100% | |
| GPT-4o, May 13th (temp=0) | 100% | $0.0018 | 2.5s | 100% | |
| Qwen 3.5 Plus (2026-02-15) | 100% | $0.0003 | 5.5s | 100% | |
| DeepSeek V3 (2024-12-26) | 100% | $0.0002 | 5.9s | 100% | |
| Mistral Large 2 | 100% | $0.0011 | 5.0s | 100% | |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0005 | 6.1s | 100% | |
| Claude Sonnet 4.6 | 100% | $0.0026 | 3.2s | 100% | |
| Claude Sonnet 4 | 100% | $0.0026 | 3.6s | 100% | |
| Gemini 3 Flash (Preview, Reasoning) | 100% | $0.0022 | 4.9s | 100% | |
| Claude Sonnet 4.5 | 100% | $0.0030 | 3.6s | 100% | |
| GPT-5.4 (Reasoning, Low) | 100% | $0.0028 | 4.2s | 100% | |
| DeepSeek V3.1 | 100% | $0.0003 | 8.8s | 100% | |
| GPT-5.2 | 100% | $0.0027 | 4.9s | 100% | |
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 80.0% | Matches Regex | | |