Data extraction
Extract key details from a given block of text.
What's the correct time?
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Mistral Small Creative | 70% | $0.0000 | 307ms |
| GPT-5.4 Nano | 60% | $0.0000 | 809ms |
| Claude Sonnet 4 | 100% | $0.0003 | 1.4s |
| GPT-4o Mini (temp=0) | 100% | $0.0000 | 3.1s |
| GPT-4o Mini (temp=1) | 80% | $0.0000 | 13.7s |
| DeepSeek V3 (2025-03-24) | 70% | $0.0002 | 7.8s |
| Gemini 3 Flash (Preview, Reasoning) | 90% | $0.024 | 45.0s |
| Gemini 2.5 Flash (Reasoning) | 60% | $0.028 | 47.6s |
| Rocinante 12B | 50% | $0.0000 | 4.0s |
| Claude Opus 4 | 50% | $0.0014 | 4.6s |
| Z.AI GLM 4.6 | 50% | $0.0076 | 2.5m |
| Mistral NeMo | 0% | $0.0000 | 7.2s |
| Gemma 3 4B | 0% | $0.0000 | 321ms |
| Ministral 3B | 20% | $0.0000 | 293ms |
| Ministral 8B | 10% | $0.0000 | 284ms |
| Ministral 3 3B | 0% | $0.0000 | 319ms |
| Ministral 3 8B | 0% | $0.0000 | 324ms |
| Ministral 3 14B | 0% | $0.0000 | 347ms |
| LFM2 24B | 0% | $0.0000 | 1.1s |
| Gemini 2.5 Flash Lite | 0% | $0.0000 | 327ms |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Claude Sonnet 4 | 100% | 100% | 100% |
| GPT-4o Mini (temp=0) | 100% | 100% | 100% |
| Gemini 3 Flash (Preview, Reasoning) | 90% | 40% | 40% |
| GPT-4o Mini (temp=1) | 80% | 20% | 20% |
| DeepSeek V3 (2025-03-24) | 70% | 8% | 8% |
| Mistral Small Creative | 70% | 8% | 8% |
| Gemini 2.5 Flash (Reasoning) | 60% | 2% | 2% |
| GPT-5.4 Nano | 60% | 2% | 2% |
| Claude Opus 4.6 (Reasoning) | 0% | 100% | 0% |
| Gemini 3.1 Pro (Preview) | 20% | 20% | 0% |
| Z.AI GLM 5 Turbo | 0% | 100% | 0% |
| Claude Sonnet 4.6 (Reasoning) | 0% | 100% | 0% |
| GPT-5.4 (Reasoning) | 0% | 100% | 0% |
| GPT-5 Mini | 10% | 40% | 0% |
| GPT-5.1 | 0% | 100% | 0% |
| Claude Opus 4.6 | 0% | 100% | 0% |
| GPT-5 | 0% | 100% | 0% |
| Qwen 3.5 397B A17B | 0% | 100% | 0% |
| Qwen 3.5 122B | 0% | 100% | 0% |
| Grok 4.20 (Beta, Reasoning) | 0% | 100% | 0% |
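The stability ranking above is described only as "median × consistency"; the benchmark's exact definitions are not given in this table. The sketch below assumes the median is taken over per-run scores and that consistency is the fraction of runs matching that median — both assumptions for illustration:

```python
from statistics import median


def stability(run_scores: list[float]) -> float:
    """Stability as median score times consistency.

    Assumption: 'consistency' is the fraction of runs whose score equals
    the median; the benchmark's precise definition is not stated here.
    """
    m = median(run_scores)
    consistency = sum(1 for s in run_scores if s == m) / len(run_scores)
    return m * consistency
```

Under these assumptions a model scoring 100% on every run gets stability 1.0, while a model that consistently scores 0% gets stability 0.0 despite perfect consistency — matching the shape of the rows above.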
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Claude Sonnet 4 | 100% | $0.0003 | 1.4s | 100% |
| GPT-4o Mini (temp=0) | 100% | $0.0000 | 3.1s | 100% |
| Gemini 3 Flash (Preview, Reasoning) | 90% | $0.024 | 45.0s | 40% |
| GPT-4o Mini (temp=1) | 80% | $0.0000 | 13.7s | 20% |
| Mistral Small Creative | 70% | $0.0000 | 307ms | 8% |
| DeepSeek V3 (2025-03-24) | 70% | $0.0002 | 7.8s | 8% |
| GPT-5.4 Nano | 60% | $0.0000 | 809ms | 2% |
| Rocinante 12B | 50% | $0.0000 | 4.0s | 0% |
| Claude Opus 4 | 50% | $0.0014 | 4.6s | 0% |
| Gemini 2.5 Flash (Reasoning) | 60% | $0.028 | 47.6s | 2% |
| Mistral Small 4 (Reasoning) | 40% | $0.0027 | 25.0s | 0% |
| DeepSeek V3 (2024-12-26) | 30% | $0.0002 | 4.8s | 0% |
| Hermes 3 405B | 30% | $0.0000 | 7.8s | 0% |
| Gemini 2.5 Flash Lite (Reasoning) | 30% | $0.0033 | 24.9s | 0% |
| Ministral 3B | 20% | $0.0000 | 293ms | 0% |
| GPT-5.4 Mini | 20% | $0.0001 | 542ms | 0% |
| DeepSeek V3.1 | 20% | $0.0005 | 9.4s | 0% |
| Arcee AI: Trinity Mini | 20% | $0.0007 | 26.6s | 0% |
| Ministral 8B | 10% | $0.0000 | 284ms | 0% |
| Cohere Command R+ (Aug. 2024) | 10% | $0.0002 | 454ms | 0% |
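The composite ranking combines performance, cost, speed, and stability, but the table does not state how they are weighted or normalized. The sketch below is purely illustrative: equal weights, linear normalization, and the `max_cost` / `max_latency` bounds are all assumptions, not the benchmark's actual formula:

```python
def composite(perf: float, cost_usd: float, latency_s: float, stability: float,
              max_cost: float = 0.03, max_latency: float = 60.0) -> float:
    """Illustrative composite score in [0, 1].

    Assumptions (the benchmark's real weighting is not published here):
    equal weights across the four factors, and linear scaling of cost
    and latency against assumed upper bounds.
    """
    cost_score = 1.0 - min(cost_usd / max_cost, 1.0)       # cheaper is better
    speed_score = 1.0 - min(latency_s / max_latency, 1.0)  # faster is better
    return (perf + cost_score + speed_score + stability) / 4
```

Even under these made-up weights, the ordering of the top rows is plausible: a free, fast, perfectly stable 100% model dominates an expensive, slow one with the same accuracy.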
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 0.0% | Matches Regex | | |
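The evaluator listed above is "Matches Regex". A minimal sketch of such a check follows; the benchmark's actual pattern is not shown, so the time-of-day pattern here is an assumption based on the task prompt "What's the correct time?":

```python
import re

# Assumed pattern: an HH:MM time of day. The benchmark's real regex
# is not published in this table.
TIME_PATTERN = re.compile(r"\b\d{1,2}:\d{2}\b")


def matches_regex(answer: str, pattern: re.Pattern = TIME_PATTERN) -> bool:
    """Sketch of a 'Matches Regex' evaluator: pass if the model's
    answer contains a match for the expected pattern anywhere."""
    return pattern.search(answer) is not None
```

A run would then be scored as the fraction of answers for which `matches_regex` returns `True`, which is consistent with the percentage scores in the tables above.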