Codex Extraction
Evaluates a model's ability to extract structured codex entries (characters, locations, objects, lore) from prose passages and return them as well-formed XML.
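The page does not spell out the target schema. As a rough illustration, the sketch below shows a hypothetical codex-entry XML of the kind the test asks for and checks it for well-formedness with Python's standard library; the tag names, attributes, and sample content are assumptions, not the benchmark's actual format.

```python
import xml.etree.ElementTree as ET

# Hypothetical response of the kind this test asks for; the tag and attribute
# names below are illustrative assumptions, not the benchmark's actual schema.
sample_response = """\
<codex>
  <entry type="character">
    <name>Maren Voss</name>
    <description>Innkeeper of the Rusty Lantern who keeps a ledger of unpaid debts.</description>
  </entry>
  <entry type="location">
    <name>The Rusty Lantern</name>
    <description>A dockside tavern lit by a single corroded lamp.</description>
  </entry>
</codex>
"""

def is_well_formed(xml_text: str) -> bool:
    """Roughly what a structural-validity check might do: does the reply parse as XML?"""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(sample_response))  # True
```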
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Gemini 3 Flash (Preview) | 97% | $0.0027 | 3.9s | |
| Grok 4 Fast | 96% | $0.0012 | 8.7s | |
| Qwen 3.5 Plus (2026-02-15) | 98% | $0.0030 | 10.6s | |
| Mistral Medium 3.1 | 96% | $0.0026 | 5.8s | |
| Mistral Small Creative | 94% | $0.0006 | 3.9s | |
| Mistral Large 3 | 94% | $0.0027 | 8.2s | |
| Grok 4.1 Fast | 97% | $0.0017 | 22.1s | |
| Gemini 2.5 Flash | 94% | $0.0023 | 2.5s | |
| Gemini 3.1 Flash Lite (Preview) | 94% | $0.0017 | 2.0s | |
| Z.AI GLM 5 Turbo | 97% | $0.0068 | 16.0s | |
| Z.AI GLM 4.5 | 96% | $0.0028 | 16.8s | |
| Grok 4.20 (Beta) | 95% | $0.0049 | 2.0s | |
| Ministral 3 8B | 94% | $0.0006 | 3.3s | |
| DeepSeek V3.1 | 91% | $0.0012 | 26.1s | |
| DeepSeek-V2 Chat | 95% | $0.0019 | 14.8s | |
| Claude Haiku 4.5 | 95% | $0.0073 | 4.5s | |
| Mistral Small 3.2 24B | 93% | $0.0005 | 4.4s | |
| Stealth: Healer Alpha | 95% | $0.0000 | 24.6s | |
| Gemini 3 Flash (Preview, Reasoning) | 98% | $0.0096 | 22.2s | |
| DeepSeek V3 (2024-12-26) | 94% | $0.0017 | 13.5s | |
Cost vs Performance
Compares total cost for this test against the test score. Quadrant lines are drawn at the median values. Only models with available cost data are shown.
Six low-scoring outliers are hidden: Gemma 3 12B (77.7%), GPT-4.1 Nano (75.3%), Llama 3.1 8B (71.5%), LFM2 24B (49.0%), Rocinante 12B (47.9%), Mistral NeMo (26.1%).
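As a minimal sketch of the quadrant construction, assuming each plotted point is a (cost, score) pair, the boundaries are simply the two medians; the values and labels below are illustrative, not taken from the chart.

```python
from statistics import median

# Illustrative (model, total cost in USD, score) points -- not values from the chart.
points = [
    ("Model A", 0.0027, 0.97),
    ("Model B", 0.0012, 0.96),
    ("Model C", 0.0096, 0.98),
    ("Model D", 0.0005, 0.93),
]

cost_median = median(cost for _, cost, _ in points)      # vertical quadrant line
score_median = median(score for _, _, score in points)   # horizontal quadrant line

for name, cost, score in points:
    side = "cheaper" if cost <= cost_median else "pricier"
    level = "above" if score >= score_median else "below"
    print(f"{name}: {side} than the median cost, {level} the median score")
```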
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 98% | 99% | 97% | |
| Claude Opus 4.5 | 99% | 99% | 97% | |
| Claude Opus 4.6 | 98% | 98% | 97% | |
| Grok 4 | 98% | 98% | 97% | |
| Claude Sonnet 4.6 | 98% | 98% | 96% | |
| GPT-5 | 98% | 97% | 96% | |
| Claude Opus 4 | 98% | 98% | 96% | |
| Claude Sonnet 4.6 (Reasoning) | 97% | 98% | 96% | |
| Gemini 3 Flash (Preview, Reasoning) | 98% | 97% | 96% | |
| Z.AI GLM 5 | 97% | 97% | 95% | |
| Aion 2.0 | 97% | 97% | 95% | |
| Grok 4.20 (Beta, Reasoning) | 97% | 98% | 95% | |
| Qwen 3.5 Plus (2026-02-15) | 98% | 98% | 95% | |
| Z.AI GLM 5 Turbo | 97% | 97% | 95% | |
| o4 Mini High | 97% | 97% | 95% | |
| Gemini 3 Flash (Preview) | 97% | 97% | 95% | |
| Gemini 2.5 Pro | 97% | 97% | 94% | |
| Gemini 3 Pro (Preview) | 97% | 96% | 94% | |
| GPT-5.4 (Reasoning) | 97% | 97% | 94% | |
| GPT-5.4 Mini (Reasoning) | 97% | 97% | 94% | |
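The Stability column above is the product of the median score and the consistency score. A minimal sketch, assuming both are fractions in [0, 1]; note that the table shows rounded percentages, so multiplying the two displayed columns only approximately reproduces the third.

```python
def stability(median_score: float, consistency: float) -> float:
    """Stability as described above: median score multiplied by consistency."""
    return median_score * consistency

# e.g. a model with a 98% median and 99% consistency lands near 97% stability
print(round(stability(0.98, 0.99), 4))  # 0.9702
```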
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Gemini 3 Flash (Preview) | 97% | $0.0027 | 3.9s | 95% | |
| Qwen 3.5 Plus (2026-02-15) | 98% | $0.0030 | 10.6s | 95% | |
| Grok 4 Fast | 96% | $0.0012 | 8.7s | 93% | |
| Gemini 3.1 Flash Lite (Preview) | 94% | $0.0017 | 2.0s | 92% | |
| Mistral Medium 3.1 | 96% | $0.0026 | 5.8s | 92% | |
| Grok 4.20 (Beta) | 95% | $0.0049 | 2.0s | 92% | |
| Mistral Small Creative | 94% | $0.0006 | 3.9s | 89% | |
| Grok 4.1 Fast | 97% | $0.0017 | 22.1s | 94% | |
| Ministral 3 8B | 94% | $0.0006 | 3.3s | 88% | |
| Z.AI GLM 5 Turbo | 97% | $0.0068 | 16.0s | 95% | |
| Gemini 2.5 Flash | 94% | $0.0023 | 2.5s | 89% | |
| Mistral Large 3 | 94% | $0.0027 | 8.2s | 91% | |
| Z.AI GLM 4.5 | 96% | $0.0028 | 16.8s | 92% | |
| Mistral Small 3.2 24B | 93% | $0.0005 | 4.4s | 89% | |
| Claude Haiku 4.5 | 95% | $0.0073 | 4.5s | 91% | |
| DeepSeek-V2 Chat | 95% | $0.0019 | 14.8s | 90% | |
| MiniMax M2.7 | 96% | $0.0022 | 21.7s | 92% | |
| GPT-5.4 Mini | 93% | $0.0031 | 2.2s | 88% | |
| Gemini 3 Flash (Preview, Reasoning) | 98% | $0.0096 | 22.2s | 96% | |
| Inception Mercury 2 | 92% | $0.0022 | 3.5s | 88% | |
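The composite ranking above folds performance, cost, speed, and stability into one score, but the page does not state the weighting or normalisation. The sketch below is one plausible reading, not the benchmark's actual formula; the cost and time ceilings and the equal weights are assumptions.

```python
def composite(score: float, cost_usd: float, time_s: float, stability: float,
              cost_ceiling: float = 0.01, time_ceiling: float = 60.0) -> float:
    """One *possible* composite score in [0, 1]; higher is better.
    Cost and latency are clipped to assumed ceilings and inverted so that
    cheap, fast models score well. The real weighting is not published here."""
    cheapness = 1.0 - min(cost_usd / cost_ceiling, 1.0)
    speed = 1.0 - min(time_s / time_ceiling, 1.0)
    return (score + cheapness + speed + stability) / 4.0  # equal weights: an assumption

print(round(composite(0.97, 0.0027, 3.9, 0.95), 3))  # 0.896 under these assumptions
```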
| Model | Total | Short: The Rusty Lantern (Explicit) | Medium: Through the Thornveil (Scattered) | Medium: The Hollow (Inferred) | Long: The Spire of Echoes (Dense) |
|---|---|---|---|---|---|
| Claude Opus 4.5 | 99% | 99% | 98% | 99% | 98% |
| Claude Opus 4.6 (Reasoning) | 98% | 99% | 99% | 99% | 98% |
| Grok 4 | 98% | 99% | 99% | 98% | 97% |
| Claude Opus 4.6 | 98% | 99% | 97% | 99% | 98% |
| GPT-5 | 98% | 99% | 98% | 99% | 96% |
| Gemini 3 Flash (Preview, Reasoning) | 98% | 98% | 97% | 98% | 99% |
| Claude Opus 4 | 98% | 99% | 97% | 97% | 98% |
| Claude Sonnet 4.6 | 98% | 99% | 97% | 98% | 97% |
| Qwen 3.5 Plus (2026-02-15) | 98% | 99% | 97% | 97% | 97% |
| Aion 2.0 | 97% | 98% | 97% | 97% | 97% |
| Claude Sonnet 4.6 (Reasoning) | 97% | 97% | 98% | 97% | 97% |
| Grok 4.20 (Beta, Reasoning) | 97% | 99% | 97% | 97% | 97% |
| Z.AI GLM 5 Turbo | 97% | 98% | 98% | 96% | 96% |
| Gemini 3 Pro (Preview) | 97% | 95% | 98% | 97% | 99% |
| Z.AI GLM 5 | 97% | 97% | 98% | 96% | 98% |
Short: The Rusty Lantern (Explicit)
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Mistral Small Creative | 97% | $0.0004 | 2.7s | |
| Ministral 3 8B | 98% | $0.0005 | 2.6s | |
| Gemini 3 Flash (Preview) | 98% | $0.0022 | 3.1s | |
| Mistral Medium 3.1 | 98% | $0.0021 | 6.5s | |
| Z.AI GLM 4.5 | 99% | $0.0013 | 8.1s | |
| Qwen 3.5 Plus (2026-02-15) | 99% | $0.0024 | 7.9s | |
| GPT-5.4 Nano (Reasoning) | 96% | $0.0017 | 7.3s | |
| Gemini 2.5 Flash Lite | 92% | $0.0004 | 1.6s | |
| Gemini 3.1 Flash Lite (Preview) | 93% | $0.0014 | 1.8s | |
| Grok 4 Fast | 96% | $0.0010 | 7.4s | |
| Grok 4.20 (Beta) | 97% | $0.0044 | 1.6s | |
| Gemini 2.5 Flash | 94% | $0.0018 | 2.0s | |
| DeepSeek V3 (2024-12-26) | 96% | $0.0016 | 11.9s | |
| Qwen 2.5 72B | 94% | $0.0007 | 8.5s | |
| Mistral Large 3 | 96% | $0.0022 | 6.3s | |
| Ministral 3 14B | 92% | $0.0007 | 4.8s | |
| Inception Mercury 2 | 92% | $0.0017 | 2.9s | |
| Ministral 3B | 91% | $0.0001 | 1.4s | |
| Arcee AI: Trinity Large (Preview) | 95% | $0.0000 | 14.5s | |
| Mistral Small 3.2 24B | 91% | $0.0004 | 3.4s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Grok 4 | 99% | 100% | 99% | |
| Qwen 3.5 Plus (2026-02-15) | 99% | 100% | 99% | |
| Claude Sonnet 4.5 | 99% | 99% | 99% | |
| Claude Sonnet 4 | 99% | 100% | 99% | |
| GPT-5 | 99% | 100% | 99% | |
| Claude Opus 4.6 | 99% | 100% | 99% | |
| Claude Opus 4.5 | 99% | 99% | 99% | |
| Claude Opus 4.6 (Reasoning) | 99% | 100% | 99% | |
| Claude Opus 4 | 99% | 99% | 98% | |
| Grok 4.1 Fast | 99% | 99% | 98% | |
| Claude Sonnet 4.6 | 99% | 99% | 98% | |
| Z.AI GLM 4.5 | 99% | 99% | 98% | |
| Hermes 3 405B | 99% | 99% | 98% | |
| Claude 3.7 Sonnet | 98% | 99% | 98% | |
| Grok 4.20 (Beta, Reasoning) | 99% | 99% | 98% | |
| Ministral 3 8B | 98% | 99% | 98% | |
| Z.AI GLM 5 Turbo | 98% | 99% | 97% | |
| MiniMax M2.7 | 99% | 98% | 97% | |
| Qwen 3.5 397B A17B | 99% | 98% | 97% | |
| GPT-5.4 Mini (Reasoning) | 98% | 98% | 97% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Ministral 3 8B | 98% | $0.0005 | 2.6s | 98% | |
| Qwen 3.5 Plus (2026-02-15) | 99% | $0.0024 | 7.9s | 99% | |
| Z.AI GLM 4.5 | 99% | $0.0013 | 8.1s | 98% | |
| Gemini 3 Flash (Preview) | 98% | $0.0022 | 3.1s | 96% | |
| Mistral Small Creative | 97% | $0.0004 | 2.7s | 94% | |
| Mistral Medium 3.1 | 98% | $0.0021 | 6.5s | 96% | |
| Grok 4.20 (Beta) | 97% | $0.0044 | 1.6s | 95% | |
| Gemini 2.5 Flash | 94% | $0.0018 | 2.0s | 94% | |
| Grok 4.1 Fast | 99% | $0.0015 | 19.0s | 98% | |
| Mistral Large 3 | 96% | $0.0022 | 6.3s | 95% | |
| Hermes 3 405B | 99% | $0.0034 | 16.8s | 98% | |
| GPT-5.4 Nano (Reasoning) | 96% | $0.0017 | 7.3s | 93% | |
| Z.AI GLM 5 Turbo | 98% | $0.0053 | 12.9s | 97% | |
| Grok 4 Fast | 96% | $0.0010 | 7.4s | 91% | |
| Gemini 3.1 Flash Lite (Preview) | 93% | $0.0014 | 1.8s | 92% | |
| DeepSeek V3 (2024-12-26) | 96% | $0.0016 | 11.9s | 94% | |
| MiniMax M2.7 | 99% | $0.0016 | 22.0s | 97% | |
| Ministral 3 14B | 92% | $0.0007 | 4.8s | 90% | |
| Qwen 2.5 72B | 94% | $0.0007 | 8.5s | 90% | |
| DeepSeek-V2 Chat | 96% | $0.0016 | 11.4s | 90% | |
| Evaluator | Median |
|---|---|
| Accuracy | 87.7% |
| Precision | 96.6% |
| Recall | 97.3% |
| Structural validity | 100.0% |
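The evaluator medians above (Accuracy, Precision, Recall, Structural validity) read like standard extraction metrics. A minimal sketch, assuming extracted and reference entries can be compared as exact (type, name) pairs; the benchmark's real matcher may well be fuzzier, and the entry names below are made up.

```python
# Hypothetical reference ("gold") and model-extracted entries as (type, name) pairs.
gold = {("character", "Maren Voss"), ("location", "The Rusty Lantern"), ("object", "Brass Key")}
extracted = {("character", "Maren Voss"), ("location", "The Rusty Lantern"), ("lore", "Tide Curse")}

true_positives = len(gold & extracted)
precision = true_positives / len(extracted)  # how much of the output is correct
recall = true_positives / len(gold)          # how much of the reference was recovered

print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.67 recall=0.67
```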
Medium: Through the Thornveil (Scattered)
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Gemini 3 Flash (Preview) | 98% | $0.0027 | 3.7s | |
| Grok 4 Fast | 97% | $0.0013 | 10.2s | |
| Mistral Medium 3.1 | 96% | $0.0025 | 5.1s | |
| Qwen 3.5 Plus (2026-02-15) | 97% | $0.0029 | 11.0s | |
| Claude Haiku 4.5 | 97% | $0.0071 | 4.3s | |
| Z.AI GLM 5 Turbo | 98% | $0.0061 | 13.8s | |
| Grok 4.1 Fast | 98% | $0.0019 | 29.4s | |
| Gemini 3.1 Flash Lite (Preview) | 95% | $0.0017 | 1.7s | |
| Mistral Small 3.2 24B | 94% | $0.0005 | 4.4s | |
| Stealth: Healer Alpha | 96% | $0.0000 | 26.7s | |
| Grok 4.20 (Beta) | 95% | $0.0049 | 2.0s | |
| Z.AI GLM 4.5 | 95% | $0.0018 | 13.7s | |
| DeepSeek-V2 Chat | 94% | $0.0019 | 14.0s | |
| DeepSeek V3 (2024-12-26) | 94% | $0.0017 | 13.5s | |
| Gemini 2.5 Flash | 95% | $0.0021 | 2.4s | |
| Arcee AI: Trinity Large (Preview) | 94% | $0.0000 | 18.6s | |
| GPT-5.4 | 96% | $0.011 | 6.0s | |
| DeepSeek V3.1 | 95% | $0.0012 | 22.2s | |
| Gemini 2.5 Flash (Reasoning) | 89% | $0.0086 | 12.6s | |
| DeepSeek V3.2 | 96% | $0.0011 | 33.5s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 99% | 100% | 98% | |
| Grok 4 | 99% | 99% | 98% | |
| Grok 4.1 Fast | 98% | 99% | 97% | |
| Claude Opus 4.5 | 98% | 99% | 97% | |
| Z.AI GLM 5 Turbo | 98% | 99% | 97% | |
| Z.AI GLM 5 | 98% | 99% | 97% | |
| Claude Sonnet 4.6 (Reasoning) | 98% | 99% | 97% | |
| Gemini 3 Pro (Preview) | 98% | 99% | 97% | |
| Claude Haiku 4.5 | 97% | 99% | 97% | |
| GPT-5.4 (Reasoning) | 97% | 99% | 97% | |
| GPT-5.4 (Reasoning, Low) | 98% | 99% | 97% | |
| Gemini 3.1 Pro (Preview) | 98% | 98% | 96% | |
| GPT-5 | 98% | 98% | 96% | |
| Claude Sonnet 4.6 | 97% | 98% | 96% | |
| Claude 3.5 Sonnet | 97% | 99% | 96% | |
| Grok 4.20 (Beta, Reasoning) | 97% | 99% | 96% | |
| Claude Opus 4 | 97% | 99% | 96% | |
| Qwen 3.5 Plus (2026-02-15) | 97% | 99% | 96% | |
| o4 Mini High | 97% | 99% | 96% | |
| Mistral Medium 3.1 | 96% | 99% | 96% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Gemini 3 Flash (Preview) | 98% | $0.0027 | 3.7s | 95% | |
| Z.AI GLM 5 Turbo | 98% | $0.0061 | 13.8s | 97% | |
| Mistral Medium 3.1 | 96% | $0.0025 | 5.1s | 96% | |
| Claude Haiku 4.5 | 97% | $0.0071 | 4.3s | 97% | |
| Qwen 3.5 Plus (2026-02-15) | 97% | $0.0029 | 11.0s | 96% | |
| Grok 4.1 Fast | 98% | $0.0019 | 29.4s | 97% | |
| Grok 4 Fast | 97% | $0.0013 | 10.2s | 94% | |
| Gemini 3.1 Flash Lite (Preview) | 95% | $0.0017 | 1.7s | 93% | |
| Mistral Small 3.2 24B | 94% | $0.0005 | 4.4s | 93% | |
| Grok 4.20 (Beta) | 95% | $0.0049 | 2.0s | 93% | |
| Stealth: Healer Alpha | 96% | $0.0000 | 26.7s | 93% | |
| Z.AI GLM 4.5 | 95% | $0.0018 | 13.7s | 93% | |
| GPT-5.4 (Reasoning, Low) | 98% | $0.016 | 9.5s | 97% | |
| Gemini 3 Flash (Preview, Reasoning) | 97% | $0.0100 | 16.7s | 95% | |
| DeepSeek V3.1 | 95% | $0.0012 | 22.2s | 93% | |
| Gemini 2.5 Flash | 95% | $0.0021 | 2.4s | 89% | |
| Arcee AI: Trinity Large (Preview) | 94% | $0.0000 | 18.6s | 92% | |
| MiniMax M2.7 | 95% | $0.0017 | 22.1s | 93% | |
| DeepSeek V3 (2025-03-24) | 95% | $0.0011 | 30.3s | 94% | |
| MiniMax M2.5 | 96% | $0.0023 | 33.5s | 94% | |
| Evaluator | Median |
|---|---|
| Accuracy | 82.6% |
| Precision | 97.5% |
| Recall | 96.3% |
| Structural validity | 100.0% |
Medium: The Hollow (Inferred)
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Gemini 3.1 Flash Lite (Preview) | 96% | $0.0016 | 2.2s | |
| Gemini 2.5 Flash Lite | 92% | $0.0004 | 1.9s | |
| Grok 4 Fast | 96% | $0.0012 | 7.5s | |
| Mistral Small Creative | 94% | $0.0005 | 3.4s | |
| Gemini 2.5 Flash | 95% | $0.0019 | 2.2s | |
| Ministral 3 8B | 94% | $0.0006 | 2.6s | |
| Mistral Large 3 | 97% | $0.0025 | 6.6s | |
| Ministral 8B | 93% | $0.0004 | 3.0s | |
| Qwen 3.5 Plus (2026-02-15) | 97% | $0.0027 | 8.7s | |
| Z.AI GLM 4.5 | 96% | $0.0024 | 14.6s | |
| Mistral Small 3.2 24B | 93% | $0.0005 | 4.1s | |
| Gemini 3 Flash (Preview) | 95% | $0.0025 | 3.3s | |
| Inception Mercury 2 | 95% | $0.0020 | 3.2s | |
| Gemini 2.5 Flash Lite (Reasoning) | 95% | $0.0016 | 11.4s | |
| GPT-5.4 Mini | 94% | $0.0029 | 2.0s | |
| DeepSeek V3.1 | 96% | $0.0011 | 22.6s | |
| Ministral 3 14B | 91% | $0.0008 | 4.5s | |
| GPT-5.4 Mini (Reasoning, Low) | 94% | $0.0039 | 3.5s | |
| Stealth: Healer Alpha | 95% | $0.0000 | 22.8s | |
| Mistral Medium 3.1 | 93% | $0.0024 | 4.1s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| GPT-5 | 99% | 100% | 98% | |
| Claude Opus 4.6 | 99% | 99% | 98% | |
| Claude Sonnet 4.6 | 98% | 100% | 98% | |
| GPT-5.1 | 98% | 99% | 98% | |
| Claude Opus 4.6 (Reasoning) | 99% | 98% | 97% | |
| GPT-5.4 (Reasoning) | 98% | 99% | 97% | |
| Claude Sonnet 4.6 (Reasoning) | 97% | 99% | 97% | |
| Claude Opus 4.5 | 99% | 99% | 97% | |
| Gemini 3 Flash (Preview, Reasoning) | 98% | 98% | 96% | |
| Gemini 3.1 Pro (Preview) | 98% | 98% | 96% | |
| Mistral Large 3 | 97% | 100% | 96% | |
| Grok 4 | 98% | 98% | 96% | |
| Grok 4.20 (Beta, Reasoning) | 97% | 99% | 96% | |
| Z.AI GLM 4.7 | 97% | 99% | 96% | |
| Claude Sonnet 4.5 | 96% | 99% | 96% | |
| Claude 3.5 Sonnet | 96% | 99% | 96% | |
| Qwen 3.5 Plus (2026-02-15) | 97% | 99% | 96% | |
| Claude Opus 4 | 97% | 98% | 96% | |
| Qwen 3.5 35B | 97% | 99% | 96% | |
| Gemini 2.5 Pro | 97% | 98% | 95% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Gemini 3.1 Flash Lite (Preview) | 96% | $0.0016 | 2.2s | 95% | |
| Mistral Large 3 | 97% | $0.0025 | 6.6s | 96% | |
| Qwen 3.5 Plus (2026-02-15) | 97% | $0.0027 | 8.7s | 96% | |
| Grok 4 Fast | 96% | $0.0012 | 7.5s | 94% | |
| Ministral 3 8B | 94% | $0.0006 | 2.6s | 92% | |
| Mistral Small Creative | 94% | $0.0005 | 3.4s | 92% | |
| Gemini 2.5 Flash | 95% | $0.0019 | 2.2s | 93% | |
| Ministral 8B | 93% | $0.0004 | 3.0s | 92% | |
| Gemini 3 Flash (Preview) | 95% | $0.0025 | 3.3s | 92% | |
| Inception Mercury 2 | 95% | $0.0020 | 3.2s | 91% | |
| Mistral Small 3.2 24B | 93% | $0.0005 | 4.1s | 92% | |
| Claude Haiku 4.5 | 95% | $0.0066 | 3.8s | 95% | |
| GPT-5.4 Mini (Reasoning, Low) | 94% | $0.0039 | 3.5s | 93% | |
| GPT-5.4 Mini | 94% | $0.0029 | 2.0s | 92% | |
| Gemini 2.5 Flash Lite (Reasoning) | 95% | $0.0016 | 11.4s | 93% | |
| Mistral Medium 3.1 | 93% | $0.0024 | 4.1s | 92% | |
| Grok 4.1 Fast | 96% | $0.0015 | 22.0s | 95% | |
| Z.AI GLM 4.5 | 96% | $0.0024 | 14.6s | 93% | |
| Gemini 3 Flash (Preview, Reasoning) | 98% | $0.0086 | 14.7s | 96% | |
| MiniMax M2.7 | 95% | $0.0033 | 16.6s | 94% | |
| Evaluator | Median |
|---|---|
| Accuracy | 83.4% |
| Precision | 100.0% |
| Recall | 96.6% |
| Structural validity | 100.0% |
Long: The Spire of Echoes (Dense)
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Gemini 3 Flash (Preview) | 98% | $0.0033 | 5.4s | |
| Grok 4 Fast | 96% | $0.0014 | 9.6s | |
| Mistral Medium 3.1 | 97% | $0.0035 | 7.5s | |
| Mistral Small Creative | 95% | $0.0007 | 5.8s | |
| Gemini 2.5 Flash | 93% | $0.0033 | 3.6s | |
| Qwen 3.5 Plus (2026-02-15) | 97% | $0.0041 | 14.9s | |
| Grok 4.20 (Beta) | 96% | $0.0072 | 2.8s | |
| Gemini 3.1 Flash Lite (Preview) | 94% | $0.0022 | 2.3s | |
| GPT-5.4 Mini (Reasoning, Low) | 95% | $0.0060 | 4.5s | |
| Mistral Small 3.2 24B | 93% | $0.0007 | 5.7s | |
| Gemini 2.5 Flash Lite (Reasoning) | 95% | $0.0025 | 15.5s | |
| Grok 4.1 Fast | 95% | $0.0018 | 17.9s | |
| DeepSeek V3.1 | 78% | $0.0016 | 38.0s | |
| Z.AI GLM 4.5 | 96% | $0.0056 | 30.8s | |
| Gemini 3 Flash (Preview, Reasoning) | 99% | $0.011 | 19.1s | |
| DeepSeek-V2 Chat | 95% | $0.0022 | 20.5s | |
| Claude Haiku 4.5 | 96% | $0.0096 | 6.6s | |
| Z.AI GLM 4.6 | 96% | $0.0050 | 59.2s | |
| Stealth: Hunter Alpha | 97% | $0.0000 | 50.4s | |
| Stealth: Healer Alpha | 95% | $0.0000 | 27.4s | |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Gemini 3 Pro (Preview) | 99% | 100% | 98% | |
| Claude Opus 4 | 98% | 100% | 98% | |
| Gemini 3 Flash (Preview, Reasoning) | 99% | 99% | 98% | |
| Claude Opus 4.5 | 98% | 100% | 98% | |
| Claude Opus 4.6 | 98% | 100% | 98% | |
| Z.AI GLM 5 | 98% | 99% | 97% | |
| Claude Sonnet 4.6 (Reasoning) | 97% | 99% | 97% | |
| GPT-5.4 (Reasoning) | 98% | 99% | 97% | |
| Gemini 3 Flash (Preview) | 98% | 99% | 97% | |
| Grok 4 | 97% | 99% | 97% | |
| Claude Opus 4.6 (Reasoning) | 98% | 99% | 97% | |
| GPT-5.4 Mini (Reasoning) | 97% | 100% | 97% | |
| Mistral Medium 3.1 | 97% | 100% | 96% | |
| Qwen 3.5 Plus (2026-02-15) | 97% | 99% | 96% | |
| o4 Mini High | 97% | 98% | 96% | |
| Aion 2.0 | 97% | 98% | 96% | |
| Claude Sonnet 4.6 | 97% | 99% | 96% | |
| GPT-5 | 96% | 98% | 95% | |
| Gemini 2.5 Pro | 97% | 98% | 95% | |
| Claude Haiku 4.5 | 96% | 99% | 95% | |
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Gemini 3 Flash (Preview) | 98% | $0.0033 | 5.4s | 97% | |
| Mistral Medium 3.1 | 97% | $0.0035 | 7.5s | 96% | |
| Mistral Small Creative | 95% | $0.0007 | 5.8s | 94% | |
| Grok 4 Fast | 96% | $0.0014 | 9.6s | 94% | |
| Qwen 3.5 Plus (2026-02-15) | 97% | $0.0041 | 14.9s | 96% | |
| Gemini 3.1 Flash Lite (Preview) | 94% | $0.0022 | 2.3s | 93% | |
| Grok 4.20 (Beta) | 96% | $0.0072 | 2.8s | 95% | |
| GPT-5.4 Mini (Reasoning, Low) | 95% | $0.0060 | 4.5s | 94% | |
| Gemini 3 Flash (Preview, Reasoning) | 99% | $0.011 | 19.1s | 98% | |
| Claude Haiku 4.5 | 96% | $0.0096 | 6.6s | 95% | |
| Mistral Small 3.2 24B | 93% | $0.0007 | 5.7s | 91% | |
| DeepSeek-V2 Chat | 95% | $0.0022 | 20.5s | 93% | |
| Grok 4.1 Fast | 95% | $0.0018 | 17.9s | 92% | |
| GPT-5.4 Mini | 93% | $0.0042 | 2.7s | 92% | |
| Gemini 2.5 Flash Lite (Reasoning) | 95% | $0.0025 | 15.5s | 91% | |
| Gemini 2.5 Flash Lite | 92% | $0.0005 | 2.4s | 90% | |
| Z.AI GLM 5 Turbo | 96% | $0.0093 | 17.7s | 95% | |
| Stealth: Healer Alpha | 95% | $0.0000 | 27.4s | 92% | |
| Mistral Large 3 | 94% | $0.0036 | 11.9s | 91% | |
| Inception Mercury 2 | 92% | $0.0027 | 4.2s | 91% | |
| Evaluator | Median |
|---|---|
| Accuracy | 81.4% |
| Precision | 99.2% |
| Recall | 98.1% |
| Structural validity | 100.0% |