Codex Extraction
Evaluates a model's ability to extract structured codex entries (characters, locations, objects, lore) from prose passages and return them as well-formed XML.
Medium: The Hollow (Inferred)
Categories: Tooling, Reasoning
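For illustration, a task in this benchmark gives the model a prose passage and expects a codex like the one below. The tag names, attributes, and entry contents here are hypothetical; the benchmark's exact schema is not reproduced on this page:

```xml
<codex>
  <entry type="character">
    <name>Mira Valen</name>
    <description>A cartographer who maps the flooded districts.</description>
  </entry>
  <entry type="location">
    <name>The Hollow</name>
    <description>A sunken quarter of the city, lit by lantern barges.</description>
  </entry>
</codex>
```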
Performance Score Distribution (Top 20)
[Chart: distribution of performance scores for the top 20 models.]
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Gemini 2.5 Flash Lite | 92% | $0.0004 | 1.9s |
| Grok 4 Fast | 96% | $0.0012 | 7.5s |
| Mistral Small Creative | 94% | $0.0005 | 3.4s |
| Gemini 2.5 Flash | 95% | $0.0019 | 2.2s |
| Ministral 3 8B | 94% | $0.0006 | 2.6s |
| Mistral Large 3 | 97% | $0.0025 | 6.6s |
| Ministral 8B | 93% | $0.0004 | 3.0s |
| Qwen 3.5 Plus (2026-02-15) | 97% | $0.0027 | 8.7s |
| Z.AI GLM 4.5 | 96% | $0.0024 | 14.6s |
| Mistral Small 3.2 24B | 93% | $0.0005 | 4.1s |
| Gemini 3 Flash (Preview) | 95% | $0.0025 | 3.3s |
| Gemini 2.5 Flash Lite (Reasoning) | 95% | $0.0016 | 11.4s |
| DeepSeek V3.1 | 96% | $0.0011 | 22.6s |
| Ministral 3 14B | 91% | $0.0008 | 4.5s |
| Mistral Medium 3.1 | 93% | $0.0024 | 4.1s |
| Minimax M2.5 | 94% | $0.0020 | 13.9s |
| DeepSeek-V2 Chat | 94% | $0.0019 | 13.3s |
| Grok 4.1 Fast | 96% | $0.0015 | 22.0s |
| Claude Haiku 4.5 | 95% | $0.0066 | 3.8s |
| Qwen 2.5 72B | 91% | $0.0008 | 8.8s |
Most Stable Models (Top 20)
Ranked by stability (median × consistency); a computation sketch follows the table.
| Model | Score | Consistency | Stability |
|---|---|---|---|
| GPT-5 | 99% | 100% | 98% |
| Claude Opus 4.6 | 99% | 99% | 98% |
| Claude Sonnet 4.6 | 98% | 100% | 98% |
| GPT-5.1 | 98% | 99% | 98% |
| Claude Opus 4.6 (Reasoning) | 99% | 98% | 97% |
| Claude Sonnet 4.6 (Reasoning) | 97% | 99% | 97% |
| Claude Opus 4.5 | 99% | 99% | 97% |
| Gemini 3 Flash (Preview, Reasoning) | 98% | 98% | 96% |
| Gemini 3.1 Pro (Preview) | 98% | 98% | 96% |
| Mistral Large 3 | 97% | 100% | 96% |
| Grok 4 | 98% | 98% | 96% |
| Z.AI GLM 4.7 | 97% | 99% | 96% |
| Claude Sonnet 4.5 | 96% | 99% | 96% |
| Claude 3.5 Sonnet | 96% | 99% | 96% |
| Qwen 3.5 Plus (2026-02-15) | 97% | 99% | 96% |
| Claude Opus 4 | 97% | 98% | 96% |
| Gemini 2.5 Pro | 97% | 98% | 95% |
| Gemini 3 Pro (Preview) | 97% | 98% | 95% |
| Grok 4.1 Fast | 96% | 99% | 95% |
| Z.AI GLM 5 | 96% | 99% | 95% |
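A minimal sketch of the stability metric as described above (median × consistency). The benchmark does not publish how consistency is computed, so this sketch treats it, as an assumption, as one minus the population standard deviation of a model's per-run scores:

```python
from statistics import median, pstdev

def stability(run_scores: list[float]) -> float:
    """Stability = median score x consistency, per the ranking's definition.

    run_scores: per-run scores for one model, each in [0.0, 1.0].
    ASSUMPTION: consistency = 1 - population std dev of the runs;
    the benchmark's exact consistency formula is not given here.
    """
    med = median(run_scores)
    consistency = 1.0 - pstdev(run_scores)
    return med * consistency

# Example: three runs scoring 0.99, 0.98, and 1.00 yield a
# stability of about 0.98, on the same scale as the table above.
print(round(stability([0.99, 0.98, 1.00]), 3))  # 0.982
```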
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability); an illustrative composite formula follows the table.
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Mistral Large 3 | 97% | $0.0025 | 6.6s | 96% |
| Qwen 3.5 Plus (2026-02-15) | 97% | $0.0027 | 8.7s | 96% |
| Grok 4 Fast | 96% | $0.0012 | 7.5s | 94% |
| Ministral 3 8B | 94% | $0.0006 | 2.6s | 92% |
| Gemini 2.5 Flash | 95% | $0.0019 | 2.2s | 93% |
| Mistral Small Creative | 94% | $0.0005 | 3.4s | 92% |
| Ministral 8B | 93% | $0.0004 | 3.0s | 92% |
| Gemini 3 Flash (Preview) | 95% | $0.0025 | 3.3s | 92% |
| Mistral Small 3.2 24B | 93% | $0.0005 | 4.1s | 92% |
| Claude Haiku 4.5 | 95% | $0.0066 | 3.8s | 95% |
| Gemini 2.5 Flash Lite (Reasoning) | 95% | $0.0016 | 11.4s | 93% |
| Mistral Medium 3.1 | 93% | $0.0024 | 4.1s | 92% |
| Z.AI GLM 4.5 | 96% | $0.0024 | 14.6s | 93% |
| Gemini 2.5 Flash Lite | 92% | $0.0004 | 1.9s | 87% |
| Grok 4.1 Fast | 96% | $0.0015 | 22.0s | 95% |
| Gemini 3 Flash (Preview, Reasoning) | 98% | $0.0086 | 14.7s | 96% |
| Mistral Large 2 | 96% | $0.0100 | 6.4s | 94% |
| Ministral 3 14B | 91% | $0.0008 | 4.5s | 89% |
| Minimax M2.5 | 94% | $0.0020 | 13.9s | 92% |
| DeepSeek V3.1 | 96% | $0.0011 | 22.6s | 94% |
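One plausible way to combine the four columns into a single composite. The benchmark's actual weights, caps, and normalization are not published, so the equal weighting and the `cost_cap`/`time_cap` parameters below are assumptions for illustration only:

```python
def composite(score: float, cost_usd: float, time_s: float,
              stability: float,
              cost_cap: float = 0.01, time_cap: float = 30.0) -> float:
    """Equal-weight composite of performance, cost, speed, and stability.

    ASSUMPTIONS: equal weights, linear normalization, and the default
    caps; none of these are confirmed by the benchmark. Cost and time
    are mapped to [0, 1] so that cheaper and faster score higher.
    """
    cost_component = max(0.0, 1.0 - cost_usd / cost_cap)
    speed_component = max(0.0, 1.0 - time_s / time_cap)
    return (score + cost_component + speed_component + stability) / 4.0

# Example with Mistral Large 3's row (97%, $0.0025, 6.6s, 96%):
print(round(composite(0.97, 0.0025, 6.6, 0.96), 3))  # 0.865
```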
Evaluator Medians

| Evaluator | Median |
|---|---|
| Accuracy | 83.4% |
| Precision | 100.0% |
| Recall | 95.9% |
| Structural validity | 100.0% |
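The four evaluators map naturally onto a comparison between the model's extracted entries and a gold codex. A minimal sketch, assuming entries match exactly on their (type, name) pair and that structural validity means the output parses as XML; both are assumptions, not the benchmark's published definitions:

```python
import xml.etree.ElementTree as ET

def parse_entries(xml_text: str) -> set[tuple[str, str]] | None:
    """Return the set of (type, name) entries, or None if the XML is malformed.

    A None result corresponds to failing the structural-validity check.
    """
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return None
    return {
        (entry.get("type", ""), (entry.findtext("name") or "").strip())
        for entry in root.iter("entry")
    }

def precision_recall(pred: set, gold: set) -> tuple[float, float]:
    """Precision and recall of predicted entries against the gold codex."""
    hits = len(pred & gold)
    precision = hits / len(pred) if pred else 0.0
    recall = hits / len(gold) if gold else 0.0
    return precision, recall
```

Accuracy could be derived from the same entry sets, though its exact definition here is likewise unstated.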