Codex Red Herring (False Positive Detection)
Tests whether models correctly report "no violations" when a codex is fully consistent with the prose passage. Models that hallucinate violations where none exist (false positives) fail. Uses a 2×2 matrix of text length × codex size, with bare and detailed-entry variants.
Short text (~524 words), small codex (11 detailed entries)
Hallucination
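The pass condition can be pictured as two simple checks that mirror the evaluators in the summary table at the end of this page ("Correct "no violations" response" and "No hallucinated violations"). The sketch below is illustrative only: the regex patterns, function names, and equal weighting of the two checks are assumptions, not the benchmark's actual harness.

```python
import re

# Phrases that plausibly signal a clean verdict; purely illustrative.
NO_VIOLATION_PATTERNS = [
    r"\bno violations?\b",
    r"\bfully consistent\b",
    r"\bno inconsistenc(?:y|ies)\b",
]

def correct_no_violations_response(answer: str) -> bool:
    """First check: the answer explicitly states the codex is not violated."""
    text = answer.lower()
    return any(re.search(pattern, text) for pattern in NO_VIOLATION_PATTERNS)

def no_hallucinated_violations(claimed_violations: list[str]) -> bool:
    """Second check: the answer lists zero violations (no false positives)."""
    return len(claimed_violations) == 0

def run_score(answer: str, claimed_violations: list[str]) -> float:
    """Score one run as the (assumed) equal-weight average of both checks, in percent."""
    checks = [
        correct_no_violations_response(answer),
        no_hallucinated_violations(claimed_violations),
    ]
    return 100.0 * sum(checks) / len(checks)
```

Under this framing, inventing even a single violation fails the second check outright, which is why confidently wrong answers score poorly on this test.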
Performance Score Distribution (Top 20)
| Model | Score |
|---|---|
| GPT-5 Mini | 100% |
| Z.AI GLM 5 | 100% |
| Claude Sonnet 4.6 | 100% |
| ByteDance Seed 1.6 | 100% |
| o4 Mini High | 100% |
| Claude Opus 4.5 | 100% |
| Grok 4.1 Fast | 100% |
| GPT-4.1 | 100% |
| Grok 4 | 100% |
| Gemini 2.5 Flash (Reasoning) | 100% |
| Gemini 2.5 Flash Lite (Reasoning) | 100% |
| Mistral Large 3 | 100% |
| Mistral Large 2 | 100% |
| Mistral Large | 100% |
| Ministral 3 14B | 100% |
| Ministral 3 8B | 100% |
| Ministral 8B | 100% |
| Claude Haiku 4.5 | 95% |
| GPT-4.1 Mini | 93% |
| GPT-5.1 | 93% |
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Ministral 8B | 100% | $0.0004 | 338ms |
| Mistral Small 3.2 24B | 63% | $0.0004 | 1.7s |
| Ministral 3 8B | 100% | $0.0006 | 339ms |
| GPT-4.1 Mini | 93% | $0.0007 | 1.6s |
| Ministral 3 14B | 100% | $0.0008 | 531ms |
| Arcee AI: Trinity Mini | 93% | $0.0013 | 41.5s |
| ByteDance Seed 1.6 Flash | 84% | $0.0006 | 7.9s |
| Mistral Large 3 | 100% | $0.0020 | 774ms |
| Grok 4.1 Fast | 100% | $0.0011 | 8.6s |
| GPT-4.1 | 100% | $0.0035 | 613ms |
| Grok 4 Fast | 93% | $0.0013 | 13.5s |
| Hermes 3 405B | 74% | $0.0039 | 2.5s |
| Gemini 2.5 Flash Lite (Reasoning) | 100% | $0.0021 | 15.1s |
| Claude Haiku 4.5 | 95% | $0.0044 | 1.3s |
| Gemini 2.5 Flash (Reasoning) | 100% | $0.0060 | 21.5s |
| Mistral Large 2 | 100% | $0.0078 | 650ms |
| Mistral Large | 100% | $0.0078 | 1.0s |
| Minimax M2.5 | 78% | $0.0029 | 22.7s |
| ByteDance Seed 1.6 | 100% | $0.0035 | 29.7s |
| Cohere Command R+ (Aug. 2024) | 82% | $0.010 | 2.1s |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| GPT-5 Mini | 100% | 100% | 100% |
| Z.AI GLM 5 | 100% | 100% | 100% |
| Claude Sonnet 4.6 | 100% | 100% | 100% |
| ByteDance Seed 1.6 | 100% | 100% | 100% |
| o4 Mini High | 100% | 100% | 100% |
| Claude Opus 4.5 | 100% | 100% | 100% |
| Grok 4.1 Fast | 100% | 100% | 100% |
| GPT-4.1 | 100% | 100% | 100% |
| Grok 4 | 100% | 100% | 100% |
| Gemini 2.5 Flash (Reasoning) | 100% | 100% | 100% |
| Gemini 2.5 Flash Lite (Reasoning) | 100% | 100% | 100% |
| Mistral Large 3 | 100% | 100% | 100% |
| Mistral Large 2 | 100% | 100% | 100% |
| Mistral Large | 100% | 100% | 100% |
| Ministral 3 14B | 100% | 100% | 100% |
| Ministral 3 8B | 100% | 100% | 100% |
| Ministral 8B | 100% | 100% | 100% |
| Claude Haiku 4.5 | 95% | 70% | 70% |
| GPT-4.1 Mini | 93% | 59% | 59% |
| GPT-5.1 | 93% | 55% | 55% |
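The stability column follows the formula named in the caption (median × consistency). A minimal sketch, assuming run scores are percentages and taking "consistency" as the share of runs that hit the median score; the benchmark may define consistency differently (e.g. from run-to-run variance).

```python
from statistics import median

def stability(run_scores: list[float]) -> float:
    """Stability = median run score × consistency (consistency definition assumed)."""
    med = median(run_scores)
    # Assumed definition: fraction of runs whose score equals the median.
    consistency = sum(1 for s in run_scores if s == med) / len(run_scores)
    return med * consistency

# Example with hypothetical run scores (not taken from the tables above):
print(stability([100.0, 100.0, 75.0, 100.0, 100.0]))  # 80.0 under this assumed definition
```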
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Ministral 8B | 100% | $0.0004 | 338ms | 100% |
| Ministral 3 8B | 100% | $0.0006 | 339ms | 100% |
| Ministral 3 14B | 100% | $0.0008 | 531ms | 100% |
| Mistral Large 3 | 100% | $0.0020 | 774ms | 100% |
| GPT-4.1 | 100% | $0.0035 | 613ms | 100% |
| Grok 4.1 Fast | 100% | $0.0011 | 8.6s | 100% |
| Gemini 2.5 Flash Lite (Reasoning) | 100% | $0.0021 | 15.1s | 100% |
| Mistral Large 2 | 100% | $0.0078 | 650ms | 100% |
| Mistral Large | 100% | $0.0078 | 1.0s | 100% |
| Claude Sonnet 4.6 | 100% | $0.013 | 1.0s | 100% |
| Gemini 2.5 Flash (Reasoning) | 100% | $0.0060 | 21.5s | 100% |
| ByteDance Seed 1.6 | 100% | $0.0035 | 29.7s | 100% |
| GPT-5 Mini | 100% | $0.0047 | 29.0s | 100% |
| Claude Opus 4.5 | 100% | $0.021 | 2.0s | 100% |
| Z.AI GLM 5 | 100% | $0.012 | 57.7s | 100% |
| o4 Mini High | 100% | $0.023 | 51.7s | 100% |
| Claude Haiku 4.5 | 95% | $0.0044 | 1.3s | 70% |
| GPT-4.1 Mini | 93% | $0.0007 | 1.6s | 59% |
| Grok 4 | 100% | $0.043 | 52.2s | 100% |
| Grok 4 Fast | 93% | $0.0013 | 13.5s | 55% |
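The exact weighting behind the composite score is not stated on this page. Purely as an illustration, here is a sketch assuming equal weights and simple min-max normalization, where cheaper and faster runs earn higher cost and speed sub-scores; all names and the weighting are assumptions.

```python
def composite_score(perf: float, stab: float, cost_usd: float, time_s: float,
                    max_cost: float, max_time: float) -> float:
    """Illustrative composite of performance, cost, speed and stability (all sub-scores in 0..1)."""
    cost_sub = 1.0 - min(cost_usd / max_cost, 1.0)    # cheaper -> closer to 1
    speed_sub = 1.0 - min(time_s / max_time, 1.0)     # faster -> closer to 1
    return 0.25 * (perf + stab + cost_sub + speed_sub)  # assumed equal weights

# Example using figures from the table above (Ministral 8B, normalized against the
# table's maximum cost and latency); not the benchmark's actual composite score:
print(composite_score(perf=1.0, stab=1.0, cost_usd=0.0004, time_s=0.338,
                      max_cost=0.043, max_time=57.7))  # ~0.996
```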
| Median | Evaluator |
|---|---|
| 50.0% | Correct "no violations" response |
| 67.5% | No hallucinated violations |