Codex Red Herring (False Positive Detection)
Tests whether models correctly report "no violations" when a codex is fully consistent with the prose passage. Models that hallucinate false violations (false positives) fail. Uses a 2×2 matrix of text length × codex size, with bare and detailed-entry variants.
Long text (~1594 words), big codex (51 entries)
Hallucination
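The pass/fail logic described above can be sketched in a few lines. This is an illustrative harness, not the benchmark's actual code: the response format (a list of violations extracted from the model's answer) and the function names are assumptions. Because the codex is fully consistent with the passage, the only correct answer is an empty list; any reported violation is a hallucinated false positive.

```python
# Hypothetical scorer for the red-herring task. The codex never
# conflicts with the prose, so a correct model reports zero violations.
def score_response(reported_violations: list[str]) -> float:
    """Return 1.0 for a correct 'no violations' answer, else 0.0."""
    return 1.0 if len(reported_violations) == 0 else 0.0

def run_trials(responses: list[list[str]]) -> float:
    """Fraction of trials in which the model hallucinated nothing."""
    return sum(score_response(r) for r in responses) / len(responses)
```

Under this scoring, a model's percentage in the tables below would be the fraction of runs where it resisted inventing a violation.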
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| GPT-4.1 Nano | 93% | $0.0003 | 2.2s |
| Ministral 3 8B | 93% | $0.0006 | 1.9s |
| ByteDance Seed 1.6 Flash | 93% | $0.0010 | 14.8s |
| Gemini 2.5 Flash Lite (Reasoning) | 93% | $0.0033 | 22.0s |
| Grok 4.1 Fast | 92% | $0.0028 | 27.0s |
| Gemini 2.5 Flash (Reasoning) | 85% | $0.012 | 18.1s |
| GPT-5.2 | 92% | $0.015 | 17.4s |
| GPT-5 Nano | 100% | $0.0038 | 1.3m |
| Aion 2.0 | 100% | $0.0086 | 1.3m |
| o4 Mini | 85% | $0.021 | 41.4s |
| GPT-5.1 | 100% | $0.029 | 28.6s |
| Z.AI GLM 5 | 83% | $0.019 | 1.7m |
| Claude Opus 4.6 | 94% | $0.048 | 16.8s |
| MoonshotAI: Kimi K2.5 | 77% | $0.019 | 2.8m |
| o4 Mini High | 85% | $0.045 | 1.5m |
| GPT-5 | 92% | $0.074 | 2.5m |
| Z.AI GLM 4.7 Flash | 77% | $0.0058 | 4.4m |
| Claude Opus 4.6 (Reasoning) | 100% | $0.143 | 1.3m |
| Minimax M2.5 | 59% | $0.0035 | 36.5s |
| Arcee AI: Trinity Mini | 60% | $0.0004 | 10.5s |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% |
| GPT-5.1 | 100% | 100% | 100% |
| Aion 2.0 | 100% | 100% | 100% |
| GPT-5 Nano | 100% | 100% | 100% |
| Claude Opus 4.6 | 94% | 61% | 61% |
| Ministral 3 8B | 93% | 56% | 56% |
| GPT-4.1 Nano | 93% | 55% | 55% |
| Gemini 2.5 Flash Lite (Reasoning) | 93% | 55% | 55% |
| ByteDance Seed 1.6 Flash | 93% | 55% | 55% |
| GPT-5 | 92% | 50% | 50% |
| GPT-5.2 | 92% | 50% | 50% |
| Grok 4.1 Fast | 92% | 50% | 50% |
| o4 Mini High | 85% | 40% | 40% |
| o4 Mini | 85% | 40% | 40% |
| Gemini 2.5 Flash (Reasoning) | 85% | 40% | 40% |
| Claude Sonnet 4.6 (Reasoning) | 83% | 33% | 33% |
| Z.AI GLM 5 | 83% | 33% | 33% |
| Claude Opus 4.5 | 33% | 72% | 30% |
| MoonshotAI: Kimi K2.5 | 77% | 29% | 29% |
| Z.AI GLM 4.7 Flash | 77% | 29% | 29% |
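The stability ranking above is described as median × consistency. A minimal sketch of that product is below; the exact consistency definition is not given here, so the variant used (1 minus the score range across runs, for scores in [0, 1]) is an illustrative assumption.

```python
import statistics

# Sketch of stability = median score x consistency. "Consistency" is
# assumed to penalize run-to-run spread via the normalized score range;
# the leaderboard's actual formula may differ.
def stability(run_scores: list[float]) -> float:
    median = statistics.median(run_scores)
    consistency = 1.0 - (max(run_scores) - min(run_scores))
    return median * consistency
```

Under any such product, a model that scores 100% on every run gets stability 100%, matching the perfect rows above, while run-to-run variance drags stability down faster than the median alone would suggest.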
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| GPT-5.1 | 100% | $0.029 | 28.6s | 100% |
| GPT-5 Nano | 100% | $0.0038 | 1.3m | 100% |
| Aion 2.0 | 100% | $0.0086 | 1.3m | 100% |
| Ministral 3 8B | 93% | $0.0006 | 1.9s | 56% |
| GPT-4.1 Nano | 93% | $0.0003 | 2.2s | 55% |
| ByteDance Seed 1.6 Flash | 93% | $0.0010 | 14.8s | 55% |
| Gemini 2.5 Flash Lite (Reasoning) | 93% | $0.0033 | 22.0s | 55% |
| Grok 4.1 Fast | 92% | $0.0028 | 27.0s | 50% |
| GPT-5.2 | 92% | $0.015 | 17.4s | 50% |
| Claude Opus 4.6 | 94% | $0.048 | 16.8s | 61% |
| Gemini 2.5 Flash (Reasoning) | 85% | $0.012 | 18.1s | 40% |
| Claude Opus 4.6 (Reasoning) | 100% | $0.143 | 1.3m | 100% |
| o4 Mini | 85% | $0.021 | 41.4s | 40% |
| o4 Mini High | 85% | $0.045 | 1.5m | 40% |
| Z.AI GLM 5 | 83% | $0.019 | 1.7m | 33% |
| Arcee AI: Trinity Mini | 60% | $0.0004 | 10.5s | 16% |
| GPT-5 | 92% | $0.074 | 2.5m | 50% |
| Minimax M2.5 | 59% | $0.0035 | 36.5s | 11% |
| Claude 3 Haiku | 29% | $0.0014 | 3.1s | 28% |
| GPT-4.1 | 45% | $0.0053 | 2.7s | 14% |
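The composite ranking combines performance, cost, speed, and stability, but the weighting is not published here. The sketch below is therefore a generic equal-weight geometric mean with assumed normalization caps (`max_cost`, `max_time` are invented parameters); cost and time are inverted so that cheaper and faster score higher.

```python
# Illustrative composite score, NOT the leaderboard's actual formula.
# score and stability are fractions in [0, 1]; cost in USD, time in s.
def composite(score: float, stability: float, cost_usd: float,
              time_s: float, max_cost: float = 0.15,
              max_time: float = 300.0) -> float:
    cost_term = 1.0 - min(cost_usd / max_cost, 1.0)    # cheaper is better
    speed_term = 1.0 - min(time_s / max_time, 1.0)     # faster is better
    # Equal-weight geometric mean: any dimension at zero zeroes the total.
    return (score * stability * cost_term * speed_term) ** 0.25
```

A geometric mean is one plausible choice because it rewards balanced models and punishes a collapse in any single dimension, which is consistent with cheap-but-unstable models ranking below slower all-rounders in the table above.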
| Median | Evaluator |
|---|---|
| 27.5% | Correct "no violations" response |
| 27.8% | No hallucinated violations |