Codex Red Herring (False Positive Detection)
Tests whether models correctly report "no violations" when a codex is fully consistent with the prose passage. Models that hallucinate false violations (false positives) fail. Uses a 2×2 matrix of text length × codex size, with bare and detailed-entry variants.
Short text (~524 words), small codex (11 entries)
Hallucination
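Since every codex in this benchmark is fully consistent with its passage, scoring reduces to checking that the model reported no violations. A minimal sketch in Python, using a hypothetical phrase-based heuristic rather than the benchmark's actual evaluator:

```python
def is_false_positive(response: str) -> bool:
    """Return True if a response claims a violation on a consistent codex.

    The phrase list below is a hypothetical heuristic for illustration
    only: any response that does not clearly report a clean result is
    treated as having hallucinated a violation.
    """
    normalized = response.lower()
    clean_phrases = ("no violations", "no inconsistencies", "fully consistent")
    return not any(phrase in normalized for phrase in clean_phrases)
```

A correct "no violations" answer passes; any fabricated violation counts as a false positive and fails.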
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Arcee AI: Trinity Mini | 81% | $0.0012 | 36.6s |
| ByteDance Seed 1.6 Flash | 100% | $0.0004 | 5.3s |
| Grok 4 Fast | 77% | $0.0008 | 8.7s |
| Gemini 2.5 Flash Lite (Reasoning) | 93% | $0.0012 | 8.6s |
| Grok 4.1 Fast | 100% | $0.0007 | 7.5s |
| Llama 3.1 Nemotron 70B | 67% | $0.0021 | 9.0s |
| Cohere Command R+ (Aug. 2024) | 65% | $0.0046 | 2.5s |
| Minimax M2.5 | 93% | $0.0015 | 12.4s |
| ByteDance Seed 1.6 | 100% | $0.0017 | 16.9s |
| Gemini 2.5 Flash (Reasoning) | 100% | $0.0051 | 8.5s |
| Z.AI GLM 4.5 | 78% | $0.0018 | 13.0s |
| GPT-5.2 | 76% | $0.0091 | 10.8s |
| Z.AI GLM 4.6 | 93% | $0.0029 | 18.0s |
| Z.AI GLM 4.7 Flash | 100% | $0.0012 | 47.1s |
| GPT-5 Mini | 100% | $0.0044 | 29.1s |
| o4 Mini | 100% | $0.0097 | 18.9s |
| GPT-5.1 | 93% | $0.018 | 22.3s |
| Aion 2.0 | 78% | $0.0054 | 59.6s |
| GPT-5 Nano | 85% | $0.0036 | 1.2m |
| Claude Opus 4.6 | 100% | $0.022 | 9.7s |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% |
| GPT-5 Mini | 100% | 100% | 100% |
| Claude Opus 4.6 | 100% | 100% | 100% |
| ByteDance Seed 1.6 | 100% | 100% | 100% |
| Grok 4.1 Fast | 100% | 100% | 100% |
| o4 Mini | 100% | 100% | 100% |
| Gemini 2.5 Flash (Reasoning) | 100% | 100% | 100% |
| Z.AI GLM 4.7 Flash | 100% | 100% | 100% |
| ByteDance Seed 1.6 Flash | 100% | 100% | 100% |
| Z.AI GLM 4.6 | 93% | 56% | 56% |
| GPT-5.1 | 93% | 55% | 55% |
| Z.AI GLM 5 | 93% | 55% | 55% |
| o4 Mini High | 93% | 55% | 55% |
| Minimax M2.5 | 93% | 55% | 55% |
| Gemini 2.5 Flash Lite (Reasoning) | 93% | 55% | 55% |
| Claude Sonnet 4.6 (Reasoning) | 85% | 40% | 40% |
| GPT-5 Nano | 85% | 40% | 40% |
| Z.AI GLM 4.5 | 78% | 32% | 32% |
| Aion 2.0 | 78% | 31% | 31% |
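The stability metric in this ranking is stated as median × consistency. A minimal sketch, assuming scores are fractions in [0, 1] and taking consistency as a given input (its own derivation is not specified here):

```python
from statistics import median

def stability(run_scores: list[float], consistency: float) -> float:
    """Stability = median run score * consistency.

    'consistency' is taken as given; how it is derived from the runs
    is not specified in this leaderboard.
    """
    return median(run_scores) * consistency
```

One consequence of the formula: a model whose median run is perfect ends up with stability equal to its consistency term, even if its mean score is below 100%.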
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| ByteDance Seed 1.6 Flash | 100% | $0.0004 | 5.3s | 100% |
| Grok 4.1 Fast | 100% | $0.0007 | 7.5s | 100% |
| Gemini 2.5 Flash (Reasoning) | 100% | $0.0051 | 8.5s | 100% |
| ByteDance Seed 1.6 | 100% | $0.0017 | 16.9s | 100% |
| GPT-5 Mini | 100% | $0.0044 | 29.1s | 100% |
| o4 Mini | 100% | $0.0097 | 18.9s | 100% |
| Z.AI GLM 4.7 Flash | 100% | $0.0012 | 47.1s | 100% |
| Claude Opus 4.6 | 100% | $0.022 | 9.7s | 100% |
| Claude Opus 4.6 (Reasoning) | 100% | $0.033 | 20.7s | 100% |
| Gemini 3.1 Pro (Preview) | 100% | $0.039 | 27.8s | 100% |
| Gemini 2.5 Flash Lite (Reasoning) | 93% | $0.0012 | 8.6s | 55% |
| Minimax M2.5 | 93% | $0.0015 | 12.4s | 55% |
| Z.AI GLM 4.6 | 93% | $0.0029 | 18.0s | 56% |
| GPT-5.1 | 93% | $0.018 | 22.3s | 55% |
| o4 Mini High | 93% | $0.019 | 35.0s | 55% |
| Z.AI GLM 5 | 93% | $0.013 | 1.4m | 55% |
| Z.AI GLM 4.5 | 78% | $0.0018 | 13.0s | 32% |
| Grok 4 Fast | 77% | $0.0008 | 8.7s | 29% |
| GPT-5 Nano | 85% | $0.0036 | 1.2m | 40% |
| GPT-5.2 | 76% | $0.0091 | 10.8s | 27% |
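The composite ranking combines performance, cost, speed, and stability, but the exact weighting is not published here. As one hypothetical illustration (equal weights, with cost and time inverted and normalized against the slowest/most expensive entry), not the leaderboard's actual formula:

```python
def composite_score(performance: float, stability: float,
                    cost_usd: float, time_s: float,
                    max_cost: float, max_time: float) -> float:
    """Hypothetical equal-weight composite; illustration only.

    Cost and time are inverted so that cheaper and faster map to
    higher scores, then averaged with performance and stability.
    All inputs except cost/time are fractions in [0, 1].
    """
    cheapness = 1.0 - cost_usd / max_cost  # 1.0 = free, 0.0 = most expensive
    speed = 1.0 - time_s / max_time        # 1.0 = instant, 0.0 = slowest
    return (performance + stability + cheapness + speed) / 4.0
```

Under any such scheme, a model that is perfect on every axis scores 1.0, which matches the intuition that the top rows above dominate on all four columns.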
| Median | Evaluator |
|---|---|
| 42.5% | Correct "no violations" response |
| 50.0% | No hallucinated violations |