Codex Red Herring (False Positive Detection)
Tests whether models correctly report "no violations" when a codex is fully consistent with the prose passage. Models that hallucinate false violations (false positives) fail. Uses a 2×2 matrix of text length × codex size, with bare and detailed-entry variants.
Long text (~1594 words), small codex (11 detailed entries)
Hallucination
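The pass condition described above can be sketched as a small check on the model's reply. This is a hypothetical harness, not the benchmark's actual evaluator: the phrasings it accepts and the "ambiguous responses fail" policy are assumptions for illustration.

```python
import re

def is_false_positive_free(response: str) -> bool:
    """Hypothetical pass check for this test: the codex is fully consistent
    with the passage, so a correct response reports no violations. Any
    claimed violation is a hallucinated false positive and fails the run."""
    normalized = response.strip().lower()
    # Accept explicit "no violations" / "no inconsistencies" phrasings.
    if re.search(r"\bno (?:violations?|inconsistenc(?:y|ies))\b", normalized):
        return True
    # Any enumerated violation (e.g. "Violation 1: ...") is a false positive.
    if re.search(r"\bviolation\b", normalized):
        return False
    # Assumption: ambiguous responses are scored as failures here.
    return False
```

For example, `is_false_positive_free("No violations found.")` passes, while `is_false_positive_free("Violation 1: the date differs")` fails as a hallucinated violation.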
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| GPT-4.1 Nano | 78% | $0.0003 | 1.4s |
| GPT-4o Mini (temp=1) | 100% | $0.0004 | 597ms |
| GPT-4o Mini (temp=0) | 100% | $0.0005 | 670ms |
| GPT-4.1 | 100% | $0.0045 | 728ms |
| ByteDance Seed 1.6 Flash | 100% | $0.0008 | 9.1s |
| Grok 4.1 Fast | 85% | $0.0022 | 20.0s |
| Minimax M2.5 | 73% | $0.0029 | 22.1s |
| Gemini 2.5 Flash Lite (Reasoning) | 78% | $0.0030 | 24.9s |
| Cohere Command R+ (Aug. 2024) | 65% | $0.013 | 2.6s |
| o4 Mini | 93% | $0.0094 | 17.8s |
| GPT-5 Mini | 100% | $0.0060 | 40.1s |
| GPT-5 Nano | 93% | $0.0044 | 1.4m |
| Z.AI GLM 4.5 | 72% | $0.0047 | 42.7s |
| Z.AI GLM 5 | 100% | $0.014 | 1.3m |
| Claude Sonnet 4.6 | 86% | $0.029 | 15.0s |
| o4 Mini High | 100% | $0.022 | 48.0s |
| GPT-5.1 | 100% | $0.028 | 32.0s |
| Z.AI GLM 4.7 Flash | 93% | $0.0038 | 2.5m |
| GPT-5 | 93% | $0.045 | 1.3m |
| Z.AI GLM 4.6 | 69% | $0.023 | 1.8m |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% |
| GPT-5 Mini | 100% | 100% | 100% |
| GPT-5.1 | 100% | 100% | 100% |
| Z.AI GLM 5 | 100% | 100% | 100% |
| o4 Mini High | 100% | 100% | 100% |
| GPT-4.1 | 100% | 100% | 100% |
| GPT-4o Mini (temp=1) | 100% | 100% | 100% |
| GPT-4o Mini (temp=0) | 100% | 100% | 100% |
| ByteDance Seed 1.6 Flash | 100% | 100% | 100% |
| Gemini 3.1 Pro (Preview) | 93% | 55% | 55% |
| GPT-5 | 93% | 55% | 55% |
| Gemini 3 Pro (Preview) | 93% | 55% | 55% |
| o4 Mini | 93% | 55% | 55% |
| Z.AI GLM 4.7 Flash | 93% | 55% | 55% |
| GPT-5 Nano | 93% | 55% | 55% |
| Claude Haiku 4.5 | 46% | 89% | 44% |
| Claude Sonnet 4.6 | 86% | 43% | 43% |
| Grok 4.1 Fast | 85% | 40% | 40% |
| Claude Opus 4.5 | 39% | 95% | 36% |
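The stability metric above is the product of the median score and the consistency, both as fractions. A minimal sketch (note that the Score column shown is not necessarily the median used in the product):

```python
def stability(median_score: float, consistency: float) -> float:
    """Stability as described above: median score times consistency,
    both expressed as fractions in [0, 1]."""
    return median_score * consistency
```

For example, a model with a 100% median score and 55% consistency gets `stability(1.0, 0.55)`, i.e. 55%.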
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| GPT-4o Mini (temp=1) | 100% | $0.0004 | 597ms | 100% |
| GPT-4o Mini (temp=0) | 100% | $0.0005 | 670ms | 100% |
| ByteDance Seed 1.6 Flash | 100% | $0.0008 | 9.1s | 100% |
| GPT-4.1 | 100% | $0.0045 | 728ms | 100% |
| GPT-5 Mini | 100% | $0.0060 | 40.1s | 100% |
| GPT-5.1 | 100% | $0.028 | 32.0s | 100% |
| o4 Mini High | 100% | $0.022 | 48.0s | 100% |
| Z.AI GLM 5 | 100% | $0.014 | 1.3m | 100% |
| o4 Mini | 93% | $0.0094 | 17.8s | 55% |
| GPT-5 Nano | 93% | $0.0044 | 1.4m | 55% |
| Grok 4.1 Fast | 85% | $0.0022 | 20.0s | 40% |
| Z.AI GLM 4.7 Flash | 93% | $0.0038 | 2.5m | 55% |
| Claude Sonnet 4.6 | 86% | $0.029 | 15.0s | 43% |
| GPT-4.1 Nano | 78% | $0.0003 | 1.4s | 29% |
| GPT-5 | 93% | $0.045 | 1.3m | 55% |
| Claude Opus 4.6 (Reasoning) | 100% | $0.144 | 1.3m | 100% |
| Gemini 2.5 Flash Lite (Reasoning) | 78% | $0.0030 | 24.9s | 31% |
| Minimax M2.5 | 73% | $0.0029 | 22.1s | 31% |
| Z.AI GLM 4.5 | 72% | $0.0047 | 42.7s | 28% |
| Claude Sonnet 4.6 (Reasoning) | 100% | $0.150 | 2.2m | 100% |
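The exact weighting behind the composite score is not given here; one plausible shape is an unweighted geometric mean of the two quality fractions and the cost/speed ratios relative to the best model in the set. Everything in this sketch beyond the four named inputs is an assumption.

```python
from math import prod

def composite(score: float, stability: float,
              cost: float, seconds: float,
              best_cost: float, best_seconds: float) -> float:
    """Hypothetical composite along the lines described above (performance,
    cost, speed & stability). Cost and speed are normalized against the
    cheapest/fastest model so that each factor lies in (0, 1]."""
    factors = [score, stability, best_cost / cost, best_seconds / seconds]
    # Geometric mean: a weak factor drags the composite down sharply.
    return prod(factors) ** (1 / len(factors))
```

Under this sketch a model that is best on every axis scores 1.0, and any deficit in score, stability, cost, or speed lowers the composite multiplicatively.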
| Median | Evaluator |
|---|---|
| 32.5% | Correct "no violations" response |
| 38.8% | No hallucinated violations |