Codex Red Herring (False Positive Detection)
Tests whether models correctly report "no violations" when a codex is fully consistent with the prose passage. Models that hallucinate false violations (false positives) fail. Uses a 2×2 matrix of text length × codex size, with bare and detailed-entry variants.
Variant: Short text (~524 words), big codex (51 detailed entries)
Category: Hallucination
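The test design above (a 2×2 length-by-size matrix, each cell with bare and detailed-entry variants, scored on whether the model correctly reports no violations) can be sketched as follows. This is a minimal illustration, not the harness itself; the condition names and the all-or-nothing scoring rule are assumptions.

```python
from itertools import product

# Assumed labels for the 2x2 matrix of text length x codex size,
# each run in bare and detailed-entry variants (names hypothetical).
TEXT_LENGTHS = ["short", "long"]
CODEX_SIZES = ["small", "big"]
ENTRY_STYLES = ["bare", "detailed"]

def conditions():
    """Enumerate all eight test conditions."""
    return list(product(TEXT_LENGTHS, CODEX_SIZES, ENTRY_STYLES))

def score_response(reported_violations):
    """The codex is fully consistent with the prose, so the correct
    answer is an empty violation list; any reported violation is a
    hallucinated false positive and fails the run."""
    return 1.0 if not reported_violations else 0.0
```

A model that stays silent about non-existent violations scores 1.0 per run; a single invented violation drops that run to 0.0.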
Performance Score Distribution (Top 20)
| Model | Score |
|---|---|
| GPT-5 Mini | 100% |
| Claude Opus 4.6 | 100% |
| Claude Sonnet 4.6 | 100% |
| ByteDance Seed 1.6 | 100% |
| o4 Mini High | 100% |
| GPT-5.2 | 100% |
| Claude Opus 4.5 | 100% |
| Grok 4.1 Fast | 100% |
| GPT-4.1 | 100% |
| o4 Mini | 100% |
| Grok 4 | 100% |
| Grok 4 Fast | 100% |
| Mistral Large 3 | 100% |
| GPT-5 Nano | 100% |
| Mistral Large 2 | 100% |
| Mistral Large | 100% |
| Ministral 3 8B | 100% |
| Arcee AI: Trinity Mini | 100% |
| Ministral 8B | 100% |
| Z.AI GLM 5 | 93% |
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| Ministral 8B | 100% | $0.0013 | 564ms |
| Arcee AI: Trinity Mini | 100% | $0.0007 | 6.9s |
| Ministral 3 8B | 100% | $0.0020 | 589ms |
| ByteDance Seed 1.6 Flash | 93% | $0.0013 | 7.7s |
| Grok 4 Fast | 100% | $0.0025 | 16.0s |
| Grok 4.1 Fast | 100% | $0.0028 | 7.6s |
| Gemini 2.5 Flash Lite (Reasoning) | 85% | $0.0023 | 14.2s |
| GPT-4.1 | 100% | $0.0086 | 1.0s |
| Mistral Large 3 | 100% | $0.0066 | 1.4s |
| Z.AI GLM 4.6 | 90% | $0.0053 | 12.9s |
| GPT-5 Nano | 100% | $0.0020 | 36.0s |
| ByteDance Seed 1.6 | 100% | $0.0050 | 19.7s |
| Minimax M2.5 | 93% | $0.0049 | 21.4s |
| GPT-5 Mini | 100% | $0.0049 | 30.6s |
| Gemini 2.5 Flash (Reasoning) | 93% | $0.0092 | 11.7s |
| GPT-5.2 | 100% | $0.014 | 11.3s |
| Z.AI GLM 4.5 | 93% | $0.0079 | 42.0s |
| o4 Mini | 100% | $0.012 | 20.0s |
| o4 Mini High | 100% | $0.020 | 33.9s |
| Mistral Large | 100% | $0.026 | 1.6s |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| GPT-5 Mini | 100% | 100% | 100% |
| Claude Opus 4.6 | 100% | 100% | 100% |
| Claude Sonnet 4.6 | 100% | 100% | 100% |
| ByteDance Seed 1.6 | 100% | 100% | 100% |
| o4 Mini High | 100% | 100% | 100% |
| GPT-5.2 | 100% | 100% | 100% |
| Claude Opus 4.5 | 100% | 100% | 100% |
| Grok 4.1 Fast | 100% | 100% | 100% |
| GPT-4.1 | 100% | 100% | 100% |
| o4 Mini | 100% | 100% | 100% |
| Grok 4 | 100% | 100% | 100% |
| Grok 4 Fast | 100% | 100% | 100% |
| Mistral Large 3 | 100% | 100% | 100% |
| GPT-5 Nano | 100% | 100% | 100% |
| Mistral Large 2 | 100% | 100% | 100% |
| Mistral Large | 100% | 100% | 100% |
| Ministral 3 8B | 100% | 100% | 100% |
| Arcee AI: Trinity Mini | 100% | 100% | 100% |
| Ministral 8B | 100% | 100% | 100% |
| Z.AI GLM 4.6 | 90% | 60% | 60% |
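The stability metric above is described as median × consistency. A minimal sketch of that computation, assuming per-run scores in [0, 1] and taking consistency to be the fraction of runs that match the modal score (the exact consistency definition is not given in the source):

```python
from statistics import median

def stability(run_scores):
    """Stability = median run score x consistency.

    Consistency is sketched here as the share of runs equal to the
    most common score; this is an assumption, not the leaderboard's
    documented formula.
    """
    med = median(run_scores)
    modal = max(set(run_scores), key=run_scores.count)
    consistency = run_scores.count(modal) / len(run_scores)
    return med * consistency
```

Under this sketch, a model that scores 100% on every run gets stability 1.0, while one that alternates between perfect and failed runs is penalized on both factors.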
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Ministral 8B | 100% | $0.0013 | 564ms | 100% |
| Ministral 3 8B | 100% | $0.0020 | 589ms | 100% |
| Arcee AI: Trinity Mini | 100% | $0.0007 | 6.9s | 100% |
| Mistral Large 3 | 100% | $0.0066 | 1.4s | 100% |
| Grok 4.1 Fast | 100% | $0.0028 | 7.6s | 100% |
| GPT-4.1 | 100% | $0.0086 | 1.0s | 100% |
| Grok 4 Fast | 100% | $0.0025 | 16.0s | 100% |
| ByteDance Seed 1.6 | 100% | $0.0050 | 19.7s | 100% |
| GPT-5.2 | 100% | $0.014 | 11.3s | 100% |
| o4 Mini | 100% | $0.012 | 20.0s | 100% |
| GPT-5 Mini | 100% | $0.0049 | 30.6s | 100% |
| Mistral Large 2 | 100% | $0.026 | 1.4s | 100% |
| Mistral Large | 100% | $0.026 | 1.6s | 100% |
| GPT-5 Nano | 100% | $0.0020 | 36.0s | 100% |
| Claude Sonnet 4.6 | 100% | $0.043 | 1.3s | 100% |
| o4 Mini High | 100% | $0.020 | 33.9s | 100% |
| Claude Opus 4.5 | 100% | $0.072 | 2.1s | 100% |
| Claude Opus 4.6 | 100% | $0.082 | 9.9s | 100% |
| Grok 4 | 100% | $0.063 | 52.0s | 100% |
| ByteDance Seed 1.6 Flash | 93% | $0.0013 | 7.7s | 55% |
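The composite ranking combines performance, cost, speed, and stability, but the source does not state the weighting or normalization. As a purely illustrative sketch, here is one plausible scheme with equal weights and linear normalization against assumed cost and latency ceilings:

```python
def composite(score, cost_usd, speed_s, stability,
              max_cost=0.10, max_time=60.0):
    """Hypothetical composite: equal-weight average of four terms in
    [0, 1]. The weights, the linear normalization, and the max_cost /
    max_time ceilings are all assumptions for illustration, not the
    leaderboard's actual formula.
    """
    cost_term = 1.0 - min(cost_usd / max_cost, 1.0)    # cheaper is better
    speed_term = 1.0 - min(speed_s / max_time, 1.0)    # faster is better
    return (score + cost_term + speed_term + stability) / 4.0
```

Any such scheme rewards the pattern visible in the table: small, cheap, fast models that still score 100% with full stability rank at the top.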
| Median | Evaluator |
|---|---|
| 47.5% | Correct "no violations" response |
| 56.1% | No hallucinated violations |