Codex Red Herring (False Positive Detection)
Tests whether models correctly report "no violations" when a codex is fully consistent with the prose passage. Models that hallucinate violations (false positives) fail. Uses a 2×2 matrix of text length × codex size, with bare and detailed-entry codex variants.
Long text (~1594 words), big codex (51 detailed entries)
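The 2×2 design described above (crossed with the bare/detailed entry variants) can be sketched as a condition grid. This is an illustrative sketch, not the benchmark's actual harness; all names and the grid structure beyond the 2×2×2 crossing are assumptions.

```python
from itertools import product

# Hypothetical sketch of the benchmark's condition grid: two text lengths
# crossed with two codex sizes, each codex written in a bare or
# detailed-entry style. This page shows the long-text / big-codex cell.
TEXT_LENGTHS = ["short", "long"]       # prose passage length
CODEX_SIZES = ["small", "big"]         # number of codex entries
ENTRY_STYLES = ["bare", "detailed"]    # how each codex entry is written

conditions = [
    {"text": t, "codex": c, "entries": e}
    for t, c, e in product(TEXT_LENGTHS, CODEX_SIZES, ENTRY_STYLES)
]

# The expected answer is identical in every condition: the codex is fully
# consistent with the passage, so a correct model reports no violations.
for cond in conditions:
    cond["expected"] = "no violations"

print(len(conditions))  # 2 text lengths x 2 codex sizes x 2 entry styles = 8
```

Because every cell has the same correct answer, any reported violation in any cell is a false positive by construction.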
Performance Score Distribution (Top 20)
| Model | Score |
|---|---|
| Claude Opus 4.6 (Reasoning) | 100% |
| Z.AI GLM 5 Turbo | 100% |
| GPT-5 Mini | 100% |
| GPT-5.1 | 100% |
| Claude Opus 4.6 | 100% |
| GPT-5 | 100% |
| Grok 4.20 (Beta, Reasoning) | 100% |
| Z.AI GLM 5 | 100% |
| o4 Mini High | 100% |
| Grok 4.1 Fast | 100% |
| Aion 2.0 | 100% |
| GPT-4.1 | 100% |
| o4 Mini | 100% |
| Stealth: Hunter Alpha | 100% |
| ByteDance Seed 2.0 Mini | 100% |
| Stealth: Healer Alpha | 100% |
| GPT-5.4 Mini (Reasoning, Low) | 100% |
| ByteDance Seed 2.0 Lite | 100% |
| Nemotron 3 Super | 100% |
| GPT-5.4 Nano (Reasoning) | 100% |
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| GPT-5.4 Nano | 78% | $0.0008 | 1.0s |
| Ministral 8B | 100% | $0.0014 | 525ms |
| Ministral 3 8B | 100% | $0.0022 | 678ms |
| Ministral 3 14B | 93% | $0.0029 | 2.7s |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0014 | 6.3s |
| Arcee AI: Trinity Mini | 90% | $0.0018 | 45.5s |
| GPT-4.1 | 100% | $0.0097 | 972ms |
| Grok 4.1 Fast | 100% | $0.0031 | 13.4s |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0073 | 5.9s |
| Inception Mercury 2 | 85% | $0.0056 | 7.7s |
| ByteDance Seed 1.6 Flash | 84% | $0.0016 | 13.8s |
| Grok 4 Fast | 93% | $0.0032 | 25.0s |
| Inception Mercury | 100% | $0.0007 | 20.5s |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0035 | 40.9s |
| Stealth: Healer Alpha | 100% | $0.0000 | 27.7s |
| Gemini 2.5 Flash Lite (Reasoning) | 93% | $0.0038 | 27.3s |
| ByteDance Seed 2.0 Lite | 100% | $0.0056 | 25.4s |
| Gemini 2.5 Flash (Reasoning) | 85% | $0.015 | 19.0s |
| GPT-5.4 (Reasoning, Low) | 92% | $0.019 | 14.5s |
| GPT-5.2 | 93% | $0.021 | 20.9s |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% |
| GPT-5 Mini | 100% | 100% | 100% |
| GPT-5.1 | 100% | 100% | 100% |
| Claude Opus 4.6 | 100% | 100% | 100% |
| GPT-5 | 100% | 100% | 100% |
| Grok 4.20 (Beta, Reasoning) | 100% | 100% | 100% |
| Z.AI GLM 5 | 100% | 100% | 100% |
| o4 Mini High | 100% | 100% | 100% |
| Grok 4.1 Fast | 100% | 100% | 100% |
| Aion 2.0 | 100% | 100% | 100% |
| GPT-4.1 | 100% | 100% | 100% |
| o4 Mini | 100% | 100% | 100% |
| Stealth: Hunter Alpha | 100% | 100% | 100% |
| ByteDance Seed 2.0 Mini | 100% | 100% | 100% |
| Stealth: Healer Alpha | 100% | 100% | 100% |
| GPT-5.4 Mini (Reasoning, Low) | 100% | 100% | 100% |
| ByteDance Seed 2.0 Lite | 100% | 100% | 100% |
| Nemotron 3 Super | 100% | 100% | 100% |
| GPT-5.4 Nano (Reasoning) | 100% | 100% | 100% |
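The stability ranking above multiplies the median score by a consistency measure. A minimal sketch of that combination, assuming per-run scores in [0, 1] and treating consistency as one minus the score range across runs (the page does not define the exact consistency measure, so that part is an assumption):

```python
from statistics import median

def stability(scores):
    """Sketch of stability = median score x consistency.

    `scores` are per-run scores in [0, 1]. Consistency is modeled here as
    1 minus the score range across runs -- an assumption; the benchmark's
    exact consistency definition is not given on this page.
    """
    med = median(scores)
    consistency = 1.0 - (max(scores) - min(scores))
    return med * consistency

# A model that scores 100% on every run gets the maximum stability of 1.0.
print(stability([1.0, 1.0, 1.0]))  # 1.0
```

Under this sketch, a model alternating between 100% and 50% would score stability 0.375 despite a 75% mean, which is the point of the metric: erratic runs are penalized even when the typical score is high.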
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| Ministral 8B | 100% | $0.0014 | 525ms | 100% |
| Ministral 3 8B | 100% | $0.0022 | 678ms | 100% |
| GPT-5.4 Nano (Reasoning, Low) | 100% | $0.0014 | 6.3s | 100% |
| GPT-4.1 | 100% | $0.0097 | 972ms | 100% |
| GPT-5.4 Mini (Reasoning, Low) | 100% | $0.0073 | 5.9s | 100% |
| Grok 4.1 Fast | 100% | $0.0031 | 13.4s | 100% |
| Inception Mercury | 100% | $0.0007 | 20.5s | 100% |
| Stealth: Healer Alpha | 100% | $0.0000 | 27.7s | 100% |
| ByteDance Seed 2.0 Lite | 100% | $0.0056 | 25.4s | 100% |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0035 | 40.9s | 100% |
| Z.AI GLM 5 Turbo | 100% | $0.016 | 33.5s | 100% |
| Grok 4.20 (Beta, Reasoning) | 100% | $0.033 | 18.8s | 100% |
| GPT-5 Mini | 100% | $0.0090 | 53.0s | 100% |
| Stealth: Hunter Alpha | 100% | $0.0000 | 1.1m | 100% |
| o4 Mini | 100% | $0.026 | 43.6s | 100% |
| GPT-5.1 | 100% | $0.051 | 44.5s | 100% |
| Aion 2.0 | 100% | $0.017 | 1.7m | 100% |
| o4 Mini High | 100% | $0.048 | 1.5m | 100% |
| Nemotron 3 Super | 100% | $0.0000 | 2.6m | 100% |
| Claude Opus 4.6 | 100% | $0.104 | 19.4s | 100% |
| Median | Evaluator |
|---|---|
| 50.0% | Correct "no violations" response |
| 57.5% | No hallucinated violations |