Codex Red Herring (False Positive Detection)
Tests whether models correctly report "no violations" when a codex is fully consistent with the prose passage. Models that hallucinate false violations (false positives) fail. Uses a 2×2 matrix of text length × codex size, with bare and detailed-entry variants.
Short text (~524 words), big codex (51 entries)
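The task reduces to a single adversarial setup: the model sees a codex whose every entry is consistent with the passage and must resist inventing violations. A minimal sketch of the harness follows; the prompt wording, entry numbering, and expected reply are all assumptions, not the benchmark's actual template:

```python
# A minimal sketch of the red-herring setup. The prompt wording, entry
# numbering, and expected reply are assumptions, not the benchmark's
# actual template.

def build_red_herring_prompt(passage: str, codex_entries: list[str]) -> str:
    codex = "\n".join(f"{i + 1}. {entry}" for i, entry in enumerate(codex_entries))
    return (
        f"Codex ({len(codex_entries)} entries):\n{codex}\n\n"
        f"Passage:\n{passage}\n\n"
        "List every codex entry the passage violates. "
        "If it violates none, reply exactly: No violations."
    )
```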
Performance Score Distribution (Top 20)
Price-Performance Score Distribution (Top 20)
| Model | Score | Cost | Time |
|---|---|---|---|
| LFM2 24B | 100% | $0.0001 | 2.4s |
| Ministral 8B | 68% | $0.0003 | 3.4s |
| Ministral 3 8B | 93% | $0.0004 | 1.2s |
| GPT-5.4 Nano | 73% | $0.0004 | 1.4s |
| Inception Mercury | 93% | $0.0002 | 5.4s |
| GPT-5.4 Nano (Reasoning, Low) | 90% | $0.0006 | 3.2s |
| GPT-4.1 | 100% | $0.0020 | 0.7s |
| ByteDance Seed 1.6 Flash | 100% | $0.0005 | 6.7s |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0010 | 5.1s |
| Arcee AI: Trinity Mini | 83% | $0.0003 | 15.8s |
| Grok 4.1 Fast | 100% | $0.0012 | 9.6s |
| Gemini 2.5 Flash Lite (Reasoning) | 100% | $0.0011 | 6.6s |
| Hermes 3 405B | 75% | $0.0026 | 1.7s |
| Qwen 3 32B | 93% | $0.0004 | 10.6s |
| Inception Mercury 2 | 100% | $0.0024 | 3.6s |
| GPT-5.4 Mini (Reasoning, Low) | 85% | $0.0023 | 3.5s |
| Mistral Small 4 (Reasoning) | 93% | $0.0015 | 12.6s |
| Z.AI GLM 5 Turbo | 100% | $0.0034 | 7.9s |
| ByteDance Seed 1.6 | 100% | $0.0021 | 16.2s |
| Gemini 2.5 Flash (Reasoning) | 93% | $0.0047 | 7.7s |
Most Stable Models (Top 20)
Ranked by stability (median × consistency).
| Model | Score | Consistency | Stability |
|---|---|---|---|
| Claude Opus 4.6 (Reasoning) | 100% | 100% | 100% |
| Gemini 3.1 Pro (Preview) | 100% | 100% | 100% |
| Z.AI GLM 5 Turbo | 100% | 100% | 100% |
| Claude Sonnet 4.6 (Reasoning) | 100% | 100% | 100% |
| GPT-5 Mini | 100% | 100% | 100% |
| Claude Opus 4.6 | 100% | 100% | 100% |
| ByteDance Seed 1.6 | 100% | 100% | 100% |
| o4 Mini High | 100% | 100% | 100% |
| Grok 4.1 Fast | 100% | 100% | 100% |
| GPT-4.1 | 100% | 100% | 100% |
| o4 Mini | 100% | 100% | 100% |
| Gemini 2.5 Flash Lite (Reasoning) | 100% | 100% | 100% |
| Nemotron 3 Super | 100% | 100% | 100% |
| Inception Mercury 2 | 100% | 100% | 100% |
| GPT-5.4 Nano (Reasoning) | 100% | 100% | 100% |
| ByteDance Seed 1.6 Flash | 100% | 100% | 100% |
| LFM2 24B | 100% | 100% | 100% |
| Ministral 3 8B | 93% | 56% | 56% |
| GPT-5.1 | 93% | 55% | 55% |
| GPT-5.4 Mini (Reasoning) | 93% | 55% | 55% |
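Stability above is the product of the median score and run-to-run consistency. A minimal sketch of that computation, assuming consistency is the fraction of repeated runs landing on the modal score (the leaderboard's exact definition may differ):

```python
import statistics

def stability(run_scores: list[float]) -> float:
    # Stability = median score x run-to-run consistency. Consistency is
    # assumed here to be the share of runs hitting the modal score; the
    # leaderboard's exact definition may differ.
    median = statistics.median(run_scores)
    modal = statistics.mode(run_scores)
    consistency = run_scores.count(modal) / len(run_scores)
    return median * consistency

# Five identical perfect runs -> 1.0; one deviating run drags consistency.
assert stability([1.0, 1.0, 1.0, 1.0, 1.0]) == 1.0
```

Under this assumed definition, a model that hits 93% in most runs but wanders in the rest loses consistency rather than median, which matches the pattern in the bottom three rows above.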
Top Overall Models (Top 20)
Ranked by composite score (performance, cost, speed & stability).
| Model | Score | Cost | Speed | Stability |
|---|---|---|---|---|
| LFM2 24B | 100% | $0.0001 | 2.4s | 100% |
| GPT-4.1 | 100% | $0.0020 | 0.7s | 100% |
| GPT-5.4 Nano (Reasoning) | 100% | $0.0010 | 5.1s | 100% |
| ByteDance Seed 1.6 Flash | 100% | $0.0005 | 6.7s | 100% |
| Gemini 2.5 Flash Lite (Reasoning) | 100% | $0.0011 | 6.6s | 100% |
| Inception Mercury 2 | 100% | $0.0024 | 3.6s | 100% |
| Grok 4.1 Fast | 100% | $0.0012 | 9.6s | 100% |
| Z.AI GLM 5 Turbo | 100% | $0.0034 | 7.9s | 100% |
| ByteDance Seed 1.6 | 100% | $0.0021 | 16.2s | 100% |
| GPT-5 Mini | 100% | $0.0044 | 28.8s | 100% |
| o4 Mini | 100% | $0.010 | 20.6s | 100% |
| Nemotron 3 Super | 100% | $0.0000 | 51.1s | 100% |
| o4 Mini High | 100% | $0.020 | 41.1s | 100% |
| Claude Opus 4.6 | 100% | $0.031 | 12.1s | 100% |
| Claude Opus 4.6 (Reasoning) | 100% | $0.041 | 24.0s | 100% |
| Gemini 3.1 Pro (Preview) | 100% | $0.045 | 32.9s | 100% |
| Claude Sonnet 4.6 (Reasoning) | 100% | $0.044 | 35.6s | 100% |
| Ministral 3 8B | 93% | $0.0004 | 1.2s | 56% |
| Inception Mercury | 93% | $0.0002 | 5.4s | 55% |
| Qwen 3 32B | 93% | $0.0004 | 10.6s | 55% |
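The composite ranking blends all four columns into one number. The weighting isn't published on this page; the sketch below assumes equal weights and arbitrary linear normalization bounds for cost and speed:

```python
def composite_score(score: float, cost_usd: float, seconds: float,
                    stability: float,
                    max_cost: float = 0.05, max_seconds: float = 60.0) -> float:
    # Equal-weight blend of four components, each mapped onto [0, 1].
    # The weights and normalization bounds (max_cost, max_seconds) are
    # assumptions; the leaderboard's actual formula is not shown here.
    price_component = max(0.0, 1.0 - cost_usd / max_cost)
    speed_component = max(0.0, 1.0 - seconds / max_seconds)
    return (score + price_component + speed_component + stability) / 4.0
```

Plugging in LFM2 24B's row (100%, $0.0001, 2.4s, 100%) gives roughly 0.99 under these assumed bounds, consistent with it leading the table.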
| Median | Evaluator | Top 3 | Flop 3 |
|---|---|---|---|
| 45.0% | Correct "no violations" response | | |
| 59.4% | No hallucinated violations | | |
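Both evaluators reduce to checks on the model's reply: one rewards an explicit "no violations" verdict, the other penalizes any cited violation, which is a false positive by construction. A hedged sketch, assuming numbered entries and loose string matching (the real parsers are presumably stricter):

```python
import re

def correct_no_violations(response: str) -> bool:
    # Evaluator 1 ("Correct 'no violations' response"): did the model
    # explicitly affirm consistency? Assumed matching rule; the
    # benchmark's parser may be stricter.
    return "no violations" in response.strip().lower()

def no_hallucinated_violations(response: str) -> bool:
    # Evaluator 2 ("No hallucinated violations"): every codex entry is
    # consistent with the passage, so any cited violation is a false
    # positive. Assumes violations would be reported as "entry N";
    # real parsing would be more robust.
    return re.search(r"entry\s+\d+", response, re.IGNORECASE) is None
```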