Output Corruption

Output Corruption is a subcategory of the Hallucination category. 118 models scored.

Model Leaderboard

All 118 models, ranked by their Output Corruption subcategory score; the parent-category (Hallucination) and Overall scores are shown for comparison.

| # | Model | Output Corruption | Hallucination | Overall |
|---|-------|-------------------|---------------|---------|
| 1 | Claude Opus 4.6 (Reasoning) | 100.00% | 98.13% | 95.02% |
| 2 | Gemini 3.1 Pro (Preview) | 100.00% | 89.06% | 94.37% |
| 3 | Claude Sonnet 4.6 (Reasoning) | 100.00% | 93.96% | 93.66% |
| 4 | GPT-5.4 (Reasoning) | 100.00% | 90.43% | 93.24% |
| 5 | GPT-5 Mini | 100.00% | 97.71% | 92.62% |
| 6 | GPT-5.1 | 100.00% | 98.44% | 92.54% |
| 7 | Claude Opus 4.6 | 100.00% | 93.60% | 92.35% |
| 8 | GPT-5 | 100.00% | 91.84% | 91.93% |
| 9 | Qwen 3.5 122B | 100.00% | 86.58% | 91.53% |
| 10 | Grok 4.20 (Beta, Reasoning) | 100.00% | 96.28% | 91.49% |
| 11 | GPT-5.4 (Reasoning, Low) | 100.00% | 92.29% | 91.41% |
| 12 | Claude Sonnet 4.6 | 100.00% | 89.99% | 91.15% |
| 13 | MoonshotAI: Kimi K2.5 | 100.00% | 88.02% | 91.04% |
| 14 | Qwen 3.5 27B | 100.00% | 86.58% | 90.85% |
| 15 | ByteDance Seed 1.6 | 100.00% | 95.14% | 90.70% |
| 16 | o4 Mini High | 100.00% | 99.06% | 90.29% |
| 17 | GPT-5.2 | 100.00% | 95.09% | 90.26% |
| 18 | Claude Opus 4.5 | 100.00% | 82.06% | 89.69% |
| 19 | Aion 2.0 | 100.00% | 93.57% | 89.21% |
| 20 | Gemini 3 Pro (Preview) | 100.00% | 88.23% | 88.79% |
| 21 | Claude Sonnet 4 | 100.00% | 80.08% | 88.72% |
| 22 | Gemini 2.5 Pro | 100.00% | 86.11% | 88.53% |
| 23 | o4 Mini | 100.00% | 98.75% | 88.35% |
| 24 | Grok 4 | 100.00% | 89.45% | 88.12% |
| 25 | Claude Sonnet 4.5 | 100.00% | 75.57% | 88.03% |
| 26 | Qwen 3.5 35B | 100.00% | 80.87% | 88.00% |
| 27 | Claude Opus 4 | 100.00% | 75.68% | 87.69% |
| 28 | Stealth: Hunter Alpha | 100.00% | 90.78% | 87.34% |
| 29 | ByteDance Seed 2.0 Mini | 100.00% | 91.33% | 86.91% |
| 30 | Grok 4 Fast | 100.00% | 91.09% | 86.15% |
| 31 | Mistral Large 3 | 100.00% | 78.17% | 85.43% |
| 32 | GPT-4o, May 13th (temp=0) | 100.00% | 69.76% | 85.36% |
| 33 | Claude Haiku 4.5 | 100.00% | 83.86% | 85.14% |
| 34 | DeepSeek-V2 Chat | 100.00% | 69.48% | 84.83% |
| 35 | Z.AI GLM 4.7 Flash | 100.00% | 89.86% | 84.82% |
| 36 | Nemotron 3 Super | 100.00% | 99.69% | 84.56% |
| 37 | GPT-5.4 | 100.00% | 73.78% | 84.32% |
| 38 | Claude 3.5 Sonnet | 100.00% | 76.31% | 84.24% |
| 39 | Grok 4.20 (Beta) | 100.00% | 78.28% | 83.85% |
| 40 | GPT-4o, May 13th (temp=1) | 100.00% | 73.40% | 83.80% |
| 41 | Claude 3.5 Haiku | 100.00% | 100.00% | 83.73% |
| 42 | DeepSeek V3 (2024-12-26) | 100.00% | 73.11% | 83.68% |
| 43 | Claude 3.7 Sonnet | 100.00% | 75.18% | 83.39% |
| 44 | GPT-4.1 Mini | 100.00% | 81.14% | 83.20% |
| 45 | Hermes 3 405B | 100.00% | 79.70% | 82.86% |
| 46 | GPT-4o, Aug. 6th (temp=0) | 100.00% | 73.35% | 82.45% |
| 47 | GPT-5.4 Mini | 100.00% | 78.40% | 82.43% |
| 48 | Mistral Large 2 | 100.00% | 77.87% | 82.41% |
| 49 | Mistral Small 4 (Reasoning) | 100.00% | 92.98% | 82.39% |
| 50 | DeepSeek V3.2 | 100.00% | 72.50% | 82.25% |
| 51 | GPT-5.4 Nano (Reasoning) | 100.00% | 97.95% | 81.36% |
| 52 | Mistral Large | 100.00% | 77.50% | 80.15% |
| 53 | Writer: Palmyra X5 | 100.00% | 72.04% | 79.57% |
| 54 | GPT-5.4 Nano (Reasoning, Low) | 100.00% | 99.03% | 79.48% |
| 55 | GPT-4o Mini (temp=1) | 100.00% | 76.78% | 79.08% |
| 56 | Gemma 3 12B | 100.00% | 69.15% | 78.41% |
| 57 | GPT-4o Mini (temp=0) | 100.00% | 73.56% | 78.29% |
| 58 | Gemma 3 27B | 100.00% | 68.74% | 77.85% |
| 59 | Mistral Medium 3.1 | 100.00% | 82.09% | 77.83% |
| 60 | Mistral Small 4 | 100.00% | 73.41% | 76.46% |
| 61 | GPT-5.4 Nano | 100.00% | 89.29% | 74.40% |
| 62 | Arcee AI: Trinity Large (Preview) | 100.00% | 76.47% | 73.33% |
| 63 | Mistral Small Creative | 100.00% | 74.46% | 73.27% |
| 64 | Ministral 3 14B | 100.00% | 79.99% | 72.54% |
| 65 | Ministral 3 8B | 100.00% | 92.52% | 71.76% |
| 66 | Claude 3 Haiku | 100.00% | 60.81% | 71.19% |
| 67 | Arcee AI: Trinity Mini | 100.00% | 91.12% | 70.90% |
| 68 | Cohere Command R+ (Aug. 2024) | 100.00% | 64.40% | 69.03% |
| 69 | Gemma 3 4B | 100.00% | 67.60% | 68.57% |
| 70 | Ministral 3 3B | 100.00% | 70.45% | 67.22% |
| 71 | Ministral 8B | 100.00% | 89.19% | 64.87% |
| 72 | Ministral 3B | 100.00% | 70.75% | 61.29% |
| 73 | LFM2 24B | 100.00% | 90.53% | 58.77% |
| 74 | GPT-5.4 Mini (Reasoning) | 100.00% | 96.22% | 90.65% |
| 75 | Z.AI GLM 5 | 100.00% | 97.74% | 91.23% |
| 76 | Z.AI GLM 5 Turbo | 99.99% | 98.64% | 94.27% |
| 77 | Grok 4.1 Fast | 99.99% | 99.02% | 89.55% |
| 78 | Inception Mercury 2 | 99.99% | 92.60% | 83.85% |
| 79 | GPT-5 Nano | 99.98% | 93.53% | 82.60% |
| 80 | Qwen 3.5 Flash | 99.98% | 80.63% | 86.38% |
| 81 | GPT-5.4 Mini (Reasoning, Low) | 99.98% | 98.43% | 85.75% |
| 82 | GPT-4.1 | 99.97% | 95.24% | 88.68% |
| 83 | Gemini 3 Flash (Preview, Reasoning) | 99.97% | 85.32% | 90.50% |
| 84 | Qwen3 235B A22B Instruct 2507 | 99.97% | 69.82% | 80.10% |
| 85 | Mistral NeMO | 99.97% | 62.63% | 65.04% |
| 86 | Gemini 3.1 Flash Lite (Preview) | 99.97% | 74.58% | 85.87% |
| 87 | Gemini 2.5 Flash Lite (Reasoning) | 99.96% | 95.59% | 85.75% |
| 88 | Z.AI GLM 4.7 | 99.96% | 88.47% | 88.69% |
| 89 | Gemini 2.5 Flash Lite | 99.96% | 76.17% | 81.08% |
| 90 | Gemini 3 Flash (Preview) | 99.95% | 71.24% | 85.35% |
| 91 | GPT-4.1 Nano | 99.95% | 87.73% | 71.94% |
| 92 | Gemini 2.5 Flash (Reasoning) | 99.94% | 95.60% | 86.51% |
| 93 | Stealth: Healer Alpha | 99.93% | 94.67% | 85.93% |
| 94 | Stealth: Aurora Alpha | 99.93% | 99.93% | 83.79% |
| 95 | Qwen 3.5 397B A17B | 99.93% | 82.10% | 91.73% |
| 96 | DeepSeek V3.1 | 99.89% | 72.80% | 82.39% |
| 97 | GPT-4o, Aug. 6th (temp=1) | 99.82% | 79.53% | 82.62% |
| 98 | Llama 3.1 Nemotron 70B | 99.75% | 74.99% | 74.70% |
| 99 | MiniMax M2.7 | 99.72% | 96.47% | 89.10% |
| 100 | Qwen 3.5 Plus (2026-02-15) | 99.65% | 73.35% | 85.96% |
| 101 | ByteDance Seed 1.6 Flash | 99.57% | 94.52% | 73.27% |
| 102 | MiniMax M2.5 | 99.50% | 92.94% | 88.71% |
| 103 | Qwen 3 32B | 99.39% | 89.56% | 82.21% |
| 104 | Gemini 2.5 Flash | 99.08% | 71.70% | 80.60% |
| 105 | Z.AI GLM 4.6 | 98.89% | 90.08% | 89.11% |
| 106 | Z.AI GLM 4.5 | 98.85% | 87.05% | 86.27% |
| 107 | Mistral Small 3.2 24B | 98.73% | 75.83% | 78.60% |
| 108 | Qwen 2.5 72B | 98.28% | 79.56% | 75.46% |
| 109 | ByteDance Seed 2.0 Lite | 97.78% | 79.75% | 84.80% |
| 110 | Nemotron 3 Nano | 97.78% | 89.17% | 77.73% |
| 111 | Qwen 3.5 9B | 97.67% | 85.58% | 86.05% |
| 112 | Hermes 3 70B | 97.41% | 67.05% | 72.57% |
| 113 | WizardLM 2 8x22b | 95.25% | 70.24% | 71.07% |
| 114 | DeepSeek V3 (2025-03-24) | 94.58% | 67.07% | 81.99% |
| 115 | Inception Mercury | 94.25% | 95.08% | 79.50% |
| 116 | Rocinante 12B | 89.92% | 52.09% | 54.55% |
| 117 | Llama 3.1 70B | 87.24% | 69.78% | 78.40% |
| 118 | Llama 3.1 8B | 34.42% | 43.41% | 63.37% |
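
The ranking above can be reproduced programmatically. Below is a minimal sketch in Python, assuming each row is available as a plain space-separated string; the row format, the `parse` helper, and the field names are illustrative assumptions, not part of the leaderboard itself. The leaderboard does not state its tie-break rule for identical Output Corruption scores, so the Overall tie-break here is also an assumption.

```python
import re

# Three sample rows with values copied from the leaderboard above:
# rank, model name, Output Corruption, Hallucination, Overall.
rows = [
    "1 Claude Opus 4.6 (Reasoning) 100.00% 98.13% 95.02%",
    "76 Z.AI GLM 5 Turbo 99.99% 98.64% 94.27%",
    "118 Llama 3.1 8B 34.42% 43.41% 63.37%",
]

# Rank is the leading integer and the three trailing percentages are the
# scores; everything in between is the model name, which may itself
# contain digits, dots, commas, and parentheses.
ROW = re.compile(r"^(\d+)\s+(.+?)\s+([\d.]+)%\s+([\d.]+)%\s+([\d.]+)%$")

def parse(line: str) -> dict:
    rank, model, corruption, hallucination, overall = ROW.match(line).groups()
    return {
        "rank": int(rank),
        "model": model,
        "output_corruption": float(corruption),
        "hallucination": float(hallucination),
        "overall": float(overall),
    }

# Re-rank by Output Corruption (descending); the Overall tie-break is an
# assumption, since the leaderboard does not document its own rule.
ranked = sorted(
    (parse(r) for r in rows),
    key=lambda m: (-m["output_corruption"], -m["overall"]),
)
for m in ranked:
    print(f'{m["model"]}: {m["output_corruption"]:.2f}%')
```

The lazy middle group in the regex is what lets model names like "Claude Opus 4.6 (Reasoning)" survive intact: the pattern only commits to a score once it sees a `%`-terminated number at the end of the line.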