Content Invention

A subcategory of the Hallucination benchmark. 89 models scored.

Model Leaderboard

All models ranked by their Content Invention subcategory score.

| # | Model | Content Invention | Hallucination | Overall |
|---:|---|---:|---:|---:|
| 1 | Claude Opus 4.6 (Reasoning) | 100.00% | 98.13% | 95.02% |
| 2 | Gemini 3.1 Pro (Preview) | 100.00% | 89.06% | 94.37% |
| 3 | Claude Sonnet 4.6 (Reasoning) | 100.00% | 93.96% | 93.66% |
| 4 | GPT-5 Mini | 100.00% | 97.71% | 92.62% |
| 5 | GPT-5.1 | 100.00% | 98.44% | 92.54% |
| 6 | Claude Opus 4.6 | 100.00% | 93.60% | 92.35% |
| 7 | GPT-5 | 100.00% | 91.84% | 91.93% |
| 8 | Qwen 3.5 397B A17B | 100.00% | 82.10% | 91.73% |
| 9 | Qwen 3.5 122B | 100.00% | 86.58% | 91.53% |
| 10 | Z.AI GLM 5 | 100.00% | 97.74% | 91.23% |
| 11 | Claude Sonnet 4.6 | 100.00% | 89.99% | 91.15% |
| 12 | MoonshotAI: Kimi K2.5 | 100.00% | 88.02% | 91.04% |
| 13 | Qwen 3.5 27B | 100.00% | 86.58% | 90.85% |
| 14 | ByteDance Seed 1.6 | 100.00% | 95.14% | 90.70% |
| 15 | Gemini 3 Flash (Preview, Reasoning) | 100.00% | 85.32% | 90.50% |
| 16 | o4 Mini High | 100.00% | 99.06% | 90.29% |
| 17 | GPT-5.2 | 100.00% | 95.09% | 90.26% |
| 18 | Claude Opus 4.5 | 100.00% | 82.06% | 89.69% |
| 19 | Grok 4.1 Fast | 100.00% | 99.02% | 89.55% |
| 20 | Z.AI GLM 4.6 | 100.00% | 90.08% | 89.11% |
| 21 | Gemini 3 Pro (Preview) | 100.00% | 88.23% | 88.79% |
| 22 | Claude Sonnet 4 | 100.00% | 80.08% | 88.72% |
| 23 | Minimax M2.5 | 100.00% | 92.94% | 88.71% |
| 24 | Z.AI GLM 4.7 | 100.00% | 88.47% | 88.69% |
| 25 | GPT-4.1 | 100.00% | 95.24% | 88.68% |
| 26 | Gemini 2.5 Pro | 100.00% | 86.11% | 88.53% |
| 27 | o4 Mini | 100.00% | 98.75% | 88.35% |
| 28 | Grok 4 | 100.00% | 89.45% | 88.12% |
| 29 | Claude Sonnet 4.5 | 100.00% | 75.57% | 88.03% |
| 30 | Claude Opus 4 | 100.00% | 75.68% | 87.69% |
| 31 | Gemini 2.5 Flash (Reasoning) | 100.00% | 95.60% | 86.51% |
| 32 | Z.AI GLM 4.5 | 100.00% | 87.05% | 86.27% |
| 33 | Grok 4 Fast | 100.00% | 91.09% | 86.15% |
| 34 | Qwen 3.5 Plus (2026-02-15) | 100.00% | 73.35% | 85.96% |
| 35 | Mistral Large 3 | 100.00% | 78.17% | 85.43% |
| 36 | GPT-4o, May 13th (temp=0) | 100.00% | 69.76% | 85.36% |
| 37 | Gemini 3 Flash (Preview) | 100.00% | 71.24% | 85.35% |
| 38 | Claude Haiku 4.5 | 100.00% | 83.86% | 85.14% |
| 39 | Claude 3.5 Sonnet | 100.00% | 76.31% | 84.24% |
| 40 | GPT-4o, May 13th (temp=1) | 100.00% | 73.40% | 83.80% |
| 41 | DeepSeek V3 (2024-12-26) | 100.00% | 73.11% | 83.68% |
| 42 | Claude 3.7 Sonnet | 100.00% | 75.18% | 83.39% |
| 43 | GPT-4.1 Mini | 100.00% | 81.14% | 83.20% |
| 44 | Hermes 3 405B | 100.00% | 79.70% | 82.86% |
| 45 | GPT-4o, Aug. 6th (temp=1) | 100.00% | 79.53% | 82.62% |
| 46 | GPT-4o, Aug. 6th (temp=0) | 100.00% | 73.35% | 82.45% |
| 47 | Mistral Large 2 | 100.00% | 77.87% | 82.41% |
| 48 | DeepSeek V3.1 | 100.00% | 72.80% | 82.39% |
| 49 | DeepSeek V3.2 | 100.00% | 72.50% | 82.25% |
| 50 | Gemini 2.5 Flash Lite | 100.00% | 76.17% | 81.08% |
| 51 | Gemini 2.5 Flash | 100.00% | 71.70% | 80.60% |
| 52 | Mistral Large | 100.00% | 77.50% | 80.15% |
| 53 | GPT-4o Mini (temp=1) | 100.00% | 76.78% | 79.08% |
| 54 | Mistral Small 3.2 24B | 100.00% | 75.83% | 78.60% |
| 55 | Gemma 3 12B | 100.00% | 69.15% | 78.41% |
| 56 | GPT-4o Mini (temp=0) | 100.00% | 73.56% | 78.29% |
| 57 | Gemma 3 27B | 100.00% | 68.74% | 77.85% |
| 58 | Mistral Medium 3.1 | 100.00% | 82.09% | 77.83% |
| 59 | Qwen 2.5 72B | 100.00% | 79.56% | 75.46% |
| 60 | Arcee AI: Trinity Large (Preview) | 100.00% | 76.47% | 73.33% |
| 61 | Mistral Small Creative | 100.00% | 74.46% | 73.27% |
| 62 | Ministral 3 14B | 100.00% | 79.99% | 72.54% |
| 63 | GPT-4.1 Nano | 100.00% | 87.73% | 71.94% |
| 64 | Ministral 3 8B | 100.00% | 92.52% | 71.76% |
| 65 | Arcee AI: Trinity Mini | 100.00% | 91.12% | 70.90% |
| 66 | Gemma 3 4B | 100.00% | 67.60% | 68.57% |
| 67 | Ministral 8B | 100.00% | 89.19% | 64.87% |
| 68 | LFM2 24B | 100.00% | 90.53% | 58.77% |
| 69 | Llama 3.1 Nemotron 70B | 96.43% | 74.99% | 74.70% |
| 70 | Gemini 2.5 Flash Lite (Reasoning) | 95.24% | 95.59% | 85.75% |
| 71 | Qwen 3.5 Flash | 92.90% | 80.63% | 86.38% |
| 72 | Aion 2.0 | 92.89% | 93.57% | 89.21% |
| 73 | Qwen 3.5 35B | 92.86% | 80.87% | 88.00% |
| 74 | Z.AI GLM 4.7 Flash | 92.86% | 89.86% | 84.82% |
| 75 | Writer: Palmyra X5 | 92.86% | 72.04% | 79.57% |
| 76 | Llama 3.1 70B | 92.86% | 69.78% | 78.40% |
| 77 | DeepSeek-V2 Chat | 90.48% | 69.48% | 84.83% |
| 78 | ByteDance Seed 1.6 Flash | 89.88% | 94.52% | 73.27% |
| 79 | Ministral 3 3B | 89.29% | 70.45% | 67.22% |
| 80 | DeepSeek V3 (2025-03-24) | 88.10% | 67.07% | 81.99% |
| 81 | GPT-5 Nano | 88.10% | 93.53% | 82.60% |
| 82 | WizardLM 2 8x22b | 85.73% | 70.24% | 71.07% |
| 83 | Ministral 3B | 85.71% | 70.75% | 61.29% |
| 84 | Llama 3.1 8B | 77.38% | 43.41% | 63.37% |
| 85 | Mistral NeMO | 76.19% | 62.63% | 65.04% |
| 86 | Claude 3 Haiku | 62.50% | 60.81% | 71.19% |
| 87 | Hermes 3 70B | 60.79% | 67.05% | 72.57% |
| 88 | Rocinante 12B | 38.95% | 52.09% | 54.55% |
| 89 | Cohere Command R+ (Aug. 2024) | 37.90% | 64.40% | 69.03% |