Deduction

A subcategory of the Reasoning benchmark; 118 models scored.

Model Leaderboard

All models, ranked by their Deduction subcategory score; ties are broken by the Overall score.

| Rank | Model | Deduction | Reasoning | Overall |
|------|-------|-----------|-----------|---------|
| 1 | Gemini 3 Flash (Preview, Reasoning) | 99.50% | 98.05% | 90.50% |
| 2 | Z.AI GLM 4.6 | 97.50% | 95.12% | 89.11% |
| 3 | Gemini 2.5 Pro | 97.00% | 96.91% | 88.53% |
| 4 | Gemini 2.5 Flash Lite (Reasoning) | 96.50% | 93.86% | 85.75% |
| 5 | ByteDance Seed 2.0 Lite | 96.50% | 95.50% | 84.80% |
| 6 | Gemini 3.1 Pro (Preview) | 96.00% | 96.01% | 94.37% |
| 7 | Z.AI GLM 5 | 96.00% | 95.89% | 91.23% |
| 8 | MoonshotAI: Kimi K2.5 | 96.00% | 95.41% | 91.04% |
| 9 | GPT-5.4 Mini | 96.00% | 88.04% | 82.43% |
| 10 | Gemini 2.5 Flash (Reasoning) | 95.89% | 93.81% | 86.51% |
| 11 | GPT-5 Mini | 95.50% | 94.36% | 92.62% |
| 12 | GPT-5.2 | 95.50% | 94.54% | 90.26% |
| 13 | MiniMax M2.5 | 95.50% | 92.42% | 88.71% |
| 14 | o4 Mini | 95.50% | 94.45% | 88.35% |
| 15 | Z.AI GLM 5 Turbo | 95.00% | 95.67% | 94.27% |
| 16 | GPT-5.4 (Reasoning) | 95.00% | 94.78% | 93.24% |
| 17 | GPT-5.1 | 95.00% | 95.14% | 92.54% |
| 18 | GPT-5 | 95.00% | 95.67% | 91.93% |
| 19 | Qwen 3.5 397B A17B | 95.00% | 95.06% | 91.73% |
| 20 | Qwen 3.5 122B | 95.00% | 94.93% | 91.53% |
| 21 | GPT-5.4 (Reasoning, Low) | 95.00% | 94.34% | 91.41% |
| 22 | Qwen 3.5 27B | 95.00% | 92.73% | 90.85% |
| 23 | GPT-5.4 Mini (Reasoning) | 95.00% | 94.56% | 90.65% |
| 24 | o4 Mini High | 95.00% | 95.02% | 90.29% |
| 25 | Gemini 3 Pro (Preview) | 95.00% | 95.24% | 88.79% |
| 26 | Z.AI GLM 4.7 | 95.00% | 94.99% | 88.69% |
| 27 | Grok 4 | 95.00% | 96.01% | 88.12% |
| 28 | Qwen 3.5 35B | 95.00% | 94.88% | 88.00% |
| 29 | Qwen 3.5 Flash | 95.00% | 94.66% | 86.38% |
| 30 | Grok 4 Fast | 95.00% | 94.89% | 86.15% |
| 31 | Qwen 3.5 9B | 95.00% | 92.93% | 86.05% |
| 32 | GPT-5.4 Mini (Reasoning, Low) | 95.00% | 92.28% | 85.75% |
| 33 | Gemini 3 Flash (Preview) | 95.00% | 94.79% | 85.35% |
| 34 | Z.AI GLM 4.7 Flash | 95.00% | 89.50% | 84.82% |
| 35 | Nemotron 3 Super | 95.00% | 93.11% | 84.56% |
| 36 | GPT-5.4 | 95.00% | 93.92% | 84.32% |
| 37 | Inception Mercury 2 | 95.00% | 92.03% | 83.85% |
| 38 | Stealth: Aurora Alpha | 95.00% | 90.11% | 83.79% |
| 39 | GPT-5 Nano | 95.00% | 89.61% | 82.60% |
| 40 | Gemini 2.5 Flash Lite | 95.00% | 85.80% | 81.08% |
| 41 | Gemini 2.5 Flash | 95.00% | 92.60% | 80.60% |
| 42 | Inception Mercury | 95.00% | 85.96% | 79.50% |
| 43 | GPT-5.4 Nano (Reasoning, Low) | 95.00% | 78.93% | 79.48% |
| 44 | Gemma 3 12B | 95.00% | 79.42% | 78.41% |
| 45 | Gemma 3 27B | 95.00% | 86.74% | 77.85% |
| 46 | Nemotron 3 Nano | 95.00% | 89.91% | 77.73% |
| 47 | Gemma 3 4B | 95.00% | 73.64% | 68.57% |
| 48 | MiniMax M2.7 | 94.94% | 93.28% | 89.10% |
| 49 | Claude Sonnet 4 | 94.44% | 94.48% | 88.72% |
| 50 | Gemini 3.1 Flash Lite (Preview) | 94.44% | 92.15% | 85.87% |
| 51 | GPT-4o Mini (temp=0) | 94.44% | 81.26% | 78.29% |
| 52 | GPT-4.1 Nano | 94.44% | 70.24% | 71.94% |
| 53 | Qwen 3.5 Plus (2026-02-15) | 93.94% | 93.45% | 85.96% |
| 54 | Claude 3.5 Haiku | 93.94% | 82.23% | 83.73% |
| 55 | GPT-5.4 Nano (Reasoning) | 93.89% | 88.48% | 81.36% |
| 56 | GPT-4o Mini (temp=1) | 93.44% | 80.28% | 79.08% |
| 57 | GPT-5.4 Nano | 93.06% | 75.66% | 74.40% |
| 58 | Mistral Small Creative | 92.94% | 87.99% | 73.27% |
| 59 | Aion 2.0 | 92.78% | 94.13% | 89.21% |
| 60 | Z.AI GLM 4.5 | 92.22% | 91.03% | 86.27% |
| 61 | Claude Opus 4 | 91.94% | 92.59% | 87.69% |
| 62 | ByteDance Seed 1.6 Flash | 91.67% | 86.52% | 73.27% |
| 63 | Grok 4.1 Fast | 91.11% | 93.58% | 89.55% |
| 64 | ByteDance Seed 2.0 Mini | 91.11% | 92.40% | 86.91% |
| 65 | Mistral Small 4 (Reasoning) | 90.44% | 87.78% | 82.39% |
| 66 | DeepSeek V3 (2024-12-26) | 90.39% | 88.71% | 83.68% |
| 67 | DeepSeek V3.2 | 90.06% | 89.46% | 82.25% |
| 68 | ByteDance Seed 1.6 | 90.00% | 91.49% | 90.70% |
| 69 | Stealth: Healer Alpha | 90.00% | 91.67% | 85.93% |
| 70 | GPT-4.1 | 89.94% | 88.46% | 88.68% |
| 71 | Stealth: Hunter Alpha | 89.94% | 91.67% | 87.34% |
| 72 | DeepSeek V3 (2025-03-24) | 89.61% | 88.45% | 81.99% |
| 73 | Mistral Small 4 | 89.50% | 78.72% | 76.46% |
| 74 | Claude Opus 4.6 (Reasoning) | 89.44% | 93.77% | 95.02% |
| 75 | Claude Sonnet 4.6 (Reasoning) | 89.44% | 92.76% | 93.66% |
| 76 | Claude Opus 4.5 | 89.44% | 93.93% | 89.69% |
| 77 | Claude Sonnet 4.5 | 89.44% | 92.50% | 88.03% |
| 78 | Mistral Large 3 | 89.44% | 88.95% | 85.43% |
| 79 | GPT-4o, May 13th (temp=0) | 89.44% | 88.58% | 85.36% |
| 80 | Claude Haiku 4.5 | 89.44% | 87.76% | 85.14% |
| 81 | DeepSeek-V2 Chat | 89.44% | 88.70% | 84.83% |
| 82 | Claude 3.5 Sonnet | 89.44% | 90.30% | 84.24% |
| 83 | GPT-4o, May 13th (temp=1) | 89.44% | 85.98% | 83.80% |
| 84 | Claude 3.7 Sonnet | 89.44% | 89.94% | 83.39% |
| 85 | GPT-4.1 Mini | 89.44% | 85.83% | 83.20% |
| 86 | GPT-4o, Aug. 6th (temp=1) | 89.44% | 86.91% | 82.62% |
| 87 | GPT-4o, Aug. 6th (temp=0) | 89.44% | 87.59% | 82.45% |
| 88 | Writer: Palmyra X5 | 89.44% | 86.57% | 79.57% |
| 89 | Mistral Medium 3.1 | 89.44% | 89.32% | 77.83% |
| 90 | Qwen 2.5 72B | 89.44% | 83.43% | 75.46% |
| 91 | Ministral 3 14B | 89.44% | 83.24% | 72.54% |
| 92 | Hermes 3 70B | 89.39% | 79.08% | 72.57% |
| 93 | Claude Opus 4.6 | 88.89% | 93.33% | 92.35% |
| 94 | Qwen 3 32B | 88.89% | 86.35% | 82.21% |
| 95 | Claude 3 Haiku | 88.89% | 77.94% | 71.19% |
| 96 | Qwen3 235B A22B Instruct 2507 | 88.39% | 85.82% | 80.10% |
| 97 | Hermes 3 405B | 88.28% | 85.58% | 82.86% |
| 98 | Mistral Large 2 | 87.78% | 88.20% | 82.41% |
| 99 | Mistral NeMO | 87.33% | 57.59% | 65.04% |
| 100 | Llama 3.1 Nemotron 70B | 87.22% | 82.19% | 74.70% |
| 101 | Llama 3.1 8B | 86.78% | 69.12% | 63.37% |
| 102 | Grok 4.20 (Beta) | 86.72% | 87.05% | 83.85% |
| 103 | Llama 3.1 70B | 85.00% | 79.31% | 78.40% |
| 104 | DeepSeek V3.1 | 84.94% | 83.95% | 82.39% |
| 105 | Claude Sonnet 4.6 | 83.89% | 88.48% | 91.15% |
| 106 | Mistral Small 3.2 24B | 83.89% | 81.71% | 78.60% |
| 107 | Arcee AI: Trinity Large (Preview) | 83.89% | 77.24% | 73.33% |
| 108 | Arcee AI: Trinity Mini | 82.28% | 76.94% | 70.90% |
| 109 | LFM2 24B | 78.89% | 54.88% | 58.77% |
| 110 | Ministral 3 3B | 77.78% | 71.88% | 67.22% |
| 111 | Ministral 8B | 76.00% | 73.78% | 64.87% |
| 112 | Rocinante 12B | 74.89% | 54.31% | 54.55% |
| 113 | Ministral 3B | 72.83% | 69.70% | 61.29% |
| 114 | WizardLM 2 8x22b | 69.94% | 67.36% | 71.07% |
| 115 | Grok 4.20 (Beta, Reasoning) | 68.33% | 82.64% | 91.49% |
| 116 | Ministral 3 8B | 68.33% | 71.64% | 71.76% |
| 117 | Mistral Large | 66.06% | 76.31% | 80.15% |
| 118 | Cohere Command R+ (Aug. 2024) | 63.83% | 65.10% | 69.03% |
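The ordering above follows a simple rule, observable directly in the table: sort by Deduction score descending, breaking ties by Overall score descending. A minimal sketch of that sort, using a few rows sampled from the leaderboard (the tuple layout is an assumption for illustration, not the benchmark's actual data format):

```python
# Each row: (model, deduction, reasoning, overall) — scores in percent.
# Sample rows taken from the leaderboard above.
rows = [
    ("Z.AI GLM 5 Turbo", 95.00, 95.67, 94.27),
    ("Gemini 2.5 Pro", 97.00, 96.91, 88.53),
    ("GPT-5.4 (Reasoning)", 95.00, 94.78, 93.24),
    ("Z.AI GLM 4.6", 97.50, 95.12, 89.11),
]

# Primary key: Deduction descending; tie-break: Overall descending.
ranked = sorted(rows, key=lambda r: (-r[1], -r[3]))

for rank, (model, deduction, _reasoning, _overall) in enumerate(ranked, start=1):
    print(f"{rank}. {model}: {deduction:.2f}%")
```

Run on these four rows, the sketch reproduces their relative order in the table: the two 95.00% Deduction entries are separated only by their Overall scores.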