Deduction

Subcategory of Reasoning. 91 models scored.

Model Leaderboard

All models ranked by their Deduction subcategory score.

| # | Model | Deduction | Reasoning | Overall |
|---|-------|-----------|-----------|---------|
| 1 | Gemini 3 Flash (Preview, Reasoning) | 99.50% | 98.05% | 90.50% |
| 2 | Z.AI GLM 4.6 | 97.50% | 95.12% | 89.11% |
| 3 | Gemini 2.5 Pro | 97.00% | 96.91% | 88.53% |
| 4 | Gemini 2.5 Flash Lite (Reasoning) | 96.50% | 93.86% | 85.75% |
| 5 | Gemini 3.1 Pro (Preview) | 96.00% | 96.01% | 94.37% |
| 6 | Z.AI GLM 5 | 96.00% | 95.89% | 91.23% |
| 7 | MoonshotAI: Kimi K2.5 | 96.00% | 95.41% | 91.04% |
| 8 | Gemini 2.5 Flash (Reasoning) | 95.89% | 93.81% | 86.51% |
| 9 | GPT-5 Mini | 95.50% | 94.36% | 92.62% |
| 10 | GPT-5.2 | 95.50% | 94.54% | 90.26% |
| 11 | Minimax M2.5 | 95.50% | 92.42% | 88.71% |
| 12 | o4 Mini | 95.50% | 94.45% | 88.35% |
| 13 | GPT-5.1 | 95.00% | 95.14% | 92.54% |
| 14 | GPT-5 | 95.00% | 95.67% | 91.93% |
| 15 | Qwen 3.5 397B A17B | 95.00% | 95.06% | 91.73% |
| 16 | Qwen 3.5 122B | 95.00% | 94.93% | 91.53% |
| 17 | Qwen 3.5 27B | 95.00% | 92.73% | 90.85% |
| 18 | o4 Mini High | 95.00% | 95.02% | 90.29% |
| 19 | Gemini 3 Pro (Preview) | 95.00% | 95.24% | 88.79% |
| 20 | Z.AI GLM 4.7 | 95.00% | 94.99% | 88.69% |
| 21 | Grok 4 | 95.00% | 96.01% | 88.12% |
| 22 | Qwen 3.5 35B | 95.00% | 94.88% | 88.00% |
| 23 | Qwen 3.5 Flash | 95.00% | 94.66% | 86.38% |
| 24 | Grok 4 Fast | 95.00% | 94.89% | 86.15% |
| 25 | Gemini 3 Flash (Preview) | 95.00% | 94.79% | 85.35% |
| 26 | Z.AI GLM 4.7 Flash | 95.00% | 89.50% | 84.82% |
| 27 | Stealth: Aurora Alpha | 95.00% | 90.11% | 83.79% |
| 28 | GPT-5 Nano | 95.00% | 89.61% | 82.60% |
| 29 | Gemini 2.5 Flash Lite | 95.00% | 85.80% | 81.08% |
| 30 | Gemini 2.5 Flash | 95.00% | 92.60% | 80.60% |
| 31 | Gemma 3 12B | 95.00% | 79.42% | 78.41% |
| 32 | Gemma 3 27B | 95.00% | 86.74% | 77.85% |
| 33 | Gemma 3 4B | 95.00% | 73.64% | 68.57% |
| 34 | Claude Sonnet 4 | 94.44% | 94.48% | 88.72% |
| 35 | GPT-4o Mini (temp=0) | 94.44% | 81.26% | 78.29% |
| 36 | GPT-4.1 Nano | 94.44% | 70.24% | 71.94% |
| 37 | Qwen 3.5 Plus (2026-02-15) | 93.94% | 93.45% | 85.96% |
| 38 | Claude 3.5 Haiku | 93.94% | 82.23% | 83.73% |
| 39 | GPT-4o Mini (temp=1) | 93.44% | 80.28% | 79.08% |
| 40 | Mistral Small Creative | 92.94% | 87.99% | 73.27% |
| 41 | Aion 2.0 | 92.78% | 94.13% | 89.21% |
| 42 | Z.AI GLM 4.5 | 92.22% | 91.03% | 86.27% |
| 43 | Claude Opus 4 | 91.94% | 92.59% | 87.69% |
| 44 | ByteDance Seed 1.6 Flash | 91.67% | 86.52% | 73.27% |
| 45 | Grok 4.1 Fast | 91.11% | 93.58% | 89.55% |
| 46 | DeepSeek V3 (2024-12-26) | 90.39% | 88.71% | 83.68% |
| 47 | DeepSeek V3.2 | 90.06% | 89.46% | 82.25% |
| 48 | ByteDance Seed 1.6 | 90.00% | 91.49% | 90.70% |
| 49 | GPT-4.1 | 89.94% | 88.46% | 88.68% |
| 50 | DeepSeek V3 (2025-03-24) | 89.61% | 88.45% | 81.99% |
| 51 | Claude Opus 4.6 (Reasoning) | 89.44% | 93.77% | 95.02% |
| 52 | Claude Sonnet 4.6 (Reasoning) | 89.44% | 92.76% | 93.66% |
| 53 | Claude Opus 4.5 | 89.44% | 93.93% | 89.69% |
| 54 | Claude Sonnet 4.5 | 89.44% | 92.50% | 88.03% |
| 55 | Mistral Large 3 | 89.44% | 88.95% | 85.43% |
| 56 | GPT-4o, May 13th (temp=0) | 89.44% | 88.58% | 85.36% |
| 57 | Claude Haiku 4.5 | 89.44% | 87.76% | 85.14% |
| 58 | DeepSeek-V2 Chat | 89.44% | 88.70% | 84.83% |
| 59 | Claude 3.5 Sonnet | 89.44% | 90.30% | 84.24% |
| 60 | GPT-4o, May 13th (temp=1) | 89.44% | 85.98% | 83.80% |
| 61 | Claude 3.7 Sonnet | 89.44% | 89.94% | 83.39% |
| 62 | GPT-4.1 Mini | 89.44% | 85.83% | 83.20% |
| 63 | GPT-4o, Aug. 6th (temp=1) | 89.44% | 86.91% | 82.62% |
| 64 | GPT-4o, Aug. 6th (temp=0) | 89.44% | 87.59% | 82.45% |
| 65 | Writer: Palmyra X5 | 89.44% | 86.57% | 79.57% |
| 66 | Mistral Medium 3.1 | 89.44% | 89.32% | 77.83% |
| 67 | Qwen 2.5 72B | 89.44% | 83.43% | 75.46% |
| 68 | Ministral 3 14B | 89.44% | 83.24% | 72.54% |
| 69 | Hermes 3 70B | 89.39% | 79.08% | 72.57% |
| 70 | Claude Opus 4.6 | 88.89% | 93.33% | 92.35% |
| 71 | Claude 3 Haiku | 88.89% | 77.94% | 71.19% |
| 72 | Hermes 3 405B | 88.28% | 85.58% | 82.86% |
| 73 | Mistral Large 2 | 87.78% | 88.20% | 82.41% |
| 74 | Mistral NeMO | 87.33% | 57.59% | 65.04% |
| 75 | Llama 3.1 Nemotron 70B | 87.22% | 82.19% | 74.70% |
| 76 | Llama 3.1 8B | 86.78% | 69.12% | 63.37% |
| 77 | Llama 3.1 70B | 85.00% | 79.31% | 78.40% |
| 78 | DeepSeek V3.1 | 84.94% | 83.95% | 82.39% |
| 79 | Claude Sonnet 4.6 | 83.89% | 88.48% | 91.15% |
| 80 | Mistral Small 3.2 24B | 83.89% | 81.71% | 78.60% |
| 81 | Arcee AI: Trinity Large (Preview) | 83.89% | 77.24% | 73.33% |
| 82 | Arcee AI: Trinity Mini | 82.28% | 76.94% | 70.90% |
| 83 | LFM2 24B | 78.89% | 54.88% | 58.77% |
| 84 | Ministral 3 3B | 77.78% | 71.88% | 67.22% |
| 85 | Ministral 8B | 76.00% | 73.78% | 64.87% |
| 86 | Rocinante 12B | 74.89% | 54.31% | 54.55% |
| 87 | Ministral 3B | 72.83% | 69.70% | 61.29% |
| 88 | WizardLM 2 8x22b | 69.94% | 67.36% | 71.07% |
| 89 | Ministral 3 8B | 68.33% | 71.64% | 71.76% |
| 90 | Mistral Large | 66.06% | 76.31% | 80.15% |
| 91 | Cohere Command R+ (Aug. 2024) | 63.83% | 65.10% | 69.03% |
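The ranking rule is simple: sort descending by Deduction score. Within ties (note the long runs at 95.00% and 89.44%), the table's order is consistent with breaking ties by Overall score descending; that tie-breaker is an inference from the data, not something the leaderboard documents. A minimal sketch, using a few rows from the table above:

```python
# Rows: (model, deduction_score, overall_score), taken from the table above.
rows = [
    ("GPT-5", 95.00, 91.93),
    ("Gemini 3 Flash (Preview, Reasoning)", 99.50, 90.50),
    ("GPT-5.1", 95.00, 92.54),
    ("Z.AI GLM 4.6", 97.50, 89.11),
]

# Sort by Deduction descending; break ties by Overall descending
# (the tie-breaker is an assumption inferred from the table order).
ranked = sorted(rows, key=lambda r: (-r[1], -r[2]))

for rank, (model, deduction, overall) in enumerate(ranked, start=1):
    print(f"{rank}. {model}: {deduction:.2f}%")
```

With these four rows, GPT-5.1 lands ahead of GPT-5 despite the identical 95.00% Deduction score, matching the published order.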