Attention

Subcategory of Reasoning. 118 models scored.

Model Leaderboard

All models ranked by their Attention subcategory score.

| # | Model | Attention | Reasoning | Overall |
|---:|-------|----------:|----------:|--------:|
| 1 | Claude Opus 4.5 | 98.41% | 93.93% | 89.69% |
| 2 | Claude Opus 4.6 (Reasoning) | 98.10% | 93.77% | 95.02% |
| 3 | Claude Opus 4.6 | 97.78% | 93.33% | 92.35% |
| 4 | Grok 4 | 97.02% | 96.01% | 88.12% |
| 5 | Grok 4.20 (Beta, Reasoning) | 96.94% | 82.64% | 91.49% |
| 6 | Gemini 2.5 Pro | 96.82% | 96.91% | 88.53% |
| 7 | Gemini 3 Flash (Preview, Reasoning) | 96.60% | 98.05% | 90.50% |
| 8 | Z.AI GLM 5 Turbo | 96.34% | 95.67% | 94.27% |
| 9 | GPT-5 | 96.33% | 95.67% | 91.93% |
| 10 | Claude Sonnet 4.6 (Reasoning) | 96.08% | 92.76% | 93.66% |
| 11 | Grok 4.1 Fast | 96.05% | 93.58% | 89.55% |
| 12 | Gemini 3.1 Pro (Preview) | 96.03% | 96.01% | 94.37% |
| 13 | Z.AI GLM 5 | 95.78% | 95.89% | 91.23% |
| 14 | Claude Sonnet 4.5 | 95.56% | 92.50% | 88.03% |
| 15 | Aion 2.0 | 95.49% | 94.13% | 89.21% |
| 16 | Gemini 3 Pro (Preview) | 95.47% | 95.24% | 88.79% |
| 17 | GPT-5.1 | 95.29% | 95.14% | 92.54% |
| 18 | Qwen 3.5 397B A17B | 95.13% | 95.06% | 91.73% |
| 19 | o4 Mini High | 95.05% | 95.02% | 90.29% |
| 20 | Z.AI GLM 4.7 | 94.98% | 94.99% | 88.69% |
| 21 | Qwen 3.5 122B | 94.87% | 94.93% | 91.53% |
| 22 | MoonshotAI: Kimi K2.5 | 94.83% | 95.41% | 91.04% |
| 23 | Grok 4 Fast | 94.78% | 94.89% | 86.15% |
| 24 | Qwen 3.5 35B | 94.75% | 94.88% | 88.00% |
| 25 | Gemini 3 Flash (Preview) | 94.58% | 94.79% | 85.35% |
| 26 | GPT-5.4 (Reasoning) | 94.55% | 94.78% | 93.24% |
| 27 | Claude Sonnet 4 | 94.52% | 94.48% | 88.72% |
| 28 | ByteDance Seed 2.0 Lite | 94.49% | 95.50% | 84.80% |
| 29 | Qwen 3.5 Flash | 94.31% | 94.66% | 86.38% |
| 30 | GPT-5.4 Mini (Reasoning) | 94.13% | 94.56% | 90.65% |
| 31 | ByteDance Seed 2.0 Mini | 93.68% | 92.40% | 86.91% |
| 32 | GPT-5.4 (Reasoning, Low) | 93.67% | 94.34% | 91.41% |
| 33 | GPT-5.2 | 93.58% | 94.54% | 90.26% |
| 34 | o4 Mini | 93.40% | 94.45% | 88.35% |
| 35 | Stealth: Hunter Alpha | 93.39% | 91.67% | 87.34% |
| 36 | Stealth: Healer Alpha | 93.35% | 91.67% | 85.93% |
| 37 | Claude Opus 4 | 93.24% | 92.59% | 87.69% |
| 38 | GPT-5 Mini | 93.22% | 94.36% | 92.62% |
| 39 | Claude Sonnet 4.6 | 93.07% | 88.48% | 91.15% |
| 40 | ByteDance Seed 1.6 | 92.98% | 91.49% | 90.70% |
| 41 | Qwen 3.5 Plus (2026-02-15) | 92.96% | 93.45% | 85.96% |
| 42 | GPT-5.4 | 92.85% | 93.92% | 84.32% |
| 43 | Z.AI GLM 4.6 | 92.74% | 95.12% | 89.11% |
| 44 | Gemini 2.5 Flash (Reasoning) | 91.73% | 93.81% | 86.51% |
| 45 | MiniMax M2.7 | 91.61% | 93.28% | 89.10% |
| 46 | Gemini 2.5 Flash Lite (Reasoning) | 91.22% | 93.86% | 85.75% |
| 47 | Nemotron 3 Super | 91.22% | 93.11% | 84.56% |
| 48 | Claude 3.5 Sonnet | 91.16% | 90.30% | 84.24% |
| 49 | Qwen 3.5 9B | 90.87% | 92.93% | 86.05% |
| 50 | Qwen 3.5 27B | 90.46% | 92.73% | 90.85% |
| 51 | Claude 3.7 Sonnet | 90.43% | 89.94% | 83.39% |
| 52 | Gemini 2.5 Flash | 90.21% | 92.60% | 80.60% |
| 53 | Gemini 3.1 Flash Lite (Preview) | 89.85% | 92.15% | 85.87% |
| 54 | Z.AI GLM 4.5 | 89.84% | 91.03% | 86.27% |
| 55 | GPT-5.4 Mini (Reasoning, Low) | 89.57% | 92.28% | 85.75% |
| 56 | MiniMax M2.5 | 89.33% | 92.42% | 88.71% |
| 57 | Mistral Medium 3.1 | 89.19% | 89.32% | 77.83% |
| 58 | Inception Mercury 2 | 89.05% | 92.03% | 83.85% |
| 59 | DeepSeek V3.2 | 88.87% | 89.46% | 82.25% |
| 60 | Mistral Large 2 | 88.62% | 88.20% | 82.41% |
| 61 | Mistral Large 3 | 88.45% | 88.95% | 85.43% |
| 62 | DeepSeek-V2 Chat | 87.95% | 88.70% | 84.83% |
| 63 | GPT-4o, May 13th (temp=0) | 87.71% | 88.58% | 85.36% |
| 64 | Grok 4.20 (Beta) | 87.37% | 87.05% | 83.85% |
| 65 | DeepSeek V3 (2025-03-24) | 87.29% | 88.45% | 81.99% |
| 66 | DeepSeek V3 (2024-12-26) | 87.02% | 88.71% | 83.68% |
| 67 | GPT-4.1 | 86.98% | 88.46% | 88.68% |
| 68 | Mistral Large | 86.57% | 76.31% | 80.15% |
| 69 | Claude Haiku 4.5 | 86.08% | 87.76% | 85.14% |
| 70 | GPT-4o, Aug. 6th (temp=0) | 85.73% | 87.59% | 82.45% |
| 71 | Stealth: Aurora Alpha | 85.21% | 90.11% | 83.79% |
| 72 | Mistral Small 4 (Reasoning) | 85.12% | 87.78% | 82.39% |
| 73 | Nemotron 3 Nano | 84.82% | 89.91% | 77.73% |
| 74 | GPT-4o, Aug. 6th (temp=1) | 84.37% | 86.91% | 82.62% |
| 75 | GPT-5 Nano | 84.22% | 89.61% | 82.60% |
| 76 | Z.AI GLM 4.7 Flash | 83.99% | 89.50% | 84.82% |
| 77 | Qwen 3 32B | 83.81% | 86.35% | 82.21% |
| 78 | Writer: Palmyra X5 | 83.69% | 86.57% | 79.57% |
| 79 | Qwen3 235B A22B Instruct 2507 | 83.25% | 85.82% | 80.10% |
| 80 | GPT-5.4 Nano (Reasoning) | 83.07% | 88.48% | 81.36% |
| 81 | Mistral Small Creative | 83.04% | 87.99% | 73.27% |
| 82 | DeepSeek V3.1 | 82.96% | 83.95% | 82.39% |
| 83 | Hermes 3 405B | 82.88% | 85.58% | 82.86% |
| 84 | GPT-4o, May 13th (temp=1) | 82.52% | 85.98% | 83.80% |
| 85 | GPT-4.1 Mini | 82.22% | 85.83% | 83.20% |
| 86 | ByteDance Seed 1.6 Flash | 81.38% | 86.52% | 73.27% |
| 87 | GPT-5.4 Mini | 80.08% | 88.04% | 82.43% |
| 88 | Mistral Small 3.2 24B | 79.53% | 81.71% | 78.60% |
| 89 | Gemma 3 27B | 78.48% | 86.74% | 77.85% |
| 90 | Qwen 2.5 72B | 77.42% | 83.43% | 75.46% |
| 91 | Llama 3.1 Nemotron 70B | 77.15% | 82.19% | 74.70% |
| 92 | Ministral 3 14B | 77.03% | 83.24% | 72.54% |
| 93 | Inception Mercury | 76.92% | 85.96% | 79.50% |
| 94 | Gemini 2.5 Flash Lite | 76.60% | 85.80% | 81.08% |
| 95 | Ministral 3 8B | 74.95% | 71.64% | 71.76% |
| 96 | Llama 3.1 70B | 73.63% | 79.31% | 78.40% |
| 97 | Arcee AI: Trinity Mini | 71.60% | 76.94% | 70.90% |
| 98 | Ministral 8B | 71.56% | 73.78% | 64.87% |
| 99 | Arcee AI: Trinity Large (Preview) | 70.59% | 77.24% | 73.33% |
| 100 | Claude 3.5 Haiku | 70.51% | 82.23% | 83.73% |
| 101 | Hermes 3 70B | 68.76% | 79.08% | 72.57% |
| 102 | GPT-4o Mini (temp=0) | 68.07% | 81.26% | 78.29% |
| 103 | Mistral Small 4 | 67.95% | 78.72% | 76.46% |
| 104 | GPT-4o Mini (temp=1) | 67.12% | 80.28% | 79.08% |
| 105 | Claude 3 Haiku | 66.98% | 77.94% | 71.19% |
| 106 | Ministral 3B | 66.56% | 69.70% | 61.29% |
| 107 | Cohere Command R+ (Aug. 2024) | 66.37% | 65.10% | 69.03% |
| 108 | Ministral 3 3B | 65.98% | 71.88% | 67.22% |
| 109 | WizardLM 2 8x22b | 64.77% | 67.36% | 71.07% |
| 110 | Gemma 3 12B | 63.84% | 79.42% | 78.41% |
| 111 | GPT-5.4 Nano (Reasoning, Low) | 62.85% | 78.93% | 79.48% |
| 112 | GPT-5.4 Nano | 58.27% | 75.66% | 74.40% |
| 113 | Gemma 3 4B | 52.28% | 73.64% | 68.57% |
| 114 | Llama 3.1 8B | 51.46% | 69.12% | 63.37% |
| 115 | GPT-4.1 Nano | 46.04% | 70.24% | 71.94% |
| 116 | Rocinante 12B | 33.73% | 54.31% | 54.55% |
| 117 | LFM2 24B | 30.88% | 54.88% | 58.77% |
| 118 | Mistral NeMo | 27.84% | 57.59% | 65.04% |
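The ranking above is simply the table sorted by the Attention column, highest first. A minimal sketch of that ordering, using an excerpt of the real rows from the table (the tuple layout `(model, attention, reasoning, overall)` is an illustrative assumption, not the leaderboard's actual data format):

```python
# Re-derive leaderboard order from (model, attention, reasoning, overall) tuples.
# The rows are an excerpt of the table above, deliberately out of order.
rows = [
    ("Grok 4", 97.02, 96.01, 88.12),
    ("Claude Opus 4.5", 98.41, 93.93, 89.69),
    ("Claude Opus 4.6", 97.78, 93.33, 92.35),
    ("Claude Opus 4.6 (Reasoning)", 98.10, 93.77, 95.02),
]

# Rank by the Attention score (index 1), descending, as the leaderboard does.
ranked = sorted(rows, key=lambda r: r[1], reverse=True)

for rank, (model, attention, reasoning, overall) in enumerate(ranked, start=1):
    print(f"{rank:>2}  {model:<30} {attention:6.2f}%  {reasoning:6.2f}%  {overall:6.2f}%")
```

Running this reproduces ranks 1-4 of the table; note that `sorted` is stable, so models tied on Attention (e.g. ranks 46 and 47 in the full table) keep their original relative order.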