Comprehension

A subcategory of Language; 91 models scored.

Model Leaderboard

All models ranked by their Comprehension subcategory score.

| # | Model | Comprehension | Language | Overall |
|---:|-------|--------------:|---------:|--------:|
| 1 | Claude Opus 4.6 (Reasoning) | 100.00% | 96.12% | 95.02% |
| 2 | Claude Sonnet 4.6 (Reasoning) | 100.00% | 97.58% | 93.66% |
| 3 | Claude Opus 4.6 | 100.00% | 96.13% | 92.35% |
| 4 | Qwen 3.5 397B A17B | 100.00% | 95.01% | 91.73% |
| 5 | Qwen 3.5 122B | 100.00% | 95.01% | 91.53% |
| 6 | Claude Sonnet 4.6 | 100.00% | 100.00% | 91.15% |
| 7 | MoonshotAI: Kimi K2.5 | 100.00% | 97.10% | 91.04% |
| 8 | Qwen 3.5 27B | 100.00% | 95.52% | 90.85% |
| 9 | ByteDance Seed 1.6 | 100.00% | 95.63% | 90.70% |
| 10 | Claude Opus 4.5 | 100.00% | 99.66% | 89.69% |
| 11 | Aion 2.0 | 100.00% | 96.17% | 89.21% |
| 12 | Z.AI GLM 4.6 | 100.00% | 96.60% | 89.11% |
| 13 | Qwen 3.5 35B | 100.00% | 91.95% | 88.00% |
| 14 | Claude Opus 4 | 100.00% | 93.01% | 87.69% |
| 15 | Qwen 3.5 Plus (2026-02-15) | 100.00% | 95.10% | 85.96% |
| 16 | Mistral Large 3 | 100.00% | 92.02% | 85.43% |
| 17 | GPT-4o, May 13th (temp=0) | 100.00% | 98.72% | 85.36% |
| 18 | DeepSeek-V2 Chat | 100.00% | 100.00% | 84.83% |
| 19 | DeepSeek V3 (2024-12-26) | 100.00% | 87.88% | 83.68% |
| 20 | Hermes 3 405B | 100.00% | 99.57% | 82.86% |
| 21 | Mistral Large 2 | 100.00% | 85.22% | 82.41% |
| 22 | DeepSeek V3 (2025-03-24) | 100.00% | 86.42% | 81.99% |
| 23 | Ministral 3 3B | 100.00% | 68.10% | 67.22% |
| 24 | Gemini 3.1 Pro (Preview) | 95.00% | 94.90% | 94.37% |
| 25 | GPT-5 Mini | 95.00% | 96.49% | 92.62% |
| 26 | Z.AI GLM 5 | 95.00% | 92.06% | 91.23% |
| 27 | Gemini 3 Flash (Preview, Reasoning) | 95.00% | 94.93% | 90.50% |
| 28 | Gemini 3 Pro (Preview) | 95.00% | 89.64% | 88.79% |
| 29 | Minimax M2.5 | 95.00% | 96.05% | 88.71% |
| 30 | Claude Sonnet 4.5 | 95.00% | 92.39% | 88.03% |
| 31 | Qwen 3.5 Flash | 95.00% | 91.94% | 86.38% |
| 32 | Z.AI GLM 4.5 | 95.00% | 97.33% | 86.27% |
| 33 | Claude Haiku 4.5 | 95.00% | 91.84% | 85.14% |
| 34 | Z.AI GLM 4.7 Flash | 95.00% | 87.67% | 84.82% |
| 35 | GPT-4o, May 13th (temp=1) | 95.00% | 92.52% | 83.80% |
| 36 | DeepSeek V3.1 | 95.00% | 96.87% | 82.39% |
| 37 | WizardLM 2 8x22b | 95.00% | 78.05% | 71.07% |
| 38 | Mistral NeMO | 95.00% | 80.80% | 65.04% |
| 39 | GPT-5.1 | 90.00% | 93.64% | 92.54% |
| 40 | Grok 4.1 Fast | 90.00% | 88.76% | 89.55% |
| 41 | Claude Sonnet 4 | 90.00% | 91.31% | 88.72% |
| 42 | GPT-4.1 | 90.00% | 93.91% | 88.68% |
| 43 | Gemini 2.5 Pro | 90.00% | 92.57% | 88.53% |
| 44 | Gemini 3 Flash (Preview) | 90.00% | 95.00% | 85.35% |
| 45 | Claude 3.5 Haiku | 90.00% | 82.12% | 83.73% |
| 46 | Claude 3.7 Sonnet | 90.00% | 92.95% | 83.39% |
| 47 | Mistral Large | 90.00% | 88.64% | 80.15% |
| 48 | GPT-5 | 85.00% | 91.50% | 91.93% |
| 49 | GPT-5.2 | 85.00% | 91.19% | 90.26% |
| 50 | Grok 4 | 85.00% | 90.61% | 88.12% |
| 51 | Grok 4 Fast | 85.00% | 84.61% | 86.15% |
| 52 | Stealth: Aurora Alpha | 85.00% | 92.50% | 83.79% |
| 53 | Z.AI GLM 4.7 | 80.00% | 85.46% | 88.69% |
| 54 | GPT-4.1 Mini | 80.00% | 89.64% | 83.20% |
| 55 | DeepSeek V3.2 | 80.00% | 85.01% | 82.25% |
| 56 | Gemini 2.5 Flash Lite | 80.00% | 82.75% | 81.08% |
| 57 | Arcee AI: Trinity Large (Preview) | 80.00% | 78.38% | 73.33% |
| 58 | Gemini 2.5 Flash (Reasoning) | 75.00% | 86.06% | 86.51% |
| 59 | Claude 3.5 Sonnet | 75.00% | 85.62% | 84.24% |
| 60 | Gemini 2.5 Flash | 75.00% | 86.23% | 80.60% |
| 61 | Mistral Small 3.2 24B | 75.00% | 72.77% | 78.60% |
| 62 | Llama 3.1 70B | 75.00% | 80.18% | 78.40% |
| 63 | Hermes 3 70B | 75.00% | 81.66% | 72.57% |
| 64 | Rocinante 12B | 75.00% | 63.45% | 54.55% |
| 65 | Writer: Palmyra X5 | 70.00% | 56.58% | 79.57% |
| 66 | Gemma 3 12B | 70.00% | 80.10% | 78.41% |
| 67 | Qwen 2.5 72B | 70.00% | 68.95% | 75.46% |
| 68 | Gemma 3 4B | 70.00% | 72.28% | 68.57% |
| 69 | Gemini 2.5 Flash Lite (Reasoning) | 65.00% | 74.36% | 85.75% |
| 70 | GPT-4o, Aug. 6th (temp=1) | 65.00% | 82.21% | 82.62% |
| 71 | Gemma 3 27B | 65.00% | 77.21% | 77.85% |
| 72 | GPT-4.1 Nano | 65.00% | 78.95% | 71.94% |
| 73 | LFM2 24B | 65.00% | 64.64% | 58.77% |
| 74 | Claude 3 Haiku | 65.00% | 72.76% | 71.19% |
| 75 | Llama 3.1 Nemotron 70B | 60.00% | 46.80% | 74.70% |
| 76 | Arcee AI: Trinity Mini | 60.00% | 70.59% | 70.90% |
| 77 | o4 Mini High | 60.00% | 79.76% | 90.29% |
| 78 | o4 Mini | 60.00% | 80.00% | 88.35% |
| 79 | Cohere Command R+ (Aug. 2024) | 60.00% | 66.58% | 69.03% |
| 80 | GPT-5 Nano | 55.00% | 77.18% | 82.60% |
| 81 | GPT-4o Mini (temp=1) | 55.00% | 77.50% | 79.08% |
| 82 | Ministral 8B | 55.00% | 53.91% | 64.87% |
| 83 | Llama 3.1 8B | 55.00% | 64.06% | 63.37% |
| 84 | GPT-4o, Aug. 6th (temp=0) | 50.00% | 75.00% | 82.45% |
| 85 | GPT-4o Mini (temp=0) | 50.00% | 75.00% | 78.29% |
| 86 | Mistral Medium 3.1 | 50.00% | 49.50% | 77.83% |
| 87 | Mistral Small Creative | 50.00% | 41.85% | 73.27% |
| 88 | Ministral 3 14B | 50.00% | 30.00% | 72.54% |
| 89 | Ministral 3 8B | 50.00% | 48.96% | 71.76% |
| 90 | ByteDance Seed 1.6 Flash | 40.00% | 61.23% | 73.27% |
| 91 | Ministral 3B | 25.00% | 42.25% | 61.29% |