Comprehension

Subcategory of Language. 118 models scored.

Model Leaderboard

All models ranked by their Comprehension subcategory score.

| # | Model | Comprehension | Language | Overall |
| ---: | --- | ---: | ---: | ---: |
| 1 | Claude Opus 4.6 (Reasoning) | 100.00% | 96.12% | 95.02% |
| 2 | Z.AI GLM 5 Turbo | 100.00% | 99.90% | 94.27% |
| 3 | Claude Sonnet 4.6 (Reasoning) | 100.00% | 97.58% | 93.66% |
| 4 | Claude Opus 4.6 | 100.00% | 96.13% | 92.35% |
| 5 | Qwen 3.5 397B A17B | 100.00% | 95.01% | 91.73% |
| 6 | Qwen 3.5 122B | 100.00% | 95.01% | 91.53% |
| 7 | Grok 4.20 (Beta, Reasoning) | 100.00% | 99.08% | 91.49% |
| 8 | Claude Sonnet 4.6 | 100.00% | 100.00% | 91.15% |
| 9 | MoonshotAI: Kimi K2.5 | 100.00% | 97.10% | 91.04% |
| 10 | Qwen 3.5 27B | 100.00% | 95.52% | 90.85% |
| 11 | ByteDance Seed 1.6 | 100.00% | 95.63% | 90.70% |
| 12 | GPT-5.4 Mini (Reasoning) | 100.00% | 98.12% | 90.65% |
| 13 | Claude Opus 4.5 | 100.00% | 99.66% | 89.69% |
| 14 | Aion 2.0 | 100.00% | 96.17% | 89.21% |
| 15 | Z.AI GLM 4.6 | 100.00% | 96.60% | 89.11% |
| 16 | MiniMax M2.7 | 100.00% | 84.80% | 89.10% |
| 17 | Qwen 3.5 35B | 100.00% | 91.95% | 88.00% |
| 18 | Claude Opus 4 | 100.00% | 93.01% | 87.69% |
| 19 | Qwen 3.5 Plus (2026-02-15) | 100.00% | 95.10% | 85.96% |
| 20 | Mistral Large 3 | 100.00% | 92.02% | 85.43% |
| 21 | GPT-4o, May 13th (temp=0) | 100.00% | 98.72% | 85.36% |
| 22 | DeepSeek-V2 Chat | 100.00% | 100.00% | 84.83% |
| 23 | ByteDance Seed 2.0 Lite | 100.00% | 96.80% | 84.80% |
| 24 | DeepSeek V3 (2024-12-26) | 100.00% | 87.88% | 83.68% |
| 25 | Hermes 3 405B | 100.00% | 99.57% | 82.86% |
| 26 | Mistral Large 2 | 100.00% | 85.22% | 82.41% |
| 27 | DeepSeek V3 (2025-03-24) | 100.00% | 86.42% | 81.99% |
| 28 | Ministral 3 3B | 100.00% | 68.10% | 67.22% |
| 29 | Gemini 3.1 Pro (Preview) | 95.00% | 94.90% | 94.37% |
| 30 | GPT-5.4 (Reasoning) | 95.00% | 94.90% | 93.24% |
| 31 | GPT-5 Mini | 95.00% | 96.49% | 92.62% |
| 32 | Z.AI GLM 5 | 95.00% | 92.06% | 91.23% |
| 33 | Gemini 3 Flash (Preview, Reasoning) | 95.00% | 94.93% | 90.50% |
| 34 | Gemini 3 Pro (Preview) | 95.00% | 89.64% | 88.79% |
| 35 | MiniMax M2.5 | 95.00% | 96.05% | 88.71% |
| 36 | Claude Sonnet 4.5 | 95.00% | 92.39% | 88.03% |
| 37 | Stealth: Hunter Alpha | 95.00% | 93.35% | 87.34% |
| 38 | Qwen 3.5 Flash | 95.00% | 91.94% | 86.38% |
| 39 | Z.AI GLM 4.5 | 95.00% | 97.33% | 86.27% |
| 40 | Gemini 3.1 Flash Lite (Preview) | 95.00% | 94.98% | 85.87% |
| 41 | Claude Haiku 4.5 | 95.00% | 91.84% | 85.14% |
| 42 | Z.AI GLM 4.7 Flash | 95.00% | 87.67% | 84.82% |
| 43 | GPT-4o, May 13th (temp=1) | 95.00% | 92.52% | 83.80% |
| 44 | DeepSeek V3.1 | 95.00% | 96.87% | 82.39% |
| 45 | WizardLM 2 8x22b | 95.00% | 78.05% | 71.07% |
| 46 | Mistral NeMO | 95.00% | 80.80% | 65.04% |
| 47 | GPT-5.1 | 90.00% | 93.64% | 92.54% |
| 48 | Grok 4.1 Fast | 90.00% | 88.76% | 89.55% |
| 49 | Claude Sonnet 4 | 90.00% | 91.31% | 88.72% |
| 50 | GPT-4.1 | 90.00% | 93.91% | 88.68% |
| 51 | Gemini 2.5 Pro | 90.00% | 92.57% | 88.53% |
| 52 | ByteDance Seed 2.0 Mini | 90.00% | 90.12% | 86.91% |
| 53 | Qwen 3.5 9B | 90.00% | 88.18% | 86.05% |
| 54 | Stealth: Healer Alpha | 90.00% | 88.45% | 85.93% |
| 55 | Gemini 3 Flash (Preview) | 90.00% | 95.00% | 85.35% |
| 56 | Claude 3.5 Haiku | 90.00% | 82.12% | 83.73% |
| 57 | Claude 3.7 Sonnet | 90.00% | 92.95% | 83.39% |
| 58 | Qwen 3 32B | 90.00% | 84.61% | 82.21% |
| 59 | Mistral Large | 90.00% | 88.64% | 80.15% |
| 60 | GPT-5 | 85.00% | 91.50% | 91.93% |
| 61 | GPT-5.2 | 85.00% | 91.19% | 90.26% |
| 62 | Grok 4.20 (Beta) | 85.00% | 91.17% | 83.85% |
| 63 | GPT-5.4 (Reasoning, Low) | 85.00% | 90.79% | 91.41% |
| 64 | Grok 4 | 85.00% | 90.61% | 88.12% |
| 65 | Grok 4 Fast | 85.00% | 84.61% | 86.15% |
| 66 | GPT-5.4 Mini (Reasoning, Low) | 85.00% | 92.45% | 85.75% |
| 67 | Stealth: Aurora Alpha | 85.00% | 92.50% | 83.79% |
| 68 | Z.AI GLM 4.7 | 80.00% | 85.46% | 88.69% |
| 69 | GPT-4.1 Mini | 80.00% | 89.64% | 83.20% |
| 70 | GPT-5.4 Mini | 80.00% | 88.75% | 82.43% |
| 71 | DeepSeek V3.2 | 80.00% | 85.01% | 82.25% |
| 72 | Gemini 2.5 Flash Lite | 80.00% | 82.75% | 81.08% |
| 73 | Nemotron 3 Nano | 80.00% | 87.63% | 77.73% |
| 74 | Arcee AI: Trinity Large (Preview) | 80.00% | 78.38% | 73.33% |
| 75 | Gemini 2.5 Flash (Reasoning) | 75.00% | 86.06% | 86.51% |
| 76 | Claude 3.5 Sonnet | 75.00% | 85.62% | 84.24% |
| 77 | Inception Mercury 2 | 75.00% | 87.32% | 83.85% |
| 78 | Gemini 2.5 Flash | 75.00% | 86.23% | 80.60% |
| 79 | Qwen3 235B A22B Instruct 2507 | 75.00% | 60.83% | 80.10% |
| 80 | Mistral Small 3.2 24B | 75.00% | 72.77% | 78.60% |
| 81 | Llama 3.1 70B | 75.00% | 80.18% | 78.40% |
| 82 | Hermes 3 70B | 75.00% | 81.66% | 72.57% |
| 83 | Rocinante 12B | 75.00% | 63.45% | 54.55% |
| 84 | GPT-5.4 Nano (Reasoning) | 70.00% | 83.99% | 81.36% |
| 85 | Writer: Palmyra X5 | 70.00% | 56.58% | 79.57% |
| 86 | GPT-5.4 Nano (Reasoning, Low) | 70.00% | 81.87% | 79.48% |
| 87 | Gemma 3 12B | 70.00% | 80.10% | 78.41% |
| 88 | Qwen 2.5 72B | 70.00% | 68.95% | 75.46% |
| 89 | GPT-5.4 Nano | 70.00% | 80.82% | 74.40% |
| 90 | Gemma 3 4B | 70.00% | 72.28% | 68.57% |
| 91 | Gemini 2.5 Flash Lite (Reasoning) | 65.00% | 74.36% | 85.75% |
| 92 | Nemotron 3 Super | 65.00% | 81.41% | 84.56% |
| 93 | GPT-5.4 | 65.00% | 81.49% | 84.32% |
| 94 | GPT-4o, Aug. 6th (temp=1) | 65.00% | 82.21% | 82.62% |
| 95 | Inception Mercury | 65.00% | 80.37% | 79.50% |
| 96 | Gemma 3 27B | 65.00% | 77.21% | 77.85% |
| 97 | GPT-4.1 Nano | 65.00% | 78.95% | 71.94% |
| 98 | LFM2 24B | 65.00% | 64.64% | 58.77% |
| 99 | Claude 3 Haiku | 65.00% | 72.76% | 71.19% |
| 100 | Llama 3.1 Nemotron 70B | 60.00% | 46.80% | 74.70% |
| 101 | Arcee AI: Trinity Mini | 60.00% | 70.59% | 70.90% |
| 102 | o4 Mini High | 60.00% | 79.76% | 90.29% |
| 103 | o4 Mini | 60.00% | 80.00% | 88.35% |
| 104 | Cohere Command R+ (Aug. 2024) | 60.00% | 66.58% | 69.03% |
| 105 | GPT-5 Nano | 55.00% | 77.18% | 82.60% |
| 106 | GPT-4o Mini (temp=1) | 55.00% | 77.50% | 79.08% |
| 107 | Mistral Small 4 | 55.00% | 51.96% | 76.46% |
| 108 | Ministral 8B | 55.00% | 53.91% | 64.87% |
| 109 | Llama 3.1 8B | 55.00% | 64.06% | 63.37% |
| 110 | GPT-4o, Aug. 6th (temp=0) | 50.00% | 75.00% | 82.45% |
| 111 | Mistral Small 4 (Reasoning) | 50.00% | 60.53% | 82.39% |
| 112 | GPT-4o Mini (temp=0) | 50.00% | 75.00% | 78.29% |
| 113 | Mistral Medium 3.1 | 50.00% | 49.50% | 77.83% |
| 114 | Mistral Small Creative | 50.00% | 41.85% | 73.27% |
| 115 | Ministral 3 14B | 50.00% | 30.00% | 72.54% |
| 116 | Ministral 3 8B | 50.00% | 48.96% | 71.76% |
| 117 | ByteDance Seed 1.6 Flash | 40.00% | 61.23% | 73.27% |
| 118 | Ministral 3B | 25.00% | 42.25% | 61.29% |
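As a rough illustration of how a ranking like this is derived, the sketch below sorts a few rows taken from the table by their Comprehension score, highest first, and assigns sequential ranks. This is a hypothetical reconstruction, not the leaderboard's actual code; the field layout and any tie-breaking behavior here are assumptions (Python's sort is stable, so tied rows simply keep their input order).

```python
# Illustrative only: rank a handful of rows from the leaderboard above
# by Comprehension score (descending). Tuples are (model, comprehension,
# overall); values are copied from the table, the structure is assumed.

rows = [
    ("GPT-5.1", 90.00, 92.54),
    ("Claude Opus 4.6 (Reasoning)", 100.00, 95.02),
    ("Ministral 3B", 25.00, 61.29),
    ("Gemini 3.1 Pro (Preview)", 95.00, 94.37),
]

# Sort by the Comprehension column, highest first. sorted() is stable,
# so models with equal Comprehension scores keep their existing order.
ranked = sorted(rows, key=lambda r: r[1], reverse=True)

for rank, (model, comprehension, overall) in enumerate(ranked, start=1):
    print(f"{rank} {model} {comprehension:.2f}% {overall:.2f}%")
```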