Structural Counting

A subcategory of Utility. 118 models scored.

Model Leaderboard

All models ranked by their Structural Counting subcategory score.

| # | Model | Structural Counting | Utility | Overall |
|---:|:---|---:|---:|---:|
| 1 | Claude Opus 4.6 (Reasoning) | 100.00% | 98.93% | 95.02% |
| 2 | Gemini 3.1 Pro (Preview) | 100.00% | 99.91% | 94.37% |
| 3 | GPT-5.4 (Reasoning) | 100.00% | 96.89% | 93.24% |
| 4 | Qwen 3.5 397B A17B | 100.00% | 97.50% | 91.73% |
| 5 | Qwen 3.5 122B | 100.00% | 96.36% | 91.53% |
| 6 | Grok 4.20 (Beta, Reasoning) | 100.00% | 95.41% | 91.49% |
| 7 | GPT-5.4 (Reasoning, Low) | 100.00% | 95.32% | 91.41% |
| 8 | MoonshotAI: Kimi K2.5 | 100.00% | 96.63% | 91.04% |
| 9 | Qwen 3.5 27B | 100.00% | 95.67% | 90.85% |
| 10 | Gemini 3 Flash (Preview, Reasoning) | 100.00% | 97.20% | 90.50% |
| 11 | Aion 2.0 | 100.00% | 90.91% | 89.21% |
| 12 | Gemini 3 Pro (Preview) | 100.00% | 96.14% | 88.79% |
| 13 | Gemini 2.5 Pro | 100.00% | 92.18% | 88.53% |
| 14 | Qwen 3.5 35B | 100.00% | 96.42% | 88.00% |
| 15 | Qwen 3.5 Flash | 100.00% | 96.11% | 86.38% |
| 16 | Claude Sonnet 4.6 (Reasoning) | 99.00% | 97.88% | 93.66% |
| 17 | Z.AI GLM 4.6 | 98.00% | 88.58% | 89.11% |
| 18 | o4 Mini High | 97.50% | 98.67% | 90.29% |
| 19 | GPT-5.2 | 97.00% | 96.22% | 90.26% |
| 20 | Gemini 2.5 Flash Lite (Reasoning) | 96.50% | 89.63% | 85.75% |
| 21 | Z.AI GLM 5 Turbo | 96.00% | 96.36% | 94.27% |
| 22 | Qwen 3.5 9B | 95.50% | 94.02% | 86.05% |
| 23 | Z.AI GLM 5 | 95.00% | 94.11% | 91.23% |
| 24 | Qwen3 235B A22B Instruct 2507 | 95.00% | 83.15% | 80.10% |
| 25 | Llama 3.1 Nemotron 70B | 94.50% | 88.31% | 74.70% |
| 26 | GPT-5 Mini | 94.00% | 98.39% | 92.62% |
| 27 | Z.AI GLM 4.7 | 93.00% | 94.31% | 88.69% |
| 28 | o4 Mini | 92.50% | 96.31% | 88.35% |
| 29 | GPT-5.1 | 91.50% | 95.33% | 92.54% |
| 30 | GPT-5.4 Nano (Reasoning) | 91.00% | 93.34% | 81.36% |
| 31 | GPT-5.4 Mini (Reasoning) | 90.50% | 94.44% | 90.65% |
| 32 | MiniMax M2.5 | 90.50% | 90.42% | 88.71% |
| 33 | GPT-5.4 Nano (Reasoning, Low) | 90.50% | 91.42% | 79.48% |
| 34 | Gemini 2.5 Flash (Reasoning) | 90.00% | 82.25% | 86.51% |
| 35 | Claude 3.5 Haiku | 90.00% | 82.57% | 83.73% |
| 36 | ByteDance Seed 1.6 Flash | 90.00% | 84.16% | 73.27% |
| 37 | Claude Opus 4 | 89.50% | 88.81% | 87.69% |
| 38 | Qwen 3.5 Plus (2026-02-15) | 89.00% | 86.65% | 85.96% |
| 39 | MiniMax M2.7 | 88.00% | 95.50% | 89.10% |
| 40 | ByteDance Seed 2.0 Lite | 87.50% | 92.23% | 84.80% |
| 41 | GPT-4.1 | 85.50% | 90.57% | 88.68% |
| 42 | ByteDance Seed 2.0 Mini | 85.50% | 91.88% | 86.91% |
| 43 | Nemotron 3 Super | 85.00% | 95.29% | 84.56% |
| 44 | Grok 4 | 85.00% | 89.67% | 88.12% |
| 45 | GPT-5 Nano | 85.00% | 93.91% | 82.60% |
| 46 | Stealth: Hunter Alpha | 83.00% | 84.63% | 87.34% |
| 47 | Z.AI GLM 4.5 | 83.00% | 79.19% | 86.27% |
| 48 | Writer: Palmyra X5 | 83.00% | 79.71% | 79.57% |
| 49 | Gemini 3.1 Flash Lite (Preview) | 82.00% | 94.00% | 85.87% |
| 50 | Claude Sonnet 4 | 82.00% | 84.02% | 88.72% |
| 51 | Z.AI GLM 4.7 Flash | 81.00% | 88.98% | 84.82% |
| 52 | Qwen 3 32B | 80.50% | 81.66% | 82.21% |
| 53 | ByteDance Seed 1.6 | 80.00% | 90.83% | 90.70% |
| 54 | DeepSeek-V2 Chat | 79.50% | 83.82% | 84.83% |
| 55 | GPT-4o, May 13th (temp=0) | 78.00% | 83.13% | 85.36% |
| 56 | Stealth: Aurora Alpha | 77.00% | 92.59% | 83.79% |
| 57 | GPT-5 | 76.50% | 93.53% | 91.93% |
| 58 | Claude Sonnet 4.6 | 76.00% | 88.52% | 91.15% |
| 59 | Mistral Small 4 (Reasoning) | 76.00% | 85.61% | 82.39% |
| 60 | Gemini 2.5 Flash Lite | 74.00% | 80.14% | 81.08% |
| 61 | Inception Mercury 2 | 73.50% | 92.86% | 83.85% |
| 62 | Claude Opus 4.5 | 72.50% | 89.84% | 89.69% |
| 63 | Mistral Large 3 | 72.00% | 84.91% | 85.43% |
| 64 | Stealth: Healer Alpha | 71.00% | 82.30% | 85.93% |
| 65 | Claude Haiku 4.5 | 70.50% | 72.48% | 85.14% |
| 66 | Claude Opus 4.6 | 70.00% | 90.72% | 92.35% |
| 67 | DeepSeek V3.2 | 70.00% | 81.58% | 82.25% |
| 68 | Mistral Large | 69.50% | 73.04% | 80.15% |
| 69 | Gemma 3 12B | 69.50% | 79.28% | 78.41% |
| 70 | Claude Sonnet 4.5 | 68.00% | 83.78% | 88.03% |
| 71 | Llama 3.1 70B | 68.00% | 81.03% | 78.40% |
| 72 | GPT-5.4 Mini (Reasoning, Low) | 67.50% | 88.49% | 85.75% |
| 73 | Mistral Large 2 | 66.00% | 69.19% | 82.41% |
| 74 | GPT-4.1 Mini | 65.50% | 82.30% | 83.20% |
| 75 | Grok 4.1 Fast | 64.50% | 84.12% | 89.55% |
| 76 | Qwen 2.5 72B | 64.00% | 76.43% | 75.46% |
| 77 | GPT-4o, May 13th (temp=1) | 63.50% | 80.69% | 83.80% |
| 78 | DeepSeek V3 (2024-12-26) | 62.50% | 81.87% | 83.68% |
| 79 | Grok 4.20 (Beta) | 60.50% | 82.15% | 83.85% |
| 80 | GPT-4o, Aug. 6th (temp=1) | 60.50% | 82.44% | 82.62% |
| 81 | Inception Mercury | 60.00% | 87.38% | 79.50% |
| 82 | Ministral 3 8B | 60.00% | 74.43% | 71.76% |
| 83 | GPT-4o, Aug. 6th (temp=0) | 59.50% | 82.11% | 82.45% |
| 84 | DeepSeek V3.1 | 59.50% | 76.65% | 82.39% |
| 85 | Mistral Small 4 | 59.50% | 78.28% | 76.46% |
| 86 | Ministral 3 14B | 55.50% | 79.03% | 72.54% |
| 87 | Mistral NeMo | 54.50% | 51.55% | 65.04% |
| 88 | Gemini 2.5 Flash | 54.00% | 61.45% | 80.60% |
| 89 | WizardLM 2 8x22B | 53.50% | 67.14% | 71.07% |
| 90 | DeepSeek V3 (2025-03-24) | 53.50% | 80.62% | 81.99% |
| 91 | GPT-4o Mini (temp=1) | 53.00% | 82.16% | 79.08% |
| 92 | Hermes 3 70B | 52.50% | 61.15% | 72.57% |
| 93 | GPT-4.1 Nano | 52.50% | 68.45% | 71.94% |
| 94 | GPT-5.4 Nano | 51.50% | 78.57% | 74.40% |
| 95 | Mistral Small Creative | 51.00% | 76.28% | 73.27% |
| 96 | Nemotron 3 Nano | 50.50% | 86.00% | 77.73% |
| 97 | Ministral 3 3B | 50.00% | 72.38% | 67.22% |
| 98 | Arcee AI: Trinity Mini | 49.50% | 59.94% | 70.90% |
| 99 | Gemini 3 Flash (Preview) | 48.00% | 86.39% | 85.35% |
| 100 | Ministral 3B | 47.50% | 49.17% | 61.29% |
| 101 | Claude 3.7 Sonnet | 47.00% | 62.54% | 83.39% |
| 102 | Llama 3.1 8B | 46.50% | 74.82% | 63.37% |
| 103 | GPT-5.4 | 45.50% | 81.95% | 84.32% |
| 104 | GPT-4o Mini (temp=0) | 45.00% | 81.43% | 78.29% |
| 105 | Arcee AI: Trinity Large (Preview) | 44.50% | 60.74% | 73.33% |
| 106 | Claude 3 Haiku | 44.50% | 68.47% | 71.19% |
| 107 | Rocinante 12B | 43.50% | 48.47% | 54.55% |
| 108 | Gemma 3 27B | 43.00% | 76.82% | 77.85% |
| 109 | Cohere Command R+ (Aug. 2024) | 42.00% | 59.51% | 69.03% |
| 110 | Mistral Medium 3.1 | 39.00% | 80.13% | 77.83% |
| 111 | LFM2 24B | 38.50% | 69.48% | 58.77% |
| 112 | Claude 3.5 Sonnet | 37.00% | 76.75% | 84.24% |
| 113 | Hermes 3 405B | 37.00% | 69.02% | 82.86% |
| 114 | Ministral 8B | 33.50% | 46.82% | 64.87% |
| 115 | Grok 4 Fast | 32.50% | 76.76% | 86.15% |
| 116 | GPT-5.4 Mini | 32.50% | 79.37% | 82.43% |
| 117 | Gemma 3 4B | 31.00% | 60.30% | 68.57% |
| 118 | Mistral Small 3.2 24B | 23.50% | 73.17% | 78.60% |
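The ranking above sorts by the Structural Counting score in descending order. The table does not state how ties are broken (for example, the fifteen models at 100.00%), so a faithful reproduction should use a stable sort that leaves tied rows in their given order. A minimal sketch in Python, using a handful of rows copied from the table:

```python
# Rank models by Structural Counting score, descending, the way the
# leaderboard above is ordered. Python's sorted() is stable, so rows
# with equal scores keep their input order; the table itself does not
# specify a tie-breaking rule. Sample rows taken from the table above.
rows = [
    ("Claude Opus 4.6 (Reasoning)", 100.00, 95.02),
    ("Gemini 3.1 Pro (Preview)", 100.00, 94.37),
    ("Claude Sonnet 4.6 (Reasoning)", 99.00, 93.66),
    ("Z.AI GLM 4.6", 98.00, 89.11),
]

# Sort key: negated score gives descending order with a stable sort.
ranked = sorted(rows, key=lambda row: -row[1])

for rank, (model, counting, overall) in enumerate(ranked, start=1):
    print(f"{rank:>3}  {model}  {counting:.2f}%  {overall:.2f}%")
```

Because the sort is stable, the two 100.00% rows come out in the same relative order they went in, matching ranks 1 and 2 of the table.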