# Tooling

13 scenarios across one subcategory; 91 models scored.

## Subcategories

| Subcategory | Avg Score | Best Model | Best Score |
|---|---:|---|---:|
| XML | 95.16% | Claude Opus 4.6 (Reasoning) | 100.00% |

## Model Leaderboard

All models ranked by their Tooling category score.

| Rank | Model | Tooling | XML | Overall |
|---:|---|---:|---:|---:|
| 1 | Claude Opus 4.6 (Reasoning) | 100.00% | 100.00% | 95.02% |
| 2 | Claude Sonnet 4.6 (Reasoning) | 100.00% | 100.00% | 93.66% |
| 3 | Claude Opus 4.6 | 100.00% | 100.00% | 92.35% |
| 4 | GPT-5 | 100.00% | 100.00% | 91.93% |
| 5 | Qwen 3.5 122B | 100.00% | 100.00% | 91.53% |
| 6 | Z.AI GLM 5 | 100.00% | 100.00% | 91.23% |
| 7 | Claude Sonnet 4.6 | 100.00% | 100.00% | 91.15% |
| 8 | MoonshotAI: Kimi K2.5 | 100.00% | 100.00% | 91.04% |
| 9 | Gemini 3 Flash (Preview, Reasoning) | 100.00% | 100.00% | 90.50% |
| 10 | o4 Mini High | 100.00% | 100.00% | 90.29% |
| 11 | GPT-5.2 | 100.00% | 100.00% | 90.26% |
| 12 | Claude Opus 4.5 | 100.00% | 100.00% | 89.69% |
| 13 | Grok 4.1 Fast | 100.00% | 100.00% | 89.55% |
| 14 | Claude Sonnet 4 | 100.00% | 100.00% | 88.72% |
| 15 | Z.AI GLM 4.7 | 100.00% | 100.00% | 88.69% |
| 16 | Gemini 2.5 Pro | 100.00% | 100.00% | 88.53% |
| 17 | o4 Mini | 100.00% | 100.00% | 88.35% |
| 18 | Claude Sonnet 4.5 | 100.00% | 100.00% | 88.03% |
| 19 | Claude Opus 4 | 100.00% | 100.00% | 87.69% |
| 20 | Gemini 2.5 Flash (Reasoning) | 100.00% | 100.00% | 86.51% |
| 21 | Claude 3.5 Sonnet | 100.00% | 100.00% | 84.24% |
| 22 | DeepSeek V3 (2024-12-26) | 100.00% | 100.00% | 83.68% |
| 23 | Grok 4 | 99.99% | 99.99% | 88.12% |
| 24 | DeepSeek V3.2 | 99.99% | 99.99% | 82.25% |
| 25 | Z.AI GLM 4.6 | 99.99% | 99.99% | 89.11% |
| 26 | GPT-5 Mini | 99.99% | 99.99% | 92.62% |
| 27 | Gemini 3 Pro (Preview) | 99.98% | 99.98% | 88.79% |
| 28 | Gemini 2.5 Flash | 99.96% | 99.96% | 80.60% |
| 29 | GPT-4o, Aug. 6th (temp=0) | 99.95% | 99.95% | 82.45% |
| 30 | Gemini 3.1 Pro (Preview) | 99.90% | 99.90% | 94.37% |
| 31 | Z.AI GLM 4.5 | 99.89% | 99.89% | 86.27% |
| 32 | Mistral Small 3.2 24B | 99.89% | 99.89% | 78.60% |
| 33 | Gemma 3 27B | 99.88% | 99.88% | 77.85% |
| 34 | Hermes 3 405B | 99.78% | 99.78% | 82.86% |
| 35 | Mistral Large 2 | 99.78% | 99.78% | 82.41% |
| 36 | Qwen 3.5 397B A17B | 99.77% | 99.77% | 91.73% |
| 37 | DeepSeek-V2 Chat | 99.76% | 99.76% | 84.83% |
| 38 | Qwen 3.5 Plus (2026-02-15) | 99.74% | 99.74% | 85.96% |
| 39 | GPT-4o, Aug. 6th (temp=1) | 99.73% | 99.73% | 82.62% |
| 40 | Stealth: Aurora Alpha | 99.69% | 99.69% | 83.79% |
| 41 | Claude 3.5 Haiku | 99.69% | 99.69% | 83.73% |
| 42 | GPT-4o, May 13th (temp=1) | 99.68% | 99.68% | 83.80% |
| 43 | Mistral Large 3 | 99.66% | 99.66% | 85.43% |
| 44 | Grok 4 Fast | 99.65% | 99.65% | 86.15% |
| 45 | Gemini 2.5 Flash Lite (Reasoning) | 99.54% | 99.54% | 85.75% |
| 46 | Aion 2.0 | 99.53% | 99.53% | 89.21% |
| 47 | Claude 3 Haiku | 99.47% | 99.47% | 71.19% |
| 48 | Ministral 3 8B | 99.42% | 99.42% | 71.76% |
| 49 | Qwen 2.5 72B | 99.38% | 99.38% | 75.46% |
| 50 | Writer: Palmyra X5 | 99.34% | 99.34% | 79.57% |
| 51 | Claude 3.7 Sonnet | 99.32% | 99.32% | 83.39% |
| 52 | GPT-4o Mini (temp=1) | 99.26% | 99.26% | 79.08% |
| 53 | GPT-4o, May 13th (temp=0) | 99.22% | 99.22% | 85.36% |
| 54 | GPT-5 Nano | 99.21% | 99.21% | 82.60% |
| 55 | Claude Haiku 4.5 | 99.10% | 99.10% | 85.14% |
| 56 | Qwen 3.5 27B | 99.00% | 99.00% | 90.85% |
| 57 | GPT-4.1 | 98.86% | 98.86% | 88.68% |
| 58 | Mistral Large | 98.67% | 98.67% | 80.15% |
| 59 | GPT-4o Mini (temp=0) | 98.54% | 98.54% | 78.29% |
| 60 | GPT-5.1 | 98.00% | 98.00% | 92.54% |
| 61 | ByteDance Seed 1.6 | 98.00% | 98.00% | 90.70% |
| 62 | DeepSeek V3.1 | 97.96% | 97.96% | 82.39% |
| 63 | Minimax M2.5 | 97.93% | 97.93% | 88.71% |
| 64 | GPT-4.1 Mini | 97.92% | 97.92% | 83.20% |
| 65 | Gemma 3 4B | 97.88% | 97.88% | 68.57% |
| 66 | Hermes 3 70B | 97.86% | 97.86% | 72.57% |
| 67 | Gemma 3 12B | 97.69% | 97.69% | 78.41% |
| 68 | Gemini 3 Flash (Preview) | 97.64% | 97.64% | 85.35% |
| 69 | Mistral Medium 3.1 | 97.50% | 97.50% | 77.83% |
| 70 | Arcee AI: Trinity Mini | 97.16% | 97.16% | 70.90% |
| 71 | Gemini 2.5 Flash Lite | 96.60% | 96.60% | 81.08% |
| 72 | Llama 3.1 Nemotron 70B | 95.74% | 95.74% | 74.70% |
| 73 | Qwen 3.5 35B | 93.98% | 93.98% | 88.00% |
| 74 | Ministral 3 3B | 93.79% | 93.79% | 67.22% |
| 75 | Z.AI GLM 4.7 Flash | 93.74% | 93.74% | 84.82% |
| 76 | DeepSeek V3 (2025-03-24) | 93.53% | 93.53% | 81.99% |
| 77 | Arcee AI: Trinity Large (Preview) | 93.42% | 93.42% | 73.33% |
| 78 | Ministral 3 14B | 91.91% | 91.91% | 72.54% |
| 79 | Cohere Command R+ (Aug. 2024) | 91.84% | 91.84% | 69.03% |
| 80 | WizardLM 2 8x22b | 90.27% | 90.27% | 71.07% |
| 81 | Llama 3.1 70B | 88.59% | 88.59% | 78.40% |
| 82 | Qwen 3.5 Flash | 87.87% | 87.87% | 86.38% |
| 83 | Ministral 3B | 87.64% | 87.64% | 61.29% |
| 84 | Mistral Small Creative | 86.85% | 86.85% | 73.27% |
| 85 | Ministral 8B | 85.58% | 85.58% | 64.87% |
| 86 | Mistral NeMo | 83.21% | 83.21% | 65.04% |
| 87 | GPT-4.1 Nano | 81.37% | 81.37% | 71.94% |
| 88 | Llama 3.1 8B | 69.49% | 69.49% | 63.37% |
| 89 | ByteDance Seed 1.6 Flash | 39.46% | 39.46% | 73.27% |
| 90 | Rocinante 12B | 38.30% | 38.30% | 54.55% |
| 91 | LFM2 24B | 16.85% | 16.85% | 58.77% |