Tooling

13 scenarios across 1 subcategory. 118 models scored.

Subcategories

| Subcategory | Avg Score | Best Model | Best Score |
|---|---|---|---|
| XML | 95.66% | Claude Opus 4.6 (Reasoning) | 100.00% |

Model Leaderboard

All models ranked by their Tooling category score. Because XML is the only subcategory, each model's Tooling score equals its XML score.

| # | Model | Tooling | XML | Overall |
|---|---|---|---|---|
| 1 | Claude Opus 4.6 (Reasoning) | 100.00% | 100.00% | 95.02% |
| 2 | Claude Sonnet 4.6 (Reasoning) | 100.00% | 100.00% | 93.66% |
| 3 | GPT-5.4 (Reasoning) | 100.00% | 100.00% | 93.24% |
| 4 | Claude Opus 4.6 | 100.00% | 100.00% | 92.35% |
| 5 | GPT-5 | 100.00% | 100.00% | 91.93% |
| 6 | Qwen 3.5 122B | 100.00% | 100.00% | 91.53% |
| 7 | Grok 4.20 (Beta, Reasoning) | 100.00% | 100.00% | 91.49% |
| 8 | Z.AI GLM 5 | 100.00% | 100.00% | 91.23% |
| 9 | Claude Sonnet 4.6 | 100.00% | 100.00% | 91.15% |
| 10 | MoonshotAI: Kimi K2.5 | 100.00% | 100.00% | 91.04% |
| 11 | GPT-5.4 Mini (Reasoning) | 100.00% | 100.00% | 90.65% |
| 12 | Gemini 3 Flash (Preview, Reasoning) | 100.00% | 100.00% | 90.50% |
| 13 | o4 Mini High | 100.00% | 100.00% | 90.29% |
| 14 | GPT-5.2 | 100.00% | 100.00% | 90.26% |
| 15 | Claude Opus 4.5 | 100.00% | 100.00% | 89.69% |
| 16 | Grok 4.1 Fast | 100.00% | 100.00% | 89.55% |
| 17 | Claude Sonnet 4 | 100.00% | 100.00% | 88.72% |
| 18 | Z.AI GLM 4.7 | 100.00% | 100.00% | 88.69% |
| 19 | Gemini 2.5 Pro | 100.00% | 100.00% | 88.53% |
| 20 | o4 Mini | 100.00% | 100.00% | 88.35% |
| 21 | Claude Sonnet 4.5 | 100.00% | 100.00% | 88.03% |
| 22 | Claude Opus 4 | 100.00% | 100.00% | 87.69% |
| 23 | Gemini 2.5 Flash (Reasoning) | 100.00% | 100.00% | 86.51% |
| 24 | Stealth: Healer Alpha | 100.00% | 100.00% | 85.93% |
| 25 | GPT-5.4 Mini (Reasoning, Low) | 100.00% | 100.00% | 85.75% |
| 26 | Claude 3.5 Sonnet | 100.00% | 100.00% | 84.24% |
| 27 | Grok 4.20 (Beta) | 100.00% | 100.00% | 83.85% |
| 28 | DeepSeek V3 (2024-12-26) | 100.00% | 100.00% | 83.68% |
| 29 | Stealth: Hunter Alpha | 99.99% | 99.99% | 87.34% |
| 30 | Grok 4 | 99.99% | 99.99% | 88.12% |
| 31 | DeepSeek V3.2 | 99.99% | 99.99% | 82.25% |
| 32 | Z.AI GLM 4.6 | 99.99% | 99.99% | 89.11% |
| 33 | GPT-5 Mini | 99.99% | 99.99% | 92.62% |
| 34 | MiniMax M2.7 | 99.98% | 99.98% | 89.10% |
| 35 | Gemini 3.1 Flash Lite (Preview) | 99.98% | 99.98% | 85.87% |
| 36 | Gemini 3 Pro (Preview) | 99.98% | 99.98% | 88.79% |
| 37 | GPT-5.4 (Reasoning, Low) | 99.96% | 99.96% | 91.41% |
| 38 | Gemini 2.5 Flash | 99.96% | 99.96% | 80.60% |
| 39 | GPT-4o, Aug. 6th (temp=0) | 99.95% | 99.95% | 82.45% |
| 40 | ByteDance Seed 2.0 Lite | 99.91% | 99.91% | 84.80% |
| 41 | Gemini 3.1 Pro (Preview) | 99.90% | 99.90% | 94.37% |
| 42 | Z.AI GLM 4.5 | 99.89% | 99.89% | 86.27% |
| 43 | Mistral Small 3.2 24B | 99.89% | 99.89% | 78.60% |
| 44 | Gemma 3 27B | 99.88% | 99.88% | 77.85% |
| 45 | GPT-5.4 Mini | 99.86% | 99.86% | 82.43% |
| 46 | Hermes 3 405B | 99.78% | 99.78% | 82.86% |
| 47 | Mistral Large 2 | 99.78% | 99.78% | 82.41% |
| 48 | Qwen 3.5 397B A17B | 99.77% | 99.77% | 91.73% |
| 49 | DeepSeek-V2 Chat | 99.76% | 99.76% | 84.83% |
| 50 | Qwen 3.5 Plus (2026-02-15) | 99.74% | 99.74% | 85.96% |
| 51 | GPT-4o, Aug. 6th (temp=1) | 99.73% | 99.73% | 82.62% |
| 52 | Mistral Small 4 (Reasoning) | 99.73% | 99.73% | 82.39% |
| 53 | Stealth: Aurora Alpha | 99.69% | 99.69% | 83.79% |
| 54 | Claude 3.5 Haiku | 99.69% | 99.69% | 83.73% |
| 55 | GPT-4o, May 13th (temp=1) | 99.68% | 99.68% | 83.80% |
| 56 | Mistral Large 3 | 99.66% | 99.66% | 85.43% |
| 57 | Grok 4 Fast | 99.65% | 99.65% | 86.15% |
| 58 | ByteDance Seed 2.0 Mini | 99.64% | 99.64% | 86.91% |
| 59 | Gemini 2.5 Flash Lite (Reasoning) | 99.54% | 99.54% | 85.75% |
| 60 | Aion 2.0 | 99.53% | 99.53% | 89.21% |
| 61 | Claude 3 Haiku | 99.47% | 99.47% | 71.19% |
| 62 | Ministral 3 8B | 99.42% | 99.42% | 71.76% |
| 63 | Qwen 2.5 72B | 99.38% | 99.38% | 75.46% |
| 64 | Writer: Palmyra X5 | 99.34% | 99.34% | 79.57% |
| 65 | Claude 3.7 Sonnet | 99.32% | 99.32% | 83.39% |
| 66 | GPT-4o Mini (temp=1) | 99.26% | 99.26% | 79.08% |
| 67 | Qwen3 235B A22B Instruct 2507 | 99.23% | 99.23% | 80.10% |
| 68 | GPT-4o, May 13th (temp=0) | 99.22% | 99.22% | 85.36% |
| 69 | GPT-5 Nano | 99.21% | 99.21% | 82.60% |
| 70 | Claude Haiku 4.5 | 99.10% | 99.10% | 85.14% |
| 71 | Qwen 3.5 27B | 99.00% | 99.00% | 90.85% |
| 72 | GPT-4.1 | 98.86% | 98.86% | 88.68% |
| 73 | Mistral Large | 98.67% | 98.67% | 80.15% |
| 74 | GPT-4o Mini (temp=0) | 98.54% | 98.54% | 78.29% |
| 75 | GPT-5.1 | 98.00% | 98.00% | 92.54% |
| 76 | ByteDance Seed 1.6 | 98.00% | 98.00% | 90.70% |
| 77 | Inception Mercury 2 | 98.00% | 98.00% | 83.85% |
| 78 | Inception Mercury | 97.98% | 97.98% | 79.50% |
| 79 | DeepSeek V3.1 | 97.96% | 97.96% | 82.39% |
| 80 | MiniMax M2.5 | 97.93% | 97.93% | 88.71% |
| 81 | GPT-4.1 Mini | 97.92% | 97.92% | 83.20% |
| 82 | Gemma 3 4B | 97.88% | 97.88% | 68.57% |
| 83 | Hermes 3 70B | 97.86% | 97.86% | 72.57% |
| 84 | Gemma 3 12B | 97.69% | 97.69% | 78.41% |
| 85 | GPT-5.4 | 97.65% | 97.65% | 84.32% |
| 86 | Gemini 3 Flash (Preview) | 97.64% | 97.64% | 85.35% |
| 87 | Mistral Medium 3.1 | 97.50% | 97.50% | 77.83% |
| 88 | Qwen 3 32B | 97.43% | 97.43% | 82.21% |
| 89 | Arcee AI: Trinity Mini | 97.16% | 97.16% | 70.90% |
| 90 | Qwen 3.5 9B | 97.00% | 97.00% | 86.05% |
| 91 | Gemini 2.5 Flash Lite | 96.60% | 96.60% | 81.08% |
| 92 | Llama 3.1 Nemotron 70B | 95.74% | 95.74% | 74.70% |
| 93 | GPT-5.4 Nano (Reasoning) | 95.71% | 95.71% | 81.36% |
| 94 | Mistral Small 4 | 95.02% | 95.02% | 76.46% |
| 95 | Qwen 3.5 35B | 93.98% | 93.98% | 88.00% |
| 96 | Z.AI GLM 5 Turbo | 93.95% | 93.95% | 94.27% |
| 97 | Ministral 3 3B | 93.79% | 93.79% | 67.22% |
| 98 | Z.AI GLM 4.7 Flash | 93.74% | 93.74% | 84.82% |
| 99 | DeepSeek V3 (2025-03-24) | 93.53% | 93.53% | 81.99% |
| 100 | Nemotron 3 Super | 93.49% | 93.49% | 84.56% |
| 101 | Arcee AI: Trinity Large (Preview) | 93.42% | 93.42% | 73.33% |
| 102 | Ministral 3 14B | 91.91% | 91.91% | 72.54% |
| 103 | Cohere Command R+ (Aug. 2024) | 91.84% | 91.84% | 69.03% |
| 104 | WizardLM 2 8x22B | 90.27% | 90.27% | 71.07% |
| 105 | GPT-5.4 Nano | 90.22% | 90.22% | 74.40% |
| 106 | GPT-5.4 Nano (Reasoning, Low) | 89.75% | 89.75% | 79.48% |
| 107 | Llama 3.1 70B | 88.59% | 88.59% | 78.40% |
| 108 | Qwen 3.5 Flash | 87.87% | 87.87% | 86.38% |
| 109 | Ministral 3B | 87.64% | 87.64% | 61.29% |
| 110 | Mistral Small Creative | 86.85% | 86.85% | 73.27% |
| 111 | Ministral 8B | 85.58% | 85.58% | 64.87% |
| 112 | Nemotron 3 Nano | 83.98% | 83.98% | 77.73% |
| 113 | Mistral NeMo | 83.21% | 83.21% | 65.04% |
| 114 | GPT-4.1 Nano | 81.37% | 81.37% | 71.94% |
| 115 | Llama 3.1 8B | 69.49% | 69.49% | 63.37% |
| 116 | ByteDance Seed 1.6 Flash | 39.46% | 39.46% | 73.27% |
| 117 | Rocinante 12B | 38.30% | 38.30% | 54.55% |
| 118 | LFM2 24B | 16.85% | 16.85% | 58.77% |