Mechanical Style

A subcategory of Creative Writing. 91 models scored.

Model Leaderboard

All models ranked by their Mechanical Style subcategory score.

| # | Model | Mechanical Style | Creative Writing | Overall |
|---:|---|---:|---:|---:|
| 1 | Qwen 3.5 397B A17B | 97.50% | 86.93% | 91.73% |
| 2 | Gemini 3.1 Pro (Preview) | 97.27% | 85.44% | 94.37% |
| 3 | GPT-4o, May 13th (temp=0) | 95.74% | 74.89% | 85.36% |
| 4 | Qwen 3.5 122B | 95.52% | 83.02% | 91.53% |
| 5 | Qwen 2.5 72B | 95.34% | 75.16% | 75.46% |
| 6 | Qwen 3.5 Flash | 95.18% | 83.81% | 86.38% |
| 7 | Gemini 2.5 Flash (Reasoning) | 94.83% | 76.30% | 86.51% |
| 8 | Gemini 2.5 Flash | 94.77% | 77.57% | 80.60% |
| 9 | Mistral NeMo | 94.70% | 76.72% | 65.04% |
| 10 | Qwen 3.5 27B | 94.50% | 82.54% | 90.85% |
| 11 | GPT-5.1 | 93.87% | 87.20% | 92.54% |
| 12 | Gemini 2.5 Flash Lite | 93.78% | 75.05% | 81.08% |
| 13 | GPT-5 | 93.65% | 86.87% | 91.93% |
| 14 | Qwen 3.5 35B | 93.58% | 83.51% | 88.00% |
| 15 | GPT-4o, Aug. 6th (temp=0) | 93.39% | 73.65% | 82.45% |
| 16 | Qwen 3.5 Plus (2026-02-15) | 92.52% | 77.07% | 85.96% |
| 17 | GPT-4o Mini (temp=0) | 92.33% | 73.10% | 78.29% |
| 18 | Hermes 3 405B | 92.33% | 80.92% | 82.86% |
| 19 | GPT-5.2 | 90.94% | 80.36% | 90.26% |
| 20 | Gemini 2.5 Pro | 90.61% | 81.03% | 88.53% |
| 21 | Cohere Command R+ (Aug. 2024) | 89.16% | 77.70% | 69.03% |
| 22 | Gemma 3 27B | 88.98% | 78.79% | 77.85% |
| 23 | Claude 3 Haiku | 88.96% | 74.53% | 71.19% |
| 24 | Stealth: Aurora Alpha | 88.68% | 67.54% | 83.79% |
| 25 | o4 Mini High | 88.52% | 82.72% | 90.29% |
| 26 | Mistral Small 3.2 24B | 88.45% | 71.87% | 78.60% |
| 27 | Hermes 3 70B | 88.06% | 77.41% | 72.57% |
| 28 | Z.AI GLM 4.6 | 87.76% | 78.86% | 89.11% |
| 29 | o4 Mini | 87.50% | 82.04% | 88.35% |
| 30 | DeepSeek V3.2 | 87.42% | 79.95% | 82.25% |
| 31 | Gemini 2.5 Flash Lite (Reasoning) | 87.33% | 71.64% | 85.75% |
| 32 | Claude Sonnet 4.5 | 86.97% | 84.19% | 88.03% |
| 33 | GPT-4o, May 13th (temp=1) | 86.96% | 75.88% | 83.80% |
| 34 | WizardLM 2 8x22B | 86.84% | 79.06% | 71.07% |
| 35 | Aion 2.0 | 86.75% | 80.24% | 89.21% |
| 36 | MoonshotAI: Kimi K2.5 | 86.00% | 81.35% | 91.04% |
| 37 | Claude 3.5 Sonnet | 85.61% | 78.69% | 84.24% |
| 38 | Claude Opus 4 | 84.86% | 83.79% | 87.69% |
| 39 | GPT-5 Mini | 84.52% | 80.48% | 92.62% |
| 40 | DeepSeek V3.1 | 84.49% | 77.45% | 82.39% |
| 41 | Z.AI GLM 4.7 | 84.23% | 78.89% | 88.69% |
| 42 | Grok 4.1 Fast | 84.23% | 82.14% | 89.55% |
| 43 | Gemma 3 12B | 84.22% | 75.38% | 78.41% |
| 44 | Claude Sonnet 4 | 84.00% | 79.21% | 88.72% |
| 45 | Llama 3.1 Nemotron 70B | 83.82% | 71.71% | 74.70% |
| 46 | Z.AI GLM 5 | 83.75% | 83.63% | 91.23% |
| 47 | LFM2 24B | 83.73% | 78.10% | 58.77% |
| 48 | Arcee AI: Trinity Large (Preview) | 83.61% | 75.26% | 73.33% |
| 49 | Claude Opus 4.5 | 83.45% | 81.71% | 89.69% |
| 50 | DeepSeek V3 (2024-12-26) | 83.43% | 77.88% | 83.68% |
| 51 | Arcee AI: Trinity Mini | 83.32% | 74.01% | 70.90% |
| 52 | Z.AI GLM 4.5 | 83.00% | 76.56% | 86.27% |
| 53 | Llama 3.1 70B | 82.99% | 72.78% | 78.40% |
| 54 | DeepSeek-V2 Chat | 82.94% | 77.20% | 84.83% |
| 55 | Claude Opus 4.6 (Reasoning) | 82.92% | 84.55% | 95.02% |
| 56 | Claude Opus 4.6 | 82.67% | 83.59% | 92.35% |
| 57 | Rocinante 12B | 82.57% | 81.94% | 54.55% |
| 58 | Gemini 3 Flash (Preview, Reasoning) | 81.52% | 75.87% | 90.50% |
| 59 | GPT-4o Mini (temp=1) | 81.11% | 74.37% | 79.08% |
| 60 | Mistral Large | 81.07% | 82.02% | 80.15% |
| 61 | Llama 3.1 8B | 81.03% | 76.54% | 63.37% |
| 62 | GPT-4.1 | 80.93% | 81.24% | 88.68% |
| 63 | Grok 4 | 80.92% | 77.34% | 88.12% |
| 64 | GPT-4o, Aug. 6th (temp=1) | 80.80% | 75.50% | 82.62% |
| 65 | Mistral Large 2 | 80.68% | 81.86% | 82.41% |
| 66 | GPT-4.1 Mini | 80.59% | 74.52% | 83.20% |
| 67 | Claude 3.7 Sonnet | 80.55% | 76.31% | 83.39% |
| 68 | Gemini 3 Pro (Preview) | 80.39% | 77.77% | 88.79% |
| 69 | Grok 4 Fast | 80.12% | 77.03% | 86.15% |
| 70 | Mistral Large 3 | 80.06% | 81.21% | 85.43% |
| 71 | Writer: Palmyra X5 | 79.92% | 83.95% | 79.57% |
| 72 | DeepSeek V3 (2025-03-24) | 79.89% | 82.34% | 81.99% |
| 73 | Claude Sonnet 4.6 (Reasoning) | 78.68% | 83.09% | 93.66% |
| 74 | Z.AI GLM 4.7 Flash | 78.53% | 77.36% | 84.82% |
| 75 | Minimax M2.5 | 78.48% | 81.21% | 88.71% |
| 76 | Claude Haiku 4.5 | 78.41% | 78.96% | 85.14% |
| 77 | Claude Sonnet 4.6 | 78.39% | 83.31% | 91.15% |
| 78 | Gemma 3 4B | 78.08% | 72.10% | 68.57% |
| 79 | GPT-5 Nano | 77.89% | 67.04% | 82.60% |
| 80 | Claude 3.5 Haiku | 77.74% | 75.28% | 83.73% |
| 81 | Ministral 3 3B | 77.57% | 75.45% | 67.22% |
| 82 | ByteDance Seed 1.6 Flash | 77.28% | 81.51% | 73.27% |
| 83 | Mistral Medium 3.1 | 77.16% | 81.70% | 77.83% |
| 84 | Mistral Small Creative | 76.63% | 80.29% | 73.27% |
| 85 | Gemini 3 Flash (Preview) | 76.32% | 75.04% | 85.35% |
| 86 | GPT-4.1 Nano | 76.30% | 71.81% | 71.94% |
| 87 | ByteDance Seed 1.6 | 74.93% | 78.43% | 90.70% |
| 88 | Ministral 3 8B | 74.38% | 77.26% | 71.76% |
| 89 | Ministral 8B | 73.33% | 76.87% | 64.87% |
| 90 | Ministral 3B | 73.22% | 75.49% | 61.29% |
| 91 | Ministral 3 14B | 73.18% | 79.11% | 72.54% |