Structural Integrity

A subcategory of Text Editing. 116 models scored.

Model Leaderboard

All models ranked by their Structural Integrity subcategory score.

| # | Model | Structural Integrity | Text Editing | Overall |
|---|-------|----------------------|--------------|---------|
| 1 | Claude Opus 4.6 (Reasoning) | 100.00% | 98.86% | 95.02% |
| 2 | Gemini 3.1 Pro (Preview) | 100.00% | 98.51% | 94.37% |
| 3 | Z.AI GLM 5 Turbo | 100.00% | 98.17% | 94.27% |
| 4 | Claude Sonnet 4.6 (Reasoning) | 100.00% | 98.30% | 93.66% |
| 5 | GPT-5.4 (Reasoning) | 100.00% | 98.42% | 93.24% |
| 6 | GPT-5 Mini | 100.00% | 97.13% | 92.62% |
| 7 | GPT-5.1 | 100.00% | 98.54% | 92.54% |
| 8 | Claude Opus 4.6 | 100.00% | 98.35% | 92.35% |
| 9 | GPT-5 | 100.00% | 98.90% | 91.93% |
| 10 | Qwen 3.5 397B A17B | 100.00% | 98.05% | 91.73% |
| 11 | Grok 4.20 (Beta, Reasoning) | 100.00% | 98.69% | 91.49% |
| 12 | GPT-5.4 (Reasoning, Low) | 100.00% | 98.01% | 91.41% |
| 13 | Z.AI GLM 5 | 100.00% | 98.59% | 91.23% |
| 14 | Claude Sonnet 4.6 | 100.00% | 96.37% | 91.15% |
| 15 | Qwen 3.5 27B | 100.00% | 98.69% | 90.85% |
| 16 | ByteDance Seed 1.6 | 100.00% | 98.40% | 90.70% |
| 17 | Gemini 3 Flash (Preview, Reasoning) | 100.00% | 98.12% | 90.50% |
| 18 | GPT-5.2 | 100.00% | 97.54% | 90.26% |
| 19 | Claude Opus 4.5 | 100.00% | 97.69% | 89.69% |
| 20 | Grok 4.1 Fast | 100.00% | 97.87% | 89.55% |
| 21 | Z.AI GLM 4.6 | 100.00% | 97.78% | 89.11% |
| 22 | Gemini 3 Pro (Preview) | 100.00% | 98.86% | 88.79% |
| 23 | Claude Sonnet 4 | 100.00% | 99.13% | 88.72% |
| 24 | MiniMax M2.5 | 100.00% | 96.02% | 88.71% |
| 25 | Z.AI GLM 4.7 | 100.00% | 98.22% | 88.69% |
| 26 | GPT-4.1 | 100.00% | 94.40% | 88.68% |
| 27 | Gemini 2.5 Pro | 100.00% | 98.58% | 88.53% |
| 28 | Grok 4 | 100.00% | 98.76% | 88.12% |
| 29 | Claude Sonnet 4.5 | 100.00% | 99.02% | 88.03% |
| 30 | Claude Opus 4 | 100.00% | 97.25% | 87.69% |
| 31 | Gemini 2.5 Flash (Reasoning) | 100.00% | 98.12% | 86.51% |
| 32 | Grok 4 Fast | 100.00% | 97.26% | 86.15% |
| 33 | Qwen 3.5 Plus (2026-02-15) | 100.00% | 98.10% | 85.96% |
| 34 | Stealth: Healer Alpha | 100.00% | 96.04% | 85.93% |
| 35 | Gemini 3.1 Flash Lite (Preview) | 100.00% | 96.46% | 85.87% |
| 36 | GPT-5.4 Mini (Reasoning, Low) | 100.00% | 92.63% | 85.75% |
| 37 | Gemini 2.5 Flash Lite (Reasoning) | 100.00% | 94.54% | 85.75% |
| 38 | Mistral Large 3 | 100.00% | 94.09% | 85.43% |
| 39 | GPT-4o, May 13th (temp=0) | 100.00% | 95.35% | 85.36% |
| 40 | Gemini 3 Flash (Preview) | 100.00% | 97.54% | 85.35% |
| 41 | Claude Haiku 4.5 | 100.00% | 96.81% | 85.14% |
| 42 | GPT-5.4 | 100.00% | 96.73% | 84.32% |
| 43 | Claude 3.5 Sonnet | 100.00% | 96.57% | 84.24% |
| 44 | Grok 4.20 (Beta) | 100.00% | 95.49% | 83.85% |
| 45 | GPT-4o, May 13th (temp=1) | 100.00% | 92.41% | 83.80% |
| 46 | Claude 3.7 Sonnet | 100.00% | 97.12% | 83.39% |
| 47 | GPT-4.1 Mini | 100.00% | 95.62% | 83.20% |
| 48 | Hermes 3 405B | 100.00% | 89.14% | 82.86% |
| 49 | GPT-5.4 Mini | 100.00% | 90.60% | 82.43% |
| 50 | Mistral Large 2 | 100.00% | 94.16% | 82.41% |
| 51 | DeepSeek V3.2 | 100.00% | 95.78% | 82.25% |
| 52 | GPT-5.4 Nano (Reasoning) | 100.00% | 83.32% | 81.36% |
| 53 | Gemini 2.5 Flash Lite | 100.00% | 92.13% | 81.08% |
| 54 | Gemini 2.5 Flash | 100.00% | 97.83% | 80.60% |
| 55 | Mistral Large | 100.00% | 95.14% | 80.15% |
| 56 | Qwen3 235B A22B Instruct 2507 | 100.00% | 91.75% | 80.10% |
| 57 | Writer: Palmyra X5 | 100.00% | 91.20% | 79.57% |
| 58 | GPT-5.4 Nano (Reasoning, Low) | 100.00% | 82.23% | 79.48% |
| 59 | GPT-4o Mini (temp=1) | 100.00% | 85.78% | 79.08% |
| 60 | Mistral Small 3.2 24B | 100.00% | 89.48% | 78.60% |
| 61 | Gemma 3 12B | 100.00% | 85.18% | 78.41% |
| 62 | GPT-4o Mini (temp=0) | 100.00% | 84.62% | 78.29% |
| 63 | Gemma 3 27B | 100.00% | 86.63% | 77.85% |
| 64 | Mistral Medium 3.1 | 100.00% | 93.77% | 77.83% |
| 65 | Mistral Small 4 | 100.00% | 91.00% | 76.46% |
| 66 | Qwen 2.5 72B | 100.00% | 89.18% | 75.46% |
| 67 | GPT-5.4 Nano | 100.00% | 79.22% | 74.40% |
| 68 | Arcee AI: Trinity Large (Preview) | 100.00% | 86.62% | 73.33% |
| 69 | Mistral Small Creative | 100.00% | 90.31% | 73.27% |
| 70 | Ministral 3 14B | 100.00% | 86.20% | 72.54% |
| 71 | GPT-4.1 Nano | 100.00% | 76.06% | 71.94% |
| 72 | Ministral 3 8B | 100.00% | 78.52% | 71.76% |
| 73 | Gemma 3 4B | 100.00% | 78.38% | 68.57% |
| 74 | Ministral 8B | 100.00% | 77.52% | 64.87% |
| 75 | Qwen 3.5 122B | 98.81% | 96.31% | 91.53% |
| 76 | o4 Mini High | 98.81% | 94.36% | 90.29% |
| 77 | Qwen 3.5 35B | 98.81% | 94.95% | 88.00% |
| 78 | Z.AI GLM 4.5 | 98.81% | 95.32% | 86.27% |
| 79 | DeepSeek V3 (2024-12-26) | 98.81% | 93.58% | 83.68% |
| 80 | Mistral Small 4 (Reasoning) | 98.81% | 90.58% | 82.39% |
| 81 | Qwen 3 32B | 98.81% | 89.95% | 82.21% |
| 82 | DeepSeek V3 (2025-03-24) | 98.81% | 89.57% | 81.99% |
| 83 | MoonshotAI: Kimi K2.5 | 98.81% | 97.79% | 91.04% |
| 84 | GPT-5.4 Mini (Reasoning) | 98.81% | 95.78% | 90.65% |
| 85 | Stealth: Hunter Alpha | 98.81% | 95.53% | 87.34% |
| 86 | ByteDance Seed 2.0 Lite | 98.81% | 95.03% | 84.80% |
| 87 | MiniMax M2.7 | 97.62% | 92.14% | 89.10% |
| 88 | DeepSeek-V2 Chat | 97.62% | 90.90% | 84.83% |
| 89 | Llama 3.1 70B | 97.62% | 92.10% | 78.40% |
| 90 | o4 Mini | 97.62% | 90.61% | 88.35% |
| 91 | ByteDance Seed 1.6 Flash | 97.62% | 91.64% | 73.27% |
| 92 | Ministral 3B | 97.62% | 70.91% | 61.29% |
| 93 | ByteDance Seed 2.0 Mini | 96.43% | 91.08% | 86.91% |
| 94 | Qwen 3.5 Flash | 96.43% | 92.80% | 86.38% |
| 95 | Z.AI GLM 4.7 Flash | 96.43% | 85.82% | 84.82% |
| 96 | GPT-4o, Aug. 6th (temp=0) | 96.43% | 93.77% | 82.45% |
| 97 | Ministral 3 3B | 96.43% | 69.80% | 67.22% |
| 98 | Cohere Command R+ (Aug. 2024) | 95.39% | 68.40% | 69.03% |
| 99 | GPT-5 Nano | 95.24% | 82.74% | 82.60% |
| 100 | Aion 2.0 | 95.24% | 95.34% | 89.21% |
| 101 | DeepSeek V3.1 | 95.24% | 87.27% | 82.39% |
| 102 | Llama 3.1 Nemotron 70B | 95.04% | 87.26% | 74.70% |
| 103 | Inception Mercury 2 | 94.05% | 85.26% | 83.85% |
| 104 | Llama 3.1 8B | 94.05% | 75.45% | 63.37% |
| 105 | Inception Mercury | 93.32% | 79.53% | 79.50% |
| 106 | Mistral NeMO | 93.21% | 73.69% | 65.04% |
| 107 | WizardLM 2 8x22b | 93.12% | 88.13% | 71.07% |
| 108 | GPT-4o, Aug. 6th (temp=1) | 92.86% | 86.72% | 82.62% |
| 109 | Qwen 3.5 9B | 91.67% | 85.35% | 86.05% |
| 110 | LFM2 24B | 90.94% | 71.56% | 58.77% |
| 111 | Nemotron 3 Super | 90.48% | 86.34% | 84.56% |
| 112 | Nemotron 3 Nano | 87.00% | 75.81% | 77.73% |
| 113 | Arcee AI: Trinity Mini | 85.71% | 73.88% | 70.90% |
| 114 | Claude 3 Haiku | 81.03% | 64.36% | 71.19% |
| 115 | Hermes 3 70B | 78.57% | 63.34% | 72.57% |
| 116 | Rocinante 12B | 72.78% | 56.31% | 54.55% |