Transformation

A subcategory of Text Editing. 116 models scored.

Model Leaderboard

All models ranked by their Transformation subcategory score.

| # | Model | Transformation | Text Editing | Overall |
|--:|-------|---------------:|-------------:|--------:|
| 1 | Gemini 3 Pro (Preview) | 98.12% | 98.86% | 88.79% |
| 2 | GPT-5 | 98.11% | 98.90% | 91.93% |
| 3 | Claude Opus 4.6 (Reasoning) | 97.85% | 98.86% | 95.02% |
| 4 | GPT-5.4 (Reasoning) | 97.84% | 98.42% | 93.24% |
| 5 | Qwen 3.5 397B A17B | 97.78% | 98.05% | 91.73% |
| 6 | Claude Sonnet 4.5 | 97.74% | 99.02% | 88.03% |
| 7 | Claude Sonnet 4 | 97.70% | 99.13% | 88.72% |
| 8 | ByteDance Seed 1.6 | 97.62% | 98.40% | 90.70% |
| 9 | Qwen 3.5 27B | 97.53% | 98.69% | 90.85% |
| 10 | GPT-5.1 | 97.43% | 98.54% | 92.54% |
| 11 | Grok 4.20 (Beta, Reasoning) | 97.27% | 98.69% | 91.49% |
| 12 | Z.AI GLM 5 | 97.15% | 98.59% | 91.23% |
| 13 | Grok 4 | 97.06% | 98.76% | 88.12% |
| 14 | Gemini 2.5 Flash (Reasoning) | 96.95% | 98.12% | 86.51% |
| 15 | Gemini 2.5 Pro | 96.93% | 98.58% | 88.53% |
| 16 | GPT-5.4 (Reasoning, Low) | 96.83% | 98.01% | 91.41% |
| 17 | Z.AI GLM 5 Turbo | 96.68% | 98.17% | 94.27% |
| 18 | Gemini 3.1 Pro (Preview) | 96.51% | 98.51% | 94.37% |
| 19 | MoonshotAI: Kimi K2.5 | 96.37% | 97.79% | 91.04% |
| 20 | Grok 4.1 Fast | 96.32% | 97.87% | 89.55% |
| 21 | GPT-5.2 | 96.25% | 97.54% | 90.26% |
| 22 | Gemini 3 Flash (Preview, Reasoning) | 96.17% | 98.12% | 90.50% |
| 23 | Z.AI GLM 4.7 | 95.98% | 98.22% | 88.69% |
| 24 | Claude Sonnet 4.6 (Reasoning) | 95.84% | 98.30% | 93.66% |
| 25 | DeepSeek V3.2 | 95.62% | 95.78% | 82.25% |
| 26 | Z.AI GLM 4.5 | 95.45% | 95.32% | 86.27% |
| 27 | Claude Opus 4.6 | 95.06% | 98.35% | 92.35% |
| 28 | Z.AI GLM 4.6 | 95.02% | 97.78% | 89.11% |
| 29 | Qwen 3.5 Plus (2026-02-15) | 94.91% | 98.10% | 85.96% |
| 30 | Gemini 2.5 Flash | 94.82% | 97.83% | 80.60% |
| 31 | Gemini 2.5 Flash Lite (Reasoning) | 94.57% | 94.54% | 85.75% |
| 32 | GPT-5.4 | 94.09% | 96.73% | 84.32% |
| 33 | GPT-5 Mini | 93.87% | 97.13% | 92.62% |
| 34 | Gemini 3 Flash (Preview) | 93.70% | 97.54% | 85.35% |
| 35 | Claude 3.7 Sonnet | 93.64% | 97.12% | 83.39% |
| 36 | Aion 2.0 | 93.58% | 95.34% | 89.21% |
| 37 | Claude Opus 4 | 93.42% | 97.25% | 87.69% |
| 38 | Grok 4 Fast | 93.19% | 97.26% | 86.15% |
| 39 | Claude Opus 4.5 | 93.08% | 97.69% | 89.69% |
| 40 | GPT-5.4 Mini (Reasoning) | 93.03% | 95.78% | 90.65% |
| 41 | Claude 3.5 Sonnet | 92.76% | 96.57% | 84.24% |
| 42 | Grok 4.20 (Beta) | 92.06% | 95.49% | 83.85% |
| 43 | Qwen 3.5 122B | 91.84% | 96.31% | 91.53% |
| 44 | Qwen 3.5 35B | 91.82% | 94.95% | 88.00% |
| 45 | Stealth: Hunter Alpha | 91.32% | 95.53% | 87.34% |
| 46 | Gemini 3.1 Flash Lite (Preview) | 91.27% | 96.46% | 85.87% |
| 47 | Mistral Large 2 | 90.87% | 94.16% | 82.41% |
| 48 | Claude Haiku 4.5 | 90.77% | 96.81% | 85.14% |
| 49 | Mistral Large | 90.77% | 95.14% | 80.15% |
| 50 | Mistral Large 3 | 90.68% | 94.09% | 85.43% |
| 51 | Stealth: Healer Alpha | 90.26% | 96.04% | 85.93% |
| 52 | MiniMax M2.5 | 90.05% | 96.02% | 88.71% |
| 53 | DeepSeek V3 (2024-12-26) | 89.33% | 93.58% | 83.68% |
| 54 | Claude Sonnet 4.6 | 89.29% | 96.37% | 91.15% |
| 55 | GPT-4.1 Mini | 89.26% | 95.62% | 83.20% |
| 56 | Qwen 3.5 Flash | 88.93% | 92.80% | 86.38% |
| 57 | ByteDance Seed 2.0 Lite | 88.71% | 95.03% | 84.80% |
| 58 | GPT-4o, Aug. 6th (temp=0) | 88.11% | 93.77% | 82.45% |
| 59 | GPT-4o, May 13th (temp=0) | 87.89% | 95.35% | 85.36% |
| 60 | o4 Mini High | 87.18% | 94.36% | 90.29% |
| 61 | ByteDance Seed 2.0 Mini | 86.72% | 91.08% | 86.91% |
| 62 | GPT-4.1 | 86.53% | 94.40% | 88.68% |
| 63 | GPT-5.4 Mini (Reasoning, Low) | 85.94% | 92.63% | 85.75% |
| 64 | Mistral Medium 3.1 | 84.76% | 93.77% | 77.83% |
| 65 | Gemini 2.5 Flash Lite | 84.21% | 92.13% | 81.08% |
| 66 | Llama 3.1 70B | 84.01% | 92.10% | 78.40% |
| 67 | DeepSeek-V2 Chat | 83.56% | 90.90% | 84.83% |
| 68 | GPT-5.4 Mini | 83.34% | 90.60% | 82.43% |
| 69 | MiniMax M2.7 | 82.95% | 92.14% | 89.10% |
| 70 | ByteDance Seed 1.6 Flash | 81.97% | 91.64% | 73.27% |
| 71 | DeepSeek V3.1 | 81.88% | 87.27% | 82.39% |
| 72 | WizardLM 2 8x22b | 81.62% | 88.13% | 71.07% |
| 73 | Mistral Small 4 | 81.54% | 91.00% | 76.46% |
| 74 | GPT-4o, May 13th (temp=1) | 79.68% | 92.41% | 83.80% |
| 75 | Qwen3 235B A22B Instruct 2507 | 79.66% | 91.75% | 80.10% |
| 76 | Mistral Small 4 (Reasoning) | 79.45% | 90.58% | 82.39% |
| 77 | DeepSeek V3 (2025-03-24) | 79.33% | 89.57% | 81.99% |
| 78 | Writer: Palmyra X5 | 79.31% | 91.20% | 79.57% |
| 79 | Nemotron 3 Super | 79.06% | 86.34% | 84.56% |
| 80 | o4 Mini | 77.98% | 90.61% | 88.35% |
| 81 | Qwen 3 32B | 77.20% | 89.95% | 82.21% |
| 82 | Arcee AI: Trinity Large (Preview) | 76.70% | 86.62% | 73.33% |
| 83 | Mistral Small 3.2 24B | 76.07% | 89.48% | 78.60% |
| 84 | Mistral Small Creative | 73.96% | 90.31% | 73.27% |
| 85 | Llama 3.1 Nemotron 70B | 73.30% | 87.26% | 74.70% |
| 86 | GPT-4o, Aug. 6th (temp=1) | 72.28% | 86.72% | 82.62% |
| 87 | Qwen 3.5 9B | 71.91% | 85.35% | 86.05% |
| 88 | Qwen 2.5 72B | 71.74% | 89.18% | 75.46% |
| 89 | Z.AI GLM 4.7 Flash | 70.63% | 85.82% | 84.82% |
| 90 | Inception Mercury 2 | 70.42% | 85.26% | 83.85% |
| 91 | GPT-4o Mini (temp=1) | 70.40% | 85.78% | 79.08% |
| 92 | Hermes 3 405B | 69.94% | 89.14% | 82.86% |
| 93 | Ministral 3 14B | 69.48% | 86.20% | 72.54% |
| 94 | Gemma 3 12B | 67.87% | 85.18% | 78.41% |
| 95 | GPT-4o Mini (temp=0) | 67.00% | 84.62% | 78.29% |
| 96 | LFM2 24B | 66.33% | 71.56% | 58.77% |
| 97 | Gemma 3 27B | 64.41% | 86.63% | 77.85% |
| 98 | Nemotron 3 Nano | 62.02% | 75.81% | 77.73% |
| 99 | GPT-5.4 Nano (Reasoning, Low) | 61.68% | 82.23% | 79.48% |
| 100 | GPT-5 Nano | 61.06% | 82.74% | 82.60% |
| 101 | GPT-5.4 Nano (Reasoning) | 59.85% | 83.32% | 81.36% |
| 102 | Inception Mercury | 58.12% | 79.53% | 79.50% |
| 103 | GPT-5.4 Nano | 53.44% | 79.22% | 74.40% |
| 104 | Ministral 3 8B | 52.05% | 78.52% | 71.76% |
| 105 | Llama 3.1 8B | 51.53% | 75.45% | 63.37% |
| 106 | Arcee AI: Trinity Mini | 50.86% | 73.88% | 70.90% |
| 107 | GPT-4.1 Nano | 50.14% | 76.06% | 71.94% |
| 108 | Ministral 8B | 48.88% | 77.52% | 64.87% |
| 109 | Claude 3 Haiku | 48.58% | 64.36% | 71.19% |
| 110 | Mistral NeMO | 45.51% | 73.69% | 65.04% |
| 111 | Hermes 3 70B | 44.57% | 63.34% | 72.57% |
| 112 | Gemma 3 4B | 40.80% | 78.38% | 68.57% |
| 113 | Ministral 3B | 40.57% | 70.91% | 61.29% |
| 114 | Ministral 3 3B | 39.84% | 69.80% | 67.22% |
| 115 | Cohere Command R+ (Aug. 2024) | 39.15% | 68.40% | 69.03% |
| 116 | Rocinante 12B | 32.19% | 56.31% | 54.55% |