Prose Variety

Subcategory of Creative Writing. 118 models scored.

Model Leaderboard

All models, ranked by their Prose Variety subcategory score.

| # | Model | Prose Variety | Creative Writing | Overall |
|---|-------|---------------|------------------|---------|
| 1 | Writer: Palmyra X5 | 83.01% | 83.95% | 79.57% |
| 2 | Qwen3 235B A22B Instruct 2507 | 81.37% | 84.81% | 80.10% |
| 3 | GPT-5.4 | 81.26% | 90.94% | 84.32% |
| 4 | GPT-5.4 (Reasoning, Low) | 80.88% | 90.51% | 91.41% |
| 5 | GPT-5.4 (Reasoning) | 79.74% | 91.17% | 93.24% |
| 6 | Rocinante 12B | 79.52% | 81.94% | 54.55% |
| 7 | Claude Sonnet 4.5 | 78.05% | 84.19% | 88.03% |
| 8 | Llama 3.1 8B | 77.72% | 76.54% | 63.37% |
| 9 | Claude Opus 4 | 77.58% | 83.79% | 87.69% |
| 10 | Z.AI GLM 5 | 77.33% | 83.63% | 91.23% |
| 11 | Mistral Small 4 (Reasoning) | 77.22% | 81.67% | 82.39% |
| 12 | Claude Sonnet 4 | 76.88% | 79.21% | 88.72% |
| 13 | Claude Opus 4.5 | 76.11% | 81.71% | 89.69% |
| 14 | GPT-5.4 Mini | 75.74% | 88.10% | 82.43% |
| 15 | Mistral Small Creative | 75.44% | 80.29% | 73.27% |
| 16 | Z.AI GLM 5 Turbo | 75.38% | 84.66% | 94.27% |
| 17 | Mistral Small 4 | 75.37% | 81.12% | 76.46% |
| 18 | Claude Haiku 4.5 | 75.25% | 78.96% | 85.14% |
| 19 | Claude Opus 4.6 (Reasoning) | 75.04% | 84.55% | 95.02% |
| 20 | GPT-5.4 Mini (Reasoning, Low) | 74.97% | 87.72% | 85.75% |
| 21 | GPT-5.4 Mini (Reasoning) | 74.97% | 88.66% | 90.65% |
| 22 | Ministral 3 14B | 74.94% | 79.11% | 72.54% |
| 23 | Mistral Medium 3.1 | 74.30% | 81.70% | 77.83% |
| 24 | Claude Opus 4.6 | 74.14% | 83.59% | 92.35% |
| 25 | DeepSeek V3 (2025-03-24) | 73.86% | 82.34% | 81.99% |
| 26 | GPT-5.1 | 73.76% | 87.20% | 92.54% |
| 27 | Hermes 3 70B | 73.70% | 77.41% | 72.57% |
| 28 | MiniMax M2.5 | 73.63% | 81.21% | 88.71% |
| 29 | Mistral Large | 73.53% | 82.02% | 80.15% |
| 30 | Grok 4.20 (Beta) | 73.10% | 82.80% | 83.85% |
| 31 | GPT-4o, Aug. 6th (temp=1) | 73.05% | 75.50% | 82.62% |
| 32 | Mistral Large 2 | 72.75% | 81.86% | 82.41% |
| 33 | Claude 3.5 Haiku | 72.74% | 75.28% | 83.73% |
| 34 | Claude Sonnet 4.6 | 72.52% | 83.31% | 91.15% |
| 35 | Mistral Large 3 | 72.38% | 81.21% | 85.43% |
| 36 | MiniMax M2.7 | 72.16% | 81.70% | 89.10% |
| 37 | Claude Sonnet 4.6 (Reasoning) | 71.90% | 83.09% | 93.66% |
| 38 | GPT-5.4 Nano | 71.90% | 80.50% | 74.40% |
| 39 | Grok 4.1 Fast | 71.82% | 82.14% | 89.55% |
| 40 | Qwen 3 32B | 71.76% | 81.30% | 82.21% |
| 41 | Gemma 3 27B | 71.72% | 78.79% | 77.85% |
| 42 | GPT-5.4 Nano (Reasoning, Low) | 71.57% | 80.93% | 79.48% |
| 43 | Gemma 3 12B | 71.52% | 75.38% | 78.41% |
| 44 | GPT-4o Mini (temp=1) | 71.50% | 74.37% | 79.08% |
| 45 | Cohere Command R+ (Aug. 2024) | 71.05% | 77.70% | 69.03% |
| 46 | Llama 3.1 Nemotron 70B | 70.84% | 71.71% | 74.70% |
| 47 | MoonshotAI: Kimi K2.5 | 70.84% | 81.35% | 91.04% |
| 48 | Claude 3.7 Sonnet | 70.66% | 76.31% | 83.39% |
| 49 | Ministral 3 8B | 70.61% | 77.26% | 71.76% |
| 50 | ByteDance Seed 1.6 Flash | 70.46% | 81.51% | 73.27% |
| 51 | Grok 4.20 (Beta, Reasoning) | 70.03% | 84.50% | 91.49% |
| 52 | Hermes 3 405B | 69.94% | 80.92% | 82.86% |
| 53 | LFM2 24B | 69.90% | 78.10% | 58.77% |
| 54 | GPT-4o, May 13th (temp=1) | 69.83% | 75.88% | 83.80% |
| 55 | GPT-4.1 | 69.77% | 81.24% | 88.68% |
| 56 | Ministral 8B | 69.62% | 76.87% | 64.87% |
| 57 | GPT-5.4 Nano (Reasoning) | 69.53% | 80.97% | 81.36% |
| 58 | Claude 3 Haiku | 69.39% | 74.53% | 71.19% |
| 59 | Z.AI GLM 4.5 | 68.98% | 76.56% | 86.27% |
| 60 | Gemma 3 4B | 68.92% | 72.10% | 68.57% |
| 61 | Stealth: Hunter Alpha | 68.88% | 79.18% | 87.34% |
| 62 | Qwen 3.5 397B A17B | 68.63% | 86.93% | 91.73% |
| 63 | Arcee AI: Trinity Large (Preview) | 68.56% | 75.26% | 73.33% |
| 64 | Grok 4 Fast | 68.43% | 77.03% | 86.15% |
| 65 | Claude 3.5 Sonnet | 68.25% | 78.69% | 84.24% |
| 66 | GPT-4.1 Mini | 68.20% | 74.52% | 83.20% |
| 67 | GPT-5.2 | 67.99% | 80.36% | 90.26% |
| 68 | Qwen 3.5 Plus (2026-02-15) | 67.63% | 77.07% | 85.96% |
| 69 | Grok 4 | 67.50% | 77.34% | 88.12% |
| 70 | Ministral 3B | 67.17% | 75.49% | 61.29% |
| 71 | GPT-4.1 Nano | 67.03% | 71.81% | 71.94% |
| 72 | DeepSeek-V2 Chat | 66.87% | 77.20% | 84.83% |
| 73 | Gemini 2.5 Flash (Reasoning) | 66.66% | 76.30% | 86.51% |
| 74 | Gemini 2.5 Flash Lite | 66.56% | 75.05% | 81.08% |
| 75 | Stealth: Healer Alpha | 66.33% | 78.28% | 85.93% |
| 76 | DeepSeek V3 (2024-12-26) | 66.31% | 77.88% | 83.68% |
| 77 | DeepSeek V3.2 | 66.11% | 79.95% | 82.25% |
| 78 | Gemini 2.5 Flash Lite (Reasoning) | 65.69% | 71.64% | 85.75% |
| 79 | Llama 3.1 70B | 65.53% | 72.78% | 78.40% |
| 80 | Gemini 2.5 Flash | 65.37% | 77.57% | 80.60% |
| 81 | WizardLM 2 8x22b | 65.28% | 79.06% | 71.07% |
| 82 | Ministral 3 3B | 65.17% | 75.45% | 67.22% |
| 83 | Aion 2.0 | 64.90% | 80.24% | 89.21% |
| 84 | o4 Mini | 63.95% | 82.04% | 88.35% |
| 85 | Z.AI GLM 4.6 | 63.85% | 78.86% | 89.11% |
| 86 | o4 Mini High | 63.78% | 82.72% | 90.29% |
| 87 | Gemini 3 Pro (Preview) | 63.57% | 77.77% | 88.79% |
| 88 | Gemini 3.1 Pro (Preview) | 63.42% | 85.44% | 94.37% |
| 89 | DeepSeek V3.1 | 62.92% | 77.45% | 82.39% |
| 90 | Gemini 2.5 Pro | 62.91% | 81.03% | 88.53% |
| 91 | GPT-5 | 62.58% | 86.87% | 91.93% |
| 92 | Gemini 3 Flash (Preview, Reasoning) | 61.94% | 75.87% | 90.50% |
| 93 | Z.AI GLM 4.7 Flash | 61.69% | 77.36% | 84.82% |
| 94 | Z.AI GLM 4.7 | 61.49% | 78.89% | 88.69% |
| 95 | Gemini 3 Flash (Preview) | 61.35% | 75.04% | 85.35% |
| 96 | GPT-5 Mini | 60.91% | 80.48% | 92.62% |
| 97 | GPT-5 Nano | 60.85% | 67.04% | 82.60% |
| 98 | Qwen 2.5 72B | 60.17% | 75.16% | 75.46% |
| 99 | GPT-4o, May 13th (temp=0) | 60.13% | 74.89% | 85.36% |
| 100 | Mistral NeMO | 59.98% | 76.72% | 65.04% |
| 101 | Gemini 3.1 Flash Lite (Preview) | 59.92% | 75.78% | 85.87% |
| 102 | GPT-4o Mini (temp=0) | 59.43% | 73.10% | 78.29% |
| 103 | ByteDance Seed 2.0 Mini | 59.35% | 80.11% | 86.91% |
| 104 | GPT-4o, Aug. 6th (temp=0) | 58.95% | 73.65% | 82.45% |
| 105 | Qwen 3.5 Flash | 58.83% | 83.81% | 86.38% |
| 106 | ByteDance Seed 2.0 Lite | 58.28% | 82.35% | 84.80% |
| 107 | Qwen 3.5 35B | 58.12% | 83.51% | 88.00% |
| 108 | Nemotron 3 Super | 57.81% | 69.75% | 84.56% |
| 109 | ByteDance Seed 1.6 | 57.70% | 78.43% | 90.70% |
| 110 | Arcee AI: Trinity Mini | 56.69% | 74.01% | 70.90% |
| 111 | Qwen 3.5 122B | 56.46% | 83.02% | 91.53% |
| 112 | Qwen 3.5 27B | 54.98% | 82.54% | 90.85% |
| 113 | Qwen 3.5 9B | 53.95% | 84.35% | 86.05% |
| 114 | Nemotron 3 Nano | 52.17% | 65.87% | 77.73% |
| 115 | Inception Mercury 2 | 50.37% | 68.31% | 83.85% |
| 116 | Stealth: Aurora Alpha | 49.43% | 67.54% | 83.79% |
| 117 | Mistral Small 3.2 24B | 47.89% | 71.87% | 78.60% |
| 118 | Inception Mercury | 40.36% | 69.99% | 79.50% |