Purple Prose

A subcategory of Creative Writing. 91 models scored.

Model Leaderboard

All models ranked by their Purple Prose subcategory score.

| # | Model | Purple Prose | Creative Writing | Overall |
|---|-------|--------------|------------------|---------|
| 1 | Grok 4.1 Fast | 97.96% | 82.14% | 89.55% |
| 2 | o4 Mini High | 97.56% | 82.72% | 90.29% |
| 3 | o4 Mini | 97.30% | 82.04% | 88.35% |
| 4 | DeepSeek V3 (2025-03-24) | 95.33% | 82.34% | 81.99% |
| 5 | ByteDance Seed 1.6 Flash | 95.11% | 81.51% | 73.27% |
| 6 | Gemini 3.1 Pro (Preview) | 95.06% | 85.44% | 94.37% |
| 7 | Writer: Palmyra X5 | 94.98% | 83.95% | 79.57% |
| 8 | Qwen 3.5 397B A17B | 94.84% | 86.93% | 91.73% |
| 9 | Qwen 3.5 Flash | 94.71% | 83.81% | 86.38% |
| 10 | Mistral Medium 3.1 | 94.59% | 81.70% | 77.83% |
| 11 | Qwen 3.5 27B | 94.46% | 82.54% | 90.85% |
| 12 | Mistral Large | 94.33% | 82.02% | 80.15% |
| 13 | Qwen 3.5 122B | 94.27% | 83.02% | 91.53% |
| 14 | Mistral Small Creative | 94.23% | 80.29% | 73.27% |
| 15 | Hermes 3 405B | 93.61% | 80.92% | 82.86% |
| 16 | Qwen 3.5 35B | 93.58% | 83.51% | 88.00% |
| 17 | GPT-5.1 | 93.57% | 87.20% | 92.54% |
| 18 | GPT-4.1 | 93.57% | 81.24% | 88.68% |
| 19 | MoonshotAI: Kimi K2.5 | 93.06% | 81.35% | 91.04% |
| 20 | Grok 4 Fast | 92.79% | 77.03% | 86.15% |
| 21 | Ministral 3 14B | 92.74% | 79.11% | 72.54% |
| 22 | Mistral Large 3 | 92.61% | 81.21% | 85.43% |
| 23 | Rocinante 12B | 92.33% | 81.94% | 54.55% |
| 24 | GPT-5 | 92.06% | 86.87% | 91.93% |
| 25 | Mistral Large 2 | 91.94% | 81.86% | 82.41% |
| 26 | Claude Opus 4.6 (Reasoning) | 91.88% | 84.55% | 95.02% |
| 27 | Gemini 3 Pro (Preview) | 91.81% | 77.77% | 88.79% |
| 28 | DeepSeek-V2 Chat | 91.52% | 77.20% | 84.83% |
| 29 | LFM2 24B | 91.50% | 78.10% | 58.77% |
| 30 | Grok 4 | 91.44% | 77.34% | 88.12% |
| 31 | DeepSeek V3 (2024-12-26) | 91.42% | 77.88% | 83.68% |
| 32 | Z.AI GLM 4.7 | 91.38% | 78.89% | 88.69% |
| 33 | GPT-4o Mini (temp=1) | 91.21% | 74.37% | 79.08% |
| 34 | Claude Opus 4.6 | 91.07% | 83.59% | 92.35% |
| 35 | Ministral 3 8B | 90.74% | 77.26% | 71.76% |
| 36 | DeepSeek V3.2 | 90.65% | 79.95% | 82.25% |
| 37 | Claude Opus 4 | 90.42% | 83.79% | 87.69% |
| 38 | Ministral 3B | 90.17% | 75.49% | 61.29% |
| 39 | Claude Sonnet 4.6 | 90.14% | 83.31% | 91.15% |
| 40 | Gemini 2.5 Pro | 90.00% | 81.03% | 88.53% |
| 41 | Mistral NeMO | 89.84% | 76.72% | 65.04% |
| 42 | Aion 2.0 | 89.80% | 80.24% | 89.21% |
| 43 | Z.AI GLM 4.7 Flash | 89.60% | 77.36% | 84.82% |
| 44 | Ministral 3 3B | 89.60% | 75.45% | 67.22% |
| 45 | Claude Sonnet 4.5 | 89.39% | 84.19% | 88.03% |
| 46 | Ministral 8B | 89.29% | 76.87% | 64.87% |
| 47 | Claude Sonnet 4.6 (Reasoning) | 89.19% | 83.09% | 93.66% |
| 48 | GPT-4o Mini (temp=0) | 88.83% | 73.10% | 78.29% |
| 49 | Claude 3 Haiku | 88.64% | 74.53% | 71.19% |
| 50 | Z.AI GLM 5 | 88.49% | 83.63% | 91.23% |
| 51 | Gemma 3 27B | 88.37% | 78.79% | 77.85% |
| 52 | GPT-4o, Aug. 6th (temp=1) | 87.96% | 75.50% | 82.62% |
| 53 | GPT-4.1 Mini | 87.91% | 74.52% | 83.20% |
| 54 | GPT-5.2 | 87.55% | 80.36% | 90.26% |
| 55 | Qwen 3.5 Plus (2026-02-15) | 87.42% | 77.07% | 85.96% |
| 56 | Cohere Command R+ (Aug. 2024) | 87.40% | 77.70% | 69.03% |
| 57 | GPT-4o, May 13th (temp=1) | 87.39% | 75.88% | 83.80% |
| 58 | Hermes 3 70B | 87.04% | 77.41% | 72.57% |
| 59 | ByteDance Seed 1.6 | 87.01% | 78.43% | 90.70% |
| 60 | Gemma 3 12B | 86.76% | 75.38% | 78.41% |
| 61 | DeepSeek V3.1 | 86.19% | 77.45% | 82.39% |
| 62 | GPT-4o, May 13th (temp=0) | 86.01% | 74.89% | 85.36% |
| 63 | Arcee AI: Trinity Mini | 85.69% | 74.01% | 70.90% |
| 64 | WizardLM 2 8x22b | 85.19% | 79.06% | 71.07% |
| 65 | GPT-4o, Aug. 6th (temp=0) | 84.90% | 73.65% | 82.45% |
| 66 | Minimax M2.5 | 84.71% | 81.21% | 88.71% |
| 67 | Mistral Small 3.2 24B | 84.70% | 71.87% | 78.60% |
| 68 | Qwen 2.5 72B | 84.37% | 75.16% | 75.46% |
| 69 | Z.AI GLM 4.6 | 84.37% | 78.86% | 89.11% |
| 70 | Gemini 3 Flash (Preview) | 84.15% | 75.04% | 85.35% |
| 71 | Gemini 2.5 Flash | 83.37% | 77.57% | 80.60% |
| 72 | Claude Opus 4.5 | 83.27% | 81.71% | 89.69% |
| 73 | GPT-5 Mini | 83.16% | 80.48% | 92.62% |
| 74 | Gemma 3 4B | 83.05% | 72.10% | 68.57% |
| 75 | GPT-4.1 Nano | 82.65% | 71.81% | 71.94% |
| 76 | Claude 3.5 Sonnet | 82.20% | 78.69% | 84.24% |
| 77 | Claude 3.7 Sonnet | 81.90% | 76.31% | 83.39% |
| 78 | Claude Haiku 4.5 | 81.79% | 78.96% | 85.14% |
| 79 | Z.AI GLM 4.5 | 81.73% | 76.56% | 86.27% |
| 80 | Gemini 2.5 Flash (Reasoning) | 81.71% | 76.30% | 86.51% |
| 81 | Arcee AI: Trinity Large (Preview) | 81.59% | 75.26% | 73.33% |
| 82 | Gemini 2.5 Flash Lite | 79.16% | 75.05% | 81.08% |
| 83 | Gemini 3 Flash (Preview, Reasoning) | 78.95% | 75.87% | 90.50% |
| 84 | Claude Sonnet 4 | 78.57% | 79.21% | 88.72% |
| 85 | Claude 3.5 Haiku | 77.31% | 75.28% | 83.73% |
| 86 | Gemini 2.5 Flash Lite (Reasoning) | 72.59% | 71.64% | 85.75% |
| 87 | Llama 3.1 70B | 71.74% | 72.78% | 78.40% |
| 88 | Llama 3.1 Nemotron 70B | 69.97% | 71.71% | 74.70% |
| 89 | Llama 3.1 8B | 69.60% | 76.54% | 63.37% |
| 90 | GPT-5 Nano | 66.77% | 67.04% | 82.60% |
| 91 | Stealth: Aurora Alpha | 65.58% | 67.54% | 83.79% |