Clichés

Subcategory of Creative Writing. 118 models scored.

Model Leaderboard

All 118 models, ranked by their Clichés subcategory score. The Creative Writing and Overall scores are shown for comparison.

| Rank | Model | Clichés | Creative Writing | Overall |
|---:|---|---:|---:|---:|
| 1 | o4 Mini High | 90.38% | 82.72% | 90.29% |
| 2 | GPT-5 | 89.59% | 86.87% | 91.93% |
| 3 | Gemini 3.1 Pro (Preview) | 88.95% | 85.44% | 94.37% |
| 4 | Grok 4.20 (Beta, Reasoning) | 87.27% | 84.50% | 91.49% |
| 5 | Gemini 2.5 Pro | 87.21% | 81.03% | 88.53% |
| 6 | o4 Mini | 86.60% | 82.04% | 88.35% |
| 7 | GPT-5.4 (Reasoning) | 86.52% | 91.17% | 93.24% |
| 8 | GPT-5.1 | 86.39% | 87.20% | 92.54% |
| 9 | Grok 4.20 (Beta) | 85.46% | 82.80% | 83.85% |
| 10 | GPT-5.4 (Reasoning, Low) | 84.51% | 90.51% | 91.41% |
| 11 | GPT-5.4 | 84.30% | 90.94% | 84.32% |
| 12 | DeepSeek V3.1 | 83.91% | 77.45% | 82.39% |
| 13 | Z.AI GLM 4.6 | 83.84% | 78.86% | 89.11% |
| 14 | Gemma 3 27B | 83.82% | 78.79% | 77.85% |
| 15 | GPT-4.1 | 83.80% | 81.24% | 88.68% |
| 16 | GPT-5.4 Mini (Reasoning) | 83.22% | 88.66% | 90.65% |
| 17 | Qwen 3.5 397B A17B | 83.12% | 86.93% | 91.73% |
| 18 | Gemma 3 12B | 82.85% | 75.38% | 78.41% |
| 19 | Rocinante 12B | 82.68% | 81.94% | 54.55% |
| 20 | Hermes 3 405B | 82.60% | 80.92% | 82.86% |
| 21 | Qwen 3.5 Flash | 82.05% | 83.81% | 86.38% |
| 22 | Qwen3 235B A22B Instruct 2507 | 82.05% | 84.81% | 80.10% |
| 23 | ByteDance Seed 2.0 Mini | 81.90% | 80.11% | 86.91% |
| 24 | Grok 4.1 Fast | 81.78% | 82.14% | 89.55% |
| 25 | Qwen 3 32B | 81.72% | 81.30% | 82.21% |
| 26 | Z.AI GLM 4.7 | 81.62% | 78.89% | 88.69% |
| 27 | Aion 2.0 | 81.53% | 80.24% | 89.21% |
| 28 | Mistral Large 2 | 81.36% | 81.86% | 82.41% |
| 29 | DeepSeek V3.2 | 81.32% | 79.95% | 82.25% |
| 30 | Gemini 3 Flash (Preview, Reasoning) | 81.18% | 75.87% | 90.50% |
| 31 | Qwen 3.5 35B | 81.11% | 83.51% | 88.00% |
| 32 | Z.AI GLM 4.7 Flash | 81.09% | 77.36% | 84.82% |
| 33 | Mistral Large 3 | 80.74% | 81.21% | 85.43% |
| 34 | GPT-4.1 Nano | 80.71% | 71.81% | 71.94% |
| 35 | Gemini 3 Pro (Preview) | 80.50% | 77.77% | 88.79% |
| 36 | DeepSeek V3 (2025-03-24) | 80.49% | 82.34% | 81.99% |
| 37 | GPT-4o, May 13th (temp=1) | 80.44% | 75.88% | 83.80% |
| 38 | GPT-4.1 Mini | 80.38% | 74.52% | 83.20% |
| 39 | Gemini 3 Flash (Preview) | 80.32% | 75.04% | 85.35% |
| 40 | Qwen 3.5 122B | 80.31% | 83.02% | 91.53% |
| 41 | GPT-4o, Aug. 6th (temp=1) | 80.25% | 75.50% | 82.62% |
| 42 | Gemma 3 4B | 80.17% | 72.10% | 68.57% |
| 43 | Mistral Large | 80.04% | 82.02% | 80.15% |
| 44 | Mistral Medium 3.1 | 79.96% | 81.70% | 77.83% |
| 45 | GPT-5.4 Mini | 79.92% | 88.10% | 82.43% |
| 46 | Qwen 3.5 9B | 79.82% | 84.35% | 86.05% |
| 47 | DeepSeek V3 (2024-12-26) | 79.81% | 77.88% | 83.68% |
| 48 | Arcee AI: Trinity Mini | 79.76% | 74.01% | 70.90% |
| 49 | Gemini 3.1 Flash Lite (Preview) | 79.66% | 75.78% | 85.87% |
| 50 | WizardLM 2 8x22b | 79.40% | 79.06% | 71.07% |
| 51 | Llama 3.1 8B | 79.32% | 76.54% | 63.37% |
| 52 | DeepSeek-V2 Chat | 79.10% | 77.20% | 84.83% |
| 53 | Z.AI GLM 5 | 78.86% | 83.63% | 91.23% |
| 54 | GPT-5.4 Mini (Reasoning, Low) | 78.74% | 87.72% | 85.75% |
| 55 | Ministral 3 14B | 78.70% | 79.11% | 72.54% |
| 56 | GPT-5 Mini | 78.48% | 80.48% | 92.62% |
| 57 | Qwen 3.5 27B | 78.44% | 82.54% | 90.85% |
| 58 | Hermes 3 70B | 78.42% | 77.41% | 72.57% |
| 59 | Writer: Palmyra X5 | 78.38% | 83.95% | 79.57% |
| 60 | Z.AI GLM 5 Turbo | 78.37% | 84.66% | 94.27% |
| 61 | Claude Opus 4 | 78.30% | 83.79% | 87.69% |
| 62 | GPT-4o Mini (temp=1) | 78.20% | 74.37% | 79.08% |
| 63 | Gemini 2.5 Flash | 78.05% | 77.57% | 80.60% |
| 64 | Cohere Command R+ (Aug. 2024) | 77.80% | 77.70% | 69.03% |
| 65 | ByteDance Seed 1.6 | 77.64% | 78.43% | 90.70% |
| 66 | Grok 4 | 77.48% | 77.34% | 88.12% |
| 67 | Gemini 2.5 Flash (Reasoning) | 77.43% | 76.30% | 86.51% |
| 68 | Claude 3.5 Sonnet | 77.15% | 78.69% | 84.24% |
| 69 | Claude Sonnet 4.6 | 76.43% | 83.31% | 91.15% |
| 70 | Claude Sonnet 4.5 | 76.41% | 84.19% | 88.03% |
| 71 | Gemini 2.5 Flash Lite | 76.25% | 75.05% | 81.08% |
| 72 | Ministral 3 8B | 76.24% | 77.26% | 71.76% |
| 73 | Mistral Small 4 | 76.23% | 81.12% | 76.46% |
| 74 | ByteDance Seed 2.0 Lite | 76.22% | 82.35% | 84.80% |
| 75 | Mistral Small Creative | 76.20% | 80.29% | 73.27% |
| 76 | Mistral Small 4 (Reasoning) | 75.97% | 81.67% | 82.39% |
| 77 | Mistral NeMO | 75.80% | 76.72% | 65.04% |
| 78 | Claude Opus 4.6 (Reasoning) | 75.75% | 84.55% | 95.02% |
| 79 | ByteDance Seed 1.6 Flash | 75.52% | 81.51% | 73.27% |
| 80 | MiniMax M2.5 | 75.41% | 81.21% | 88.71% |
| 81 | Claude Sonnet 4.6 (Reasoning) | 75.13% | 83.09% | 93.66% |
| 82 | Stealth: Hunter Alpha | 74.96% | 79.18% | 87.34% |
| 83 | LFM2 24B | 74.86% | 78.10% | 58.77% |
| 84 | Grok 4 Fast | 74.82% | 77.03% | 86.15% |
| 85 | Claude 3.5 Haiku | 74.78% | 75.28% | 83.73% |
| 86 | MiniMax M2.7 | 74.72% | 81.70% | 89.10% |
| 87 | Stealth: Healer Alpha | 74.72% | 78.28% | 85.93% |
| 88 | Claude Opus 4.5 | 73.89% | 81.71% | 89.69% |
| 89 | Arcee AI: Trinity Large (Preview) | 73.85% | 75.26% | 73.33% |
| 90 | Ministral 8B | 73.81% | 76.87% | 64.87% |
| 91 | Claude Sonnet 4 | 73.80% | 79.21% | 88.72% |
| 92 | Qwen 2.5 72B | 72.84% | 75.16% | 75.46% |
| 93 | GPT-5.4 Nano (Reasoning) | 72.62% | 80.97% | 81.36% |
| 94 | Claude Opus 4.6 | 72.59% | 83.59% | 92.35% |
| 95 | Z.AI GLM 4.5 | 72.34% | 76.56% | 86.27% |
| 96 | Claude Haiku 4.5 | 71.93% | 78.96% | 85.14% |
| 97 | Qwen 3.5 Plus (2026-02-15) | 71.92% | 77.07% | 85.96% |
| 98 | Llama 3.1 Nemotron 70B | 71.70% | 71.71% | 74.70% |
| 99 | Llama 3.1 70B | 70.83% | 72.78% | 78.40% |
| 100 | Ministral 3B | 70.80% | 75.49% | 61.29% |
| 101 | Ministral 3 3B | 70.63% | 75.45% | 67.22% |
| 102 | GPT-4o Mini (temp=0) | 70.56% | 73.10% | 78.29% |
| 103 | GPT-5.4 Nano (Reasoning, Low) | 70.47% | 80.93% | 79.48% |
| 104 | GPT-4o, May 13th (temp=0) | 69.79% | 74.89% | 85.36% |
| 105 | GPT-5.4 Nano | 69.68% | 80.50% | 74.40% |
| 106 | GPT-4o, Aug. 6th (temp=0) | 69.33% | 73.65% | 82.45% |
| 107 | Claude 3.7 Sonnet | 69.12% | 76.31% | 83.39% |
| 108 | Inception Mercury | 68.89% | 69.99% | 79.50% |
| 109 | Claude 3 Haiku | 68.71% | 74.53% | 71.19% |
| 110 | Mistral Small 3.2 24B | 68.55% | 71.87% | 78.60% |
| 111 | Stealth: Aurora Alpha | 68.10% | 67.54% | 83.79% |
| 112 | Inception Mercury 2 | 67.53% | 68.31% | 83.85% |
| 113 | MoonshotAI: Kimi K2.5 | 67.36% | 81.35% | 91.04% |
| 114 | Gemini 2.5 Flash Lite (Reasoning) | 66.75% | 71.64% | 85.75% |
| 115 | GPT-5.2 | 66.36% | 80.36% | 90.26% |
| 116 | Nemotron 3 Super | 64.41% | 69.75% | 84.56% |
| 117 | Nemotron 3 Nano | 64.30% | 65.87% | 77.73% |
| 118 | GPT-5 Nano | 50.07% | 67.04% | 82.60% |