Dialogue

A subcategory of Creative Writing. 118 models scored.

Model Leaderboard

All models ranked by their Dialogue subcategory score.

| Rank | Model | Dialogue | Creative Writing | Overall |
|------|-------|----------|------------------|---------|
| 1 | Claude Sonnet 4.6 | 95.20% | 83.31% | 91.15% |
| 2 | Claude Sonnet 4.6 (Reasoning) | 94.92% | 83.09% | 93.66% |
| 3 | GPT-5.4 | 94.79% | 90.94% | 84.32% |
| 4 | GPT-5.4 (Reasoning) | 93.69% | 91.17% | 93.24% |
| 5 | Claude Opus 4.6 (Reasoning) | 93.56% | 84.55% | 95.02% |
| 6 | Z.AI GLM 5 Turbo | 93.23% | 84.66% | 94.27% |
| 7 | Qwen 3.5 397B A17B | 92.54% | 86.93% | 91.73% |
| 8 | Claude Opus 4.6 | 92.37% | 83.59% | 92.35% |
| 9 | GPT-5.4 (Reasoning, Low) | 92.18% | 90.51% | 91.41% |
| 10 | Qwen 3.5 9B | 91.95% | 84.35% | 86.05% |
| 11 | GPT-5 | 91.30% | 86.87% | 91.93% |
| 12 | GPT-5.4 Mini | 90.45% | 88.10% | 82.43% |
| 13 | GPT-5.4 Mini (Reasoning) | 90.04% | 88.66% | 90.65% |
| 14 | GPT-5.4 Mini (Reasoning, Low) | 89.66% | 87.72% | 85.75% |
| 15 | Claude Sonnet 4.5 | 89.63% | 84.19% | 88.03% |
| 16 | Qwen 3.5 35B | 89.61% | 83.51% | 88.00% |
| 17 | Claude Opus 4 | 89.08% | 83.79% | 87.69% |
| 18 | MiniMax M2.5 | 88.93% | 81.21% | 88.71% |
| 19 | MiniMax M2.7 | 88.91% | 81.70% | 89.10% |
| 20 | Claude Opus 4.5 | 88.91% | 81.71% | 89.69% |
| 21 | Qwen 3.5 27B | 88.67% | 82.54% | 90.85% |
| 22 | Z.AI GLM 5 | 88.46% | 83.63% | 91.23% |
| 23 | GPT-5.1 | 87.44% | 87.20% | 92.54% |
| 24 | Qwen3 235B A22B Instruct 2507 | 87.23% | 84.81% | 80.10% |
| 25 | Gemini 3.1 Pro (Preview) | 87.20% | 85.44% | 94.37% |
| 26 | Qwen 3.5 Flash | 87.13% | 83.81% | 86.38% |
| 27 | Qwen 3.5 122B | 86.76% | 83.02% | 91.53% |
| 28 | MoonshotAI: Kimi K2.5 | 86.21% | 81.35% | 91.04% |
| 29 | ByteDance Seed 2.0 Lite | 85.96% | 82.35% | 84.80% |
| 30 | GPT-5 Mini | 85.94% | 80.48% | 92.62% |
| 31 | ByteDance Seed 1.6 | 85.62% | 78.43% | 90.70% |
| 32 | Mistral Small 4 (Reasoning) | 85.39% | 81.67% | 82.39% |
| 33 | GPT-5.4 Nano (Reasoning) | 85.05% | 80.97% | 81.36% |
| 34 | Writer: Palmyra X5 | 84.42% | 83.95% | 79.57% |
| 35 | DeepSeek V3 (2025-03-24) | 83.31% | 82.34% | 81.99% |
| 36 | GPT-5.4 Nano | 83.18% | 80.50% | 74.40% |
| 37 | GPT-5.4 Nano (Reasoning, Low) | 83.10% | 80.93% | 79.48% |
| 38 | Mistral Large 2 | 83.10% | 81.86% | 82.41% |
| 39 | Claude Sonnet 4 | 82.84% | 79.21% | 88.72% |
| 40 | Mistral Small 4 | 82.49% | 81.12% | 76.46% |
| 41 | Claude Haiku 4.5 | 82.25% | 78.96% | 85.14% |
| 42 | Mistral Large | 81.53% | 82.02% | 80.15% |
| 43 | Mistral Medium 3.1 | 81.51% | 81.70% | 77.83% |
| 44 | ByteDance Seed 2.0 Mini | 81.36% | 80.11% | 86.91% |
| 45 | Grok 4.20 (Beta, Reasoning) | 80.80% | 84.50% | 91.49% |
| 46 | GPT-5.2 | 80.71% | 80.36% | 90.26% |
| 47 | Mistral Large 3 | 80.22% | 81.21% | 85.43% |
| 48 | Claude 3.5 Sonnet | 80.13% | 78.69% | 84.24% |
| 49 | ByteDance Seed 1.6 Flash | 79.96% | 81.51% | 73.27% |
| 50 | Grok 4.1 Fast | 78.96% | 82.14% | 89.55% |
| 51 | o4 Mini | 78.87% | 82.04% | 88.35% |
| 52 | Qwen 3 32B | 78.83% | 81.30% | 82.21% |
| 53 | Mistral Small Creative | 78.30% | 80.29% | 73.27% |
| 54 | o4 Mini High | 78.26% | 82.72% | 90.29% |
| 55 | Claude 3.5 Haiku | 77.75% | 75.28% | 83.73% |
| 56 | WizardLM 2 8x22b | 77.45% | 79.06% | 71.07% |
| 57 | Gemini 2.5 Pro | 77.40% | 81.03% | 88.53% |
| 58 | Stealth: Healer Alpha | 77.16% | 78.28% | 85.93% |
| 59 | GPT-4.1 | 76.96% | 81.24% | 88.68% |
| 60 | Aion 2.0 | 76.51% | 80.24% | 89.21% |
| 61 | Stealth: Hunter Alpha | 76.41% | 79.18% | 87.34% |
| 62 | Ministral 8B | 76.29% | 76.87% | 64.87% |
| 63 | Gemini 3.1 Flash Lite (Preview) | 75.91% | 75.78% | 85.87% |
| 64 | Ministral 3B | 75.29% | 75.49% | 61.29% |
| 65 | Claude 3.7 Sonnet | 75.00% | 76.31% | 83.39% |
| 66 | Ministral 3 14B | 74.59% | 79.11% | 72.54% |
| 67 | Ministral 3 3B | 74.49% | 75.45% | 67.22% |
| 68 | Z.AI GLM 4.6 | 74.32% | 78.86% | 89.11% |
| 69 | Grok 4.20 (Beta) | 74.11% | 82.80% | 83.85% |
| 70 | DeepSeek V3.2 | 73.33% | 79.95% | 82.25% |
| 71 | Inception Mercury | 73.29% | 69.99% | 79.50% |
| 72 | Z.AI GLM 4.5 | 73.22% | 76.56% | 86.27% |
| 73 | Llama 3.1 8B | 73.18% | 76.54% | 63.37% |
| 74 | Ministral 3 8B | 72.81% | 77.26% | 71.76% |
| 75 | Rocinante 12B | 72.56% | 81.94% | 54.55% |
| 76 | LFM2 24B | 72.22% | 78.10% | 58.77% |
| 77 | Z.AI GLM 4.7 | 71.01% | 78.89% | 88.69% |
| 78 | Llama 3.1 70B | 70.76% | 72.78% | 78.40% |
| 79 | Gemini 2.5 Flash | 70.64% | 77.57% | 80.60% |
| 80 | DeepSeek V3 (2024-12-26) | 69.79% | 77.88% | 83.68% |
| 81 | Z.AI GLM 4.7 Flash | 69.49% | 77.36% | 84.82% |
| 82 | Gemini 3 Flash (Preview, Reasoning) | 69.48% | 75.87% | 90.50% |
| 83 | Grok 4 Fast | 69.13% | 77.03% | 86.15% |
| 84 | Mistral Small 3.2 24B | 68.93% | 71.87% | 78.60% |
| 85 | Hermes 3 405B | 68.75% | 80.92% | 82.86% |
| 86 | Grok 4 | 68.65% | 77.34% | 88.12% |
| 87 | DeepSeek V3.1 | 68.61% | 77.45% | 82.39% |
| 88 | Nemotron 3 Super | 67.38% | 69.75% | 84.56% |
| 89 | Mistral NeMO | 67.00% | 76.72% | 65.04% |
| 90 | Gemini 3 Pro (Preview) | 67.00% | 77.77% | 88.79% |
| 91 | DeepSeek-V2 Chat | 66.95% | 77.20% | 84.83% |
| 92 | Gemini 2.5 Flash Lite (Reasoning) | 65.73% | 71.64% | 85.75% |
| 93 | Gemini 2.5 Flash (Reasoning) | 65.33% | 76.30% | 86.51% |
| 94 | Arcee AI: Trinity Large (Preview) | 65.33% | 75.26% | 73.33% |
| 95 | GPT-4o, May 13th (temp=0) | 64.81% | 74.89% | 85.36% |
| 96 | Gemini 3 Flash (Preview) | 64.78% | 75.04% | 85.35% |
| 97 | Gemma 3 27B | 64.44% | 78.79% | 77.85% |
| 98 | Qwen 2.5 72B | 64.34% | 75.16% | 75.46% |
| 99 | Cohere Command R+ (Aug. 2024) | 64.24% | 77.70% | 69.03% |
| 100 | Arcee AI: Trinity Mini | 63.42% | 74.01% | 70.90% |
| 101 | GPT-4o, Aug. 6th (temp=0) | 63.20% | 73.65% | 82.45% |
| 102 | Gemini 2.5 Flash Lite | 62.67% | 75.05% | 81.08% |
| 103 | GPT-5 Nano | 62.32% | 67.04% | 82.60% |
| 104 | Llama 3.1 Nemotron 70B | 62.12% | 71.71% | 74.70% |
| 105 | Qwen 3.5 Plus (2026-02-15) | 61.06% | 77.07% | 85.96% |
| 106 | Nemotron 3 Nano | 60.70% | 65.87% | 77.73% |
| 107 | Claude 3 Haiku | 60.58% | 74.53% | 71.19% |
| 108 | Inception Mercury 2 | 59.94% | 68.31% | 83.85% |
| 109 | GPT-4o, Aug. 6th (temp=1) | 59.42% | 75.50% | 82.62% |
| 110 | Hermes 3 70B | 59.25% | 77.41% | 72.57% |
| 111 | Stealth: Aurora Alpha | 59.16% | 67.54% | 83.79% |
| 112 | GPT-4o, May 13th (temp=1) | 58.51% | 75.88% | 83.80% |
| 113 | GPT-4.1 Mini | 57.72% | 74.52% | 83.20% |
| 114 | GPT-4o Mini (temp=0) | 56.91% | 73.10% | 78.29% |
| 115 | GPT-4.1 Nano | 55.61% | 71.81% | 71.94% |
| 116 | Gemma 3 12B | 54.87% | 75.38% | 78.41% |
| 117 | GPT-4o Mini (temp=1) | 53.50% | 74.37% | 79.08% |
| 118 | Gemma 3 4B | 52.95% | 72.10% | 68.57% |