Rule Following

12 scenarios across 1 subcategory. 91 models scored.

Subcategories

| Subcategory | Avg. Score | Best Model | Best Score |
|---|---|---|---|
| Constraint Adherence | 60.47% | Gemini 3.1 Pro (Preview) | 91.21% |

Model Leaderboard

All models ranked by their Rule Following category score. Because Constraint Adherence is the category's only subcategory, the Rule Following and Constraint Adherence scores are identical for every model.

| # | Model | Rule Following | Constraint Adherence | Overall |
|---|---|---|---|---|
| 1 | Gemini 3.1 Pro (Preview) | 91.21% | 91.21% | 94.37% |
| 2 | Claude Opus 4.6 (Reasoning) | 89.78% | 89.78% | 95.02% |
| 3 | Claude Sonnet 4.6 (Reasoning) | 85.73% | 85.73% | 93.66% |
| 4 | Claude Opus 4.6 | 83.11% | 83.11% | 92.35% |
| 5 | Claude Sonnet 4.6 | 82.50% | 82.50% | 91.15% |
| 6 | Claude Sonnet 4 | 81.52% | 81.52% | 88.72% |
| 7 | Qwen 3.5 122B | 80.00% | 80.00% | 91.53% |
| 8 | Qwen 3.5 397B A17B | 79.39% | 79.39% | 91.73% |
| 9 | ByteDance Seed 1.6 | 77.71% | 77.71% | 90.70% |
| 10 | GPT-5 | 77.13% | 77.13% | 91.93% |
| 11 | Claude Sonnet 4.5 | 76.80% | 76.80% | 88.03% |
| 12 | GPT-5 Mini | 76.44% | 76.44% | 92.62% |
| 13 | Qwen 3.5 27B | 76.04% | 76.04% | 90.85% |
| 14 | Gemini 3 Flash (Preview, Reasoning) | 74.48% | 74.48% | 90.50% |
| 15 | GPT-4o, Aug. 6th (temp=0) | 74.19% | 74.19% | 82.45% |
| 16 | GPT-5.1 | 74.05% | 74.05% | 92.54% |
| 17 | Claude 3.7 Sonnet | 73.78% | 73.78% | 83.39% |
| 18 | GPT-4o, May 13th (temp=0) | 73.24% | 73.24% | 85.36% |
| 19 | o4 Mini High | 72.70% | 72.70% | 90.29% |
| 20 | Claude Opus 4.5 | 72.61% | 72.61% | 89.69% |
| 21 | MoonshotAI: Kimi K2.5 | 72.03% | 72.03% | 91.04% |
| 22 | Grok 4.1 Fast | 70.87% | 70.87% | 89.55% |
| 23 | Claude Opus 4 | 70.37% | 70.37% | 87.69% |
| 24 | Claude Haiku 4.5 | 70.35% | 70.35% | 85.14% |
| 25 | GPT-4o, May 13th (temp=1) | 69.88% | 69.88% | 83.80% |
| 26 | Claude 3.5 Sonnet | 69.67% | 69.67% | 84.24% |
| 27 | Z.AI GLM 4.7 | 69.16% | 69.16% | 88.69% |
| 28 | DeepSeek-V2 Chat | 68.78% | 68.78% | 84.83% |
| 29 | DeepSeek V3 (2025-03-24) | 67.94% | 67.94% | 81.99% |
| 30 | Grok 4 Fast | 67.91% | 67.91% | 86.15% |
| 31 | GPT-4o, Aug. 6th (temp=1) | 67.91% | 67.91% | 82.62% |
| 32 | Z.AI GLM 5 | 67.78% | 67.78% | 91.23% |
| 33 | Qwen 3.5 35B | 67.42% | 67.42% | 88.00% |
| 34 | Writer: Palmyra X5 | 67.19% | 67.19% | 79.57% |
| 35 | GPT-5.2 | 67.10% | 67.10% | 90.26% |
| 36 | Gemini 2.5 Flash Lite (Reasoning) | 66.81% | 66.81% | 85.75% |
| 37 | GPT-4.1 | 66.78% | 66.78% | 88.68% |
| 38 | DeepSeek V3 (2024-12-26) | 66.39% | 66.39% | 83.68% |
| 39 | DeepSeek V3.1 | 66.15% | 66.15% | 82.39% |
| 40 | Z.AI GLM 4.6 | 65.85% | 65.85% | 89.11% |
| 41 | Z.AI GLM 4.7 Flash | 65.63% | 65.63% | 84.82% |
| 42 | Gemini 3 Flash (Preview) | 65.14% | 65.14% | 85.35% |
| 43 | o4 Mini | 64.61% | 64.61% | 88.35% |
| 44 | Gemini 3 Pro (Preview) | 64.47% | 64.47% | 88.79% |
| 45 | Mistral Large 3 | 64.41% | 64.41% | 85.43% |
| 46 | Qwen 3.5 Plus (2026-02-15) | 64.21% | 64.21% | 85.96% |
| 47 | Claude 3.5 Haiku | 64.18% | 64.18% | 83.73% |
| 48 | Mistral Small 3.2 24B | 64.08% | 64.08% | 78.60% |
| 49 | Z.AI GLM 4.5 | 63.79% | 63.79% | 86.27% |
| 50 | Aion 2.0 | 63.77% | 63.77% | 89.21% |
| 51 | Llama 3.1 70B | 63.45% | 63.45% | 78.40% |
| 52 | Qwen 3.5 Flash | 63.19% | 63.19% | 86.38% |
| 53 | Grok 4 | 63.09% | 63.09% | 88.12% |
| 54 | Mistral Large 2 | 63.05% | 63.05% | 82.41% |
| 55 | Minimax M2.5 | 62.69% | 62.69% | 88.71% |
| 56 | Gemma 3 12B | 61.05% | 61.05% | 78.41% |
| 57 | Gemini 2.5 Pro | 60.89% | 60.89% | 88.53% |
| 58 | Gemini 2.5 Flash (Reasoning) | 59.97% | 59.97% | 86.51% |
| 59 | Gemini 2.5 Flash Lite | 59.96% | 59.96% | 81.08% |
| 60 | Hermes 3 405B | 59.17% | 59.17% | 82.86% |
| 61 | GPT-4o Mini (temp=0) | 58.84% | 58.84% | 78.29% |
| 62 | Cohere Command R+ (Aug. 2024) | 58.70% | 58.70% | 69.03% |
| 63 | GPT-4.1 Mini | 58.59% | 58.59% | 83.20% |
| 64 | GPT-5 Nano | 57.57% | 57.57% | 82.60% |
| 65 | Gemini 2.5 Flash | 57.47% | 57.47% | 80.60% |
| 66 | GPT-4o Mini (temp=1) | 56.50% | 56.50% | 79.08% |
| 67 | DeepSeek V3.2 | 53.75% | 53.75% | 82.25% |
| 68 | Hermes 3 70B | 53.00% | 53.00% | 72.57% |
| 69 | Claude 3 Haiku | 51.15% | 51.15% | 71.19% |
| 70 | Ministral 3 14B | 50.83% | 50.83% | 72.54% |
| 71 | Llama 3.1 Nemotron 70B | 50.62% | 50.62% | 74.70% |
| 72 | Mistral Large | 49.87% | 49.87% | 80.15% |
| 73 | Mistral Medium 3.1 | 48.60% | 48.60% | 77.83% |
| 74 | Mistral Small Creative | 48.15% | 48.15% | 73.27% |
| 75 | Gemma 3 27B | 47.98% | 47.98% | 77.85% |
| 76 | ByteDance Seed 1.6 Flash | 47.15% | 47.15% | 73.27% |
| 77 | Stealth: Aurora Alpha | 44.19% | 44.19% | 83.79% |
| 78 | Rocinante 12B | 41.51% | 41.51% | 54.55% |
| 79 | GPT-4.1 Nano | 40.88% | 40.88% | 71.94% |
| 80 | Arcee AI: Trinity Large (Preview) | 38.52% | 38.52% | 73.33% |
| 81 | Mistral NeMo | 34.11% | 34.11% | 65.04% |
| 82 | Llama 3.1 8B | 34.03% | 34.03% | 63.37% |
| 83 | Qwen 2.5 72B | 31.55% | 31.55% | 75.46% |
| 84 | Ministral 3 8B | 31.34% | 31.34% | 71.76% |
| 85 | WizardLM 2 8x22B | 28.27% | 28.27% | 71.07% |
| 86 | Gemma 3 4B | 26.37% | 26.37% | 68.57% |
| 87 | Ministral 3B | 24.45% | 24.45% | 61.29% |
| 88 | LFM2 24B | 24.12% | 24.12% | 58.77% |
| 89 | Arcee AI: Trinity Mini | 23.57% | 23.57% | 70.90% |
| 90 | Ministral 3 3B | 15.87% | 15.87% | 67.22% |
| 91 | Ministral 8B | 15.27% | 15.27% | 64.87% |