| Rank | Model | Score | Cost | Duration | Pass Rate |
|---|---|---|---|---|---|
| 1 | Mistral Small Creative | 99.8% | $0.0007 | 9.1s | 97% |
| 2 | Ministral 3 14B | 99.8% | $0.0007 | 11.7s | 97% |
| 3 | Mistral Small 4 | 99.8% | $0.0014 | 18.2s | 98% |
| 4 | Writer: Palmyra X5 | 100.0% | $0.011 | 22.0s | 99% |
| 5 | Mistral Medium 3.1 | 99.6% | $0.0048 | 36.5s | 97% |
| 6 | Mistral Large 3 | 98.8% | $0.0033 | 30.3s | 92% |
| 7 | Mistral Small 4 (Reasoning) | 98.9% | $0.0022 | 30.2s | 91% |
| 8 | Qwen3 235B A22B Instruct 2507 | 99.8% | $0.0011 | 59.2s | 96% |
| 9 | o4 Mini | 99.1% | $0.015 | 25.7s | 91% |
| 10 | GPT-5.4 Nano (Reasoning, Low) | 97.5% | $0.0055 | 20.6s | 88% |
| 11 | GPT-5.4 Mini | 97.8% | $0.015 | 16.8s | 88% |
| 12 | Mistral Large 2 | 98.7% | $0.013 | 29.4s | 90% |
| 13 | GPT-5.4 Mini (Reasoning, Low) | 98.3% | $0.015 | 16.8s | 87% |
| 14 | Mistral Large | 98.6% | $0.014 | 30.9s | 90% |
| 15 | DeepSeek V3 (2025-03-24) | 98.2% | $0.0014 | 39.4s | 86% |
| 16 | Ministral 3 8B | 97.2% | $0.0008 | 19.6s | 81% |
| 17 | GPT-5.4 Nano (Reasoning) | 97.1% | $0.0061 | 24.5s | 85% |
| 18 | GPT-5.4 Nano | 97.3% | $0.0057 | 26.3s | 84% |
| 19 | Qwen 3.5 9B | 99.5% | $0.0011 | 1.4m | 95% |
| 20 | ByteDance Seed 1.6 Flash | 97.1% | $0.0013 | 27.3s | 81% |
| 21 | o4 Mini High | 99.2% | $0.025 | 47.2s | 94% |
| 22 | GPT-4o Mini (temp=1) | 95.9% | $0.0012 | 34.8s | 82% |
| 23 | Stealth: Hunter Alpha | 97.4% | $0.0000 | 55.0s | 85% |
| 24 | Grok 4.20 (Beta) | 96.0% | $0.018 | 15.8s | 83% |
| 25 | GPT-4.1 | 98.2% | $0.018 | 44.7s | 88% |
| 26 | Ministral 8B | 96.1% | $0.0004 | 10.4s | 69% |
| 27 | GPT-4.1 Mini | 95.5% | $0.0027 | 19.0s | 73% |
| 28 | GPT-5.4 Mini (Reasoning) | 96.9% | $0.022 | 28.1s | 83% |
| 29 | Z.AI GLM 5 Turbo | 95.1% | $0.0081 | 33.2s | 80% |
| 30 | Qwen 3 32B | 96.5% | $0.0015 | 54.6s | 79% |
| 31 | GPT-5 Mini | 95.6% | $0.010 | 57.4s | 84% |
| 32 | Grok 4 Fast | 94.9% | $0.0017 | 24.1s | 68% |
| 33 | Ministral 3B | 93.1% | $0.0001 | 8.1s | 64% |
| 34 | Z.AI GLM 4.7 Flash | 96.1% | $0.0017 | 1.2m | 80% |
| 35 | Qwen 3.5 Flash | 95.0% | $0.0025 | 47.5s | 73% |
| 36 | Z.AI GLM 4.7 | 97.0% | $0.010 | 1.4m | 85% |
| 37 | Grok 4.1 Fast | 96.6% | $0.0018 | 37.8s | 67% |
| 38 | Claude Sonnet 4.5 | 96.4% | $0.035 | 38.1s | 83% |
| 39 | Qwen 3.5 122B | 96.3% | $0.025 | 1.1m | 85% |
| 40 | Claude Sonnet 4.6 | 96.6% | $0.031 | 39.3s | 79% |
| 41 | GPT-5.4 | 99.8% | $0.049 | 1.4m | 97% |
| 42 | Grok 4.20 (Beta, Reasoning) | 94.4% | $0.039 | 34.0s | 82% |
| 43 | LFM2 24B | 92.2% | $0.0002 | 28.4s | 65% |
| 44 | GPT-5.4 (Reasoning, Low) | 99.7% | $0.055 | 1.4m | 97% |
| 45 | Gemma 3 27B | 93.1% | $0.0006 | 52.6s | 70% |
| 46 | Ministral 3 3B | 92.0% | $0.0005 | 11.1s | 59% |
| 47 | GPT-4o, Aug. 6th (temp=1) | 92.3% | $0.018 | 24.4s | 70% |
| 48 | Aion 2.0 | 93.8% | $0.0064 | 1.3m | 77% |
| 49 | Stealth: Healer Alpha | 90.7% | $0.0000 | 23.7s | 63% |
| 50 | GPT-4o Mini (temp=0) | 89.2% | $0.0012 | 34.8s | 65% |
| 51 | GPT-4.1 Nano | 89.6% | $0.0007 | 13.3s | 57% |
| 52 | GPT-5.1 | 99.7% | $0.054 | 1.8m | 96% |
| 53 | Gemini 3 Pro (Preview) | 97.1% | $0.055 | 54.4s | 84% |
| 54 | GPT-5.2 | 98.8% | $0.056 | 1.5m | 93% |
| 55 | DeepSeek-V2 Chat | 92.2% | $0.0021 | 53.3s | 63% |
| 56 | Gemini 2.5 Pro | 93.3% | $0.036 | 36.2s | 72% |
| 57 | Qwen 3.5 35B | 93.9% | $0.018 | 1.0m | 70% |
| 58 | DeepSeek V3 (2024-12-26) | 91.6% | $0.0021 | 54.6s | 63% |
| 59 | Gemini 2.5 Flash (Reasoning) | 87.6% | $0.011 | 21.5s | 60% |
| 60 | Z.AI GLM 5 | 90.7% | $0.0084 | 1.2m | 66% |
| 61 | Claude Opus 4.6 | 98.2% | $0.078 | 1.2m | 88% |
| 62 | Mistral NeMo | 86.4% | $0.0005 | 10.1s | 48% |
| 63 | Claude Sonnet 4.6 (Reasoning) | 97.5% | $0.060 | 1.2m | 79% |
| 64 | Qwen 3.5 Plus (2026-02-15) | 85.2% | $0.0060 | 31.5s | 58% |
| 65 | Rocinante 12B | 88.9% | $0.0014 | 38.4s | 52% |
| 66 | DeepSeek V3.2 | 91.9% | $0.0014 | 1.9m | 69% |
| 67 | Qwen 3.5 27B | 94.3% | $0.020 | 1.6m | 70% |
| 68 | Claude Haiku 4.5 | 86.4% | $0.011 | 21.6s | 53% |
| 69 | Gemma 3 12B | 85.3% | $0.0004 | 41.3s | 55% |
| 70 | Hermes 3 405B | 90.1% | $0.0032 | 53.2s | 52% |
| 71 | Gemini 2.5 Flash | 83.7% | $0.0052 | 10.6s | 46% |
| 72 | Claude Opus 4.6 (Reasoning) | 97.7% | $0.088 | 1.4m | 85% |
| 73 | Gemini 3 Flash (Preview) | 81.3% | $0.0078 | 19.6s | 48% |
| 74 | Arcee AI: Trinity Mini | 80.8% | $0.0003 | 9.2s | 42% |
| 75 | Gemma 3 4B | 79.9% | $0.0002 | 20.0s | 46% |
| 76 | MiniMax M2.5 | 86.4% | $0.0034 | 1.3m | 54% |
| 77 | Z.AI GLM 4.6 | 84.1% | $0.0065 | 51.5s | 51% |
| 78 | Claude 3 Haiku | 81.4% | $0.0025 | 14.9s | 41% |
| 79 | DeepSeek V3.1 | 87.8% | $0.0020 | 1.8m | 60% |
| 80 | MiniMax M2.7 | 84.4% | $0.0040 | 1.1m | 52% |
| 81 | GPT-5.4 (Reasoning) | 99.8% | $0.089 | 2.6m | 98% |
| 82 | Grok 4 | 95.0% | $0.048 | 1.7m | 68% |
| 83 | GPT-4o, May 13th (temp=1) | 84.3% | $0.033 | 14.4s | 47% |
| 84 | GPT-5 | 98.3% | $0.065 | 2.8m | 90% |
| 85 | Gemini 2.5 Flash Lite | 75.4% | $0.0009 | 9.5s | 39% |
| 86 | Qwen 3.5 397B A17B | 92.6% | $0.014 | 3.0m | 70% |
| 87 | Gemini 3.1 Pro (Preview) | 97.7% | $0.107 | 1.8m | 83% |
| 88 | Cohere Command R+ (Aug. 2024) | 81.8% | $0.020 | 52.5s | 38% |
| 89 | Gemini 3.1 Flash Lite (Preview) | 70.4% | $0.0030 | 8.4s | 32% |
| 90 | GPT-4o, May 13th (temp=0) | 79.2% | $0.035 | 14.1s | 37% |
| 91 | Gemini 3 Flash (Preview, Reasoning) | 74.2% | $0.012 | 30.1s | 37% |
| 92 | GPT-4o, Aug. 6th (temp=0) | 77.8% | $0.023 | 22.7s | 34% |
| 93 | Claude Opus 4.5 | 85.4% | $0.070 | 53.4s | 55% |
| 94 | MoonshotAI: Kimi K2.5 | 91.0% | $0.019 | 3.2m | 63% |
| 95 | ByteDance Seed 2.0 Lite | 83.2% | $0.012 | 2.2m | 46% |
| 96 | Arcee AI: Trinity Large (Preview) | 73.3% | $0.0000 | 43.6s | 25% |
| 97 | Claude 3.5 Haiku | 67.2% | $0.0035 | 10.8s | 23% |
| 98 | Hermes 3 70B | 73.8% | $0.0010 | 1.2m | 31% |
| 99 | Z.AI GLM 4.5 | 70.5% | $0.0051 | 42.1s | 26% |
| 100 | WizardLM-2 8x22B | 79.3% | $0.0026 | 1.8m | 31% |
| 101 | Qwen 2.5 72B | 66.2% | $0.0010 | 36.7s | 24% |
| 102 | Claude 3.7 Sonnet | 73.0% | $0.042 | 46.7s | 34% |
| 103 | Gemini 2.5 Flash Lite (Reasoning) | 62.8% | $0.0028 | 30.8s | 21% |
| 104 | Claude 3.5 Sonnet | 71.2% | $0.048 | 35.5s | 27% |
| 105 | ByteDance Seed 1.6 | 74.2% | $0.013 | 2.5m | 33% |
| 106 | GPT-5 Nano | 61.0% | $0.0042 | 1.4m | 25% |
| 107 | Claude Sonnet 4 | 64.0% | $0.032 | 43.7s | 18% |
| 108 | Claude Opus 4 | 93.7% | $0.209 | 1.4m | 72% |
| 109 | Inception Mercury 2 | 40.8% | $0.0032 | 7.0s | 10% |
| 110 | Llama 3.1 70B | 44.9% | $0.0015 | 29.4s | 10% |
| 111 | Nemotron 3 Super | 51.5% | $0.0000 | 1.4m | 15% |
| 112 | Inception Mercury | 47.2% | $0.011 | 17.6s | 2% |
| 113 | Stealth: Aurora Alpha | 33.9% | $0.0000 | 9.8s | 4% |
| 114 | ByteDance Seed 2.0 Mini | 75.4% | $0.0045 | 4.9m | 36% |
| 115 | Llama 3.1 Nemotron 70B | 37.5% | $0.0038 | 31.7s | 6% |
| 116 | Llama 3.1 8B | 40.0% | $0.0003 | 1.3m | 9% |
| 117 | Mistral Small 3.2 24B | 77.6% | $0.0068 | 5.6m | 26% |
| 118 | Nemotron 3 Nano | 25.6% | $0.0010 | 1.1m | 2% |
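Because the table mixes seconds and minutes in the Duration column and ranks models on more than one axis, it can be useful to re-rank it programmatically. Below is a minimal sketch that parses rows in the format above and sorts by pass rate per dollar; the three sample rows are copied from the table, but the interpretation of the columns (cost per run, final column as pass rate) is an assumption for illustration, not something the table itself specifies.

```python
import re

# Sample rows copied verbatim from the leaderboard above.
# Column semantics (score, cost per run, duration, pass rate)
# are assumed for this sketch. Note: rows with a $0.0000 cost
# (e.g. stealth models) would need special handling here.
ROWS = """\
| 1 | Mistral Small Creative | 99.8% | $0.0007 | 9.1s | 97% |
| 4 | Writer: Palmyra X5 | 100.0% | $0.011 | 22.0s | 99% |
| 81 | GPT-5.4 (Reasoning) | 99.8% | $0.089 | 2.6m | 98% |
"""

def to_seconds(duration: str) -> float:
    """Convert a duration like '9.1s' or '2.6m' to seconds."""
    value, unit = float(duration[:-1]), duration[-1]
    return value * 60.0 if unit == "m" else value

def parse(line: str) -> dict:
    """Split one markdown table row into typed fields."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    rank, model, score, cost, duration, pass_rate = cells
    return {
        "model": model,
        "score": float(score.rstrip("%")),
        "cost": float(cost.lstrip("$")),
        "seconds": to_seconds(duration),
        "pass_rate": float(pass_rate.rstrip("%")),
    }

rows = [parse(line) for line in ROWS.splitlines()]

# Re-rank by pass-rate points per dollar (higher is better).
for r in sorted(rows, key=lambda r: r["pass_rate"] / r["cost"], reverse=True):
    print(f'{r["model"]:30s} {r["pass_rate"] / r["cost"]:>10.0f} pass-%/$')
```

On these three sample rows, the cheap Mistral Small Creative comes out far ahead on pass rate per dollar despite Palmyra X5 having the higher raw pass rate, which is the kind of trade-off the raw ranking above does not surface directly.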