| 1 | Inception Mercury 2 | 95.0% | $0.0030 | 4.6s | 87% |
| 2 | Stealth: Healer Alpha | 97.5% | $0.0000 | 21.8s | 90% |
| 3 | Grok 4.1 Fast | 98.0% | $0.0021 | 21.1s | 90% |
| 4 | Gemini 2.5 Flash Lite (Reasoning) | 96.8% | $0.0020 | 17.6s | 90% |
| 5 | Z.AI GLM 5 Turbo | 97.4% | $0.0072 | 17.1s | 91% |
| 6 | Gemini 2.5 Flash (Reasoning) | 96.9% | $0.0079 | 12.5s | 89% |
| 7 | ByteDance Seed 1.6 Flash | 94.1% | $0.0009 | 12.0s | 85% |
| 8 | GPT-5.4 Mini (Reasoning, Low) | 95.2% | $0.0055 | 6.7s | 85% |
| 9 | Gemini 3 Flash (Preview) | 93.9% | $0.0031 | 4.5s | 83% |
| 10 | Gemini 3 Flash (Preview, Reasoning) | 96.7% | $0.011 | 18.0s | 91% |
| 11 | GPT-5.4 | 95.7% | $0.013 | 8.8s | 88% |
| 12 | MiniMax M2.7 | 95.5% | $0.0022 | 29.4s | 88% |
| 13 | Grok 4 Fast | 96.8% | $0.0018 | 12.8s | 76% |
| 14 | Gemini 2.5 Flash | 91.4% | $0.0025 | 2.8s | 77% |
| 15 | MiniMax M2.5 | 93.5% | $0.0020 | 25.5s | 84% |
| 16 | Grok 4.20 (Beta, Reasoning) | 99.2% | $0.026 | 16.8s | 95% |
| 17 | Stealth: Hunter Alpha | 95.4% | $0.0000 | 44.0s | 88% |
| 18 | Mistral Small 4 (Reasoning) | 92.2% | $0.0020 | 16.3s | 78% |
| 19 | Claude Sonnet 4.5 | 96.3% | $0.024 | 8.9s | 88% |
| 20 | Claude Sonnet 4 | 95.6% | $0.023 | 9.0s | 88% |
| 21 | Gemini 3.1 Flash Lite (Preview) | 89.2% | $0.0019 | 2.2s | 73% |
| 22 | Claude Opus 4.5 | 99.8% | $0.041 | 9.7s | 98% |
| 23 | o4 Mini | 97.4% | $0.019 | 28.1s | 90% |
| 24 | GPT-5.4 (Reasoning, Low) | 94.9% | $0.020 | 13.5s | 84% |
| 25 | ByteDance Seed 2.0 Lite | 98.1% | $0.0067 | 1.1m | 93% |
| 26 | Mistral Large 3 | 87.5% | $0.0030 | 10.2s | 74% |
| 27 | Stealth: Aurora Alpha | 92.9% | — | 6.2s | 74% |
| 28 | Claude Opus 4.6 | 98.5% | $0.040 | 10.2s | 94% |
| 29 | GPT-5.4 Mini (Reasoning) | 96.0% | $0.020 | 28.6s | 87% |
| 30 | GPT-5.2 | 95.7% | $0.024 | 24.2s | 89% |
| 31 | Qwen 3.5 Flash | 97.1% | $0.0038 | 1.0m | 88% |
| 32 | ByteDance Seed 1.6 | 96.8% | $0.0067 | 1.0m | 91% |
| 33 | Gemini 2.5 Pro | 98.4% | $0.035 | 23.2s | 93% |
| 34 | DeepSeek V3 (2024-12-26) | 89.6% | $0.0019 | 15.6s | 70% |
| 35 | Z.AI GLM 5 | 97.9% | $0.013 | 56.4s | 90% |
| 36 | Claude 3.5 Haiku | 89.8% | $0.0049 | 6.2s | 68% |
| 37 | Mistral Large | 89.9% | $0.012 | 9.1s | 75% |
| 38 | GPT-5 Mini | 94.8% | $0.0092 | 51.7s | 86% |
| 39 | Z.AI GLM 4.7 | 96.2% | $0.0091 | 1.0m | 89% |
| 40 | Qwen 3.5 35B | 97.5% | $0.017 | 54.8s | 91% |
| 41 | DeepSeek-V2 Chat | 89.5% | $0.0020 | 13.1s | 66% |
| 42 | Z.AI GLM 4.7 Flash | 94.0% | $0.0018 | 1.1m | 86% |
| 43 | GPT-4.1 Mini | 84.4% | $0.0015 | 5.8s | 66% |
| 44 | Z.AI GLM 4.5 | 88.7% | $0.0026 | 18.5s | 68% |
| 45 | Mistral Large 2 | 87.9% | $0.012 | 8.8s | 72% |
| 46 | Claude Sonnet 4.6 | 91.5% | $0.024 | 9.9s | 77% |
| 47 | Mistral Medium 3.1 | 85.4% | $0.0029 | 7.7s | 64% |
| 48 | DeepSeek V3.2 | 86.6% | $0.0013 | 17.2s | 66% |
| 49 | Qwen 3 32B | 89.4% | $0.0010 | 27.8s | 67% |
| 50 | Claude 3.5 Sonnet | 94.4% | $0.042 | 10.5s | 85% |
| 51 | GPT-4o, Aug. 6th (temp=1) | 87.6% | $0.011 | 3.6s | 62% |
| 52 | Writer: Palmyra X5 | 84.3% | $0.0062 | 12.1s | 66% |
| 53 | GPT-4.1 | 89.1% | $0.0081 | 10.0s | 61% |
| 54 | Qwen3 235B A22B Instruct 2507 | 83.6% | $0.0007 | 21.1s | 65% |
| 55 | Inception Mercury | 83.3% | $0.0005 | 9.5s | 58% |
| 56 | Qwen 3.5 Plus (2026-02-15) | 89.3% | $0.0041 | 34.2s | 66% |
| 57 | Grok 4.20 (Beta) | 81.4% | $0.0059 | 2.5s | 60% |
| 58 | o4 Mini High | 97.1% | $0.033 | 51.3s | 90% |
| 59 | Gemma 3 27B | 78.6% | $0.0005 | 13.3s | 60% |
| 60 | GPT-4o, Aug. 6th (temp=0) | 87.3% | $0.015 | 5.0s | 60% |
| 61 | DeepSeek V3 (2025-03-24) | 86.8% | $0.0015 | 22.9s | 56% |
| 62 | GPT-5.1 | 97.3% | $0.037 | 52.8s | 90% |
| 63 | MoonshotAI: Kimi K2.5 | 95.9% | $0.015 | 1.5m | 88% |
| 64 | Gemini 3 Pro (Preview) | 97.1% | $0.050 | 34.0s | 91% |
| 65 | Qwen 3.5 122B | 96.2% | $0.026 | 1.2m | 88% |
| 66 | Mistral Small Creative | 74.7% | $0.0006 | 4.3s | 56% |
| 67 | Z.AI GLM 4.6 | 91.9% | $0.0049 | 1.3m | 76% |
| 68 | GPT-4o, May 13th (temp=1) | 85.5% | $0.026 | 3.7s | 64% |
| 69 | Hermes 3 405B | 82.2% | $0.0044 | 17.2s | 53% |
| 70 | Arcee AI: Trinity Mini | 80.9% | $0.0004 | 10.3s | 47% |
| 71 | GPT-5.4 (Reasoning) | 94.5% | $0.042 | 43.1s | 84% |
| 72 | Qwen 3.5 27B | 97.2% | $0.021 | 1.7m | 91% |
| 73 | Aion 2.0 | 95.9% | $0.0084 | 1.3m | 68% |
| 74 | Claude Haiku 4.5 | 81.8% | $0.0078 | 5.8s | 49% |
| 75 | Claude 3.7 Sonnet | 88.5% | $0.023 | 10.2s | 56% |
| 76 | GPT-5.4 Mini | 75.5% | $0.0033 | 3.1s | 47% |
| 77 | Mistral Small 3.2 24B | 72.3% | $0.0006 | 10.4s | 49% |
| 78 | Claude Opus 4.6 (Reasoning) | 99.2% | $0.076 | 32.0s | 96% |
| 79 | GPT-4o, May 13th (temp=0) | 88.6% | $0.030 | 5.4s | 54% |
| 80 | Gemini 3.1 Pro (Preview) | 99.3% | $0.068 | 52.3s | 96% |
| 81 | Qwen 2.5 72B | 78.0% | $0.0008 | 13.9s | 41% |
| 82 | ByteDance Seed 2.0 Mini | 98.0% | $0.0029 | 2.7m | 92% |
| 83 | Llama 3.1 Nemotron 70B | 78.2% | $0.0055 | 18.5s | 45% |
| 84 | Grok 4 | 97.4% | $0.051 | 1.2m | 87% |
| 85 | DeepSeek V3.1 | 79.4% | $0.0014 | 31.1s | 44% |
| 86 | Ministral 3 14B | 67.1% | $0.0010 | 6.3s | 41% |
| 87 | Qwen 3.5 9B | 95.5% | $0.0020 | 2.4m | 76% |
| 88 | GPT-4o Mini (temp=1) | 67.1% | $0.0006 | 7.4s | 34% |
| 89 | Claude Sonnet 4.6 (Reasoning) | 95.9% | $0.076 | 51.3s | 87% |
| 90 | Gemini 2.5 Flash Lite | 64.3% | $0.0005 | 2.3s | 31% |
| 91 | Ministral 3 8B | 58.4% | $0.0007 | 4.8s | 36% |
| 92 | Nemotron 3 Super | 90.9% | $0.0000 | 2.3m | 65% |
| 93 | Llama 3.1 70B | 72.2% | $0.0021 | 24.3s | 32% |
| 94 | Gemma 3 12B | 62.3% | $0.0003 | 12.0s | 34% |
| 95 | GPT-5.4 Nano (Reasoning) | 77.0% | $0.0035 | 16.6s | 23% |
| 96 | GPT-4o Mini (temp=0) | 69.1% | $0.0006 | 25.0s | 32% |
| 97 | Mistral Small 4 | 56.5% | $0.0008 | 4.3s | 35% |
| 98 | GPT-5 | 96.2% | $0.061 | 1.6m | 87% |
| 99 | GPT-5 Nano | 88.5% | $0.0049 | 1.9m | 57% |
| 100 | Ministral 3 3B | 56.1% | $0.0005 | 3.3s | 29% |
| 101 | Ministral 8B | 52.4% | $0.0005 | 5.6s | 31% |
| 102 | Ministral 3B | 53.5% | $0.0002 | 2.9s | 28% |
| 103 | Claude 3 Haiku | 55.9% | $0.0015 | 3.6s | 26% |
| 104 | Hermes 3 70B | 60.7% | $0.0013 | 29.2s | 31% |
| 105 | Nemotron 3 Nano | 93.5% | $0.0031 | 3.5m | 81% |
| 106 | Qwen 3.5 397B A17B | 95.5% | $0.026 | 2.9m | 76% |
| 107 | Cohere Command R+ (Aug. 2024) | 60.3% | $0.014 | 10.0s | 24% |
| 108 | Claude Opus 4 | 90.4% | $0.116 | 15.8s | 77% |
| 109 | Mistral NeMo | 44.0% | $0.0007 | 13.3s | 21% |
| 110 | GPT-5.4 Nano | 35.5% | $0.0008 | 2.9s | 13% |
| 111 | GPT-5.4 Nano (Reasoning, Low) | 43.0% | $0.0013 | 6.1s | 4% |
| 112 | Llama 3.1 8B | 34.1% | $0.0002 | 16.1s | 13% |
| 113 | Gemma 3 4B | 20.8% | $0.0002 | 12.5s | 14% |
| 114 | WizardLM-2 8x22B | 32.2% | $0.0036 | 15.7s | 6% |
| 115 | Rocinante 12B | 21.8% | $0.0009 | 6.0s | 3% |
| 116 | GPT-4.1 Nano | 20.4% | $0.0004 | 3.9s | 0% |
| 117 | Arcee AI: Trinity Large (Preview) | 60.2% | $0.0000 | 3.1m | 29% |
| 118 | LFM2 24B | 1.7% | $0.0008 | 1.9m | 0% |