Compare Claude, GPT, Gemini, DeepSeek, Groq, and Together AI pricing. Prices verified against official docs, April 2026.
Cost Calculator
| Model | Provider | Input / 1M tokens | Output / 1M tokens | Context | Cost / request |
|---|---|---|---|---|---|
| Llama 3.1 8B (Groq) | Groq | $0.05 | $0.08 | 128K | $0.0001 (cheapest) |
| Mistral Small 3 (Together) | Together AI | $0.10 | $0.30 | 128K | $0.0003 |
| Llama 4 Scout (Groq) | Groq | $0.11 | $0.34 | 128K | $0.0003 |
| DeepSeek V3.2 | DeepSeek | $0.28 | $0.42 | 128K | $0.0005 |
| DeepSeek R1 | DeepSeek | $0.28 | $0.42 | 128K | $0.0005 |
| GPT-5.4 nano | OpenAI | $0.20 | $1.25 | 400K | $0.0008 |
| Llama 3.3 70B (Groq) | Groq | $0.59 | $0.79 | 128K | $0.0010 |
| Gemini 3.1 Flash-Lite | Google | $0.25 | $1.50 | 1M | $0.0010 |
| Llama 4 Maverick (Together) | Together AI | $0.88 | $0.88 | 1M | $0.0013 |
| Gemini 2.5 Flash | Google | $0.30 | $2.50 | 1M | $0.0015 |
| GPT-5.4 mini | OpenAI | $0.75 | $4.50 | 400K | $0.0030 |
| Claude Haiku 4.5 | Anthropic | $1.00 | $5.00 | 200K | $0.0035 |
| Gemini 3.1 Pro | Google | $2.00 | $8.00 | 1M | $0.0060 |
| Gemini 2.5 Pro | Google | $1.25 | $10.00 | 1M | $0.0063 |
| DeepSeek R1 (Together) | Together AI | $3.00 | $7.00 | 128K | $0.0065 |
| GPT-5.4 | OpenAI | $2.50 | $15.00 | 1M | $0.010 |
| Claude Sonnet 4.6 | Anthropic | $3.00 | $15.00 | 1M | $0.011 |
| Claude Opus 4.6 | Anthropic | $5.00 | $25.00 | 1M | $0.018 |
* DeepSeek R1: Thinking mode
* Llama 3.3 70B (Groq): Ultra-fast inference
* Gemini 3.1 Flash-Lite: Preview
* Gemini 3.1 Pro: Preview
* Gemini 2.5 Pro: $2.50/$15.00 for prompts over 200K tokens
Prices from official provider docs. Last verified April 2026. Cached input pricing not shown.
Pro tip
Using AI APIs for code review? Git AutoReview runs Claude, Gemini, and GPT in parallel with BYOK — you control costs, we handle the orchestration.
AI providers charge based on the number of tokens processed. A token is roughly 4 characters or 0.75 words in English. Input tokens (your prompt) are cheaper than output tokens (the model's response) because generation requires more compute. Most providers list prices per 1 million tokens.
For a typical code review, you send 2,000-5,000 input tokens (the PR diff plus system prompt) and receive 500-1,500 output tokens (review comments). At Claude Sonnet 4.6 rates, that is about $0.01-$0.04 per review. Running 100 reviews per day costs $1-$4 — far less than the developer time saved.
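The arithmetic above is simple enough to sketch directly. The rates here are the Claude Sonnet 4.6 prices from the table ($3.00 input / $15.00 output per 1M tokens); the token counts are the typical-review figures from the paragraph above.

```python
def review_cost(input_tokens: int, output_tokens: int,
                input_per_1m: float, output_per_1m: float) -> float:
    """Dollar cost of one request at per-1M-token rates."""
    return (input_tokens * input_per_1m + output_tokens * output_per_1m) / 1_000_000

# A typical PR review at Claude Sonnet 4.6 rates: 5,000 in, 1,500 out.
cost = review_cost(5_000, 1_500, 3.00, 15.00)
print(f"${cost:.4f} per review")                      # $0.0375
print(f"${cost * 100 * 30:.2f} for 100 reviews/day over 30 days")
```

Swap in any row from the table to compare providers at your own token volumes.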
For code review specifically, Claude Opus 4.6 catches the deepest architectural issues but costs more. Gemini 2.5 Pro offers strong value with its 1M token context at $1.25/1M input, though the newer Gemini 3.1 Pro brings better performance at $2.00/1M. GPT-5.4 nano at $0.20/$1.25 is the budget option for high-volume reviews.
Git AutoReview lets you run multiple models in parallel and compare findings. This catches more issues than any single model because different models have different strengths — Claude excels at logic bugs, GPT at security patterns, Gemini at understanding large codebases.
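The fan-out pattern behind parallel multi-model review is straightforward. This is a minimal sketch, not Git AutoReview's actual implementation: `review_with_model` is a hypothetical placeholder for a real provider SDK call, and the merge step is a naive exact-match dedup.

```python
from concurrent.futures import ThreadPoolExecutor

def review_with_model(model: str, diff: str) -> list[str]:
    # Placeholder: in practice this would call the provider's API
    # (Anthropic, OpenAI, Google) and return a list of findings.
    return [f"{model}: reviewed {len(diff)} chars"]

def parallel_review(diff: str, models: list[str]) -> list[str]:
    # Fan out one request per model concurrently.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        results = pool.map(lambda m: review_with_model(m, diff), models)
    # Merge, dropping findings reported verbatim by more than one model.
    merged: list[str] = []
    for findings in results:
        for finding in findings:
            if finding not in merged:
                merged.append(finding)
    return merged

print(parallel_review("diff --git a/x b/x", ["claude", "gpt", "gemini"]))
```

Threads are fine here because the work is I/O-bound API calls; a production version would merge semantically similar findings, not just exact duplicates.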
Context window determines how much code the model can see at once. For reviewing a small PR (under 500 lines), any model works. For large PRs or when you need the model to understand surrounding code, the 1M token context of Claude, GPT-5.4, and Gemini 2.5/3.1 is essential.
DeepSeek's 128K context is enough for most individual file reviews but may truncate large monorepo diffs. Groq's speed makes it ideal for quick checks where latency matters more than context size.
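A rough pre-flight check using the ~4-characters-per-token heuristic from above can tell you whether a diff will fit before you pick a model. The 2,000-token output reserve is an illustrative assumption, not a provider requirement.

```python
def estimate_tokens(text: str) -> int:
    # Heuristic: roughly 4 characters per token in English.
    return len(text) // 4

def fits_context(text: str, context_window: int,
                 reserve_for_output: int = 2_000) -> bool:
    # Leave headroom for the model's response tokens.
    return estimate_tokens(text) + reserve_for_output <= context_window

diff = "x" * 600_000                  # a ~600 KB monorepo diff ≈ 150K tokens
print(fits_context(diff, 128_000))    # False: exceeds a 128K window
print(fits_context(diff, 1_000_000))  # True: fits a 1M-token window
```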
If your system prompt stays the same across requests (which it does for code review), cached input pricing saves 50-90%. Anthropic caches prompt prefixes at $0.30/1M (vs $3.00 standard) for Sonnet 4.6. DeepSeek caches at $0.028/1M. This makes repeated reviews significantly cheaper.
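To see what caching is worth per request, compare input costs with and without a cached prefix, using the Sonnet 4.6 rates quoted above ($3.00/1M standard, $0.30/1M cached). The 3,000-token system prompt and 2,000-token diff are illustrative assumptions.

```python
def input_cost(cached_tokens: int, fresh_tokens: int,
               standard_rate: float = 3.00, cached_rate: float = 0.30) -> float:
    """Input cost in dollars when part of the prompt is served from cache."""
    return (cached_tokens * cached_rate + fresh_tokens * standard_rate) / 1_000_000

# A 3,000-token cached system prompt plus a 2,000-token fresh diff:
with_cache = input_cost(3_000, 2_000)   # $0.0069
no_cache = input_cost(0, 5_000)         # $0.0150
print(f"${with_cache:.4f} vs ${no_cache:.4f} input cost per review")
```

With these assumptions the cache cuts input cost by about half; the larger the shared prefix relative to the fresh diff, the closer you get to the 90% ceiling.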
All prices on this page are sourced from official provider pricing pages. We verify against platform.claude.com, ai.google.dev, api-docs.deepseek.com, groq.com, and together.ai directly. Prices are updated monthly. Last verified: April 2026.
Most AI providers charge per token — a token is roughly 4 characters or 0.75 words in English. Pricing is split between input tokens (your prompt) and output tokens (the model's response). Prices are listed per 1 million tokens.
Llama 3.1 8B on Groq is the cheapest overall at $0.05/$0.08 per 1M tokens; among frontier-lab models, DeepSeek V3.2 leads at $0.28/$0.42. For better quality, Gemini 2.5 Flash at $0.30/$2.50 offers strong performance at low cost. Git AutoReview supports BYOK so you can use any provider with your own API keys.
Input tokens are the text you send to the model (your prompt, context, instructions). Output tokens are the text the model generates in response. Output tokens are typically 3-8x more expensive than input tokens because generation requires more compute.
A typical PR review uses 2,000-5,000 input tokens (diff + context) and 500-1,500 output tokens (comments). With Claude Sonnet 4.6, that costs about $0.01-$0.03 per review. With DeepSeek, under $0.01. Git AutoReview's Team plan at $14.99/month is cheaper than BYOK for most teams.
BYOK (Bring Your Own Key) means you use your own API keys and pay the provider directly. This gives you full control over costs and means your code goes directly to the AI provider, not through a middleman. Git AutoReview supports BYOK on all plans including Free.
Claude Opus 4.6, Claude Sonnet 4.6, GPT-5.4, and Gemini 2.5/3.1 Pro all support 1 million token context windows. This means you can send roughly 750,000 words in a single request — enough for an entire codebase.
AI API prices typically drop every 3-6 months as providers optimize their infrastructure. We update this comparison table monthly. Prices shown are from official provider pricing pages as of April 2026.
Yes. Most providers offer discounted rates for cached/repeated prompts. Anthropic offers prompt caching at 90% discount, DeepSeek at 90% discount ($0.028/1M cached), and OpenAI has similar options. This is especially useful for code review where the system prompt is the same every time.
Yes. Git AutoReview runs Claude, Gemini, and GPT in parallel and merges duplicate findings. This catches more issues than any single model. With BYOK, you control the cost of each model independently.
Developer Toolkit by Git AutoReview
Free tools for developers. AI code review for teams.