How to Add AI Code Review to Bitbucket Pipelines
Set up automated AI code review in your Bitbucket Pipelines CI/CD workflow. YAML examples, pipeline optimization, and integration with Jira and VS Code.
Using Bitbucket? Get AI code review with Gemini, Claude & GPT.
Try it free on VS Code →
Your CI/CD pipeline already runs tests, lints code, and checks for security issues. Why not have it review your code too?
Bitbucket Pipelines is Atlassian's built-in CI/CD service, and it's where most Bitbucket teams run their automation. Adding AI code review to that pipeline means every pull request gets automated feedback before a human reviewer even opens it.
This guide covers three ways to get AI code review running in Bitbucket Pipelines, from a quick VS Code-based approach to a fully automated pipeline step. Actual YAML configurations, cost breakdowns, and the specific tradeoffs for each approach are all included.
Why Add AI Review to Your Pipeline?
Manual code review is the slowest part of most development workflows. Studies from Google and Microsoft estimate developers spend 6-12 hours per week on reviews, with a median first-response time of over 4 hours.
Adding AI review to your pipeline changes the math:
Instant feedback on every PR. The AI reviews your code as soon as the pipeline triggers. No waiting for a teammate across time zones to wake up and open the diff.
Consistent standards. Human reviewers have different priorities and varying attention depending on the day. AI review applies the same checks every time: security vulnerabilities, error handling gaps, performance problems, naming conventions.
Faster human reviews. When a senior developer opens a PR that's already been AI-reviewed, they skip the mechanical stuff (style, obvious bugs, missing null checks) and go straight to architecture and business logic.
Fewer broken builds. Catching issues before merge means fewer hotfixes and less time debugging production problems that a review would have caught.
Git AutoReview runs Claude, Gemini & GPT on your pull requests. You approve every comment before it goes live.
Install the VS Code Extension →
Approach 1: VS Code Extension + Bitbucket (Simplest)
The fastest way to add AI code review to your Bitbucket workflow is through a VS Code extension. This doesn't modify your pipeline at all — the review happens in your editor before you even create the PR.
How It Works
- You write code and push a branch
- Open a pull request in Bitbucket
- In VS Code, Git AutoReview shows the PR diff
- You click "Review with AI" — Claude, Gemini, or GPT analyzes the changes
- You read the suggestions, pick the ones that matter, and publish them as PR comments
- Your teammate sees the AI review alongside the code, already curated by you
Setup
Install Git AutoReview from the VS Code marketplace, connect your Bitbucket account (Cloud, Server, or Data Center), and add at least one AI provider API key.
The extension supports all three Bitbucket deployment types:
| Deployment | Authentication | Setup Time |
|---|---|---|
| Bitbucket Cloud | Atlassian OAuth | ~2 minutes |
| Bitbucket Server | Personal Access Token | ~5 minutes |
| Bitbucket Data Center | Personal Access Token | ~5 minutes |
When to Use This Approach
Choose this approach when:
- You want AI review without touching your pipeline YAML
- Your team prefers human curation of AI suggestions (the "human-in-the-loop" model)
- You have Bitbucket Server or Data Center with strict firewall rules
- You want to try AI code review before committing to pipeline changes
Limitations
The review only runs when someone manually triggers it in VS Code. If a developer forgets or skips the review, the PR goes out without AI feedback. For teams that need every PR reviewed, Approach 2 or 3 is a better fit.
Approach 2: Pipeline Step with Git Diff (Automated)
This approach adds a step to your bitbucket-pipelines.yml that extracts the diff and sends it to an AI provider. Every PR gets reviewed automatically — no human trigger needed.
The Architecture
```
Developer pushes branch
        │
        ▼
Bitbucket Pipeline triggers on PR
        │
        ├── Step 1: Build & Test (existing)
        │
        ├── Step 2: AI Code Review (new)
        │     ├── git diff origin/main...HEAD
        │     ├── Send diff to AI API (Claude/GPT/Gemini)
        │     └── Output review to pipeline logs or artifact
        │
        └── Step 3: Deploy (existing)
```
YAML Configuration
Here's a working bitbucket-pipelines.yml configuration that adds AI review as a pipeline step:
```yaml
image: atlassian/default-image:4

pipelines:
  pull-requests:
    '**':
      - parallel:
          - step:
              name: Build & Test
              caches:
                - node
              script:
                - npm ci
                - npm run lint
                - npm test
          - step:
              name: AI Code Review
              script:
                - apt-get update && apt-get install -y jq
                - git fetch origin main
                - DIFF=$(git diff origin/main...HEAD)
                - |
                  if [ -z "$DIFF" ]; then
                    echo "No changes to review"
                    exit 0
                  fi
                - |
                  # Build the request body with jq so the diff is JSON-escaped safely
                  PAYLOAD=$(jq -n --arg diff "$DIFF" '{
                    model: "claude-sonnet-4-6-20250514",
                    max_tokens: 4096,
                    messages: [{
                      role: "user",
                      content: ("Review this code diff for bugs, security issues, and improvements. Be specific about file names and line numbers. Focus on real problems, not style preferences.\n\nDiff:\n" + $diff)
                    }]
                  }')
                  REVIEW=$(curl -s https://api.anthropic.com/v1/messages \
                    -H "x-api-key: $ANTHROPIC_API_KEY" \
                    -H "anthropic-version: 2023-06-01" \
                    -H "content-type: application/json" \
                    -d "$PAYLOAD")
                # tee writes the review to the artifact file and the pipeline log
                - echo "$REVIEW" | jq -r '.content[0].text' | tee review-output.txt
              artifacts:
                - review-output.txt
```
Key Details
Running in parallel. The AI review step runs alongside your build and test steps, not after them. This means it doesn't add to your total pipeline time — the build and review happen simultaneously.
Using git diff instead of the API. Some guides suggest using the Bitbucket REST API to fetch the PR diff. Don't. The REST API approach has authentication problems, permission issues, and breaks frequently. git diff origin/main...HEAD is reliable and works with any Bitbucket deployment.
Artifact output. The review gets saved as a pipeline artifact, so you can download and reference it later. The review also prints to the pipeline log for quick access.
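A quick way to see why the three-dot form matters: it diffs against the merge base with main, so commits that land on main after your branch point don't show up as noise in the review. A minimal sketch in a throwaway repo (file names are illustrative):

```shell
# Demonstrate three-dot vs two-dot diff in a temporary repo
set -e
REPO=$(mktemp -d)
cd "$REPO"
git init -q -b main
git config user.email ci@example.com
git config user.name ci
echo "base" > app.txt
git add app.txt && git commit -qm "base"

# Feature branch changes one file
git checkout -qb feature
echo "feature change" >> app.txt
git commit -qam "feature work"

# Meanwhile, main moves ahead with an unrelated change
git checkout -q main
echo "hotfix" > other.txt
git add other.txt && git commit -qm "hotfix on main"
git checkout -q feature

# Three-dot: only the branch's own changes
git diff main...HEAD --name-only   # prints: app.txt
# Two-dot: also drags in main's unrelated hotfix
git diff main..HEAD --name-only    # prints: app.txt and other.txt
```

That difference is exactly what you want in a review step: the AI sees only what the PR author changed.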
Repository Variables
You'll need to set these as repository variables in Bitbucket (Settings → Repository Variables):
| Variable | Value | Secured |
|---|---|---|
| `ANTHROPIC_API_KEY` | Your Claude API key | Yes |
| `OPENAI_API_KEY` | Your GPT API key (alternative) | Yes |
| `GOOGLE_AI_KEY` | Your Gemini API key (alternative) | Yes |
Mark API keys as "Secured" so they don't appear in pipeline logs.
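It's also worth failing fast when a key is missing, rather than sending an unauthenticated request and parsing a confusing error. A small POSIX-shell sketch (`require_var` is a hypothetical helper, not a Bitbucket built-in) that confirms a variable is set without ever echoing its value:

```shell
# Fail fast when a secured repository variable is missing, without ever
# printing its value to the pipeline log.
require_var() {
  eval "VAL=\${$1:-}"                      # indirect lookup of the named variable
  if [ -z "$VAL" ]; then
    echo "ERROR: $1 repository variable is not set" >&2
    return 1
  fi
  echo "$1 present (length: ${#VAL})"      # length only, never the value
}
```

Calling `require_var ANTHROPIC_API_KEY` as the first line of the review step's script makes a missing secret an obvious one-line failure instead of a cryptic 401 later on.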
Cost Analysis
Running AI review in your pipeline costs two things: pipeline minutes and API calls.
Pipeline minutes. The AI review step typically takes 15-30 seconds. On Bitbucket Premium ($6.60/user/month), you get 3,500 build minutes per month. If your team creates 200 PRs per month and each review uses 0.5 minutes, that's 100 minutes — about 3% of your allocation.
API costs. A typical PR diff is 500-2000 tokens. Using Claude Sonnet at roughly $3/million input tokens and $15/million output tokens, each review costs approximately $0.01-0.05. For 200 PRs/month, budget about $2-10 for AI API costs.
| Component | Monthly Cost (200 PRs) |
|---|---|
| Pipeline minutes | ~100 min of 3,500 included |
| Claude API | $2-10 |
| GPT API | $1-8 |
| Gemini API | $1-5 |
| Total added cost | $2-10/month |
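The per-review figure can be sanity-checked with a one-liner. This assumes the prices quoted above ($3 per million input tokens, $15 per million output tokens for Claude Sonnet) and a PR at the large end of typical (2,000 input tokens, 1,000 output tokens):

```shell
# Back-of-envelope check of the API cost estimate (prices as assumed in the text)
PER_REVIEW=$(awk 'BEGIN {
  in_tokens  = 2000   # large end of a typical PR diff
  out_tokens = 1000   # a detailed review response
  printf "%.3f", in_tokens / 1e6 * 3 + out_tokens / 1e6 * 15
}')
MONTHLY=$(awk -v c="$PER_REVIEW" 'BEGIN { printf "%.2f", c * 200 }')
echo "per review: \$${PER_REVIEW}  |  200 PRs/month: \$${MONTHLY}"
# prints: per review: $0.021  |  200 PRs/month: $4.20
```

About two cents per review, roughly $4 a month at 200 PRs, squarely inside the $2-10 estimate.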
Compare this to per-developer pricing from commercial tools ($15-25/user/month for a team of 10 = $150-250/month) and the pipeline approach looks very cost-effective.
Git AutoReview supports Claude, Gemini & GPT with human approval. $14.99/month for your whole team.
See Pricing →
Approach 3: Bitbucket Pipelines + Git AutoReview Extension (Best of Both)
The previous approaches have trade-offs: Approach 1 is manual, Approach 2 dumps raw AI output into pipeline logs without curation. The third approach combines both.
How It Works
- Pipeline runs automatically on every PR (catches everything)
- AI review results appear as structured pipeline output
- Developer opens VS Code, sees the AI suggestions in the Git AutoReview panel
- Developer picks the useful suggestions and publishes them as curated PR comments
- Reviewers see clean, relevant feedback — not a wall of AI text
You get automation (every PR gets reviewed) with quality control (humans filter the noise).
When This Matters
AI models sometimes flag things that aren't actually problems. They suggest "improvements" that would make code worse, or complain about performance in code that runs once during startup. Without human filtering, the false positives pile up and teammates learn to ignore all AI feedback.
That's "AI fatigue," and it's the main reason automated review tools get turned off. The human-in-the-loop model avoids it.
Making the Most of Bitbucket Pipelines + AI Review
Integrate with Jira for Context
If your team uses Jira (and since you're on Bitbucket, you probably do), connect your AI review tool to Jira. When the AI can read the linked ticket's acceptance criteria, the feedback gets a lot more specific.
Instead of generic suggestions like "consider adding error handling," you get:
```
Jira ticket PROJ-456 specifies: "User should see a friendly error
message when the API returns 429 (rate limited)."

This PR adds the API call in src/api/client.ts but doesn't handle
the 429 status code. The catch block on line 47 only handles
network errors. Consider adding a specific handler for rate limiting
that shows the user a retry message.
```
Git AutoReview's Jira integration does this automatically when you connect your Atlassian account.
Optimize Pipeline Minutes
Pipeline minutes are finite. A few ways to keep AI review from eating your build budget:
Run only on PRs, not all branches. The YAML examples above use pull-requests triggers specifically. Don't run AI review on every push to every branch.
```yaml
# Good: only review pull requests
pipelines:
  pull-requests:
    '**':
      - step:
          name: AI Review
          # ...
```

```yaml
# Bad: reviews every push to every branch
pipelines:
  default:
    - step:
        name: AI Review
        # ...
```
Skip small changes. If the diff is under 10 lines, the review probably isn't worth the pipeline time.
```yaml
- |
  LINES=$(git diff origin/main...HEAD | wc -l)
  if [ "$LINES" -lt 10 ]; then
    echo "Small change ($LINES lines), skipping AI review"
    exit 0
  fi
```
Cache your tools. If your review step installs packages, cache them.
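Bitbucket supports custom cache definitions for exactly this. A sketch, assuming your review step installs its tools into a fixed directory (the cache name `review-tools`, its path, and the installer script are all hypothetical):

```yaml
definitions:
  caches:
    review-tools: /opt/review-tools   # hypothetical directory to cache between runs

pipelines:
  pull-requests:
    '**':
      - step:
          name: AI Code Review
          caches:
            - review-tools
          script:
            # hypothetical installer that skips work when the cache is already populated
            - ./scripts/install-review-tools.sh
```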
Use conditions. Skip review for documentation-only changes or dependency updates.
```yaml
- |
  FILES=$(git diff --name-only origin/main...HEAD)
  if echo "$FILES" | grep -qvE '\.(md|txt|json|lock)$'; then
    echo "Code changes detected, running review..."
  else
    echo "Only docs/config changes, skipping review"
    exit 0
  fi
```
Handle Large Diffs
AI models have context limits. A PR that changes 50 files will exceed most models' input capacity. Handle this gracefully:
```yaml
- |
  DIFF=$(git diff origin/main...HEAD)
  TOKENS=$(echo "$DIFF" | wc -w)
  if [ "$TOKENS" -gt 30000 ]; then
    echo "WARNING: Large diff ($TOKENS words). Reviewing changed files individually."
    for FILE in $(git diff --name-only origin/main...HEAD | head -20); do
      echo "--- Reviewing $FILE ---"
      git diff origin/main...HEAD -- "$FILE" > /tmp/file_diff.txt
      # Send individual file diff to AI...
    done
  fi
```
Alternatively, use a model with a larger context window. Gemini 3.1 Pro handles up to 1 million tokens, making it suitable for reviewing even the largest PRs in a single pass.
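Whichever route you take, knowing the diff's approximate token count up front helps you decide between a single-pass and a per-file review. The `wc -w` word count above is a coarse proxy; character count divided by four tracks real tokenizers a bit more closely. A sketch (the 4:1 ratio is a rule of thumb, not a real tokenizer, and `estimate_tokens` is a hypothetical helper):

```shell
# Rough token estimate: ~4 characters per token is a common approximation
estimate_tokens() {
  CHARS=$(wc -c < "$1")
  echo $((CHARS / 4))
}

# Demo on a synthetic 4,000-character "diff"
printf 'a%.0s' $(seq 1 4000) > /tmp/sample_diff.txt
estimate_tokens /tmp/sample_diff.txt   # prints: 1000
```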
Pipeline Security
Protecting API Keys
Never hardcode API keys in your bitbucket-pipelines.yml. Always use repository variables marked as "Secured."
Secured variables in Bitbucket:
- Don't appear in pipeline logs
- Can't be read by forked repository pipelines
- Are only available to the original repository's pipelines
For extra security on Server/Data Center, you can use Bitbucket's built-in secret detection or integrate with your organization's vault (HashiCorp Vault, AWS Secrets Manager).
Code Privacy
When you send code diffs to AI providers, the code leaves your infrastructure temporarily. If your team has strict data handling rules:
Use BYOK (Bring Your Own Key). With Git AutoReview's BYOK setup, your code goes directly from your pipeline to the AI provider under your own API agreement. No third-party service stores or processes your code.
Check provider data policies. As of 2026:
- Anthropic (Claude): API data not used for training by default
- Google (Gemini): Business API data not used for training
- OpenAI (GPT): API data not used for training since March 2023
Consider on-premise models. For the most sensitive codebases, you can run open-source models (Llama, Qwen) on your own infrastructure and point the pipeline at your local API endpoint.
Common Bitbucket Pipeline Problems (and Solutions)
"Permission denied" on git fetch
Bitbucket Pipelines run with limited git context by default. If git fetch origin main fails:
```yaml
- step:
    name: AI Code Review
    clone:
      depth: full  # fetch full history instead of a shallow clone
    script:
      - git fetch origin main
      - git diff origin/main...HEAD
```
The depth: full setting ensures the pipeline has enough git history to compute the diff.
Pipeline timeout on large repos
Large monorepos can take a while to clone. Set a reasonable timeout:
```yaml
- step:
    name: AI Code Review
    max-time: 10  # 10-minute timeout
    script:
      # ...
```
Rate limiting from AI providers
If your team submits many PRs simultaneously, you might hit API rate limits. Add retry logic:
```yaml
- |
  for i in 1 2 3; do
    RESPONSE=$(curl -s -w "\n%{http_code}" https://api.anthropic.com/v1/messages \
      -H "x-api-key: $ANTHROPIC_API_KEY" \
      -H "anthropic-version: 2023-06-01" \
      -H "content-type: application/json" \
      -d "$PAYLOAD")
    HTTP_CODE=$(echo "$RESPONSE" | tail -1)
    if [ "$HTTP_CODE" = "200" ]; then
      echo "$RESPONSE" | sed '$d' | jq -r '.content[0].text'
      break
    elif [ "$HTTP_CODE" = "429" ]; then
      echo "Rate limited, waiting 30s..."
      sleep 30
    else
      echo "API error: $HTTP_CODE"
      break
    fi
  done
```
10 free reviews per day. No credit card required. Takes 2 minutes to set up.
Install Free → Setup Guide
Comparing AI Review Approaches for Bitbucket
| Feature | VS Code Extension | Pipeline DIY | Extension + Pipeline |
|---|---|---|---|
| Automation | Manual trigger | Fully automated | Automated + curated |
| Setup time | 2 minutes | 30 minutes | 35 minutes |
| Pipeline cost | Zero | ~100 min/month | ~100 min/month |
| AI cost | Pay per use | Pay per use | Pay per use |
| Human curation | Yes | No | Yes |
| Bitbucket Server/DC | Yes | Yes | Yes |
| Jira integration | Yes (via extension) | Manual | Yes (via extension) |
| False positive handling | Filter before publish | Raw output | Filter before publish |
For most teams, Approach 1 (VS Code extension) is the right starting point. It takes two minutes to set up, costs nothing in pipeline minutes, and gives you human control over what gets published.
If your team needs every PR automatically reviewed, Approach 3 (combined) is the most robust option.
Approach 2 (pipeline-only) makes sense for teams comfortable with custom tooling who want to minimize per-developer licensing costs and don't mind raw AI output in their pipeline logs.
What's Coming: Atlassian Rovo Dev + MCP
Atlassian announced agentic CI/CD capabilities for Bitbucket Pipelines in Q1 2026. The concept: AI agents interact directly with Bitbucket Cloud through custom MCP (Model Context Protocol) servers injected into the pipeline environment. These agents could raise pull requests, add review comments, create code suggestions, and read pipeline logs.
This is early-stage technology. If you need AI code review working today, the approaches in this guide are production-ready. When Atlassian's agentic features mature, they'll probably complement dedicated review tools rather than replace them, the same way GitHub Copilot and CodeRabbit serve different purposes on the GitHub side.
Getting Started
Pick the approach that matches your team's needs:
- Just exploring? Install Git AutoReview in VS Code. Free tier, two-minute setup, no pipeline changes.
- Want automation? Add the pipeline step from Approach 2 to your bitbucket-pipelines.yml. Set your API keys as secured repository variables.
- Want both? Start with the extension to understand the AI review quality, then add the pipeline step once your team is comfortable.
Whatever approach you pick, the result is the same: faster feedback, fewer bugs reaching main, and your human reviewers spending their time on the decisions that actually need a human brain.
Related Resources
Bitbucket Content:
- AI Code Review for Bitbucket: Complete Guide — Tool comparison and full setup walkthrough
- Bitbucket Cloud vs Data Center Comparison — Which deployment suits your team
- Bitbucket AI Code Review Migration Guide — Moving from manual to AI-assisted review
- Bitbucket vs GitHub for Teams — Platform comparison for 2026
- Bitbucket Data Center AI Code Review — Enterprise-specific guide
AI Models & Setup:
- Claude vs Gemini vs GPT for Code Review — Which model works best
- AI Code Review Setup Guide — Step-by-step for all platforms
- BYOK Code Review — Use your own API keys for privacy