Your team reviews 200 lines of code. Someone misses an N+1 query. It ships. Users feel it.
This isn’t a rare story. It’s every team’s Tuesday. Code review is the last line of defense before production — and it’s consistently the most overloaded, context-switching, energy-draining part of the engineering workflow.
The problem isn’t that your developers are careless. It’s that manual review doesn’t scale. And generic linters don’t understand your Laravel conventions, your service layer patterns, your authorization rules.
Claude AI does — if you set it up correctly. Here’s how to make it your Laravel-aware code reviewer that catches what humans miss.
## Why Generic AI Review Falls Short
The naive approach is to paste a PR diff into Claude and ask “what’s wrong with this?” You’ll get feedback — but it won’t know:
- That your project uses `$this->authorize()` on every controller method
- That your team wraps all external API calls in a dedicated `Service` class
- That N+1 queries in your `OrderResource` are a known performance hotspot
- That your `User` model has a custom `scopeActive()` that should always be used instead of raw `where('active', 1)`
Generic review is surface-level. Context-aware review is what saves deployments.
The solution is a CLAUDE.md file — a project-level instruction set that teaches Claude your conventions before it reviews a single line.
## Step 1: Create Your CLAUDE.md
Place this file in the root of your Laravel project. It’s the system prompt Claude reads before reviewing anything:
```markdown
# Laravel Project: Code Review Guidelines
## Architecture Conventions
- Controllers must be thin. Business logic belongs in dedicated Service classes under `app/Services/`.
- All Eloquent queries must go through Repository classes under `app/Repositories/`.
- Never use `User::where('active', 1)`. Always use the `scopeActive()` scope: `User::active()`.
## Security Rules
- Every controller method that modifies data MUST call `$this->authorize()` or use a Policy.
- Never expose `$fillable = ['*']`. Mass assignment must be explicit.
- All user input passed to external APIs must be validated via a FormRequest, not inline `$request->all()`.
## Performance Rules
- Flag any Eloquent relationship accessed inside a loop without eager loading — this is an N+1 query.
- Queries on the `orders` table must include an index hint or be reviewed for full table scans.
- Cache expensive computations using `Cache::remember()` with a TTL.
## Testing Requirements
- Every new public method in a Service class requires a corresponding Feature test.
- Factory usage is preferred over manual model creation in tests.
## Review Output Format
For each issue found, output:
- **Severity**: Critical / High / Medium / Low
- **File and line**: exact location
- **Issue**: clear description
- **Suggestion**: concrete fix with code example
```
This single file transforms Claude from a generic assistant into a reviewer that knows your project.
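To see what those rules look like in practice, here is a sketch of a controller method that satisfies them. The controller, Service, and FormRequest names are illustrative, not taken from a specific codebase:

```php
<?php

namespace App\Http\Controllers;

use App\Http\Requests\StoreOrderRequest; // hypothetical FormRequest
use App\Models\Order;
use App\Services\OrderService;           // hypothetical Service class

class OrderController extends Controller
{
    public function __construct(private OrderService $orders)
    {
    }

    // Thin controller: authorize, validate via the FormRequest, delegate to the Service.
    public function store(StoreOrderRequest $request)
    {
        $this->authorize('create', Order::class);

        $order = $this->orders->create($request->validated());

        return response()->json($order, 201);
    }
}
```

The business logic lives in `OrderService::create()`, which is also where the Testing Requirements section expects a corresponding Feature test.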
## Step 2: GitHub Actions — Automatic PR Review on Every Push
Now wire Claude into your CI pipeline so it reviews every PR automatically when it opens or gets new commits:
```yaml
# .github/workflows/claude-review.yml
name: Claude AI Code Review

on:
  pull_request:
    types: [opened, synchronize]
    paths:
      - 'app/**'
      - 'tests/**'
      - 'routes/**'
      - 'database/migrations/**'

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Claude PR Review
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          github_token: ${{ secrets.GITHUB_TOKEN }}
          model: claude-sonnet-4-20250514
          direct_prompt: |
            You are a senior Laravel engineer reviewing a pull request.
            Read the CLAUDE.md file in the root of this repo first — it contains
            project-specific conventions and rules that must be checked.
            Review only the changed files in this PR. For each issue found,
            post an inline comment at the exact line with severity, description,
            and a concrete code suggestion.
            Focus on:
            1. Security vulnerabilities (missing authorization, mass assignment)
            2. N+1 query patterns
            3. Convention violations (as defined in CLAUDE.md)
            4. Missing test coverage for new Service methods
            5. Any logic that belongs in a Service but is in a Controller
            Confidence threshold: only flag issues you are 80%+ confident about.
            Skip style nitpicks — Pint handles those.
```
The paths filter ensures Claude only runs when application code changes — not on README edits or config tweaks.
## Step 3: The Pre-Push Local Review (Catch It Before CI Does)
Train your team to run a local review before pushing. This catches issues in seconds instead of waiting for CI:
```bash
# Install Claude Code globally
npm install -g @anthropic-ai/claude-code
# From your Laravel project root, review the current git diff
claude "Review my staged changes against the CLAUDE.md conventions in this repo.
Flag any N+1 queries, missing authorization, or convention violations.
Be specific about file and line number."
```
Add this as a Git pre-push hook for automated enforcement:
```bash
#!/bin/bash
# .git/hooks/pre-push (make it executable: chmod +x .git/hooks/pre-push)
echo "🔍 Running Claude code review..."

# `claude --print` exits 0 whenever the command itself succeeds, so ask Claude
# to prefix blocking findings with a marker and check the output ourselves.
REVIEW=$(claude --print "Review the diff of HEAD against main. Check CLAUDE.md conventions.
If you find Critical or High severity issues, start your reply with the word BLOCKED and list them.")
if echo "$REVIEW" | grep -q "^BLOCKED"; then
  echo "$REVIEW"
  echo "❌ Claude found critical issues. Fix before pushing."
  exit 1
fi
echo "✅ Claude review passed."
```
## Step 4: The Security-Focused Review
For migrations, authentication changes, or anything touching payments, run a dedicated security scan:
```bash
# Targeted security review of a specific file
claude "Perform a security review of app/Http/Controllers/PaymentController.php.
Check for: missing authorization, SQL injection risks, unvalidated inputs,
exposed sensitive data in responses, and missing rate limiting.
Reference our CLAUDE.md security rules throughout."
```
Or use Claude Code’s built-in security command:
```bash
claude /security-review
```
This scans your entire repo for OWASP Top 10 vulnerability classes and posts findings with exact file references.
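One finding that list calls out, missing rate limiting, usually has a one-line fix at the routing layer. A sketch, assuming Sanctum-style API authentication (the route definition itself is illustrative):

```php
<?php

// routes/api.php
use App\Http\Controllers\PaymentController;
use Illuminate\Support\Facades\Route;

// Authenticated and throttled: at most 10 payment attempts per minute per user.
Route::post('/payments', [PaymentController::class, 'store'])
    ->middleware(['auth:sanctum', 'throttle:10,1']);
```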
## What Claude Catches That Humans Miss
Here are the real catches that justify the setup time:
**N+1 queries in API Resources:**

```php
// Claude flags this immediately
class OrderResource extends JsonResource
{
    public function toArray($request)
    {
        return [
            'id' => $this->id,
            'customer' => $this->user->name,     // N+1 — user not eager loaded
            'items' => $this->items->count(),    // N+1 — items not eager loaded
        ];
    }
}

// Claude suggests this fix
// In your controller:
$orders = Order::with(['user', 'items'])->paginate(20);
```
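An optional complement to the controller fix: Laravel's standard `whenLoaded()` helper only includes a relationship when it was eager loaded, so a missed `with()` shows up as an absent field instead of a silent N+1. This variant is an addition on top of the fix above, not part of the original catch:

```php
// Optional: make the resource defensive as well as the controller.
public function toArray($request)
{
    return [
        'id' => $this->id,
        'customer' => $this->whenLoaded('user', fn () => $this->user->name),
        'items' => $this->whenLoaded('items', fn () => $this->items->count()),
    ];
}
```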
**Missing authorization:**

```php
// Claude flags: no authorization check — any authenticated user can delete any order
public function destroy(Order $order)
{
    $order->delete();
    return response()->noContent();
}

// Claude suggests:
public function destroy(Order $order)
{
    $this->authorize('delete', $order); // or Gate::authorize()
    $order->delete();
    return response()->noContent();
}
```
**Convention drift:**

```php
// Claude flags: raw where() instead of scopeActive()
$users = User::where('active', 1)->where('role', 'admin')->get();

// Claude suggests (per CLAUDE.md):
$users = User::active()->where('role', 'admin')->get();
```
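**Mass assignment exposure** is another class of catch the security rules above target. A hypothetical before/after in the same style (the attribute names are illustrative):

```php
// Claude flags: wildcard $fillable violates the CLAUDE.md rule that
// mass assignment must be explicit
class Payment extends Model
{
    protected $fillable = ['*'];
}

// Claude suggests: list the assignable attributes explicitly
class Payment extends Model
{
    protected $fillable = ['order_id', 'amount', 'currency', 'status'];
}
```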
## Tuning the Signal-to-Noise Ratio
The biggest risk with AI review is alert fatigue — too many low-confidence comments that developers learn to ignore.
Two things keep the signal high:
1. The 80% confidence threshold — already set in the GitHub Action above. Claude only posts a comment if it’s 80%+ confident it’s a real issue. This eliminates most false positives.
2. Scope Claude to what Pint can’t check — Laravel Pint handles formatting, spacing, and style. Tell Claude explicitly to skip those. Focus it on logic, security, and conventions. Your CLAUDE.md is where you draw that line.
```markdown
## Out of Scope for Review
- Code formatting and indentation (handled by Pint)
- Import ordering (handled by Pint)
- Docblock formatting
- Preference-based naming (camelCase vs snake_case is already enforced)
```
## The Human Review That Remains
Claude doesn’t replace your senior engineers — it replaces the exhausting first pass. After Claude runs, your human reviewers focus on:
- Architectural intent — does this PR solve the right problem?
- Business context — does this logic match what the ticket actually asks for?
- Risk trade-offs — is this the right level of abstraction for this stage of the product?
These are judgment calls that require knowing the company, the customer, and the codebase’s future direction. Claude doesn’t have that context. Your senior engineers do.
The combination — Claude handling blocking and tackling, humans handling strategy — is what gets teams to safer deployments without burning out reviewers.
## Getting Started Today
You don’t need to set up GitHub Actions on day one. Start with just two steps:
- Create your `CLAUDE.md` with your three or four most important conventions
- Run a local `claude` review on your next PR before pushing
Once you see what it catches, the GitHub Actions integration becomes an obvious next step. The CLAUDE.md file you write today will keep improving as you add more conventions over time — and every PR reviewed trains your team on the rules, not just the code.
That’s a code review culture, not just a tool.
