Since launching MakerKit in 2022, I've shipped hundreds of features using AI coding assistants. Claude Code has become my primary tool for building SaaS functionality, and the results have been remarkable: Anthropic reports that 90% of their code is now written with Claude Code, a statistic that was unthinkable even 12 months ago.
But here's what most tutorials miss: the quality of AI-generated code depends almost entirely on the code it has to reference.
This guide covers the practices that actually move the needle when building production features with Claude Code. Not prompting tricks or workflow hacks, but the structural decisions that determine whether AI generates maintainable code or technical debt.
Why Reference Code Matters
When you start from scratch with Claude Code, you're relying on training data that includes:
- Outdated Next.js Pages Router patterns mixed with App Router
- Authentication examples that skip session invalidation
- Stripe integrations that don't handle webhook retries
- Database access without proper connection pooling
- React patterns from 2020 that don't work with Server Components
AI doesn't know which patterns are current. It generates code that "works" but accumulates technical debt from day one.
I see this constantly in support: developers who scaffolded their entire auth system with AI, then discover months later that sessions don't invalidate on password change, or that their webhook handlers aren't idempotent, or that their RLS policies have gaps.
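To make the idempotency point concrete: a webhook handler is idempotent when Stripe's automatic retries cannot apply the same event twice. Below is a minimal sketch of the standard approach, assuming a Drizzle setup; the `processed_events` table and the `db` client are illustrative stand-ins, not part of Makerkit:

```ts
// Sketch: record each Stripe event ID before acting on it, so a
// retried delivery becomes a no-op instead of a double-applied change.
import { pgTable, text, timestamp } from 'drizzle-orm/pg-core';
import type Stripe from 'stripe';

export const processedEvents = pgTable('processed_events', {
  id: text('id').primaryKey(), // Stripe event IDs are globally unique
  processedAt: timestamp('processed_at').defaultNow().notNull(),
});

// `db` is your app's Drizzle client, assumed to be in scope.
export async function handleWebhook(event: Stripe.Event) {
  // Insert first; the primary key rejects duplicates on retry.
  const inserted = await db
    .insert(processedEvents)
    .values({ id: event.id })
    .onConflictDoNothing()
    .returning();

  if (inserted.length === 0) {
    return; // duplicate delivery: this event was already handled
  }

  // ...apply the actual state change exactly once
}
```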
How Good Reference Code Changes Everything
When Claude Code has a well-structured codebase to reference, it:
- Follows established patterns: The existing code shows how to structure Server Actions, handle errors, and validate input
- Uses the right abstractions: Instead of inventing new patterns, AI extends what's already working
- Maintains consistency: New code matches the style, types, and conventions of existing code
- Avoids common pitfalls: Production-tested code demonstrates the edge cases that need handling
The hidden cost of building from scratch isn't just the initial development time. It's every hour you'll spend later fixing the flawed patterns that AI helped you establish.
MakerKit includes AI agent rules and an MCP server specifically so that Claude Code understands the codebase patterns. When you ask AI to "add a new billing feature," it sees how existing billing code works and follows suit.
Setting Up Claude Code for Success
Before diving into feature development, ensure Claude Code can effectively navigate your codebase.
MCP Server Integration
Makerkit's MCP Server gives Claude Code direct access to:
- Component discovery: List all `@kit/ui` components and their props
- Healthcheck scripts: Critical commands to run after changes
- PRD management: Create, read, and update product requirements
Set up the MCP Server before proceeding. This integration significantly improves Claude's understanding of your specific codebase patterns.
Agent Rules That Actually Work
After extensive testing with Claude Code and other AI agents, I've found that detailed API instructions have diminishing returns. These tools crawl your existing patterns, learn from them, and apply them. With a polished codebase, agents already have the context to produce good code.
Instructions that actually move the needle
From our testing with MakerKit, these instructions consistently make a difference:
- Tell agents what they MUST do and MUST NOT do. Absolute rules beat suggestions. "Always verify ownership with userId before database writes" works. "Consider checking ownership" gets ignored.
- Use rules as a router. Tell Claude where and how to find things it needs. "Database schemas are in `packages/database/src/schema/`. Check existing tables before creating new ones." This prevents hallucinated file paths and inconsistent patterns.
- Skip exhaustive API documentation. Unless you catch Claude getting the code wrong, skip it. Claude can read your actual code. Instead of documenting every function signature, point to reference implementations: "See `apps/web/app/home/[account]/settings/_lib/server/server-actions.ts` for the server action pattern."
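Put together, rules written this way might look like the following sketch. The individual rules are examples to adapt to your own codebase; the paths and the `pnpm healthcheck` command match the ones referenced elsewhere in this guide:

```md
## Rules

- You MUST verify ownership with `userId` before any database write.
- You MUST validate all user input with a Zod schema.
- You MUST NOT create a new database table without first checking
  `packages/database/src/schema/` for an existing one.
- You MUST run `pnpm healthcheck` after completing a task.

## Where to find things

- Server action pattern: `apps/web/app/home/[account]/settings/_lib/server/server-actions.ts`
- Database schemas: `packages/database/src/schema/`
```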
Sub-Agents for Specialized Tasks
The kit includes a code-quality-reviewer agent that catches security issues, performance problems, and convention violations. Run it after completing tasks:
```
@code-quality-review perform a review on the feedback widget implementation
```

This creates a separate context focused on code review, avoiding the context drift that happens in long development sessions.
Creating Your Own Agents and Skills
You can create specialized agents for your domain. Add them to `.claude/agents/`:
```md
---
name: api-reviewer
description: Review API endpoints for security and performance
model: sonnet
---

You review API route handlers for:

- Authentication checks (requireAuth middleware)
- Input validation (Zod schemas)
- Error handling (proper HTTP status codes)
- Response formatting (consistent shape)

Flag any endpoint missing authentication or validation.
```

Invoke with `@api-reviewer` followed by your request. Each agent runs in its own context, so you can have a long development session and spawn fresh reviewers without losing clarity.
Skills are reusable prompts for common tasks. Create them in `.claude/skills/your-skill-name/`:
```md
---
name: billing-feature
description: Guide for implementing billing-related features
---

# Billing Feature Development

You are an expert at implementing billing features in this codebase.

## Guidelines

1. Always use the existing Stripe utilities in `packages/billing/`
2. Verify subscription status before allowing premium actions
3. Use the `withSubscription` middleware for protected routes

## Reference Implementation

See `apps/web/app/home/[account]/billing/` for the existing patterns.
```

Invoke skills with `/billing-feature` at the start of your message. Unlike agents, skills inject context into your current conversation rather than spawning a separate one. Use skills when you want Claude to follow specific patterns throughout a task.
Makerkit ships with skills for Drizzle, Server Actions, React forms, Playwright tests, and frontend design. Create your own for domain-specific patterns like billing, notifications, or integrations unique to your app.
Now for the interesting part: getting Claude to produce coherent implementations instead of scattered code.
PRD-Driven Development
Product Requirements Documents help Claude structure complex work into manageable tasks. Instead of one sprawling prompt, you define discrete user stories that Claude implements sequentially.
Makerkit includes a lightweight PRD system through the MCP Server. More sophisticated tools like Taskmaster AI exist, but the built-in system requires no setup and works well for most features.
Creating Effective PRDs
The key to good PRDs is rich detail. Vague requirements produce vague implementations.
Here's an example prompt for a feedback widget feature:
```
Create a PRD for a user feedback widget.

All authenticated users should be able to:
- Click a button to submit feedback
- Choose a category (bug, feature request, other)
- Add a description
- Submit the feedback
- Button is visible in the bottom right corner of the page in authed pages only
- Cannot submit feedback if they're not authenticated

Super Admin should be able to:
- View all feedback in a simple dashboard in the Super Admin section
- Mark feedback as "new", "reviewing", or "done"
- Delete feedback
- Dispatch an email to CONTACT_EMAIL when a new feedback is submitted
```
Claude creates a structured PRD with prioritized user stories:

Revising Before Implementation
Read the PRD before starting implementation. Claude may add redundant steps or miss requirements.
For example, Claude often adds explicit authentication checks even when the codebase already enforces them through middleware. In Makerkit, the enhanceAction function handles auth automatically. A task like "prevent unauthenticated users from submitting" is redundant and wastes time.
Remove or consolidate these before proceeding.
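For context, `enhanceAction` runs authentication and input validation before your handler executes, which is exactly why an explicit auth task is redundant. Here's a rough sketch of the pattern; the import path and option names follow Makerkit's documented API, but verify them against your kit version:

```ts
'use server';

import { z } from 'zod';
// Import path per Makerkit's docs; confirm for your kit version.
import { enhanceAction } from '@kit/next/actions';

const FeedbackSchema = z.object({
  category: z.enum(['bug', 'feature_request', 'other']),
  description: z.string().min(1).max(2000),
});

export const submitFeedbackAction = enhanceAction(
  async (data, user) => {
    // `data` is already validated and `user` is already authenticated,
    // so no extra getUser() call or manual auth check belongs here.
    // ...insert the feedback row for user.id
    return { success: true };
  },
  {
    auth: true, // enforced before the handler runs
    schema: FeedbackSchema,
  },
);
```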
Building a Feature: Step-by-Step
With a PRD in place, implementation goes faster.
Starting the First Task
Ask Claude to begin:
```
Get started with the first task in the PRD
```

Claude will implement the feedback widget, including:
- Database schema: Tables for storing feedback with proper relations (sketched after this list)
- Migration: Generated and applied database changes
- Components: The feedback button and submission dialog
- i18n: Translation files for internationalization
- Server Actions: Type-safe mutations for storing feedback
- Layout integration: Widget added only to authenticated pages
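To make the first item concrete, the feedback schema ended up looking roughly like this. This is a simplified sketch of the end state (after the review fixes described below), not Claude's verbatim output; the names are illustrative:

```ts
import { pgTable, pgEnum, uuid, text, timestamp } from 'drizzle-orm/pg-core';

export const feedbackStatus = pgEnum('feedback_status', ['new', 'reviewing', 'done']);
export const feedbackCategory = pgEnum('feedback_category', ['bug', 'feature_request', 'other']);

export const feedback = pgTable('feedback', {
  id: uuid('id').defaultRandom().primaryKey(),
  userId: uuid('user_id').notNull(), // owner; verified before writes
  category: feedbackCategory('category').notNull(),
  description: text('description').notNull(),
  status: feedbackStatus('status').notNull().default('new'),
  createdAt: timestamp('created_at').defaultNow().notNull(),
});
```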

Reviewing AI-Generated Code
Claude did remarkably well on this task, but review revealed two issues:
- Redundant data: The schema included both `user_id` and `account_id` when only one was needed for personal accounts
- Unnecessary queries: The Server Action called `getUser` even though the user was already authenticated via `enhanceAction`
These are subtle issues that work correctly but add unnecessary complexity. I asked Claude to fix them.
This is why understanding your codebase matters. AI agents generate functional code, but production-quality code requires human review for architectural decisions.
Using the Code Quality Reviewer
After each task, spawn the review agent:
```
@code-quality-review perform a review on the first task of the PRD
```
The reviewer checks for:
- Security issues (missing auth checks, RLS gaps)
- Performance problems (N+1 queries, missing indexes; see the sketch after this list)
- Convention violations (incorrect file locations, naming issues)
- Type safety problems
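On the N+1 point: the reviewer flags per-row lookups inside a loop and suggests a single batched query instead. A hedged sketch with Drizzle, where `db`, `users`, and `feedbackItems` are illustrative stand-ins:

```ts
import { eq, inArray } from 'drizzle-orm';

// N+1: one round trip per feedback row to resolve its author.
for (const item of feedbackItems) {
  const [author] = await db.select().from(users).where(eq(users.id, item.userId));
}

// Batched: a single round trip for all authors.
const authorIds = feedbackItems.map((f) => f.userId);
const authors = await db.select().from(users).where(inArray(users.id, authorIds));
```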

It can be nitpicky, but it catches edge cases worth addressing.
The Completed Widget
The feedback dialog appears when users click the widget button:

Second Task: Admin Dashboard
Continue with the next user story:
```
Get started with the second task in the PRD
```

Claude creates the admin dashboard for managing feedback:

Minor UI adjustments were needed, but the core functionality worked immediately.
What Actually Works with AI Agents
After building dozens of features with Claude Code, here's what consistently produces good results:
Keep Tasks Focused
When Claude starts a task, check its TODO list. If it contains too many items, ask it to focus on the current task only. Large TODO lists clutter the context window and degrade output quality.
Start Fresh Sessions for Complex Work
Long conversations cause context drift. Claude gradually "forgets" the rules from AGENTS.md and starts generating inconsistent code. For complex features, start a new session for each major task.
Run Verification After Every Task
```bash
pnpm healthcheck   # Typecheck, lint, format
pnpm test:unit     # Unit tests
```

For database changes:

```bash
pnpm --filter @kit/database drizzle:generate
pnpm --filter @kit/database drizzle:migrate
```

Update the PRD Manually
Claude sometimes updates the PRD after completing tasks, but not always. Check and update it yourself to maintain accurate status tracking.
Don't Skip Your Own Review
AI agents are tools, not replacements for engineering judgment. Review every change, understand what was implemented, and verify it matches your requirements.
The goal isn't to prove you can build everything yourself. It's to ship production-quality features efficiently. Claude Code, combined with good reference code and structured requirements, makes that possible at a pace that wasn't achievable before.
If you're new to Claude Code with Makerkit, start with the AI Agentic Development guide to configure your environment properly.
Summary
The practices that produce reliable results with Claude Code:
- Start with good reference code: AI extends existing patterns. Bad patterns propagate.
- Use PRDs for complex features: Structured requirements produce structured implementations. Use the built-in PRD system, or tools like Taskmaster AI.
- Keep tasks focused: One user story at a time, with clear scope.
- Review everything: Functional code isn't the same as production-quality code.
- Run the code quality reviewer: Catch security and convention issues before they accumulate.
- Use absolute rules in AGENTS.md: "Must do" and "must not do" work. Suggestions don't.
- Keep rules updated and keep testing: Periodically verify that your rules still produce the behavior you expect. When you catch the AI making a mistake, update the rules to prevent it from happening again.
Claude Code makes experienced developers faster. It's not a replacement for engineering judgment, but it dramatically reduces the time between requirements and working features.