AI agents are the paradigm shift everyone's been waiting for. Not chatbots with personality, not GPT wrappers with a nice UI. Real agents that research, reason, and execute multi-step workflows autonomously. In this tutorial, you'll build one.
We're creating an AI Content Agent that researches topics, generates outlines, writes drafts, and edits for consistency. It uses Mastra for agent orchestration, MakerKit for the SaaS foundation, and Drizzle for type-safe database access. By the end, you'll have a production-ready system that handles multi-tenancy, usage tracking, and proper error handling.
Why this stack? Mastra is TypeScript-native, built for the patterns Next.js developers already know. MakerKit handles the boring-but-critical stuff: auth, billing, team management. Together, they let you focus on the agent logic instead of reinventing infrastructure.
What you'll build:
- A multi-step content generation workflow (research, outline, draft, edit)
- Custom tools for web search and database persistence
- Memory system for brand voice consistency across sessions
- Production patterns: rate limiting, usage tracking, error handling
- React UI for interacting with your agent
Prerequisites: Familiarity with Next.js, React, TypeScript. Basic understanding of how LLM APIs work. About 2-3 hours to complete.
Tested with Mastra 0.x, AI SDK 4.x, and Next.js 16, as of January 2026.
What is an AI Wrapper?
An AI wrapper is an application that sends user input to an LLM API and displays the response. It's a thin layer: UI, prompt template, API call, output formatting. ChatGPT wrappers, GPT wrappers, and most "AI-powered" tools on Product Hunt fall into this category.
Here's what a typical AI wrapper looks like:
User Input → Prompt Template → LLM API → Formatted Output

That's it. One round trip. The AI responds, and you're done.
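In code, the entire pattern fits in a few lines. Here's a minimal sketch using the AI SDK (which we'll meet again in the stack section); the function name and prompt are illustrative:

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// The whole "product": one prompt template, one API call, one response.
export async function summarize(input: string): Promise<string> {
  const { text } = await generateText({
    model: openai('gpt-4o'),
    prompt: `Summarize the following text in three sentences:\n\n${input}`,
  });
  return text;
}
```

No state, no tools, no follow-up: exactly the ceiling described next.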
Wrappers have legitimate uses. They're fast to build, easy to understand, and work well for single-turn interactions. Need a summarizer? Wrapper. Translation tool? Wrapper. Grammar checker? Wrapper. The pattern handles stateless, single-purpose tasks efficiently.
But wrappers hit walls quickly:
- No memory. Each request starts fresh. The AI doesn't remember your brand voice, past conversations, or context from previous sessions. You're re-explaining everything, every time.
- No tools. The AI can't search the web, query your database, or take actions. It's limited to what's in the prompt.
- No multi-step reasoning. Complex tasks require breaking problems down, executing steps in sequence, and adjusting based on results. Wrappers do one thing and stop.
- No autonomy. The AI responds exactly once per request. It can't decide "I need more information" and go get it.
For a content generation use case, a wrapper might generate a blog post from a prompt. But it can't research the topic first. It can't check what you've written before for tone consistency. It can't save the output anywhere. It certainly can't iterate on its own work.
The wrapper pattern dominated 2023-2024 because it was the fastest path to "AI-powered" marketing claims. Now developers need more. Enter AI agents.
From AI Wrapper to AI Agent
The difference between an AI wrapper and an AI agent isn't complexity for its own sake. It's capability.
An AI agent is a system where the LLM can reason about tasks, use tools to take actions, maintain memory across interactions, and execute multi-step workflows. The agent doesn't just respond. It thinks, acts, and iterates.
Here's the conceptual difference:
| Capability | AI Wrapper | AI Agent |
|---|---|---|
| Memory | None (stateless) | Persistent context |
| Tools | None | Can call functions, APIs, databases |
| Reasoning | Single response | Multi-step planning |
| Autonomy | Responds once | Decides what to do next |
| Complexity | Simple | Sophisticated |
| Best for | Single-turn tasks | Multi-step workflows |
Think of the spectrum this way:
- Wrapper: "Here's a prompt, give me a response"
- Assistant: "Here's a conversation, remember what we discussed"
- Agent: "Here's a goal, figure out how to achieve it"
- Autonomous Agent: "Here are your objectives, keep working until done"
We're building at level 3. The agent has clear goals (create content), tools to accomplish them (web search, database save), memory for context (brand voice), and workflows for multi-step execution (research, outline, draft, edit).
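To preview how that maps to code, here's a hedged sketch of the four-step pipeline using Mastra's workflow primitives. The step IDs and schemas are illustrative, and Mastra's workflow API has evolved across 0.x releases, so treat this as the shape, not the letter:

```typescript
import { createWorkflow, createStep } from '@mastra/core/workflows';
import { z } from 'zod';

// Each step is a typed unit of work; its output feeds the next step's input.
const research = createStep({
  id: 'research',
  inputSchema: z.object({ topic: z.string() }),
  outputSchema: z.object({ findings: z.string() }),
  execute: async ({ inputData }) => {
    // Call the web search tool and summarize results here.
    return { findings: `Notes on ${inputData.topic}` };
  },
});

// outline, draft, and edit follow the same pattern (omitted for brevity).

export const contentWorkflow = createWorkflow({
  id: 'content-generation',
  inputSchema: z.object({ topic: z.string() }),
  outputSchema: z.object({ findings: z.string() }),
})
  .then(research)
  .commit();
```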
Why 2026 is the agent year:
The building blocks matured. Frameworks like Mastra provide agent primitives out of the box. Model costs dropped by 90% since GPT-4's launch. Tool calling became reliable across providers. Memory systems moved beyond naive vector search.
Two years ago, building an agent meant stitching together research papers and experimental libraries. Now you import a framework and focus on your use case.
Why Mastra for TypeScript AI Agents
Mastra's philosophy: Python trains, TypeScript ships.
Most AI agent frameworks started in Python because that's where ML researchers work. LangChain, CrewAI, AutoGPT. They're Python-first, with JavaScript ports that feel like second-class citizens.
Mastra flipped this. It's TypeScript-native from the ground up, designed for the patterns Next.js and React developers already use. No awkward Python-to-JavaScript translations. No missing features. No "check the Python docs, they're more complete."
What Mastra provides:
- Agents: The core abstraction. Define an agent with a model, instructions, and tools (see the sketch after this list).
- Tools: Functions the agent can call. Web search, database queries, API calls, anything.
- Workflows: Multi-step orchestration. Chain agent calls with dependencies and branching.
- Memory: Semantic and episodic memory. The agent remembers context across sessions.
- RAG: Built-in retrieval-augmented generation. Pull from knowledge bases.
- MCP Support: Model Context Protocol for standardized tool interfaces.
- Evals: Test your agent's outputs programmatically.
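Here's what those primitives look like together: an agent with one custom tool. This is a minimal sketch based on Mastra's documented Agent and createTool APIs; the tool body and instructions are placeholders:

```typescript
import { Agent } from '@mastra/core/agent';
import { createTool } from '@mastra/core/tools';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// A tool is a typed function the agent can decide to call mid-task.
const webSearchTool = createTool({
  id: 'web-search',
  description: 'Search the web for current information on a topic',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({ results: z.string() }),
  execute: async ({ context }) => {
    // Placeholder: call Serper (or any search API) with context.query here.
    return { results: `Search results for: ${context.query}` };
  },
});

export const contentAgent = new Agent({
  name: 'content-agent',
  instructions:
    'You are a content strategist. Research topics before writing, ' +
    'and match the brand voice stored in memory.',
  model: openai('gpt-4o'),
  tools: { webSearchTool },
});
```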
Why not LangChain.js?
LangChain.js is a port of LangChain Python. It works, but the abstractions often feel foreign to TypeScript developers. The documentation assumes Python knowledge. Updates lag behind the Python version. And the abstraction layers run deep, making debugging opaque.
Mastra is opinionated where LangChain is flexible. That's a feature when you want to ship. LangChain gives you 15 ways to build a chain. Mastra gives you one good way and you move on.
Quick comparison:
| Feature | Mastra | LangChain.js | CrewAI |
|---|---|---|---|
| Language | TypeScript-native | Python port | Python only |
| Next.js DX | Excellent | Decent | N/A |
| Workflow engine | Built-in | LCEL (verbose) | Built-in |
| Memory | Built-in | Requires setup | Built-in |
| Learning curve | Moderate | Steep | Moderate |
| Production focus | Yes | Sometimes | Limited |
Mastra isn't trying to be everything. It's trying to be the best way to build AI agents in TypeScript. For Next.js developers, that focus pays off.
Mastra Documentation has the complete API reference. We'll cover the essentials as we build.
The Tech Stack
Here's what we're building with and why each piece earns its place.
Architecture Overview
```
┌─────────────────────────────────────────────────────┐
│                      FRONTEND                       │
│                Next.js 16 + React 19                │
│            (Chat UI, Content Dashboard)             │
└─────────────────────┬───────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────┐
│                      API LAYER                      │
│       Next.js Route Handlers + Server Actions       │
│                (Rate limiting, Auth)                │
└─────────────────────┬───────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────┐
│                      AI LAYER                       │
│                       Mastra                        │
│          (Agent, Tools, Workflows, Memory)          │
│                          +                          │
│                       AI SDK                        │
│           (Model abstraction, Streaming)            │
└─────────────────────┬───────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────┐
│                      DATA LAYER                     │
│               PostgreSQL + Drizzle ORM              │
│         (Content, Users, Usage, Embeddings)         │
└─────────────────────────────────────────────────────┘
```

The Stack
Mastra for AI Agents
Mastra handles agent orchestration. It defines the agent, manages tool calls, runs workflows, and maintains memory. We're using it because it's the most TypeScript-native option and integrates cleanly with Next.js patterns.
Vercel AI SDK for Model Abstraction
AI SDK (by Vercel) provides the model abstraction layer. Write once, swap between OpenAI, Anthropic, Google, or open-source models. Mastra uses AI SDK under the hood, so you get unified streaming, structured outputs, and provider flexibility.
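The practical payoff is that switching providers is a one-line change. A quick sketch (model IDs are examples):

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
// import { anthropic } from '@ai-sdk/anthropic'; // same interface, different provider

const { text } = await generateText({
  model: openai('gpt-4o'),
  // model: anthropic('claude-3-5-sonnet-latest'), // swap providers here
  prompt: 'Outline a blog post about TypeScript AI agents.',
});
```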
Drizzle ORM for Type-Safe Database Queries
Drizzle ORM gives us type-safe database queries that AI tools actually understand. When you ask Claude Code or Cursor to "add a field to the content table," they read the Drizzle schema and generate correct migrations. Drizzle vs Prisma has the full comparison if you're deciding between them.
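For a flavor of that, here's a hedged sketch of what a content table might look like in Drizzle. These columns are illustrative, not the kit's actual schema:

```typescript
import { pgTable, uuid, text, timestamp } from 'drizzle-orm/pg-core';

// Every query against this table gets its types inferred from this definition.
export const content = pgTable('content', {
  id: uuid('id').primaryKey().defaultRandom(),
  organizationId: uuid('organization_id').notNull(), // multi-tenant isolation
  title: text('title').notNull(),
  body: text('body').notNull(),
  status: text('status').notNull().default('draft'),
  createdAt: timestamp('created_at').defaultNow().notNull(),
});
```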
MakerKit for SaaS Infrastructure
MakerKit provides the SaaS infrastructure: authentication, billing, team management, admin panels. We're using the Drizzle kit specifically. Everything you need for a multi-tenant SaaS without building auth and billing from scratch.
PostgreSQL for Data Storage
PostgreSQL handles data storage: content records, user accounts, usage logs, and the embeddings behind the agent's memory.
Project Setup
Prerequisites:
- Node.js 20.10+
- MakerKit Drizzle kit license
- OpenAI API key (or Ollama for local development)
- Serper API key (for web search)
Step 1: Clone MakerKit Drizzle Starter
Please make sure to have a MakerKit Drizzle kit license before continuing.
```bash
git clone https://github.com/makerkit/next-drizzle-saas-kit-turbo.git my-ai-app
cd my-ai-app
pnpm install
pnpm turbo gen setup
```

Step 2: Install AI Dependencies
Next, you need to install the dependencies for the project:
```bash
pnpm add @mastra/core ai @ai-sdk/openai ollama-ai-provider zod -F web
```

What each package does:
- `@mastra/core` - Agent framework with tool support
- `ai` - Vercel's AI SDK for streaming and model abstraction
- `@ai-sdk/openai` - OpenAI provider for AI SDK
- `ollama-ai-provider` - Ollama provider for local development
- `zod` - Schema validation for tool inputs/outputs
Step 3: Configure Environment Variables
Add to apps/web/.env.local:
```bash
# AI Provider: 'openai' (default) or 'ollama' for local dev
AI_PROVIDER=openai

# OpenAI (required if AI_PROVIDER=openai)
OPENAI_API_KEY=sk-...

# Ollama (optional, for local development)
OLLAMA_BASE_URL=http://localhost:11434/api

# Web Search
SERPER_API_KEY=...
```

For local development without API costs, use Ollama:
```bash
AI_PROVIDER=ollama
# Then run: ollama serve && ollama pull llama3.3
```
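To make the AI_PROVIDER flag actually switch models, resolve it in one place and import the result everywhere. A minimal sketch assuming the packages installed above (this helper and its location are ours, not the kit's):

```typescript
import { openai } from '@ai-sdk/openai';
import { createOllama } from 'ollama-ai-provider';

const ollama = createOllama({
  baseURL: process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434/api',
});

// One switch point: the rest of the app imports `model` and stays provider-agnostic.
export const model =
  process.env.AI_PROVIDER === 'ollama'
    ? ollama('llama3.3')
    : openai('gpt-4o');
```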
Deployment

Set these environment variables in Vercel:
```bash
# Required
OPENAI_API_KEY=sk-...
SERPER_API_KEY=...
DATABASE_URL=postgresql://...

# Optional (defaults to openai)
AI_PROVIDER=openai
```

Set up OpenAI usage alerts at platform.openai.com/usage:
- Monthly budget limit
- Email alerts at 50%, 80%, 100%
Query the `ai_usage_log` table to enforce per-organization limits based on subscription tier.
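A hedged sketch of what that check could look like with Drizzle (the ai_usage_log columns, import paths, and limits here are illustrative):

```typescript
import { and, eq, gte, sql } from 'drizzle-orm';
// Assumes a db instance and aiUsageLog table exported from your Drizzle setup.
import { db } from '~/lib/db';
import { aiUsageLog } from '~/lib/db/schema';

export async function isOverMonthlyLimit(orgId: string, tokenLimit: number) {
  const monthStart = new Date();
  monthStart.setDate(1);
  monthStart.setHours(0, 0, 0, 0);

  // Sum this month's token usage for the organization.
  const [row] = await db
    .select({ total: sql<number>`coalesce(sum(${aiUsageLog.totalTokens}), 0)` })
    .from(aiUsageLog)
    .where(
      and(
        eq(aiUsageLog.organizationId, orgId),
        gte(aiUsageLog.createdAt, monthStart),
      ),
    );

  return row.total >= tokenLimit;
}
```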
From Vibe Coding to Production
You can prototype agents quickly with vibe coding tools. Describe what you want, watch code appear. It works for getting started, and there's nothing wrong with using Cursor or Claude Code to scaffold the initial agent.
But prototypes aren't products.
Production AI agents need everything prototypes skip:
- Error handling. The AI API will fail. Context windows overflow. Rate limits hit. Your agent needs graceful recovery, not crashes (see the retry sketch after this list).
- Rate limiting. Without limits, one user can burn through your entire API budget in an afternoon. Ask me how I know.
- Usage tracking. If you're charging for generations, you need to know exactly how many tokens each user consumed.
- Multi-tenancy. Each account needs isolated data, separate brand voice, independent usage limits. Mixing tenant data isn't just a bug; it's a lawsuit.
- Authentication. Your agent endpoint needs to verify who's calling and what they're allowed to do.
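As a taste of what graceful recovery looks like, here's a generic retry-with-backoff helper for flaky LLM calls. It's a plain TypeScript utility, not a MakerKit or Mastra API:

```typescript
// Retries transient failures (rate limits, 5xx) with exponential backoff.
export async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 500 } = {},
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= retries) throw error; // out of retries: surface the error
      const delay = baseDelayMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage sketch: const result = await withRetry(() => contentAgent.generate(prompt));
```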
MakerKit provides all of this out of the box. Authentication is handled. Billing infrastructure exists. Multi-tenancy is built into the architecture. The admin panel shows you usage across accounts.
The gap between "AI agent that works on my machine" and "AI agent that handles production traffic" is substantial. Using a SaaS starter kit closes that gap. You focus on the agent logic. The infrastructure is already solved.
What's Next
You now have a production-ready AI Content Agent. But this is a foundation, not a ceiling.
Extend the agent:
- Add image generation tools for blog post covers
- Integrate social media posting (Twitter, LinkedIn)
- Build a human-in-the-loop editing step before publish
- Implement RAG to pull from your existing content library
Improve quality:
- Add an evaluation pipeline to measure content quality over time
- Build A/B testing for different prompts and workflows
- Create feedback loops where user edits improve future generations
Scale the product:
- Add team collaboration features
- Build content calendars and scheduling
- Integrate with CMS platforms (WordPress, Contentful)
Mastra's evaluation framework helps with quality measurement. MakerKit's team features handle collaboration. The architecture we've built supports all of these extensions.
Resources:
- Mastra Documentation for the complete API reference
- AI SDK Documentation for model provider details
- MakerKit Documentation for the SaaS infrastructure
Quick Recommendation
Building an AI Content Agent with Mastra is best for:
- TypeScript/Next.js developers who want native DX, not Python ports
- SaaS builders who need multi-tenant AI features
- Teams that want production patterns, not just demos
- Developers moving beyond simple GPT wrappers
Skip this approach if:
- You need a quick prototype (use a no-code AI builder)
- Your use case is single-turn with no memory or tools needed (a wrapper is fine)
- You're building in Python (use LangChain or CrewAI)
Our pick: Mastra + MakerKit for production AI agents in TypeScript. The combination gives you agent orchestration without the infrastructure headache. Build the AI logic, ship the SaaS features, iterate based on user feedback.
Frequently Asked Questions
What is the difference between an AI wrapper and an AI agent?

A wrapper makes one round trip: prompt in, response out, no state. An agent adds memory across sessions, tools it can call, and multi-step workflows, so it can pursue a goal rather than just answer a prompt.

How much does it cost to run an AI Content Agent?

It depends on your model and volume: every workflow run consumes tokens across the research, outline, draft, and edit steps. Set budget alerts in your provider dashboard, track per-account consumption in `ai_usage_log`, and use Ollama locally during development to avoid API costs entirely.

Can I use Claude instead of GPT-4 with Mastra?

Yes. Mastra sits on top of the AI SDK, so swapping to Anthropic (or Google, or an open-source model) is a one-line model configuration change.

Is Mastra production-ready?

It's pre-1.0 (this tutorial was tested against 0.x), so expect some API churn between releases. That said, it's built with production in mind: workflows, memory, and evals are first-class features, and it deploys cleanly inside a Next.js app.

How do I add RAG to pull from my own documents?

Mastra has built-in RAG support: chunk and embed your documents, store the vectors (PostgreSQL can hold the embeddings), and expose retrieval to the agent. The "Extend the agent" section above lists this as a natural next step.

How does billing work with MakerKit?

MakerKit ships the billing infrastructure: subscriptions, plans, and per-organization tiers. Pair it with your usage tracking to enforce generation limits per subscription tier.

Why use Mastra over LangChain.js?

LangChain.js is a port of a Python library, with abstractions that feel foreign to TypeScript, docs that lag the Python version, and deep layers that make debugging opaque. Mastra is TypeScript-native and opinionated, which is exactly what you want when shipping.

Can I use this pattern for other agent types?

Yes. The building blocks (agent, tools, workflows, memory, rate limiting, usage tracking) are the same for research assistants, support agents, or data-analysis agents. Swap the tools and instructions for your domain.
Next Steps
Pick your path based on where you are:
- Ready to build? Clone the MakerKit Drizzle kit and follow along with this tutorial
- Exploring options? Check the Mastra documentation and AI SDK guides first
- Want the foundation without the AI? Start with MakerKit and add AI features as you need them
The AI agent landscape is moving fast. Frameworks improve monthly. Model capabilities expand. Costs drop. But the fundamentals we've covered (tools, memory, workflows, production patterns) will remain relevant. Master the concepts, and the specific implementations become details.