Build Your Own AI Chatbot: A Complete Guide
In this tutorial, we will learn how to build a personal AI chatbot using the model of your choice. This is a 100% human-written guide, perfect for developers who want to integrate AI into their portfolio or create a standalone SaaS product. We will use Next.js 15 and the Vercel AI SDK, a widely used toolkit for building AI-powered web interfaces.
We will focus on creating a streaming chat UI, just like ChatGPT, but personalized for you.
Why Build Your Own?
- Privacy: You control the data.
- Customization: Use any model (GPT-4, Gemini, Claude, Llama).
- Learn the Tech Stack: AI engineering is one of the most in-demand skills today.
🛠️ Tech Stack
- Next.js 15 (Framework)
- Vercel AI SDK (AI Integration)
- TailwindCSS (Styling)
- Shadcn UI (Components)
📺 Video Tutorial Reference
For a visual walkthrough, I highly recommend checking out this video by Sam Thoyre, which covers similar ground.
🚀 Step-by-Step Implementation
Step 1: Create a New Next.js Project
First, let's initialize our project. Open your terminal and run:
```bash
npx create-next-app@latest my-ai-chat
cd my-ai-chat
```
Select the defaults (TypeScript, Tailwind, App Router).
Step 2: Install Dependencies
We need the Vercel AI SDK and a provider. We'll use OpenAI for this example, but you can easily swap it for Google Generative AI (Gemini).
```bash
npm install ai @ai-sdk/openai zod
```
Step 3: Configure Environment Variables
Create a .env.local file in your root directory and add your secret key.
```
OPENAI_API_KEY=sk-your-secret-key
```
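The SDK reads this key from `process.env` at request time. As an optional safeguard (a sketch of my own, not part of the SDK), you can fail fast with a clear error if the variable is missing, rather than getting an opaque 401 from the provider later. The `requireEnv` helper name here is hypothetical:

```typescript
// Hypothetical helper: throw early if a required env var is missing or blank,
// so a misconfigured deployment fails with a clear message at startup.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage, e.g. near the top of a route handler:
// const apiKey = requireEnv("OPENAI_API_KEY");
```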
Step 4: Create the API Route
Next.js App Router uses Route Handlers. Create a file at app/api/chat/route.ts. This handles the communication with the AI model.
```ts
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
  });

  return result.toDataStreamResponse();
}
```
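The client POSTs a JSON body shaped like `{ messages: [{ role, content }, ...] }`. We installed `zod` earlier, which is a natural fit for validating that body before handing it to `streamText`; as a dependency-free sketch of the same idea (a type guard I wrote for illustration, not SDK code):

```typescript
// Sketch: validate that an incoming request body looks like a chat message list
// before passing it to the model.
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

export function isChatMessages(value: unknown): value is ChatMessage[] {
  return (
    Array.isArray(value) &&
    value.every(
      (m) =>
        typeof m === "object" &&
        m !== null &&
        ["user", "assistant", "system"].includes((m as ChatMessage).role) &&
        typeof (m as ChatMessage).content === "string"
    )
  );
}
```

In the route handler, you could return a 400 response when the guard fails instead of letting the provider reject the malformed request.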
Step 5: Build the Chat Interface
Now, let's create the frontend. Open app/page.tsx and use the useChat hook. This hook handles state, submission, and streaming automatically!
```tsx
// app/page.tsx
"use client";
import { useChat } from "ai/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div className="stretch mx-auto flex w-full max-w-md flex-col py-24">
      <h1 className="mb-8 text-center text-2xl font-bold">My Personal AI</h1>
      <div className="space-y-4">
        {messages.map((m) => (
          <div
            key={m.id}
            className={`rounded-lg p-4 whitespace-pre-wrap ${
              m.role === "user"
                ? "self-end bg-blue-100 text-blue-900"
                : "bg-gray-100 text-gray-900"
            }`}
          >
            <span className="font-bold">
              {m.role === "user" ? "You: " : "AI: "}
            </span>
            {m.content}
          </div>
        ))}
      </div>
      <form
        onSubmit={handleSubmit}
        className="fixed bottom-0 w-full max-w-md border-t bg-white p-2"
      >
        <input
          className="w-full rounded border border-gray-300 p-2 shadow-sm"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```
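Conceptually, `useChat` grows the latest assistant message as streamed chunks arrive, triggering a re-render on each update. Here is a tiny self-contained sketch of that accumulation pattern, with a simulated token stream standing in for the network (no SDK involved):

```typescript
// Simulated token stream: an async generator standing in for the model's response.
async function* fakeTokenStream(): AsyncGenerator<string> {
  for (const chunk of ["Hello", ", ", "world", "!"]) {
    yield chunk;
  }
}

// Mirrors what a streaming chat hook does: build up the assistant message
// chunk by chunk, notifying the UI after each update.
export async function accumulate(
  stream: AsyncGenerator<string>,
  onUpdate: (partial: string) => void
): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk;
    onUpdate(text); // in React, this would be a state update
  }
  return text;
}
```

This is why the UI appears to "type" its answer: each partial string is rendered as soon as it exists, long before the full response is complete.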
Step 6: Run It!
```bash
npm run dev
```
Visit http://localhost:3000. You now have a fully functional, streaming AI chatbot!
🎨 Pro Tips
- System Prompts: Customize the AI's personality by adding a `system` property to the `streamText` call.
- UI/UX: Use Shadcn UI for polished input fields and avatars.
- Markdown Support: Use `react-markdown` to render code blocks and bold text properly in AI responses.
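The `system` property on `streamText` is the SDK's built-in way to set a persona. An equivalent, SDK-agnostic way to think about it is prepending a system message to the conversation yourself; this helper is my own sketch (the function name is not from the SDK):

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical helper: ensure exactly one system message leads the conversation.
// Passing system: "..." to streamText accomplishes the same thing.
export function withSystemPrompt(messages: Message[], prompt: string): Message[] {
  const rest = messages.filter((m) => m.role !== "system");
  return [{ role: "system", content: prompt }, ...rest];
}
```

For example, `withSystemPrompt(messages, "You are a pirate.")` makes every reply stay in character without the user ever seeing the instruction.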
Conclusion
Building a personal AI chat is easier than ever with the Vercel AI SDK. You don't need to be an expert in Python or Machine Learning. With just a few lines of JavaScript, you can harness the power of LLMs directly in your web/mobile apps.
Start building today!