AI Integration
Learn how to integrate and use the Vercel AI SDK in NEXTDEVKIT to build powerful AI features
NEXTDEVKIT integrates the Vercel AI SDK and Google Generative AI to provide powerful AI functionality for your applications. With minimal configuration, you can quickly enable chatbots, image generation, and other AI features.
🚀 Quick Start
NEXTDEVKIT comes pre-configured with Google Gemini AI models. You only need to configure one environment variable to enable all AI features.
1. Configure Google Gemini API
Get your Google Generative AI API key:
- Visit Google AI Studio
- Create a new project or select an existing one
- Generate an API key
Add the API key to your .env file:
# ---------GenAI----------
GOOGLE_GENERATIVE_AI_API_KEY=your_google_ai_api_key
2. Start the Application
After configuration, restart the development server:
pnpm dev
Now you can access the /app/ai path to try the following AI features:
- Chatbot - /app/ai/chat
- Image Generation - /app/ai/image
🏗️ AI Architecture
NEXTDEVKIT's AI integration uses a modular design:
src/
├── ai/ # AI core configuration
│ ├── index.ts # Model registry and config
│ ├── types.ts # AI type definitions
│ ├── errors.ts # AI error handling
│ ├── prompts.ts # System prompts
│ ├── agents/ # AI agents
│ │ └── get-weather.ts # Weather query tool
│ ├── image/ # Image generation related
│ │ ├── suggestions.ts # Image generation suggestions
│ │ └── use-image-generation.ts # Image generation hook
│ └── models/ # Model configurations
├── app/(app)/app/ai/ # AI feature pages
│ ├── chat/ # Chatbot pages
│ │ └── page.tsx
│ ├── image/ # Image generation pages
│ │ └── page.tsx
│ └── layout.tsx
├── app/api/ai/demo/ # AI API routes
│ ├── chat/ # Chat API
│ │ └── route.ts
│ └── image/ # Image generation API
│ └── route.ts
└── components/examples/ai/ # AI component library
├── chat/ # Chat components
│ ├── index.tsx # Main chat component
│ ├── messages.tsx # Message display
│ ├── multimodal-input.tsx # Multimodal input
│ └── suggested-actions.tsx # Suggested actions
└── image/ # Image generation components
├── index.tsx # Main image generation component
├── model-select.tsx # Model selection
└── prompt-input.tsx # Prompt input
⚙️ AI Model Configuration
Core Configuration File
NEXTDEVKIT uses the Vercel AI SDK's Provider Registry system:
import { google } from "@ai-sdk/google";
import { createProviderRegistry, customProvider } from "ai";
// Default model IDs
export const DEFAULT_FAST_MODEL_ID = "google:fast";
export const DEFAULT_CHAT_MODEL_ID = "google:chat";
export const DEFAULT_IMAGE_MODEL_ID = "google:image";
export const DEFAULT_REASONING_MODEL_ID = "google:reasoning";
export const registry = createProviderRegistry({
google: customProvider({
languageModels: {
fast: google("gemini-2.5-flash-lite"), // Fast model
chat: google("gemini-2.5-flash"), // Standard chat
reasoning: google("gemini-2.5-pro"), // Reasoning model
},
imageModels: {
image: google.imageModel("imagen-3.0-generate-002"), // Image generation
},
fallbackProvider: google,
}),
});
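Models registered this way are resolved by their provider:alias IDs. As a minimal sketch (the prompt here is only an example), a one-off call could look like:

import { generateText } from "ai";
import { registry, DEFAULT_FAST_MODEL_ID } from "@/ai";

// Resolve the fast model ("google:fast") from the registry and run a single prompt
const { text } = await generateText({
  model: registry.languageModel(DEFAULT_FAST_MODEL_ID), // gemini-2.5-flash-lite
  prompt: "Say hello in one sentence.",
});
console.log(text);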
Supported Model Types
| Model Type | Model Name | Purpose | API Cost |
|---|---|---|---|
| fast | gemini-2.5-flash-lite | Quick responses, low cost | Lowest |
| chat | gemini-2.5-flash | Everyday conversations, balanced performance | Medium |
| reasoning | gemini-2.5-pro | Complex reasoning, high quality | Highest |
| image | imagen-3.0-generate-002 | Image generation | Billed per image |
Model Configuration Management
NEXTDEVKIT uses dedicated model configuration files to manage different types of AI models:
Chat Model Configuration
export interface ChatModel {
id: string; // Model identifier
name: string; // Display name
description: string; // Model description
}
export const chatModels: Array<ChatModel> = [
{
id: DEFAULT_FAST_MODEL_ID,
name: "Fastest Model",
description: "Most cost-efficient model supporting high throughput",
},
{
id: DEFAULT_CHAT_MODEL_ID,
name: "Chat Model",
description: "Adaptive thinking, cost efficiency",
},
{
id: DEFAULT_REASONING_MODEL_ID,
name: "Reasoning Model",
description: "Enhanced thinking and reasoning, multimodal understanding, advanced coding",
},
];
Image Model Configuration
export interface ImageModel {
id: string; // Model identifier
name: string; // Display name
description: string; // Model description
modelId: string; // Actual model ID
}
export const imageModels: Array<ImageModel> = [
{
id: DEFAULT_IMAGE_MODEL_ID,
name: "Imagen 3.0",
description: "Google's advanced image generation model",
modelId: "imagen-3.0-generate-002",
},
];
Model Configuration Usage
These configuration files are primarily used for:

- UI Model Selector: display the available models in the chat and image generation interfaces

// Usage in components
import { chatModels } from "@/ai/models/chat";

const selectedModel = chatModels.find(
  (model) => model.id === selectedModelId,
);

- Model Dropdown Menu: render the model selection list

{chatModels.map((model) => (
  <SelectItem key={model.id} value={model.name}>
    <div className="flex flex-col gap-1">
      <span className="font-medium">{model.name}</span>
      <span className="text-sm text-muted-foreground">
        {model.description}
      </span>
    </div>
  </SelectItem>
))}

- Default Model Settings: provide the default selected model

const defaultModel = imageModels[0]; // Use the first model as the default
🔧 Core Feature Implementation
💬 Chatbot
API Implementation
import { streamText, convertToModelMessages } from "ai";
import type { NextRequest } from "next/server";
import { registry, DEFAULT_REASONING_MODEL_ID } from "@/ai";
import { systemPrompt } from "@/ai/prompts";
import { getWeather } from "@/ai/agents/get-weather";
export async function POST(request: NextRequest) {
const { messages, selectedChatModel } = await request.json();
const result = streamText({
model: registry.languageModel(selectedChatModel),
system: systemPrompt({ selectedChatModel }),
messages: convertToModelMessages(messages),
tools: {
getWeather, // Weather query tool
},
});
return result.toUIMessageStreamResponse({
sendReasoning: selectedChatModel === DEFAULT_REASONING_MODEL_ID,
});
}
Chat Component
"use client";
import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
export function Chat({ id, initialMessages, initialChatModel }) {
const { messages, sendMessage, status } = useChat({
id,
messages: initialMessages,
transport: new DefaultChatTransport({
api: "/api/ai/demo/chat",
prepareSendMessagesRequest({ messages, id }) {
return {
body: {
id,
messages,
selectedChatModel: initialChatModel
}
};
},
}),
});
return (
<div className="chat-container">
<Messages messages={messages} />
<MultimodalInput onSend={sendMessage} />
</div>
);
}
🖼️ Image Generation
API Implementation
import { experimental_generateImage as generateImage } from "ai";
import { type NextRequest, NextResponse } from "next/server";
import { registry, DEFAULT_IMAGE_MODEL_ID } from "@/ai";
export async function POST(req: NextRequest) {
const { prompt, provider, modelId } = await req.json();
const config = providerConfig[provider]; // provider → image-model factory map defined in this route
const result = await generateImage({
model: config.createImageModel(modelId), // resolve the concrete image model instance
prompt,
size: "1024x1024",
seed: Math.floor(Math.random() * 1000000),
providerOptions: {
vertex: { addWatermark: false } // Vertex AI configuration
}
});
return NextResponse.json({
provider,
image: result.image.base64,
});
}
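On the client, a request to this route could look like the sketch below; the body fields mirror the handler above, and the prompt is just an example:

// Hypothetical client call to the image generation route
const res = await fetch("/api/ai/demo/image", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prompt: "A watercolor fox in a misty forest",
    provider: "google",
    modelId: "imagen-3.0-generate-002",
  }),
});
const { image } = await res.json();
const src = `data:image/png;base64,${image}`; // render with <img src={src} />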
Image Generation Component
"use client";
import { useImageGeneration } from "@/ai/image/use-image-generation";
export function GenerateImage({ suggestions }) {
const {
images,
isLoading,
startGeneration
} = useImageGeneration();
const handlePromptSubmit = (prompt: string) => {
const providerToModel = { google: "imagen-3.0-generate-002" };
startGeneration(prompt, ["google"], providerToModel);
};
return (
<div className="max-w-4xl mx-auto">
<PromptInput
onSubmit={handlePromptSubmit}
isLoading={isLoading}
suggestions={suggestions}
/>
<ImageResults images={images} />
</div>
);
}
🛠️ Implementation Steps
Step 1: Install Dependencies
NEXTDEVKIT comes pre-installed with the following AI-related dependencies:
{
"dependencies": {
"@ai-sdk/google": "^2.0.12",
"@ai-sdk/react": "^2.0.33",
"ai": "^5.0.33"
}
}
To add other providers:
# OpenAI
pnpm add @ai-sdk/openai
# Anthropic
pnpm add @ai-sdk/anthropic
# Azure OpenAI
pnpm add @ai-sdk/azure
Step 2: Configure Environment Variables
Ensure your .env file contains:
# Google AI Key
GOOGLE_GENERATIVE_AI_API_KEY=your_api_key
# Optional: Other providers
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
Step 3: Extend Model Configuration
Add new providers:
import { google } from "@ai-sdk/google";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { createProviderRegistry, customProvider } from "ai";
export const registry = createProviderRegistry({
google: customProvider({
languageModels: {
fast: google("gemini-2.5-flash-lite"),
chat: google("gemini-2.5-flash"),
},
}),
openai: customProvider({
languageModels: {
chat: openai("gpt-4"),
fast: openai("gpt-3.5-turbo"),
},
}),
anthropic: customProvider({
languageModels: {
chat: anthropic("claude-3-sonnet-20240229"),
},
}),
});
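Once registered, the new models resolve through the same registry by their provider:alias IDs; a quick sketch assuming the configuration above:

import { streamText } from "ai";
import { registry } from "@/ai";

// "anthropic:chat" maps to the claude-3-sonnet entry registered above
const result = streamText({
  model: registry.languageModel("anthropic:chat"),
  prompt: "Summarize this week's changelog.",
});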
Step 4: Add New Models to Interface
After adding new AI models, you need to update the model configuration files to display them in the interface:
Adding New Chat Models
// First define new model ID constants
export const DEFAULT_OPENAI_MODEL_ID = "openai:chat";
export const DEFAULT_ANTHROPIC_MODEL_ID = "anthropic:chat";
import {
DEFAULT_CHAT_MODEL_ID,
DEFAULT_FAST_MODEL_ID,
DEFAULT_REASONING_MODEL_ID,
DEFAULT_OPENAI_MODEL_ID, // New addition
DEFAULT_ANTHROPIC_MODEL_ID, // New addition
} from "@/ai";
export const chatModels: Array<ChatModel> = [
{
id: DEFAULT_FAST_MODEL_ID,
name: "Fastest Model",
description: "Most cost-efficient model supporting high throughput",
},
{
id: DEFAULT_CHAT_MODEL_ID,
name: "Chat Model",
description: "Adaptive thinking, cost efficiency",
},
{
id: DEFAULT_REASONING_MODEL_ID,
name: "Reasoning Model",
description: "Enhanced thinking and reasoning, multimodal understanding, advanced coding",
},
// Newly added models
{
id: DEFAULT_OPENAI_MODEL_ID,
name: "GPT-4",
description: "OpenAI's powerful language model",
},
{
id: DEFAULT_ANTHROPIC_MODEL_ID,
name: "Claude 3 Sonnet",
description: "Anthropic's balanced performance model",
},
];
Adding New Image Models
// Define new image model ID
export const OPENAI_IMAGE_MODEL_ID = "openai:image";
// Add image model in registry
export const registry = createProviderRegistry({
openai: customProvider({
languageModels: {
chat: openai("gpt-4"),
},
imageModels: {
image: openai.imageModel("dall-e-3"), // New image model
},
}),
});
import { DEFAULT_IMAGE_MODEL_ID, OPENAI_IMAGE_MODEL_ID } from "@/ai";
export const imageModels: Array<ImageModel> = [
{
id: DEFAULT_IMAGE_MODEL_ID,
name: "Imagen 3.0",
description: "Google's advanced image generation model",
modelId: "imagen-3.0-generate-002",
},
// Newly added image model
{
id: OPENAI_IMAGE_MODEL_ID,
name: "DALL-E 3",
description: "OpenAI's image generation model",
modelId: "dall-e-3",
},
];
Model ID Naming Conventions
Follow these naming conventions to keep the configuration consistent:

- Provider Prefix: use the {provider}:{type} format
  - google:fast - Google's fast model
  - openai:chat - OpenAI's chat model
  - anthropic:reasoning - Anthropic's reasoning model

- Constant Naming: use a DEFAULT_ or {PROVIDER}_ prefix

export const DEFAULT_FAST_MODEL_ID = "google:fast";
export const OPENAI_CHAT_MODEL_ID = "openai:chat";
export const ANTHROPIC_REASONING_MODEL_ID = "anthropic:reasoning";

- Model Configuration Matching: ensure each id field matches its registry key

// Registry configuration
languageModels: {
  fast: google("gemini-2.5-flash-lite"), // Maps to "google:fast"
  chat: openai("gpt-4"), // Maps to "openai:chat"
}

// Models configuration IDs
{ id: "google:fast", name: "...", description: "..." }
{ id: "openai:chat", name: "...", description: "..." }
🔄 Advanced Features
🧠 Reasoning Mode
Support for Gemini 2.5 Pro's chain-of-thought reasoning:
import {
  defaultSettingsMiddleware,
  extractReasoningMiddleware,
  wrapLanguageModel,
} from "ai";

// In the registry's languageModels map:
reasoning: wrapLanguageModel({
model: google("gemini-2.5-pro"),
middleware: [
defaultSettingsMiddleware({
settings: {
providerOptions: {
google: {
thinkingConfig: {
thinkingBudget: 8192, // Thinking token budget
includeThoughts: true, // Include thinking process
},
},
},
},
}),
extractReasoningMiddleware({ tagName: "think" }), // Extract reasoning process
],
})
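When sendReasoning is enabled in the chat route, the reasoning stream arrives on the client as message parts. A minimal rendering sketch, assuming AI SDK v5 UIMessage parts inside a message component:

// Show reasoning in a collapsible block, regular text as paragraphs
{message.parts.map((part, index) => {
  if (part.type === "reasoning") {
    return (
      <details key={index} className="text-sm text-muted-foreground">
        <summary>Thinking process</summary>
        <p>{part.text}</p>
      </details>
    );
  }
  if (part.type === "text") {
    return <p key={index}>{part.text}</p>;
  }
  return null;
})}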
🛠️ Tool Calling (Function Calling)
Define agent tools:
import { tool } from "ai";
import { z } from "zod";

export const getWeather = tool({
  description: "Get weather information for a specified city",
  inputSchema: z.object({
    location: z.string().describe("City name"),
  }),
  execute: async ({ location }) => {
    // Call a weather API (fetchWeatherData is defined elsewhere in this file)
    const weather = await fetchWeatherData(location);
    return `Weather in ${location}: ${weather.condition}, temperature ${weather.temperature}°C`;
  },
});
Enable tools in chat:
const result = streamText({
model: registry.languageModel(selectedChatModel),
messages: convertToModelMessages(messages),
activeTools: ["getWeather"], // Limit which registered tools the model may call
tools: { getWeather },
});
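By default streamText stops after the first tool call. To let the model read the tool result and then answer in natural language, allow multiple steps; a sketch using the AI SDK v5 stopWhen option (reusing registry, messages, and getWeather from the route above):

import { streamText, convertToModelMessages, stepCountIs } from "ai";

const result = streamText({
  model: registry.languageModel(selectedChatModel),
  messages: convertToModelMessages(messages),
  tools: { getWeather },
  stopWhen: stepCountIs(3), // up to 3 steps: call the tool, read its result, respond
});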
🎨 Image Generation Suggestions
Preset prompt suggestions:
export const suggestions = [
{
title: "Landscape Photography",
prompt: "Spectacular mountain sunrise with golden light over sea of clouds",
},
{
title: "Portrait Photography",
prompt: "Professional business portrait, modern office background, natural lighting",
},
{
title: "Abstract Art",
prompt: "Colorful abstract geometric shapes, modern art style",
},
];
export function getRandomSuggestions(count: number) {
// Copy before sorting so the shared suggestions array isn't mutated
return [...suggestions].sort(() => 0.5 - Math.random()).slice(0, count);
}
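For example, the image page can pass a few random suggestions to the generation component:

// e.g. in app/(app)/app/ai/image/page.tsx
<GenerateImage suggestions={getRandomSuggestions(3)} />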
📊 Data Flow Management
Chat Data Stream
Use a Context Provider to manage chat state:
"use client";
import { createContext, useContext, useState } from "react";
const DataStreamContext = createContext();
export function DataStreamProvider({ children }) {
const [dataStream, setDataStream] = useState([]);
return (
<DataStreamContext.Provider value={{ dataStream, setDataStream }}>
{children}
</DataStreamContext.Provider>
);
}
export function useDataStream() {
return useContext(DataStreamContext);
}
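A sketch of wiring the provider into the AI layout (app/(app)/app/ai/layout.tsx in the tree above) so all chat pages share one stream; the relative import path is an assumption:

import type { ReactNode } from "react";
// Hypothetical import path; the provider lives wherever you saved it
import { DataStreamProvider } from "./data-stream-provider";

export default function AILayout({ children }: { children: ReactNode }) {
  return <DataStreamProvider>{children}</DataStreamProvider>;
}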
Multimodal Input
Support for text and file uploads:
export function MultimodalInput({ onSend }) {
const [input, setInput] = useState("");
const [files, setFiles] = useState([]);
const handleSubmit = () => {
// AI SDK v5's sendMessage accepts text plus optional files
onSend({
text: input,
files,
});
};
return (
<div className="input-container">
<FileUpload onFilesChange={setFiles} />
<TextInput value={input} onChange={setInput} />
<SendButton onClick={handleSubmit} />
</div>
);
}
🔒 Security and Performance
User Authentication
All AI API endpoints support user authentication (the check is disabled by default in the demo):
export async function POST(request: NextRequest) {
const session = await getSession(request);
if (!session?.user) {
return new ChatSDKError("unauthorized:chat").toResponse();
}
// Continue processing request...
}
Request Timeout Management
Prevent long-running requests:
const TIMEOUT_MILLIS = 55 * 1000; // 55 seconds
const withTimeout = <T>(promise: Promise<T>, timeoutMillis: number) => {
return Promise.race([
promise,
new Promise<T>((_, reject) =>
setTimeout(() => reject(new Error("Request timed out")), timeoutMillis)
),
]);
};
const result = await withTimeout(generatePromise, TIMEOUT_MILLIS);
Error Handling
Unified error handling system:
export class ChatSDKError extends Error {
constructor(
public code: "unauthorized:chat" | "rate-limited" | "model-error",
message?: string
) {
super(message ?? code);
this.name = "ChatSDKError";
}
toResponse() {
return Response.json(
{ error: this.code, message: this.message },
{ status: this.getStatusCode() }
);
}
private getStatusCode() {
switch (this.code) {
case "unauthorized:chat": return 401;
case "rate-limited": return 429;
default: return 500;
}
}
}
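As a sketch, a route handler can funnel unexpected failures through this class so every error reaches the client in the same shape (assuming the class is exported from src/ai/errors.ts as in the architecture above):

import type { NextRequest } from "next/server";
import { ChatSDKError } from "@/ai/errors";

export async function POST(request: NextRequest) {
  try {
    // ...invoke the model here...
    return Response.json({ ok: true });
  } catch (error) {
    // Any unexpected failure becomes a consistent JSON error with status 500
    return new ChatSDKError("model-error", String(error)).toResponse();
  }
}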
🚨 Troubleshooting
Common Issues
- Invalid API Key

Error: Invalid API key for Google Generative AI

  - Check GOOGLE_GENERATIVE_AI_API_KEY in your .env file
  - Ensure the API key is valid and has sufficient quota

- Model Access Restricted

Error: Model 'gemini-2.5-pro' not available

  - Some models require access approval
  - Check the list of available models in Google AI Studio

- Image Generation Failed

Error: Image generation timed out

  - Imagen models respond slowly; a 55-second timeout is configured
  - Simplify the prompt or retry the generation
Debugging Tips
Enable detailed logging:
console.log("Request details:", {
messages: messages.length,
model: selectedChatModel,
timestamp: new Date().toISOString()
});
Check which model IDs are configured:
import { chatModels } from "@/ai/models/chat";
console.log("Available chat models:", chatModels.map((model) => model.id));
💰 Cost Optimization
Model Selection Strategy
- Daily conversations: use gemini-2.5-flash-lite (lowest cost)
- Complex queries: use gemini-2.5-flash (balanced performance)
- Professional analysis: use gemini-2.5-pro (highest quality)
Token Management
const result = streamText({
model: registry.languageModel(selectedChatModel),
messages: convertToModelMessages(messages),
maxOutputTokens: selectedChatModel.includes("pro") ? 4096 : 2048, // Adjust per model (v5 renamed maxTokens → maxOutputTokens)
temperature: 0.7,
});
🎯 Next Steps
Now that you understand the AI integration architecture, you can explore the related features throughout the rest of the documentation.