AI SDK v6 Foundations: Quick Reference

1. Generate Text
```ts
import { generateText } from "ai";

const { text } = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  prompt: "What is love?",
});
```
2. Providers
Install the provider you need. They all follow the same interface.
| Provider | Package |
|---|---|
| OpenAI | @ai-sdk/openai |
| Anthropic | @ai-sdk/anthropic |
| Google Generative AI | @ai-sdk/google |
| Mistral | @ai-sdk/mistral |
| Amazon Bedrock | @ai-sdk/amazon-bedrock |
| xAI Grok | @ai-sdk/xai |
| DeepSeek | @ai-sdk/deepseek |
| Groq | @ai-sdk/groq |
Full list in the docs. Any OpenAI-compatible API works with the OpenAI Compatible provider. Self-hosted models work through Ollama or LM Studio.
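A minimal sketch of the self-hosted route, assuming Ollama is serving its OpenAI-compatible API on the default local port; the model name is a placeholder.

```ts
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText } from "ai";

// Assumed local Ollama endpoint; adjust baseURL for LM Studio
// or any other OpenAI-compatible server.
const ollama = createOpenAICompatible({
  name: "ollama",
  baseURL: "http://localhost:11434/v1",
});

const { text } = await generateText({
  model: ollama("llama3.2"), // placeholder model name
  prompt: "What is love?",
});
```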
3. Prompts
Three types. Text. Messages. System.
Text prompt
```ts
const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  prompt: `Plan a trip to ${destination} for ${days} days.`,
});
```
System prompt
```ts
const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  system: "You help plan travel itineraries.",
  prompt: `Plan a trip to ${destination} for ${days} days.`,
});
```
Message prompt
An array of messages. Good for chat interfaces.
```ts
const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  messages: [
    { role: "user", content: "Hi!" },
    { role: "assistant", content: "Hello, how can I help?" },
    { role: "user", content: "Best Currywurst in Berlin?" },
  ],
});
```
Image in a message
```ts
const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe this image." },
        { type: "image", image: "https://example.com/photo.png" },
      ],
    },
  ],
});
```
The image can be a URL string, a base64 string, a Buffer, an ArrayBuffer, or a Uint8Array.
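For example, a sketch of the same call with raw bytes from disk instead of a URL; the file path is a placeholder.

```ts
import fs from "node:fs";
import { generateText } from "ai";

const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe this image." },
        // A Buffer from readFileSync works in place of a URL string.
        { type: "image", image: fs.readFileSync("./photo.png") },
      ],
    },
  ],
});
```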
File in a message
```ts
import fs from "node:fs";
import { google } from "@ai-sdk/google";
import { generateText } from "ai";

const result = await generateText({
  model: google("gemini-1.5-flash"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What is this file about?" },
        {
          type: "file",
          mediaType: "application/pdf",
          data: fs.readFileSync("./example.pdf"),
        },
      ],
    },
  ],
});
```
File support depends on the provider. Google and Anthropic support PDFs. OpenAI supports audio files.
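A sketch of the audio case, assuming OpenAI's audio-capable gpt-4o-audio-preview model; the file path is a placeholder.

```ts
import fs from "node:fs";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const result = await generateText({
  model: openai("gpt-4o-audio-preview"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Summarize this recording." },
        {
          type: "file",
          mediaType: "audio/mpeg",
          data: fs.readFileSync("./recording.mp3"),
        },
      ],
    },
  ],
});
```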
Provider options
Extra settings specific to a provider. Three levels.
```ts
// Function level
import { azure } from "@ai-sdk/azure";
import { generateText } from "ai";

await generateText({
  model: azure("your-deployment"),
  prompt: "...",
  providerOptions: {
    // The Azure OpenAI provider accepts OpenAI provider options.
    openai: { reasoningEffort: "low" },
  },
});
```

```ts
// Message level
const messages = [
  {
    role: "system",
    content: "Cached message",
    providerOptions: {
      anthropic: { cacheControl: { type: "ephemeral" } },
    },
  },
];
```

```ts
// Message part level
{
  type: "image",
  image: "https://example.com/photo.png",
  providerOptions: {
    openai: { imageDetail: "low" },
  },
}
```
4. Tools
A tool is a function the model can call. Three parts. Description. Input schema. Execute function.
Define a tool
```ts
import { tool } from "ai";
import { z } from "zod";

const weatherTool = tool({
  description: "Get the weather in a location",
  inputSchema: z.object({
    location: z.string().describe("The location to get the weather for"),
  }),
  execute: async ({ location }) => {
    // Mock result; call a real weather API here.
    return { temperature: 72, conditions: "sunny" };
  },
});
```
Use a tool with generateText
```ts
const { text } = await generateText({
  model: "anthropic/claude-haiku-4.5",
  prompt: "What is the weather in London?",
  tools: { weather: weatherTool },
});
```
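The result also reports which tools ran. A sketch of reading the tool calls and their outputs off the generateText result (toolCalls and toolResults are part of the result object):

```ts
const { text, toolCalls, toolResults } = await generateText({
  model: "anthropic/claude-haiku-4.5",
  prompt: "What is the weather in London?",
  tools: { weather: weatherTool },
});

// Each tool call records the tool name and the validated input.
for (const call of toolCalls) {
  console.log(call.toolName, call.input);
}

// Each tool result carries what execute returned.
for (const result of toolResults) {
  console.log(result.output);
}
```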
Three types of tools
| Type | Who defines it | Who runs it |
|---|---|---|
| Custom | You | You |
| Provider-defined | Provider defines schema; you write execute | You |
| Provider-executed | Provider | Provider |
Provider-defined example. Anthropic bash tool.
```ts
import { anthropic } from "@ai-sdk/anthropic";
import { generateText } from "ai";

const result = await generateText({
  model: anthropic("claude-opus-4-5"),
  tools: {
    bash: anthropic.tools.bash_20250124({
      // runCommand is your own shell-execution helper.
      execute: async ({ command }) => runCommand(command),
    }),
  },
  prompt: "List files in the current directory",
});
```
Provider-executed example. OpenAI web search.
```ts
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const result = await generateText({
  model: openai("gpt-5.2"),
  tools: {
    web_search: openai.tools.webSearch(),
  },
  prompt: "What happened in the news today?",
});
```
Tool messages in a conversation
When the model calls a tool, you send the result back as a tool message.
```ts
const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  messages: [
    { role: "user", content: "How many calories in Roquefort?" },
    {
      role: "assistant",
      content: [
        {
          type: "tool-call",
          toolCallId: "12345",
          toolName: "get-nutrition-data",
          input: { cheese: "Roquefort" },
        },
      ],
    },
    {
      role: "tool",
      content: [
        {
          type: "tool-result",
          toolCallId: "12345",
          toolName: "get-nutrition-data",
          output: {
            type: "json",
            value: { name: "Roquefort", calories: 369, fat: 31, protein: 22 },
          },
        },
      ],
    },
  ],
});
```
Schemas
Supported schema formats:

- Zod v3 and v4, directly or via `zodSchema()`
- Valibot via `valibotSchema()` from `@ai-sdk/valibot`
- Raw JSON Schema via `jsonSchema()`
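A minimal sketch of the raw JSON Schema route, mirroring the weather tool above:

```ts
import { tool, jsonSchema } from "ai";

const weatherTool = tool({
  description: "Get the weather in a location",
  // Same input shape as the Zod version, written as raw JSON Schema.
  inputSchema: jsonSchema<{ location: string }>({
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "The location to get the weather for",
      },
    },
    required: ["location"],
  }),
  execute: async ({ location }) => {
    return { temperature: 72, conditions: "sunny" };
  },
});
```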
5. Streaming
Use streamText instead of generateText. Shows output as it arrives instead of waiting for the full response.
```ts
import { streamText } from "ai";

const { textStream } = streamText({
  model: "anthropic/claude-sonnet-4.5",
  prompt: "Write a poem about embedding models.",
});

for await (const textPart of textStream) {
  console.log(textPart);
}
```
Use streaming when the model is slow or the output is long. Skip it if a fast model gives results quickly enough.
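On a server, the stream can go straight to the client. A sketch assuming a Next.js-style route handler; toTextStreamResponse comes from the streamText result.

```ts
import { streamText } from "ai";

// Hypothetical POST handler in a framework that accepts a web Response.
export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: "anthropic/claude-sonnet-4.5",
    prompt,
  });

  // Sends text chunks to the client as they are generated.
  return result.toTextStreamResponse();
}
```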
Key concepts
Generative AI means models that create outputs such as text, images, or audio based on patterns learned from training data.
LLM means large language model. It predicts the next words in a sequence. It can hallucinate, meaning it makes up facts when it does not know the answer.
Embedding model means a model that converts data into a list of numbers called a vector. Used for search and similarity tasks, not for generating text (sketch below).
Tool means a function the model can call to do things it cannot do alone. Like math. Or fetching data from an API.
Streaming means sending parts of the response as they are generated instead of waiting for the whole thing.
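A minimal sketch of the embedding case with the SDK's embed helper, assuming the OpenAI provider and its text-embedding-3-small model as an example:

```ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";

const { embedding } = await embed({
  model: openai.textEmbeddingModel("text-embedding-3-small"),
  value: "Best Currywurst in Berlin",
});

// embedding is a plain number[] suited to similarity search,
// e.g. cosine distance in a vector database.
console.log(embedding.length);
```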