Prompt Engineering in Vercel's AI SDK

Effective Tool Prompting

For Better Tool Calls

  1. Choose appropriate models -> More capable models like GPT-4 handle tool calls much more reliably than smaller models

  2. Keep tool sets manageable -> Limit yourself to 5 or fewer tools when possible

  3. Simplify parameter structures -> Complex nested schemas with many optional fields confuse models; prefer flat shapes, as in the hypothetical before-and-after sketch below
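
import { z } from "zod";

// Hypothetical before-and-after (the field names are illustrative):

// Harder for models: nested objects with optional branches
parameters: z.object({
  query: z.object({
    location: z.object({ city: z.string(), region: z.string().optional() }),
    options: z.object({ hourly: z.boolean().optional() }).optional(),
  }),
});

// Easier for models: one flat object
parameters: z.object({
  city: z.string(),
  region: z.string().optional(),
  hourly: z.boolean().optional(),
});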

  4. Use meaningful naming -> Clear, descriptive names for tools and parameters help models understand their purpose

  5. Add descriptions to parameters -> Use Zod's .describe() method to clarify what each parameter does:

// Inside a tool definition: each .describe() documents a field for the model
parameters: z.object({
  location: z.string().describe("City and state or country name"),
  includeHourly: z
    .boolean()
    .describe("Whether to include hour-by-hour forecast"),
});
  6. Document tool outputs -> Explain what your tool returns in its description, especially when there are dependencies between tools; for example (a sketch in which the forecast tool and its getForecast helper are illustrative, not part of the SDK):
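
import { tool } from "ai";
import { z } from "zod";

// Hypothetical stand-in for a real weather API call
async function getForecast(location: string) {
  return { location, temperature: 72, conditions: "Partly cloudy" };
}

const forecastTool = tool({
  // The description documents the return shape and the dependency
  description:
    "Returns { temperature: number, conditions: string } for a location. " +
    "Call the weather tool first if the location is ambiguous.",
  parameters: z.object({
    location: z.string().describe("City and state or country name"),
  }),
  execute: async ({ location }) => getForecast(location),
});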

  7. Provide examples -> Including sample tool calls and results in your prompt can guide the model:

Example tool call:
weather({"location": "New York, NY"})

Example result:
{"temperature": 72, "conditions": "Partly cloudy"}

Handling Schema Challenges

The translation between Zod schemas and the JSON that models actually produce isn't always perfect, especially for data types that JSON can't represent directly.

Date Handling

Models return dates as strings (JSON has no Date type), so a schema that declares z.date() will fail validation. The solution is to validate the string format first and then transform it:

import { z } from "zod";

const schema = z.object({
  events: z.array(
    z.object({
      event: z.string(),
      date: z
        .string()
        .date() // validates the "YYYY-MM-DD" string format
        .transform((value) => new Date(value)), // then converts to a Date
    })
  ),
});

This validates the date string format first, then converts it to a proper Date object.
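
Given that schema, a quick sketch of the round trip (the event data here is made up):

const parsed = schema.parse({
  events: [{ event: "Launch day", date: "2025-03-14" }],
});

console.log(parsed.events[0].date instanceof Date); // true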

Debugging Your Prompts

The AI SDK offers two helpful debugging techniques:

Inspecting Warnings

To check whether the settings you passed are supported by the provider:

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "Hello, world!",
});

console.log(result.warnings);

This helps identify when you're using features not supported by a particular provider.
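
For instance, a minimal sketch that surfaces warnings during development (warnings is undefined when there are none):

if (result.warnings?.length) {
  // Each warning describes a setting or feature the provider ignored
  for (const warning of result.warnings) {
    console.warn("AI SDK warning:", warning);
  }
}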

Examining HTTP Requests

For deeper debugging, you can see exactly what's being sent to the provider:

const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "Hello, world!",
});

console.log(result.request.body);

This shows you the raw HTTP request body, letting you verify the exact payload being sent to the model.
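
For example, a small sketch that pretty-prints the payload (assuming the provider exposes the body; depending on the SDK version it may be a JSON string or an object):

const body = result.request.body;
console.dir(typeof body === "string" ? JSON.parse(body) : body, { depth: null });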