# How to Build Good APIs With Next.js and TypeScript

## Introduction
Stripe's API is the gold standard. Consistent error envelopes. Idempotency out of the box. Rate limit headers that actually help. SDK types that match the spec. Every endpoint feels like it was designed by the same person.
This isn't magic. It's a set of patterns you can steal. Here's how to build a public API with Next.js and TypeScript that your users will actually enjoy integrating with.
## Start With TDD
Before writing any endpoint or SDK code, write the test first. Write the request, assert the expected response shape, run it, watch it fail, then implement until it passes. You start from the behavior you actually want, not the implementation you think is right.
Use Mock Service Worker for local and CI tests. MSW intercepts at the fetch level, so your test code hits the same paths your SDK users will. Write a test, mock the endpoint with MSW, see it fail, implement, see it pass. This loop works for both your API routes and your SDK client.
For integration coverage, run a separate test suite against an actual staging environment after merge. This catches what mocks can't: real auth, real database, real latency. Keep two layers: MSW-mocked tests per PR for fast feedback, real-environment tests post-merge for confidence. You can also run your tests locally against localhost; just disable the MSW mocking with a flag.
## Zod as the Single Source of Truth
Every request and response contract starts as a Zod schema. Not a TypeScript type. Not a JSON Schema file. Zod, because it validates at runtime and infers types at compile time.
```ts
import { z } from 'zod'

export const CreateOrderRequestSchema = z
  .object({
    customerId: z.string().uuid(),
    currency: z.enum(['usd', 'eur', 'gbp']),
    items: z.array(
      z.object({
        productId: z.string().uuid(),
        quantity: z.number().int().min(1),
      })
    ),
    metadata: z.record(z.string(), z.string()).optional(),
  })
  .strict()

export type CreateOrderRequest = z.infer<typeof CreateOrderRequestSchema>
```
`.strict()` is critical. It rejects unknown fields instead of silently stripping them. If someone sends `{ "curency": "usd" }` with a typo, they get a 400, not a silent success with a missing currency. Stripe does this. You should too.
Define these schemas in domain folders, not in the API layer: `lib/orders/contracts.ts`, not `app/api/v1/orders/schema.ts`. They'll be reused for OpenAPI generation, SDK types, and tests.
## From Zod to OpenAPI to SDK Types
Once Zod owns the contracts, OpenAPI generation is mechanical. Use `@asteasolutions/zod-to-openapi`.
```ts
import {
  OpenAPIRegistry,
  OpenApiGeneratorV3,
} from '@asteasolutions/zod-to-openapi'

const registry = new OpenAPIRegistry()

registry.registerPath({
  method: 'post',
  path: '/v1/orders',
  request: {
    body: {
      content: {
        'application/json': { schema: CreateOrderRequestSchema },
      },
    },
  },
  responses: {
    200: {
      description: 'Order created',
      content: {
        'application/json': { schema: CreateOrderResponseSchema },
      },
    },
  },
})

const generator = new OpenApiGeneratorV3(registry.definitions)
const spec = generator.generateDocument({
  openapi: '3.0.0',
  info: { title: 'Your API', version: '1.0.0' },
})
```
Run this in a build script. Output YAML. Commit it. Add a CI check that regenerates and diffs. If the YAML is stale, the build fails. No manual spec editing ever.
From that YAML you get:
- SDK type generation via `openapi-typescript` or `openapi-generator`
- API reference docs via Mintlify, Redocly, or Scalar
- Client mocks for testing
One schema. Three outputs. Zero drift.
## The Route Handler Pattern
Every endpoint should be a thin configuration object, not a wall of try/catch logic. Build a single `createApiRouteHandler` that composes all cross-cutting concerns:
```ts
type ApiRouteConfig<TInput> = {
  route: string
  method: string
  requiredPermission: string
  supportsIdempotency?: boolean
  parseInput: (args: { request: Request }) => Promise<TInput>
  resolveRateLimitPolicy?: (args: { actor: Actor }) => RateLimitPolicy | null
  execute: (args: { actor: Actor; input: TInput }) => Promise<ApiResult>
}
```
Then every route handler is a short declaration:
```ts
export const POST = createApiRouteHandler({
  route: '/v1/orders',
  method: 'POST',
  requiredPermission: 'orders',
  supportsIdempotency: true,
  parseInput: ({ request }) =>
    parseJsonBody({ request, schema: CreateOrderRequestSchema }),
  resolveRateLimitPolicy: ({ actor }) =>
    resolvePolicy({
      route: '/v1/orders',
      method: 'POST',
      authType: actor.authType,
    }),
  execute: executeOrderCreate,
})
```
Auth, rate limiting, idempotency, logging, usage tracking, and error formatting are all handled by the wrapper. The route file just declares what's unique about this endpoint.
The handler's internal flow is always the same:
auth → parse input → rate limit → idempotency check → execute → store idempotency → respond
If any step throws, the catch block formats the error envelope and logs it. Every endpoint is consistent because they all go through the same pipeline.
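A condensed sketch of what that wrapper looks like inside. Auth is injected, rate limiting and idempotency are elided, and every name beyond the article's own is illustrative:

```ts
type Actor = { orgId: string; authType: 'api_key' | 'session' }
type ApiResult = { status: number; body: unknown }

// Thrown by any pipeline step; carries a preformatted error envelope.
class ApiRouteError extends Error {
  constructor(readonly status: number, readonly payload: unknown) {
    super('api_route_error')
  }
}

type Deps = {
  authenticate: (args: { request: Request }) => Promise<Actor>
}

export function createApiRouteHandler<TInput>(
  config: {
    parseInput: (args: { request: Request }) => Promise<TInput>
    execute: (args: { actor: Actor; input: TInput }) => Promise<ApiResult>
  },
  deps: Deps
) {
  return async (request: Request): Promise<Response> => {
    try {
      const actor = await deps.authenticate({ request }) // 1. auth
      const input = await config.parseInput({ request }) // 2. parse input
      // 3-4. rate limit and idempotency checks would run here
      const result = await config.execute({ actor, input }) // 5. execute
      return Response.json(result.body, { status: result.status })
    } catch (err) {
      // Known errors carry their envelope; everything else is a generic 500.
      if (err instanceof ApiRouteError) {
        return Response.json({ error: err.payload }, { status: err.status })
      }
      return Response.json(
        { error: { code: 'INTERNAL_ERROR', type: 'internal_error', message: 'Something went wrong.' } },
        { status: 500 }
      )
    }
  }
}
```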
## Error Responses That Guide the User
A bad error response:
```json
{ "error": "Bad Request" }
```
A good one:
```json
{
  "error": {
    "code": "PRODUCT_NOT_FOUND",
    "type": "not_found",
    "message": "Product 'prod_abc123' does not exist.",
    "param": "items[0].productId",
    "suggestion": "Check the product ID or create it via POST /v1/products first.",
    "docs": "https://docs.yourapi.com/api/errors#product-not-found"
  }
}
```
Every error response should have:
- `code`: Machine-readable. Your users will `switch` on this.
- `type`: Category like `invalid_request`, `authentication_error`, `rate_limit`, `not_found`, `conflict`, `internal_error`.
- `message`: Human-readable. One sentence. Tells them what happened.
- `suggestion`: Tells them what to do next. This is where most APIs fail: they describe the problem but not the fix.
- `docs`: Direct link to the relevant docs page.
- `param`: Which field caused the issue, if applicable.
Build a typed error factory:
```ts
class ApiRouteError extends Error {
  constructor(
    readonly status: number,
    readonly error: ApiErrorPayload,
    readonly headers: Record<string, string> = {}
  ) {
    super(error.message)
  }
}

function createProductNotFoundError({ productId }: { productId: string }) {
  return new ApiRouteError(404, {
    code: 'PRODUCT_NOT_FOUND',
    type: 'not_found',
    message: `Product '${productId}' does not exist.`,
    param: 'productId',
    suggestion:
      'Check the product ID or create it via POST /v1/products first.',
    docs: 'https://docs.yourapi.com/api/errors#product-not-found',
  })
}
```
Every error type gets its own factory. Auth failures suggest checking the key. Permission errors name the missing permission. Rate limits include `retryAfter`. Idempotency conflicts tell you to reuse the original payload or generate a new key.
When your users are debugging at 2am, these suggestions are the difference between a 5-minute fix and an hour of guessing.
## Idempotency Keys
Any POST endpoint that creates a resource needs idempotency. Without it, a network retry creates duplicates.
The flow:
1. Client sends an `Idempotency-Key: <unique-string>` header.
2. Server hashes `method + route + orgId + body` into a fingerprint.
3. Server checks Redis: has this key been seen?
   - No record: Execute the request, store the fingerprint + response.
   - Same fingerprint: Replay the stored response, identical to the original.
   - Different fingerprint: Return 409 Conflict, same key but different payload.
```ts
function buildIdempotencyFingerprint({
  method,
  route,
  orgId,
  body,
}: {
  method: string
  route: string
  orgId: string
  body: unknown
}): string {
  const serialized = stableSerialize({ method, route, orgId, body })
  return `sha256:${sha256(serialized)}`
}
```
Key details:
- Stable serialization: Sort object keys before hashing. `{ a: 1, b: 2 }` and `{ b: 2, a: 1 }` must produce the same fingerprint.
- Scope keys to the org: `api:idempotency:POST:/v1/orders:{orgId}:{key}`. One org can't replay another's responses.
- TTL of 24 hours: Keys expire. Don't store them forever.
- Skip 5xx: Don't cache server errors. The client should retry and get a real attempt.
- Fail open: If Redis is down, skip idempotency and process the request. Degraded is better than broken.
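The three branches can be sketched with an in-memory `Map`; production swaps this for Redis using the org-scoped key and 24-hour TTL described above:

```ts
type StoredResponse = { fingerprint: string; status: number; body: unknown }

const store = new Map<string, StoredResponse>()

export function checkIdempotency(
  key: string,
  fingerprint: string
):
  | { kind: 'new' }
  | { kind: 'replay'; response: StoredResponse }
  | { kind: 'conflict' } {
  const record = store.get(key)
  if (!record) return { kind: 'new' } // never seen: execute normally
  if (record.fingerprint === fingerprint) {
    return { kind: 'replay', response: record } // same payload: replay stored response
  }
  return { kind: 'conflict' } // same key, different payload: 409
}

export function storeIdempotentResponse(key: string, response: StoredResponse): void {
  // Skip 5xx so the client's retry gets a real attempt.
  if (response.status < 500) store.set(key, response)
}
```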
When the same key arrives with a different payload, the 409 body looks like:

```json
{
  "error": {
    "code": "IDEMPOTENCY_CONFLICT",
    "type": "conflict",
    "message": "The same idempotency key was reused with a different request payload.",
    "suggestion": "Reuse the original payload or send a new idempotency key.",
    "docs": "https://docs.yourapi.com/api/idempotency"
  }
}
```
## Rate Limiting That Helps Instead of Punishes
Sliding window with Redis sorted sets. Each request adds a timestamped member. Count members within the window. Clean up expired ones.
```ts
const policy = { name: 'orders.read', limit: 100, windowSec: 60 }
```
Return the state in headers on every response, success or failure:
```http
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 73
X-RateLimit-Reset: 1713000000
```
When the limit is hit:
```http
HTTP/1.1 429 Too Many Requests
Retry-After: 34
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1713000000
```
Key design decisions:
- Per-operation policies: Reads are cheaper than writes. Batch writes are more expensive than single writes. Don't apply the same limit everywhere.
- Auth-type tiers: API keys get 100/min. Dashboard sessions get 300/min. Your own UI shouldn't rate limit itself into a bad experience.
- Fail open: If Redis is unreachable, allow the request but mark it as degraded in your logs. Losing rate limiting temporarily is better than a full outage.
- Identify by key, not IP: `key:{keyId}` for API keys, `session:{orgId}:{userId}` for sessions. IP-based limiting breaks for shared networks and proxies.
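The window math itself is small. Here is a pure sketch; the Redis version keeps the timestamps in a sorted set (ZREMRANGEBYSCORE to prune, ZCARD to count, ZADD to record) instead of an array:

```ts
type RateLimitPolicy = { name: string; limit: number; windowSec: number }

export function evaluateWindow(
  timestamps: number[], // ms timestamps of prior requests by this identity
  nowMs: number,
  policy: RateLimitPolicy
) {
  const windowStart = nowMs - policy.windowSec * 1000
  const live = timestamps.filter((t) => t > windowStart)
  const allowed = live.length < policy.limit
  const oldest = live.length > 0 ? Math.min(...live) : nowMs
  return {
    allowed,
    // Count the current request against the budget when it is allowed.
    remaining: Math.max(0, policy.limit - live.length - (allowed ? 1 : 0)),
    // When blocked, a slot frees up as the oldest live entry ages out.
    retryAfterSec: allowed ? 0 : Math.ceil((oldest + policy.windowSec * 1000 - nowMs) / 1000),
  }
}
```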
## Usage Tracking for Your Users
Stripe gives you a dashboard showing every API call, its status, and latency. Your API should too.
Track every request asynchronously, after the response is already sent. Next.js gives you `after()` for exactly this. It runs a callback after the response is flushed to the client. Zero added latency.
```ts
import { after } from 'next/server'

function scheduleUsageTracking({
  actor,
  context,
  statusCode,
  latencyMs,
  errorCode,
  idempotencyStatus,
}) {
  after(() => {
    recordApiUsage({
      orgId: actor.orgId,
      keyId: actor.keyId,
      route: context.route,
      method: context.method,
      statusCode,
      latencyMs,
      errorCode,
      rateLimited: errorCode === 'RATE_LIMITED',
      idempotencyStatus,
      occurredAt: Date.now(),
    })
  })
}
```
Track into hourly rollup buckets per org, per route, and per key.
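One way to key those buckets; the key layout is an assumption, not a prescribed format:

```ts
export function usageBucketKey(args: {
  orgId: string
  keyId: string
  route: string
  occurredAt: number // epoch ms
}): string {
  // Truncate to the start of the hour so all events in an hour share a row.
  const hourStartMs = Math.floor(args.occurredAt / 3_600_000) * 3_600_000
  return `usage:${args.orgId}:${args.keyId}:${args.route}:${hourStartMs}`
}
```

Increment a counter under this key per request (an `HINCRBY` per status class in Redis, or an upsert in SQL) and the dashboard reads become cheap range scans.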
What to surface to users in their API settings dashboard:
- Total requests by status code (2xx, 4xx, 5xx)
- Latency (p50, p95, p99) per endpoint
- Error breakdown by code (`INVALID_REQUEST`, `RATE_LIMITED`, etc.)
- Per-key usage so they can see which integration is hammering the API
- Rate limit proximity so they know when they're close to the ceiling
This isn't just nice to have. When a user says "your API is slow", you can point them to their own dashboard and have a data-driven conversation.
## Structured Logging and Observability
Every API request produces a structured log line. Not a `console.log`. A JSON payload with every field you need to debug, alert, and analyze.
```ts
logInfo({
  event: 'api.request.completed',
  payload: {
    service: 'public-api',
    env: 'production',
    requestId: 'req_abc123',
    route: '/v1/orders',
    method: 'POST',
    apiVersion: 'v1',
    statusCode: 200,
    latencyMs: 142,
    authType: 'api_key',
    orgId: 'org_xyz',
    keyId: 'key_456',
    errorCode: null,
    rateLimited: false,
    idempotencyStatus: 'stored',
  },
})
```
Specific events to log:
| Event | When |
|---|---|
| `api.request.completed` | Every request, success or known error |
| `api.request.failed` | Unexpected 500s only |
| `api.auth.failed` | Invalid or revoked keys |
| `api.rate_limit.hit` | 429 responses |
| `api.idempotency.replay` | Cached response served |
Redact sensitive data. API keys, email addresses, names, phone numbers, anything PII-adjacent gets stripped before it hits your logging pipeline. Build a `redactForLogs` function and run every payload through it.
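A recursive sketch of such a function; the field list is illustrative and should grow with your payloads:

```ts
// Keys whose values never reach the logging pipeline.
const SENSITIVE_KEYS = new Set(['apiKey', 'authorization', 'email', 'name', 'phone'])

export function redactForLogs(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redactForLogs)
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        SENSITIVE_KEYS.has(k) ? [k, '[REDACTED]'] : [k, redactForLogs(v)]
      )
    )
  }
  return value
}
```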
Track degraded dependencies. If Redis was unreachable during rate limiting or idempotency, log it as `degradedDependencies: ["rate_limit_redis"]`. You'll know exactly when infrastructure problems affected API behavior.
Request IDs everywhere. Generate a `requestId` for every request. Return it in the `x-request-id` response header. Log it. When a user reports a bug, they send the request ID and you have the full trace.
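Generating and surfacing the ID takes a few lines; the `req_` prefix is a convention, not a requirement:

```ts
import { randomUUID } from 'node:crypto'

export function generateRequestId(): string {
  // A short prefix makes the ID type obvious when it shows up in logs.
  return `req_${randomUUID().replace(/-/g, '')}`
}

export function withRequestId(response: Response, requestId: string): Response {
  response.headers.set('x-request-id', requestId)
  return response
}
```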
## Design Principles
Thin route files. Your Next.js route handler should be 2 lines: set the timeout, re-export the handler. All logic lives in a shared library.
```ts
// app/api/v1/orders/route.ts
export const maxDuration = 30
export { POST } from '@/lib/api/routes/orders'
```
Domain logic stays out of the API layer. The handler calls `executeOrderCreate()`. That function lives in `lib/orders/`, not `lib/api/`. It knows nothing about HTTP, headers, or request objects. It takes typed input, returns typed output.
Strict schemas at the boundary. `.strict()` on every request schema. Reject unknown fields. If you strip them silently, you're hiding bugs in your users' integration code.
`Cache-Control: no-store` on every response. API responses should never be cached by intermediaries. Add it to your response builder and forget about it.
Validate ownership, not just auth. Authenticating a request is step one. If the request references a resource (a product, a subscription, a webhook endpoint), verify it belongs to the caller's org before loading it.
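A sketch of that ownership guard, reusing the `ApiRouteError` shape from earlier. Returning 404 rather than 403 for another org's resource, so IDs don't leak, is a common choice but an assumption here:

```ts
class ApiRouteError extends Error {
  constructor(
    readonly status: number,
    readonly payload: { code: string; type: string; message: string }
  ) {
    super(payload.message)
  }
}

type Owned = { id: string; orgId: string }

export function assertOwnedByOrg<T extends Owned>(resource: T | null, orgId: string): T {
  // Treat "exists but belongs to someone else" the same as "does not exist".
  if (!resource || resource.orgId !== orgId) {
    throw new ApiRouteError(404, {
      code: 'RESOURCE_NOT_FOUND',
      type: 'not_found',
      message: 'The requested resource does not exist.',
    })
  }
  return resource
}
```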
## The Payoff
When you wire all of this together, adding a new endpoint becomes a checklist:
1. Define the Zod schema in the domain folder
2. Register it in the OpenAPI spec
3. Write the route handler config
4. Implement the business logic function
5. Regenerate the YAML, run the CI check
Auth, errors, rate limits, idempotency, logging, usage tracking: all automatic. Every endpoint behaves the same way because every endpoint goes through the same pipeline.
Your users get consistent errors that tell them what went wrong and how to fix it. Rate limit headers that let them build smart retry logic. Idempotency that makes their integration resilient. A usage dashboard that gives them visibility into their own traffic.
That's what makes an API good. Not clever abstractions. Not bleeding-edge tech. Just consistency, predictability, and respect for the person on the other end of the HTTP call.





