Next.js use cache: remote: A Distributed Cache in One Line

Introduction

I discovered this while digging through the Vercel docs and the Next.js cache components skill. I was trying to understand how caching actually works when you deploy to Vercel. What I found is one of the most powerful features in Next.js 16 that not enough people are talking about.

You can write 'use cache: remote' inside any async function and get a shared distributed cache across all your serverless instances. No Redis. No config. No infrastructure. Just a directive.

What Cache Components Are

Next.js 16 introduced Cache Components. You enable them with one line in your config:

// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  cacheComponents: true,
};

export default nextConfig;

This unlocks three directives:

  • 'use cache': in-memory cache on the server

  • 'use cache: remote': shared remote cache across all instances

  • 'use cache: private': per-user browser cache

You put these directives inside async functions. Next.js caches the return value. That is it.
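Here is the minimal shape, as a sketch. Outside Next.js the directive string is an inert statement, which is why the function below also runs as plain TypeScript; inside Next.js it tells the framework to cache the return value:

```typescript
// Minimal sketch: the directive is the first statement of an async function.
// Next.js caches the return value, keyed by the function and its arguments.
async function getGreeting(name: string) {
  "use cache";
  // Expensive work (a fetch, a database query) would go here.
  return `Hello, ${name}`;
}
```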

The Problem with Plain use cache

Plain 'use cache' stores data in memory. On Vercel, your app runs as serverless functions. Each function instance has its own memory. When the instance shuts down, the cache is gone.

So this happens:

  1. User A hits your site. Instance A spins up. Fetches data. Caches it in memory.

  2. User B hits your site. Instance B spins up. It has no idea about Instance A's cache. Fetches data again.

  3. Instance A gets recycled. Cache gone.

For build-time prerendered content this is fine: the cached result is baked into the static shell and served from the CDN. But for runtime data, meaning anything fetched at request time, an in-memory cache is close to useless in a serverless environment.

What use cache: remote Does Differently

'use cache: remote' stores the cached data in a shared key-value store that all serverless instances can access. On Vercel, this is called Runtime Cache. It lives in the same region as your function. You do not set it up. Vercel provides it automatically.

Now:

  1. User A hits your site. Instance A fetches data. Stores it in the remote cache.

  2. User B hits your site. Instance B checks the remote cache. Finds the data. Skips the database.

  3. Instance A gets recycled. Does not matter. The cache is in the remote store, not in memory.

Every user. Every serverless instance. Same region. Same cache.

How It Works in Practice

Say you have a homepage with featured products from Supabase.

import { createClient } from "@/utils/supabase/server";
import { cacheTag, cacheLife } from "next/cache";

export async function getFeaturedProducts() {
  "use cache: remote";
  cacheTag("products");
  cacheLife("max");

  const supabase = await createClient();
  const { data } = await supabase
    .from("products")
    .select("*")
    .eq("featured", true);

  // Supabase returns null for data on error; normalize so callers always get an array
  return data ?? [];
}

First request hits Supabase. Result goes into the remote cache. Every request after that skips Supabase entirely. Your database sees one request instead of thousands.

Use it in your page like any normal function:

export default async function HomePage() {
  const products = await getFeaturedProducts();
  return (
    <section>
      {products.map((p) => (
        <ProductCard key={p.id} product={p} />
      ))}
    </section>
  );
}

It does not matter what data source you use. Supabase. Prisma. Drizzle. Raw fetch. A GraphQL client. 'use cache: remote' caches the return value of the function. It does not care how the data was fetched.

Cache Forever Until You Invalidate

You can cache data indefinitely. Use cacheLife('max') or set a custom long expiration:

cacheLife({
  stale: 31536000,
  revalidate: 31536000,
  expire: 31536000,
});

That is roughly one year. The cache stays until you explicitly kill it. Which brings us to the best part.

Invalidation with Tags

Every cached function can be tagged with cacheTag(). When your data changes, call revalidateTag() to drop the cache.

// Tag your cached functions
async function getHeroContent() {
  "use cache: remote";
  cacheTag("homepage", "hero");
  cacheLife("max");
  return db.hero.findFirst();
}

async function getFeaturedProducts() {
  "use cache: remote";
  cacheTag("homepage", "products");
  cacheLife("max");
  return db.products.findMany({ where: { featured: true } });
}

async function getTestimonials() {
  "use cache: remote";
  cacheTag("homepage", "testimonials");
  cacheLife("max");
  return db.testimonials.findMany();
}

Notice the trick. Each function has a shared 'homepage' tag and a specific tag. So you can invalidate just products. Or nuke the entire homepage cache at once.

"use server";

import { revalidateTag } from "next/cache";

// Invalidate just products
export async function onProductUpdate() {
  revalidateTag("products");
}

// Invalidate the entire homepage
export async function refreshHomepage() {
  revalidateTag("homepage");
}

Call these from a server action, a webhook, an admin panel. The cache entry drops across every region. Next request rebuilds it fresh.
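For the webhook case, here is a sketch of a Route Handler that a CMS or admin panel could call. The path and the secret header name are assumptions, not from any spec; use whatever auth your webhook source supports:

```typescript
// app/api/revalidate/route.ts (assumed path)
import { revalidateTag } from "next/cache";
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  // Hypothetical shared-secret check to keep strangers from flushing your cache
  if (request.headers.get("x-revalidate-secret") !== process.env.REVALIDATE_SECRET) {
    return NextResponse.json({ error: "unauthorized" }, { status: 401 });
  }
  revalidateTag("products");
  return NextResponse.json({ revalidated: true });
}
```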

There are two invalidation functions:

  • revalidateTag(): background revalidation. Current request might still see stale data. Next request sees fresh data.

  • updateTag(): immediate. Same request sees fresh data.
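A sketch of the immediate variant inside a server action. The `db` import is a stand-in for your own data layer:

```typescript
"use server";

import { updateTag } from "next/cache";
import { db } from "@/lib/db"; // stand-in for your data layer

export async function saveProduct(id: string, data: { name: string }) {
  await db.products.update(id, data);
  // Unlike revalidateTag, the same request re-reads fresh data immediately,
  // so the user who just saved sees their change right away.
  updateTag("products");
}
```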

Time-Based Invalidation

If you do not want to manage tags manually, use time-based expiration:

async function getTrendingPosts() {
  "use cache: remote";
  cacheLife("hours"); // revalidates every few hours
  return db.posts.findMany({ orderBy: { views: "desc" } });
}

Built-in profiles: 'minutes', 'hours', 'days', 'weeks', 'max'.

Or go custom:

cacheLife({
  stale: 300, // 5 min. serve old data while refreshing
  revalidate: 600, // 10 min. background refresh interval
  expire: 3600, // 1 hour. hard expiration
});

stale means the data is old but still served while fresh data is fetched in the background. revalidate is how often Next.js checks for new data behind the scenes. expire is the hard cutoff where the cache is completely gone.
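You can also register your own named profiles in your config and reuse them by name. The exact location has moved between versions (under `experimental` in Next 15, alongside `cacheComponents` in 16), so treat this as a sketch and check the docs for your version:

```typescript
// next.config.ts
const nextConfig = {
  cacheComponents: true,
  cacheLife: {
    // "biweekly" is an assumed profile name; pick whatever fits your data
    biweekly: {
      stale: 60 * 60 * 24,       // 1 day
      revalidate: 60 * 60 * 24,  // background refresh daily
      expire: 60 * 60 * 24 * 14, // hard cutoff after 14 days
    },
  },
};

export default nextConfig;
```

Then `cacheLife("biweekly")` works inside any cached function.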

Where the Cache Actually Lives

On Vercel, use cache: remote uses Vercel's Runtime Cache. It is a key-value store in each region where your functions run. You do not configure it.

  • Shared across all users in the same region. Yes.

  • Shared across all serverless instances. Yes.

  • Shared across regions. No. Each region has its own cache.

  • Survives new deployments. Yes.

  • Shared between preview and production. No. Separate environments.

The word "non-durable" in the docs means Vercel can evict entries under memory pressure. It is a cache, not a database. Your database is still the source of truth. If the cache misses, your function just re-fetches and caches again.

If you are self-hosting (not on Vercel), you need to configure your own cache handler via cacheHandlers in next.config.ts. You can use Redis, Memcached, DynamoDB, whatever. But on Vercel it is zero config.
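As a rough sketch of what that config looks like. The key names and whether they sit under `experimental` depend on your Next.js version, so verify against the docs before relying on this:

```typescript
// next.config.ts
const nextConfig = {
  cacheComponents: true,
  cacheHandlers: {
    // Assumed shape: map the "remote" handler to your own implementation,
    // e.g. a file that talks to Redis and exposes get/set/expire methods
    remote: require.resolve("./redis-cache-handler.js"),
  },
};

export default nextConfig;
```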

One Rule: No Runtime APIs Inside

You cannot call cookies(), headers(), or read searchParams inside a 'use cache: remote' function. These change per request. Caching them makes no sense.

The fix: read them outside, pass the value in as an argument.

import { cookies } from "next/headers";
import { cacheTag, cacheLife } from "next/cache";

async function ProductPrice({ productId }: { productId: string }) {
  const currency = (await cookies()).get("currency")?.value ?? "USD";
  const price = await getPrice(productId, currency);
  return (
    <span>
      {price} {currency}
    </span>
  );
}

async function getPrice(productId: string, currency: string) {
  "use cache: remote";
  cacheTag(`price-${productId}`);
  cacheLife({ expire: 3600 });

  // Cache key = productId + currency
  // All users with same currency share this entry
  return db.products.getPrice(productId, currency);
}

The arguments automatically become part of the cache key. Different arguments create different cache entries. So all USD users share one entry. All EUR users share another.

Smart Cache Key Design

Cache on dimensions with few unique values. Filter the rest in memory.

// BAD: caching per price filter creates thousands of entries
async function getProducts(category: string, minPrice: number) {
  "use cache: remote";
  return db.products.find({ category, minPrice });
}

// GOOD: cache per category, filter price in memory
async function getProducts(category: string) {
  "use cache: remote";
  return db.products.findByCategory(category);
}

// Then in your component
const products = await getProducts("electronics");
const filtered = products.filter((p) => p.price >= minPrice);

Same idea with user data. Do not cache per user ID. Cache per language or role or region. Fewer entries. Higher hit rate.
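A sketch of that pattern, with assumed types and helper names. The cached function takes only the role, which has few unique values, and a pure helper narrows the shared result to one user's view:

```typescript
type Doc = { id: number; ownerId: string; visibility: "public" | "team" };

// The cached part would look something like:
//   async function getDocsForRole(role: string) {
//     "use cache: remote";
//     cacheTag(`docs-${role}`);
//     return db.docs.findByRole(role); // assumed data-layer helper
//   }

// Pure helper: narrow the shared, cached list down to one user's view
export function visibleToUser(docs: Doc[], userId: string): Doc[] {
  return docs.filter((d) => d.visibility === "public" || d.ownerId === userId);
}
```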

The Three Directives Compared

                          use cache             use cache: remote     use cache: private
Storage                   In-memory             Remote KV store       Browser only
Shared across users       Yes                   Yes                   No
Shared across instances   No                    Yes                   N/A
Access cookies/headers    No                    No                    Yes
Extra cost                None                  Infrastructure        None
Best for                  Static shell content  Runtime shared data   Per-user compliance

Why This Matters

Before this, if you wanted a distributed cache in front of your database, you had to set up Redis or Memcached, write cache keys manually, handle invalidation yourself, and manage infrastructure.

Now you write 'use cache: remote' and cacheTag('products') inside a function. Call revalidateTag('products') when data changes. Done. A distributed cache with declarative invalidation built into your component tree.

Your database goes from handling every single request to handling one request per cache lifetime per region. That is a massive reduction in load, cost, and latency.

It is one of the most practical features in modern web development and I think more people should know about it.