
Question: Multi-pod deployment + prepopulation leads to STALE cache on other pods #164

@tamadam

Description


I deployed a Next.js app that uses this cache handler to Kubernetes, and I’m seeing unexpected cache behavior when running multiple pods. I want to confirm whether this is expected or whether I’m missing something.

I’m running three pods of the same application in Kubernetes. The cache handler setup is based entirely on your example implementation.

Because I can’t prepopulate the cache at build time, I do it after the Next.js server starts: I configured a Kubernetes liveness probe that calls GET /api/health.

Inside that route I do:

  • Load all routes to warm from an external API
  • Hit my own server using fetch("http://localhost:3000/...")
  • Only warm the cache once (using a Redis startup flag)
const routesToPreload = await fetch(
  "my-routes-api" // placeholder for my external routes endpoint
);

const pagesToWarm: string[] = await routesToPreload.json();

const preloadPromises = pagesToWarm.map((route) =>
  fetch(`http://localhost:3000${route}`, { next: { revalidate: 0 } })
    .then((res) => {
      if (!res.ok) {
        throw new Error(`Failed to fetch ${route}, status ${res.status}`);
      }
    })
    .catch((err) => {
      console.error(`Error loading route ${route}:`, err.message);
    })
);

await Promise.allSettled(preloadPromises);
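
For context, the warming runs inside the health route handler, roughly like this (simplified sketch, assuming the App Router; warmCacheOnce and the module path are just illustrative names for the flag-guarded prepopulation shown below):

// app/api/health/route.ts — simplified sketch
import { warmCacheOnce } from "./warm"; // illustrative module path

export async function GET() {
  // One-time cache warming; the Redis startup flag below ensures
  // only the first pod actually runs the prepopulation.
  await warmCacheOnce();
  return Response.json({ status: "ok" });
}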

My startup flag to ensure only one pod prepopulates:

export const startupKey = `my_app_${process.env.GIT_HASH}:startup_flag`;
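// e.g. "my_app_3f9c2e:startup_flag" for GIT_HASH=3f9c2e (illustrative value),
// so each deployment gets its own flag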

Prepopulation logic:

const redisClient = global.redisClientInstance;

if (redisClient) {
  const isInitialized = await redisClient.get(startupKey);

  if (!isInitialized) {
    console.info("Cache not initialized — prepopulating...");

    await redisClient.set(startupKey, "1");
    await prepopulateCache();

    console.info("Cache prepopulation complete.");
  } else {
    console.info("Cache already initialized, skipping.");
  }
}
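
Side note: I know the get-then-set above isn’t atomic, so two pods starting at the same moment could in theory both see the flag unset. An atomic claim via SET NX would close that gap — a sketch assuming node-redis v4 (the 24-hour expiry is an arbitrary choice):

// Atomically claim the flag: SET ... NX only succeeds for the first caller.
const claimed = await redisClient.set(startupKey, "1", {
  NX: true, // only set if the key does not exist yet
  EX: 60 * 60 * 24, // optional: avoid leaving stale flags around forever
});

if (claimed === "OK") {
  console.info("Cache not initialized — prepopulating...");
  await prepopulateCache();
  console.info("Cache prepopulation complete.");
} else {
  console.info("Cache already initialized, skipping.");
}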

Prepopulation runs on only one pod, as expected.

But after that:

  • If I access the app through Pod A → page is HIT
  • Then access the same route through Pod B → STALE
  • Then Pod C → STALE
  • Then Pod A again → HIT

After each pod has served the page at least once (usually within one hour, my revalidate time), everything finally becomes HIT across all pods.

It looks like every pod still performs a regeneration even though Redis was prepopulated.

Is this the expected behavior? If so, how can I avoid the per-pod regeneration?
