v1.2.9 · TypeScript-first · Apache 2.0

Node.js finally gets
caching done right
for production

Memory, Redis, and Disk under one API.
Stampede protection, distributed invalidation, and circuit breakers.
Install it in one line.

GitHub Docs
npm install layercache

Layered caching, properly structured

Requests check the fastest layer first. Hits automatically backfill upper layers, and misses execute the fetcher only once.

🔵 L1 · Memory (MemoryLayer) · ~0.005ms hit
In-process LRU with automatic eviction when `maxSize` is exceeded.
miss -> next layer

🟡 L2 · Redis (RedisLayer) · ~0.2–4ms hit
Shared cache across instances with gzip/brotli compression and Pub/Sub invalidation.
miss -> next layer

🟤 L3 · Disk (DiskLayer) · ~2ms hit
Persistent cache that survives restarts, with configurable file limits.
all miss -> run fetcher

Fetcher (DB / API) · ~5–500ms
Even with 100 concurrent requests, the fetcher runs once and backfills every layer.
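The lookup flow above can be sketched in a few lines of plain TypeScript. This is a simplified illustration of the pattern, not layercache's actual internals: check layers fastest-first, backfill every faster layer on a hit, and run the fetcher only when all layers miss.

```typescript
// A layer is anything that can get and set a value by key.
type Layer<V> = {
  get(key: string): Promise<V | undefined>;
  set(key: string, value: V): Promise<void>;
};

// Toy in-memory layer backed by a Map, for illustration only.
function mapLayer<V>(): Layer<V> {
  const store = new Map<string, V>();
  return {
    get: async (k) => store.get(k),
    set: async (k, v) => { store.set(k, v); },
  };
}

async function layeredGet<V>(
  layers: Layer<V>[],
  key: string,
  fetcher: () => Promise<V>
): Promise<V> {
  for (let i = 0; i < layers.length; i++) {
    const hit = await layers[i].get(key);
    if (hit !== undefined) {
      // Backfill every faster layer so the next request hits sooner.
      await Promise.all(layers.slice(0, i).map((l) => l.set(key, hit)));
      return hit;
    }
  }
  // All layers missed: run the fetcher once and populate every layer.
  const value = await fetcher();
  await Promise.all(layers.map((l) => l.set(key, value)));
  return value;
}
```

The real library layers Memory, Redis, and Disk behind this same shape, so a warm L1 never touches the network.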

The numbers hold up

Measured on a realistic workload with real file I/O and SHA-256 hashing. Node.js v20, Redis 7, single-process run.

Fastest path · L1 hit · 0.005ms

The first cache layer returns in microseconds, which changes the entire performance profile of the app before Redis or disk even enter the picture.

Warm-hit latency · 5.175ms → 0.005ms
1035x faster than running without a cache.
Node.js v20 · single-process benchmark run
Redis 7 · real networked L2 layer
SHA-256 + file I/O · not a synthetic no-op benchmark

HTTP throughput · 161 req/s → 17,184 req/s
Measured with the layered cache already warm.

Average HTTP latency · 249ms → 1.74ms
143x lower average response time.

Actual DB executions · 375 → 5
Stampede protection, with 75 concurrent requests.

x500 concurrency · 128ms
With Redis delayed by 100ms, linear queuing would take 50,000ms; the actual total was 128ms. No queue amplification under pressure.

Distributed single-flight · 1 cross-instance fetch
60 instances miss at once, yet the fetcher runs once, backed by Redis distributed locking.

Stop cache stampedes at the source

When cache entries expire, hundreds of concurrent requests can slam the database at once. layercache collapses them so each key triggers exactly one fetch.

❌ No cache · 75 requests -> 375 DB executions
Every request hits the DB directly.
avg latency: 409ms · max: 429ms

✓ layercache · 75 requests -> 5 DB executions
Only 5 requests reach the DB; the other 70 return immediately from cache.
avg latency: 6.9ms · max: 13ms

Everything you actually need is here

Auto Backfill
An L2 hit automatically repopulates L1, so the next request returns from memory.
🛡
Graceful Degradation
If Redis goes down and L1 is still alive, traffic keeps flowing. Failed layers are bypassed temporarily.
🔖
Tag-Based Invalidation
Clear every related key at once with a single tag. Wildcard and glob patterns are supported too.
🔄
Stale-while-revalidate
Return stale data immediately and refresh it in the background. Users do not wait.
📡
Redis Pub/Sub Sync
Delete a key on server A and the L1 caches on servers B and C are invalidated immediately.
🔬
Prometheus + OpenTelemetry
Expose hit rate, per-layer latency, and circuit breaker state as metrics.
🧊
Adaptive TTL
Hot keys can extend their own TTL automatically. Sliding TTL and midnight-aligned TTL are supported.
💾
Snapshot and Warmup
Preload the cache from disk on restart, eliminating cold-start penalties.
🖥
Admin CLI
npx layercache stats|keys|invalidate · manage the Redis cache directly from the terminal.

From install to production in 10 minutes

cache.ts - full production setup
import {
  CacheStack, MemoryLayer, RedisLayer,
  RedisInvalidationBus, RedisSingleFlightCoordinator
} from 'layercache'
import Redis from 'ioredis'

const redis = new Redis()

// L1 invalidation bus across instances
const bus = new RedisInvalidationBus({
  publisher: redis,
  subscriber: new Redis()
})

// Distributed stampede protection
const coordinator = new RedisSingleFlightCoordinator({ client: redis })

export const cache = new CacheStack(
  [
    new MemoryLayer({ ttl: 60, maxSize: 10_000 }),
    new RedisLayer({
      client: redis, ttl: 3600,
      compression: 'gzip',
      prefix: 'myapp:'
    })
  ],
  {
    invalidationBus: bus,
    singleFlightCoordinator: coordinator,
    gracefulDegradation: { retryAfterMs: 10_000 }
  }
)

// Usage: on misses, the fetcher runs only once
const user = await cache.get(`user:${id}`, () => db.findUser(id), {
  ttl: 120,
  tags: ['users']
})

// Invalidate everything by tag
await cache.invalidateByTag('users')

Drop it into the stack you already use

A single middleware line plugs into major Node.js frameworks including Express, Fastify, and NestJS.

Express
Fastify
NestJS
Next.js
Hono
tRPC
GraphQL
Redis (ioredis)
Memcached
Prometheus
OpenTelemetry
MessagePack
Express middleware - just two lines
import { createExpressCacheMiddleware } from 'layercache'

app.get('/api/products',
  createExpressCacheMiddleware(cache, {
    ttl: 30, tags: ['products'],
    keyResolver: (req) => `products:${req.url}`
  }),
  productHandler
)

Start now

411 tests. Full TypeScript types. Apache 2.0 licensed.
A new production-ready baseline for Node.js caching.

GitHub