Caching improves performance, but it can also create a serious problem called a cache stampede. This happens when many requests try to rebuild the same cache entry at the same time after it expires.
When this occurs, the database may receive thousands of simultaneous queries, which can overload the system.
This article explains practical strategies to prevent cache stampede using Redis in Spring Boot.

1. What Is a Cache Stampede

A cache stampede happens when a popular cache entry expires and multiple requests try to rebuild it simultaneously.
Example scenario:
High traffic request → Cache expires
            ↓
1000 requests → Database query
Result:
  • Database overload
  • Increased latency
  • Possible system crash
Preventing this requires controlling how the cache is rebuilt.

2. Using Redis Distributed Lock (Mutex)

One common solution is a mutex lock, so that only one request rebuilds the cache while the others wait for the result.
Example logic:
Request arrives
      ↓
Try acquiring Redis lock
      ↓
If lock acquired → rebuild cache
If lock denied → wait for cache
Example implementation:
public Object getProduct(String id) {

    String key = "product:" + id;
    Object value = redisTemplate.opsForValue().get(key);

    if (value != null) {
        return value;
    }

    // Lock key carries a TTL so a crashed holder cannot block others forever
    String lockKey = "lock:" + id;
    Boolean locked = redisTemplate.opsForValue()
            .setIfAbsent(lockKey, "1", Duration.ofSeconds(10));

    if (Boolean.TRUE.equals(locked)) {
        try {
            value = fetchFromDatabase(id);
            redisTemplate.opsForValue().set(key, value, Duration.ofMinutes(10));
            return value;
        } finally {
            // Release the lock even if the database call throws
            redisTemplate.delete(lockKey);
        }
    }

    // Lock denied: poll briefly until the winner populates the cache
    for (int i = 0; i < 10; i++) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
        value = redisTemplate.opsForValue().get(key);
        if (value != null) {
            return value;
        }
    }

    // Fallback: query the database directly rather than returning null
    return fetchFromDatabase(id);
}
Only the lock holder rebuilds the cache; the other requests poll briefly and fall back to a direct query only if the cache is still empty after the wait.

3. Request Coalescing

Another strategy is request coalescing, where multiple requests share the same cache-building operation.
Example flow:
Request 1 → rebuild cache
Request 2 → wait
Request 3 → wait
When the first request finishes, other requests receive the cached result.
This prevents duplicate database queries.
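Within a single JVM, coalescing can be sketched with a ConcurrentHashMap of in-flight CompletableFutures: the first miss starts the load, and every concurrent miss for the same key joins the same future. The class and method names below (CoalescingLoader, loadFromDatabase) are illustrative, and the database call is simulated with a sleep.

```java
import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class CoalescingLoader {

    // In-flight rebuilds keyed by cache key; concurrent misses share one future
    private final Map<String, CompletableFuture<String>> inFlight = new ConcurrentHashMap<>();

    // Counts real database hits so the coalescing effect is observable
    final AtomicInteger dbCalls = new AtomicInteger();

    public String get(String key) {
        CompletableFuture<String> future = inFlight.computeIfAbsent(key, k ->
                CompletableFuture
                        .supplyAsync(() -> loadFromDatabase(k))
                        // drop the entry once done so a later miss starts a fresh load
                        .whenComplete((v, e) -> inFlight.remove(k)));
        return future.join();   // every caller waits on the same load
    }

    private String loadFromDatabase(String key) {
        dbCalls.incrementAndGet();
        try {
            Thread.sleep(200);  // simulate a slow query
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "value-for-" + key;
    }
}
```

With eight threads requesting the same key at once, loadFromDatabase runs exactly once; across JVM instances you would still combine this with a Redis lock, since the map only coalesces requests inside one process.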

4. Probabilistic Early Expiration

Instead of waiting until a cache entry fully expires, the system refreshes it before expiration.
Example idea:
Cache TTL = 10 minutes
Refresh randomly between 8–10 minutes
This spreads cache refresh operations over time and avoids traffic spikes.
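One way to implement this is a helper that decides, per request, whether to refresh early, with the probability rising as the entry approaches its TTL. The sketch below uses a simple linear ramp over the last 20% of the TTL (the well-known XFetch algorithm uses an exponential term instead); all names are illustrative.

```java
import java.util.concurrent.ThreadLocalRandom;

public class EarlyExpiration {

    // Decide per request whether to refresh before the TTL is reached.
    // The probability rises linearly across the last 20% of the TTL
    // (minutes 8-10 for a 10-minute TTL), spreading refreshes over time.
    public static boolean shouldRefresh(long ageSeconds, long ttlSeconds) {
        long window = ttlSeconds / 5;               // early-refresh window
        long windowStart = ttlSeconds - window;
        if (ageSeconds < windowStart) return false; // still fresh: never refresh
        if (ageSeconds >= ttlSeconds) return true;  // already expired: must refresh
        double p = (double) (ageSeconds - windowStart) / window;
        return ThreadLocalRandom.current().nextDouble() < p;
    }
}
```

A request that sees shouldRefresh return true rebuilds the entry while the old value is still being served to everyone else, so the expiry moment never arrives for a hot key.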

5. Using Redis "Stale Cache" Strategy

Another technique is allowing slightly stale data while refreshing the cache in the background.
Example flow:
Cache expired
      ↓
Return stale data
      ↓
Refresh cache asynchronously
Benefits:
  • Fast responses
  • No database overload
  • Users still receive acceptable data
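The flow above can be sketched in-process by storing a logical freshness deadline next to each value; with Redis the same idea applies by setting the physical TTL longer than the logical one, so the stale value is still readable while one background task refreshes it. All names here (StaleWhileRevalidate, loadFromDatabase) are illustrative.

```java
import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class StaleWhileRevalidate {

    // Value plus the instant its logical freshness ends; in Redis the
    // physical TTL would be set longer than this logical TTL
    record Entry(String value, long freshUntilMillis) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final Map<String, Boolean> refreshing = new ConcurrentHashMap<>();
    private final ExecutorService refresher = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "cache-refresher");
        t.setDaemon(true);                      // background thread only
        return t;
    });

    // Counts real database hits so the behavior is observable
    final AtomicInteger dbCalls = new AtomicInteger();

    public String get(String key) {
        Entry e = cache.get(key);
        if (e == null) {                        // cold miss: load synchronously
            String v = loadFromDatabase(key);
            cache.put(key, new Entry(v, System.currentTimeMillis() + 10_000));
            return v;
        }
        if (System.currentTimeMillis() > e.freshUntilMillis()
                && refreshing.putIfAbsent(key, Boolean.TRUE) == null) {
            // stale: answer immediately, refresh exactly once in the background
            refresher.submit(() -> {
                try {
                    String v = loadFromDatabase(key);
                    cache.put(key, new Entry(v, System.currentTimeMillis() + 10_000));
                } finally {
                    refreshing.remove(key);
                }
            });
        }
        return e.value();                       // stale or fresh, respond now
    }

    private String loadFromDatabase(String key) {
        dbCalls.incrementAndGet();
        return "value-for-" + key;
    }
}
```

Only the cold miss blocks on the database; every later request, stale or fresh, returns immediately, and the refreshing map guarantees at most one background rebuild per key.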

6. Using Randomized TTL

Random TTL values prevent many keys from expiring at the same moment.
Example:
long ttl = 600 + ThreadLocalRandom.current().nextInt(120); // 600-719 seconds
redisTemplate.opsForValue().set(key, value, Duration.ofSeconds(ttl));
This spreads cache expirations over time.

7. Monitoring Cache Stampede

Monitor these metrics:
  • Cache miss spikes
  • Database query spikes
  • Redis lock usage
  • Request latency
Sudden increases may indicate a cache stampede problem.
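As a minimal illustration of tracking cache miss spikes (in a real Spring Boot service you would more likely register Micrometer counters), a thread-safe hit/miss recorder might look like this; the class name and methods are illustrative:

```java
import java.util.concurrent.atomic.LongAdder;

public class CacheStats {

    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();

    public void recordHit()  { hits.increment(); }
    public void recordMiss() { misses.increment(); }

    // Miss ratio over everything recorded so far; a sudden jump toward 1.0
    // on a hot key is a typical stampede signal
    public double missRatio() {
        long h = hits.sum();
        long m = misses.sum();
        long total = h + m;
        return total == 0 ? 0.0 : (double) m / total;
    }
}
```

Call recordHit or recordMiss around each cache lookup and export missRatio to your dashboard; alert when it spikes together with database query volume.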

8. Real Production Example

Large systems like e-commerce platforms often combine multiple techniques:
Redis Cache
   ↓
Mutex Lock
   ↓
Random TTL
   ↓
Async Cache Refresh
This layered approach protects the database during heavy traffic.