
Redis for Developers: Caching, Sessions, Queues and Beyond

Learn how to use Redis effectively as a developer. Covers data structures, caching patterns, session storage, pub/sub, rate limiting, distributed locks, and common interview questions.

Yusuf Seyitoğlu · March 11, 2026 · 10 min read


Redis is one of those tools that keeps surprising you with how many problems it elegantly solves. You start using it for caching, then discover it is perfect for sessions, rate limiting, pub/sub messaging, leaderboards, and distributed locks. Understanding Redis makes you a significantly more effective backend developer.

What Is Redis?

Redis (Remote Dictionary Server) is an in-memory data structure store. It can serve as:

  • A cache: store frequently read data in memory for fast retrieval
  • A database: persist data with optional durability
  • A message broker: pub/sub and stream-based messaging
  • A session store: fast, TTL-aware key-value storage

Its defining characteristic: all data lives in RAM. Operations are typically sub-millisecond, and a single well-tuned instance can serve on the order of 100,000+ operations per second (millions with pipelining).

Data Structures

Redis is not just a key-value store; it supports rich data structures that enable complex use cases with simple commands.

Strings

The simplest type. Stores text, numbers, or binary data (up to 512MB):

```bash
SET user:1:name "Alice"
GET user:1:name          # "Alice"
SET counter 0
INCR counter             # 1
INCRBY counter 5         # 6

# Set with expiry (TTL)
SET session:abc123 "user-data" EX 3600   # expires in 1 hour
TTL session:abc123                       # seconds remaining
```

Hashes

Store an object's fields without serialization:

```bash
HSET user:1 name "Alice" email "alice@example.com" age 30
HGET user:1 name         # "Alice"
HGETALL user:1           # all fields and values
HMGET user:1 name email  # multiple fields
HINCRBY user:1 age 1     # increment a field
HDEL user:1 age          # delete a field
```

Lists

Ordered list of strings. Push/pop from either end:

```bash
RPUSH tasks "task1" "task2" "task3"  # push to right (tail)
LPUSH tasks "task0"                  # push to left (head)
LRANGE tasks 0 -1                    # get all items
LLEN tasks                           # length
LPOP tasks                           # remove and return from left
RPOP tasks                           # remove and return from right

# Blocking pop -- waits for an item if list is empty
BLPOP tasks 30                       # block up to 30 seconds
```

Sets

Unordered collection of unique strings:

```bash
SADD tags "javascript" "nodejs" "backend"
SISMEMBER tags "nodejs"  # 1 (true)
SISMEMBER tags "php"     # 0 (false)
SMEMBERS tags            # all members
SCARD tags               # count
SREM tags "backend"      # remove

# Set operations
SUNION set1 set2         # union
SINTER set1 set2         # intersection
SDIFF set1 set2          # difference
```

Sorted Sets

Like sets but each member has a score. Enables ranking, leaderboards, and time-ordered data:

```bash
ZADD leaderboard 1500 "alice"
ZADD leaderboard 2300 "bob"
ZADD leaderboard 1800 "carol"
ZRANK leaderboard "alice"           # rank (0-indexed, ascending)
ZREVRANK leaderboard "alice"        # rank from highest
ZSCORE leaderboard "bob"            # 2300
ZRANGE leaderboard 0 -1 WITHSCORES  # all members with scores, ascending
ZREVRANGE leaderboard 0 2           # top 3 (descending)
ZINCRBY leaderboard 100 "alice"     # increment score
```

Caching Patterns

Cache-Aside (Lazy Loading)

The most common pattern. Application checks cache first; on miss, loads from database and populates cache:

```javascript
async function getUser(userId) {
  const cacheKey = `user:${userId}`;

  // Try cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // Cache miss -- load from database
  const user = await db.users.findById(userId);
  if (!user) return null; // don't cache misses (or cache a sentinel with a short TTL)

  // Store in cache with a 1-hour TTL
  await redis.set(cacheKey, JSON.stringify(user), { EX: 3600 });
  return user;
}

async function updateUser(userId, data) {
  await db.users.update(userId, data);
  // Invalidate cache so the next read repopulates it
  await redis.del(`user:${userId}`);
}
```

Write-Through

Write to cache and database simultaneously. Cache is always up-to-date but adds write latency:

```javascript
async function updateUser(userId, data) {
  await Promise.all([
    db.users.update(userId, data),
    redis.set(`user:${userId}`, JSON.stringify(data), { EX: 3600 }),
  ]);
}
```

Session Storage

Redis is ideal for session storage, offering fast reads, built-in TTL, and shared state across multiple app servers:

```javascript
// Express with connect-redis
const session = require("express-session");
const RedisStore = require("connect-redis").default;
const { createClient } = require("redis");

const redisClient = createClient({ url: process.env.REDIS_URL });
await redisClient.connect();

app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  cookie: { secure: true, maxAge: 86400000 }, // 24 hours
}));
```

Rate Limiting

Count requests per user per time window:

```javascript
async function checkRateLimit(userId, limit = 100, windowSeconds = 60) {
  const window = Math.floor(Date.now() / (windowSeconds * 1000));
  const key = `ratelimit:${userId}:${window}`;

  const current = await redis.incr(key);
  if (current === 1) {
    // First request in this window: set the window's expiry
    await redis.expire(key, windowSeconds);
  }

  if (current > limit) {
    throw new Error(`Rate limit exceeded: ${current}/${limit} requests`);
  }
  return { current, limit, remaining: limit - current };
}
```

Pub/Sub

Simple real-time messaging between services or clients:

```javascript
// Publisher
const publisher = createClient();
await publisher.connect();
await publisher.publish("notifications", JSON.stringify({
  userId: 123,
  message: "Your order has shipped",
}));

// Subscriber (needs its own dedicated connection)
const subscriber = createClient();
await subscriber.connect();
await subscriber.subscribe("notifications", (message) => {
  const notification = JSON.parse(message);
  sendWebSocketToUser(notification.userId, notification.message);
});
```

Distributed Locks

Prevent multiple instances of your app from running the same operation simultaneously:

```javascript
const lockKey = `lock:send-report:${reportId}`;
const lockValue = crypto.randomUUID();
const lockTTL = 30; // seconds

// Try to acquire the lock (NX = set only if the key does not exist)
const acquired = await redis.set(lockKey, lockValue, {
  NX: true,
  EX: lockTTL,
});

if (!acquired) {
  console.log("Another instance is processing this report");
  return;
}

try {
  await generateAndSendReport(reportId);
} finally {
  // Only release if we still own the lock (check the value matches)
  const currentValue = await redis.get(lockKey);
  if (currentValue === lockValue) {
    await redis.del(lockKey);
  }
}
```

Persistence Options

Redis is in-memory but offers durability options:

| Option | Description | Use when |
| --- | --- | --- |
| No persistence | Data lost on restart | Pure cache |
| RDB snapshots | Point-in-time snapshots every N minutes | Can tolerate some data loss |
| AOF (Append Only File) | Log every write operation | Need strong durability |
| RDB + AOF | Both | Maximum safety |

For a cache, no persistence is fine; the data can be rebuilt from the source. For a session store or queue, use AOF.
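As a hedged sketch, the corresponding `redis.conf` directives look like this (the values are illustrative, not recommendations):

```conf
# RDB: snapshot to disk if at least 100 keys changed in the last 300 seconds
save 300 100

# AOF: append every write to a log, fsync once per second
appendonly yes
appendfsync everysec
```

`appendfsync everysec` is the common middle ground; `always` maximizes durability at a write-latency cost, and `no` leaves flushing to the OS.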

Common Interview Questions

Q: When would you use Redis vs Memcached?

Both are in-memory caches. Redis supports rich data structures (lists, sets, sorted sets, hashes), persistence, pub/sub, and Lua scripting. Memcached is simpler, has lower per-item memory overhead, and is multi-threaded, which can scale better across cores for pure string caching. Choose Redis for almost everything else; its versatility is worth it.

Q: What is cache stampede (thundering herd) and how do you prevent it?

When a popular cache key expires, many requests simultaneously miss the cache and all hit the database at once. Prevention strategies: use a mutex/lock so only one request rebuilds the cache while others wait; use probabilistic early expiration to rebuild the cache before it expires; stagger TTLs with random jitter.
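The TTL-jitter mitigation fits in a few lines. A minimal sketch (the function name and the 10% default are illustrative, not from a library):

```javascript
// Spread expirations by randomizing each key's TTL within +/- jitterRatio.
// With baseTTL = 3600 and jitterRatio = 0.1, TTLs land between 3240s and
// 3960s, so keys written together do not all expire together.
function jitteredTTL(baseTTL, jitterRatio = 0.1) {
  const jitter = baseTTL * jitterRatio;
  return Math.round(baseTTL - jitter + Math.random() * 2 * jitter);
}

// Usage when populating the cache:
//   await redis.set(cacheKey, payload, { EX: jitteredTTL(3600) });
```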

Q: How does Redis handle eviction when memory is full?

Redis has configurable eviction policies: noeviction (return errors), allkeys-lru (evict least recently used), volatile-lru (evict LRU among keys with TTL), allkeys-random, and others. For a cache, allkeys-lru is usually appropriate.
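In `redis.conf`, the memory ceiling and eviction policy are set together; an illustrative fragment (the 512mb limit is an example value):

```conf
maxmemory 512mb
maxmemory-policy allkeys-lru
```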

Practice on Froquiz

Redis and caching patterns are common in backend and system design interviews. Explore our backend and infrastructure quizzes on Froquiz to test your knowledge.

Summary

  • Redis stores data in RAM: sub-millisecond operations, ideal for caching, sessions, and queues
  • Strings for simple cache values with TTL; Hashes for object fields; Lists for queues; Sets for unique membership; Sorted Sets for rankings and leaderboards
  • Cache-aside is the most common caching pattern: check the cache, load from the DB on a miss, invalidate on writes
  • Redis is the standard for session storage across distributed app servers
  • Use INCR with a TTL for rate limiting: atomic and simple
  • Use Pub/Sub for lightweight real-time messaging between services
  • Use SET with NX and EX for distributed locks to prevent duplicate processing
