Why Redis Is Fast

Introduction
At Spawn we use Redis for many things beyond caching. I wanted to write a post about why Redis is fast, and also about the upsides of it being single threaded.
The Speed Problem
Your app talks to a database. Probably PostgreSQL or MySQL. The data lives on disk. Every read means a disk lookup.
Then your app grows. Thousands of requests per second. Every one of them hitting the database. Latency creeps up. Queries start queuing. Your users start noticing.
You need something faster. Way faster.
What If the Data Just Lived in Memory
That is the core idea behind Redis. It is a data store that keeps everything in RAM. No disk seeks. No waiting on spinning drives or even SSDs. Just memory.
A RAM lookup is measured in nanoseconds. A disk lookup is measured in milliseconds. That is a difference of roughly six orders of magnitude. It is not a little faster. It is absurdly faster.
What Redis Actually Is
Redis is a key-value store. You give it a key. You get back a value. Simple as that.
But it is not just strings. It supports lists. Sets. Sorted sets. Hashes. Streams. Bitmaps. It is more like a data structure server that happens to live entirely in memory.
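Take the leaderboard idea from sorted sets. In Redis you would use ZADD and ZREVRANGE; here is a tiny pure-Python stand-in for that pattern, just to show the shape of it (the names and scores are made up):

```python
# Sketch of the sorted-set leaderboard pattern (ZADD / ZREVRANGE in real Redis),
# mimicked with a plain dict that we sort on demand.
scores = {}

def zadd(member, score):
    # In Redis: ZADD leaderboard <score> <member>
    scores[member] = score

def top(n):
    # In Redis: ZREVRANGE leaderboard 0 n-1
    return sorted(scores, key=scores.get, reverse=True)[:n]

zadd("alice", 120)
zadd("bob", 300)
zadd("carol", 250)
# top(2) returns ["bob", "carol"]
```

The real thing keeps members sorted as they are inserted, so reads do not pay this sort cost.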
You can use it as a cache. A session store. A message broker. A leaderboard. A rate limiter. Basically anywhere you need data fast and can tolerate it being outside your main database.
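The rate limiter case maps to Redis's INCR and EXPIRE commands. Here is a minimal fixed-window sketch in plain Python that simulates that pattern without a live server (the class name and limits are my own, for illustration):

```python
import time

class FixedWindowLimiter:
    """Simulates the Redis INCR + EXPIRE fixed-window rate-limit pattern."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # key -> (window_start, count)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        start, count = self.counters.get(key, (now, 0))
        if now - start >= self.window:  # window elapsed, like EXPIRE firing
            start, count = now, 0
        count += 1                      # like INCR; atomic in real Redis
        self.counters[key] = (start, count)
        return count <= self.limit

limiter = FixedWindowLimiter(limit=3, window_seconds=60)
results = [limiter.allow("user:42", now=100.0) for _ in range(5)]
# results is [True, True, True, False, False]
```

In Redis itself the INCR and EXPIRE would usually be wrapped in a pipeline or Lua script so they execute together.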
The Single Thread Thing
Here is the part that surprises people. Redis is single threaded. One thread handles all your commands. Every read. Every write. One at a time. In order. (Since Redis 6 it can use extra threads for network I/O, but command execution still happens on one thread.)
That sounds like a terrible idea. But it is actually genius for what Redis does.
Why Single Threaded Works
When your data lives in memory there is no waiting on I/O. No blocking on disk reads. Every operation takes microseconds. The bottleneck is never the processing. It is the network.
A single Redis thread can handle hundreds of thousands of operations per second. Most apps will never saturate it.
And being single threaded means no locks. No mutexes. No race conditions. No deadlocks. Every command is atomic by default. You never have two operations fighting over the same key at the same time.
That makes Redis incredibly predictable. You send a command. It runs. It finishes. The next one runs. No surprises.
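You can see why this works with a toy model: many client threads submit commands, but a single worker thread applies them in order. The shared store needs no locks because only one thread ever touches it (the thread-safe queue here plays the role of Redis's network event loop):

```python
import threading
from queue import Queue

# Toy model of Redis's design: many clients enqueue commands,
# one thread applies them in order, so the store needs no locks.
store = {}
commands = Queue()

def redis_thread():
    while True:
        cmd = commands.get()
        if cmd is None:          # shutdown sentinel
            break
        op, key = cmd
        if op == "INCR":
            # Read-modify-write with no lock: safe, because only
            # this one thread ever touches the store.
            store[key] = store.get(key, 0) + 1

def client(n):
    for _ in range(n):
        commands.put(("INCR", "counter"))

server = threading.Thread(target=redis_thread)
server.start()
clients = [threading.Thread(target=client, args=(250,)) for _ in range(4)]
for c in clients:
    c.start()
for c in clients:
    c.join()
commands.put(None)
server.join()
# store["counter"] == 1000: four concurrent clients, zero lost updates, zero locks
```

Do the same thing with four threads incrementing a plain dict directly and you need a mutex, or you lose updates.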
The Tradeoffs
You cannot use multiple CPU cores for processing commands. One thread means one core. If you somehow max out that one thread you are stuck.
Long-running commands can block everything. One badly placed KEYS or a huge SORT, and every other client waits. That is why SCAN exists as an incremental alternative to KEYS.
And since everything lives in memory your dataset is limited by how much RAM you have. RAM is not cheap. You are not storing terabytes in Redis. You shouldn’t.
But the Tradeoffs Barely Matter
For caching and fast lookups you almost never need more than one thread of throughput. If you do you can run multiple Redis instances. Shard your keys across them. Problem solved.
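Sharding keys across instances can be as simple as hashing the key and picking an instance client-side. Here is a sketch of that idea; the addresses are hypothetical, and real Redis Cluster uses CRC16 over 16384 hash slots rather than a plain modulo:

```python
import zlib

# Hypothetical instance addresses, for illustration only.
instances = [
    "redis://10.0.0.1:6379",
    "redis://10.0.0.2:6379",
    "redis://10.0.0.3:6379",
]

def shard_for(key: str) -> str:
    # Hash the key, take it modulo the number of instances.
    # Same key -> same instance, every time.
    slot = zlib.crc32(key.encode()) % len(instances)
    return instances[slot]

assert shard_for("user:42") == shard_for("user:42")
```

The catch with plain modulo is that adding an instance remaps most keys, which is one reason the real cluster scheme uses fixed hash slots that can be migrated one at a time.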
The memory limit is real but Redis is not meant to replace your database. It sits in front of it. You keep the hot data in Redis and the rest on disk where it belongs.
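That "sits in front of it" arrangement is the classic cache-aside pattern. A minimal sketch, with plain dicts standing in for Redis and the database (a real version would use a Redis client and set TTLs on cached entries):

```python
# Cache-aside sketch: check the cache first, fall back to the
# database on a miss, and populate the cache for next time.
cache = {}                                  # plays the role of Redis
database = {"user:1": {"name": "Ada"}}      # plays the role of PostgreSQL

db_hits = 0

def get_user(key):
    global db_hits
    if key in cache:        # fast path: RAM lookup
        return cache[key]
    db_hits += 1            # slow path: hit the database
    value = database[key]
    cache[key] = value      # keep the hot data in the cache
    return value

get_user("user:1")   # miss: reads the database
get_user("user:1")   # hit: served from the cache
# db_hits == 1
```

The important part is the second read: same data, no database round trip.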