Deep Dive Into Redis





Name Summary
Data persistence All data is eventually persisted to disk, but persistence can also be forced immediately.
Availability Redis has built-in master/slave replication; a slave can be promoted to master on the fly.
Atomicity Support for atomic operations.
Reference SlideShare: What is redis

Q: What are Redis’ typical use cases?

All data is in memory but is also persisted to disk, and most operations exhibit O(1) behavior. Typical use cases include caching, session storage, counters, leaderboards, queues, and pub/sub messaging.

Redis is a single-threaded application that uses async I/O, which helps it achieve high throughput.

Redis Key features:

Name Summary
Get/Set/Incr Basic string operations; INCR gives an atomic counter.
Lists Linked lists with push/pop at both ends; commonly used as queues.
Sets Unordered collections of unique members, with union/intersection/difference operations.
Sorted Sets Sets whose members carry scores; ideal for leaderboards and range queries.
Hash Tables Field-value maps stored under one key; handy for representing objects.
PubSub Publish/subscribe messaging between clients.
SORT Server-side sorting of lists, sets, and sorted sets.
Transactions Support for atomic transactions via MULTI/EXEC.
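
Here is a quick tour of these data types as a minimal sketch using the redis-py client; the host, port, and key names are illustrative assumptions, not part of the original post:

```python
import redis

# Assumed local instance; decode_responses returns str instead of bytes.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("user:1:name", "alice")          # strings: GET/SET
r.incr("user:1:visits")                # atomic counter: INCR
r.lpush("queue:jobs", "job-1")         # lists: push/pop at both ends (queues)
r.sadd("tags", "redis", "cache")       # sets: unique members
r.zadd("leaderboard", {"alice": 42})   # sorted sets: member -> score
r.hset("user:1", mapping={"name": "alice", "plan": "pro"})  # hash tables
print(r.sort("tags", alpha=True))      # SORT: server-side sorting
```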

Reference: Using Redis as an LRU cache:

Eviction Policy Description
allkeys-lru The cache evicts the least recently used (LRU) keys regardless of TTL set.
allkeys-lfu The cache evicts the least frequently used (LFU) keys regardless of TTL set.
volatile-lru The cache evicts the least recently used (LRU) keys from those that have a TTL set.
volatile-lfu The cache evicts the least frequently used (LFU) keys from those that have a TTL set.
volatile-ttl The cache evicts the keys with the shortest TTL set.
volatile-random The cache randomly evicts keys with a TTL set.
allkeys-random The cache randomly evicts keys regardless of TTL set.
no-eviction The cache doesn’t evict keys at all; instead, it blocks future writes until memory frees up.

In general, as a rule of thumb:

  • Use the allkeys-lru policy when you expect a power-law distribution in the popularity of your requests, that is, when a subset of elements will be accessed far more often than the rest. This is a good pick if you are unsure.
  • Use allkeys-random if you have cyclic access where all the keys are scanned continuously, or when you expect the distribution to be uniform (all elements equally likely to be accessed).
  • Use volatile-ttl if you want to hint to Redis which keys are good candidates for expiration by assigning different TTL values when you create your cache objects.

Generally, least recently used (LRU)-based policies are more common for basic caching use cases. Also, if you are experiencing evictions with your cluster, it is usually a sign that you should scale up or scale out to accommodate the additional data.
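
Both the memory cap and the eviction policy are ordinary configuration values, so they can be set at runtime. A minimal sketch assuming the redis-py client (the 100mb limit is an illustrative value; this wraps the CONFIG SET command):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Cap memory usage and pick an eviction policy (wraps CONFIG SET).
r.config_set("maxmemory", "100mb")
r.config_set("maxmemory-policy", "allkeys-lru")

# Equivalent redis.conf settings:
#   maxmemory 100mb
#   maxmemory-policy allkeys-lru
```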


Q: How does cache eviction work in Redis?

A: When Redis reaches its configured maxmemory limit, it frees space by evicting keys according to the maxmemory-policy in effect; see the eviction policies and rules of thumb above.


Q: How does Redis support distributed locks?

A:
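The common single-instance pattern is SET with the NX and PX options: write a key holding a random token only if it does not already exist, with a TTL so a crashed holder cannot block others forever, then release it with a Lua script that deletes the key only if the token still matches. For locks spanning multiple independent Redis nodes, the Redis documentation describes the Redlock algorithm. A minimal single-node sketch assuming the redis-py client; the lock name, TTL, and helper names are illustrative:

```python
import uuid

import redis

r = redis.Redis(host="localhost", port=6379)

# Delete the lock only if the caller's token still owns it (atomic in Lua).
RELEASE_SCRIPT = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
else
    return 0
end
"""

def acquire_lock(name, ttl_ms=30000):
    """Try to take the lock; return an owner token on success, else None."""
    token = str(uuid.uuid4())  # random value identifies this owner
    # SET key value NX PX ttl: only set if absent, with an expiry.
    if r.set(name, token, nx=True, px=ttl_ms):
        return token
    return None

def release_lock(name, token):
    """Release the lock only if this token still owns it."""
    return r.eval(RELEASE_SCRIPT, 1, name, token) == 1
```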


Q: How does Redis support transactions?

A:
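Redis queues commands with MULTI and runs them atomically with EXEC; WATCH adds optimistic locking, aborting EXEC if a watched key changed so the client can retry. Note that Redis does not roll back a transaction when an individual command fails. A minimal sketch assuming the redis-py client; the transfer helper and key names are illustrative:

```python
import redis
from redis.exceptions import WatchError

r = redis.Redis(host="localhost", port=6379)

def transfer(src, dst, amount):
    """Move `amount` between two integer keys using WATCH/MULTI/EXEC."""
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(src)               # EXEC aborts if src changes
                balance = int(pipe.get(src) or 0)
                if balance < amount:
                    pipe.unwatch()
                    return False
                pipe.multi()                  # start queuing commands
                pipe.decrby(src, amount)
                pipe.incrby(dst, amount)
                pipe.execute()                # EXEC: runs all-or-nothing
                return True
            except WatchError:
                continue                      # src changed under us; retry
```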


Q: How does Redis do data partitioning?

A:
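Redis Cluster splits the key space into 16384 hash slots and assigns each key to slot CRC16(key) mod 16384; the slots are distributed across the nodes. A hash tag ({...} inside the key name) restricts hashing to the tagged substring so related keys share a slot. Before Redis Cluster, deployments typically used client-side sharding or a proxy such as twemproxy. A minimal sketch of the slot computation (CRC16-XMODEM is the variant Redis Cluster uses); the key names are illustrative:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16384 cluster hash slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag: hash only the tagged part
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot, enabling multi-key commands.
assert key_slot("user:{42}:profile") == key_slot("user:{42}:settings")
```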


Q: How does Redis support a high-throughput counter?

A:
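INCR and INCRBY are atomic and O(1), so any number of concurrent clients can bump the same key without lost updates; pipelining then batches many increments into one network round trip for higher throughput. A minimal sketch assuming the redis-py client; the key name is illustrative:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Single atomic increment; safe under concurrency.
r.incr("page:views")

# Batch 1000 increments into one round trip (no MULTI/EXEC needed here).
with r.pipeline(transaction=False) as pipe:
    for _ in range(1000):
        pipe.incr("page:views")
    pipe.execute()

print(r.get("page:views"))
```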


Q: What are the use cases for Redis’ Pub/Sub functionality?

A:
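Typical uses are fire-and-forget fan-out: chat and real-time notifications, broadcasting cache-invalidation events, and lightweight signaling between services. Messages are not persisted, so subscribers that are offline miss them; durable consumers need a different mechanism. A minimal sketch assuming the redis-py client; the channel name and payload are illustrative:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Subscriber side: register interest in a channel.
pubsub = r.pubsub()
pubsub.subscribe("notifications")

# Publisher side (normally another process): broadcast to all subscribers.
r.publish("notifications", "cache:invalidate user:42")

# listen() blocks and also yields subscribe acknowledgements.
for message in pubsub.listen():
    if message["type"] == "message":
        print(message["data"])  # "cache:invalidate user:42"
        break
```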


