Friday, March 4, 2022

Run scalable and resilient Redis with Kubernetes and Azure Kubernetes Service

Redis is a successful open source in-memory data structure store, first released in 2009. It is most commonly used as a database, cache, and message broker. Developers enjoy Redis for its versatility and simplicity; its low cognitive load makes development fast and efficient.


Language support for Redis is very mature. Every major language has clients, often with multiple community-supported implementations per language (including Go, Java, Python, and Rust), as well as support for advanced functionality such as Redis Cluster and Redis Sentinel, which enable sharding and master-replica topologies respectively.


Many other open source projects integrate with Redis. Just two examples in the cloud native ecosystem that we contribute to in the CNCF are Dapr (Distributed Application Runtime) and KEDA (Kubernetes event-driven autoscaling). Dapr supports Redis for both its state store and pub/sub broker components, and Redis is used by default for local development. KEDA has scalers for Redis that support Redis Lists and Redis Streams, including support for Redis Cluster and Redis Sentinel.


Why run Redis containerized, or on Kubernetes?

Redis is fully supported by managed cloud services such as Azure Cache for Redis, which are often the first choice for customers deploying open-source Redis and Redis Enterprise in the cloud.


However, there are many valid reasons developers choose to deploy their own instances of Redis. Linux is the recommended deployment platform, and Docker is a seamless, cross-platform, option to run Redis locally for development purposes. Redis is especially popular on container-oriented cloud native platforms such as Kubernetes, including managed platforms such as Azure Kubernetes Service. Common reasons to deploy Redis using Docker or Kubernetes include:


  • Portability – developers may want a truly vendor agnostic solution. An open source application, for example, can be packaged into a helm chart and be a single helm install away, on any Kubernetes cluster, anywhere.
  • Local development – Whether using Docker Compose, or KIND (Kubernetes IN Docker), the benefits of a local, containerized, development environment which closely mirrors a production environment in the cloud can be significant.
  • Scalability – Rather than remain static, it may make sense for Redis to scale with your application. Whether as a side-car cache to specific components of your application, or scaled in-cluster with the Horizontal Pod Autoscaler, Kubernetes is often a great choice for scalable compute.
  • Monitoring – Kubernetes has a rich observability ecosystem and the ability to monitor Redis with tools like Prometheus and Grafana is no exception.

Resilient and highly-available Redis

While Kubernetes can help make Redis more resilient – even a self-healing singleton Kubernetes service can be better than a stand-alone virtual machine – true high-availability is often desirable. For example, if you are using Redis as a cache, your application may be able to survive the temporary unavailability of Redis under modest load. However, under significant load, the loss of a cache may cause the application to fail.


In this post we explore how we can deploy a highly available Redis Cluster to Kubernetes.


Deploy AKS

In this post we will be using an Azure Kubernetes Service (AKS) cluster, which we will deploy using the Azure CLI. Make sure you have the Azure CLI installed before you continue.


Create a Resource Group

az group create --name my-aks \
  --location eastus

Create an AKS cluster

az aks create --resource-group my-aks \
  --name aks1 \
  --node-count 3 \
  --enable-addons monitoring

Install kubectl if you do not have it installed already

az aks install-cli

Configure kubectl to authenticate to your cluster

az aks get-credentials --resource-group my-aks \
  --name aks1


Deploy Redis Cluster using the Bitnami Helm chart repository

Helm is the easiest way to deploy complex applications to Kubernetes. Bitnami provides helm charts for Redis as part of its Bitnami Stacks for Kubernetes which are fully supported on AKS, and actively maintained on GitHub.


Redis™ Cluster Packaged By Bitnami For Kubernetes provides two options for a production-grade Redis deployment on Kubernetes. Redis Cluster enables sharding and disaster recovery but supports a single database, while Redis Sentinel uses a master-replica topology, with a single master, and supports multiple databases. You can read more about replication in the Redis documentation. Let’s install Redis Cluster.


Add the bitnami helm chart repository.

helm repo add bitnami https://charts.bitnami.com/bitnami

Install the redis-cluster helm chart.

helm install my-redis bitnami/redis-cluster

You will be prompted to set a REDIS_PASSWORD environment variable extracted from the Kubernetes secret. Note that this is a bash command (it works in Windows Subsystem for Linux, macOS, Linux, etc.); if you use another shell, adjust the syntax accordingly.

export REDIS_PASSWORD=$(kubectl get secret --namespace "default" my-redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 --decode)
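Kubernetes stores Secret values base64-encoded, which is why the command above pipes the output through base64 --decode. A quick illustration in Python of what that decoding step does (the encoded string here is a made-up example, not a real password):

```python
import base64

# Values under .data in a Kubernetes Secret are base64-encoded strings,
# so they must be decoded before use. Hypothetical example value:
encoded = "aHVudGVyMg=="
password = base64.b64decode(encoded).decode("utf-8")
print(password)  # hunter2
```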


Explore with redis-cli

Run a pod inside the cluster that will enable us to connect to Redis using the REDIS_PASSWORD environment variable you set earlier.

kubectl run --namespace default my-redis-redis-cluster-client \
  --rm --tty -i --restart='Never' \
  --env REDIS_PASSWORD=$REDIS_PASSWORD \
  --image docker.io/bitnami/redis-cluster -- bash

Connect to the Redis cluster using redis-cli. Note that the -c switch is required when connecting to a cluster.

redis-cli -c -h my-redis-redis-cluster -a $REDIS_PASSWORD

Next run four set commands from the CLI:

set one hello
set two world
set three goodbye
set four world

You will see output that indicates the client is being redirected to multiple nodes.

> set one hello
-> Redirected to slot [9084] located at <node-ip>:6379
OK
> set two world
-> Redirected to slot [2127] located at <node-ip>:6379
OK
> set three goodbye
-> Redirected to slot [13861] located at <node-ip>:6379
OK
> set four world
-> Redirected to slot [8296] located at <node-ip>:6379
OK

The same will apply when you get the values.

get one
get two
get three
get four
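The redirections above are driven by Redis Cluster's key-to-slot mapping: every key hashes to one of 16384 slots via CRC16 (the XMODEM variant), and each node owns a range of slots. Here is a minimal sketch of that algorithm in Python, reimplemented for illustration (it is not code from redis-cli):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    """Map a key to one of 16384 cluster slots, honouring {hash tags}."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:  # non-empty tag: hash only its contents
            key = key[start + 1:end]
    return crc16(key) % 16384

print(hash_slot(b"one"))  # each key maps to a slot between 0 and 16383
print(hash_slot(b"{user}:a") == hash_slot(b"{user}:b"))  # True: shared hash tag
```

Keys that share a hash tag, like {user}:a and {user}:b above, land on the same slot, which is how multi-key operations remain possible in a sharded cluster.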

For a deeper dive into the functionality, Redis has an excellent Redis Cluster Tutorial which you can follow from the Playing with the cluster section. It includes sections on Writing an example app with redis-rb-cluster, Resharding the cluster, Scripting a resharding operation, A more interesting example application (with consistency-test.rb), and Testing the failover.


When testing a failover, choose a pod to delete.

kubectl get pods

You will see the pods as follows.

NAME                            READY   STATUS    RESTARTS   AGE
my-redis-redis-cluster-0        1/1     Running   0          33m
my-redis-redis-cluster-1        1/1     Running   0          33m
my-redis-redis-cluster-2        1/1     Running   0          33m
my-redis-redis-cluster-3        1/1     Running   0          33m
my-redis-redis-cluster-4        1/1     Running   0          33m
my-redis-redis-cluster-5        1/1     Running   0          33m
my-redis-redis-cluster-client   1/1     Running   0          25m

Then delete a pod. It will automatically be re-created, and if you run kubectl get pods again you will see its status as ContainerCreating.

kubectl delete pod my-redis-redis-cluster-3

Next, we will connect to our cluster with Python and Go.


Explore with Python

The official Redis client for Python, redis/redis-py, now supports Redis Cluster. Previously, Python users may have used Grokzen/redis-py-cluster, on which the official library’s support is now based, with the original repo becoming a “laboratory for redis and cluster moving forward”.


Open a new terminal and set the REDIS_PASSWORD environment variable.

export REDIS_PASSWORD=$(kubectl get secret --namespace "default" my-redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 --decode)

Run a python container interactively.

kubectl run --namespace default python-redis-cluster-test \
  --rm --tty -i --restart='Never' \
  --env REDIS_PASSWORD=$REDIS_PASSWORD \
  --image python -- bash

Install the official redis/redis-py package using pip.

pip install redis

Run python in interactive mode.

python

Paste the following snippet at the prompt (>>>).

from redis.cluster import RedisCluster as Redis
import os

rc = Redis(host='my-redis-redis-cluster', port=6379, password=os.environ['REDIS_PASSWORD'])

Now you have a fully authenticated client, rc.


Output the nodes.

rc.get_nodes()

You will see the nodes listed as follows.

>>> rc.get_nodes()
[[host=,port=6379,name=,server_type=primary,redis_connection=Redis<ConnectionPool<Connection<host=,port=6379,db=0>>>], [host=,port=6379,name=,server_type=replica,redis_connection=Redis<ConnectionPool<Connection<host=,port=6379,db=0>>>], [host=,port=6379,name=,server_type=primary,redis_connection=Redis<ConnectionPool<Connection<host=,port=6379,db=0>>>], [host=,port=6379,name=,server_type=replica,redis_connection=Redis<ConnectionPool<Connection<host=,port=6379,db=0>>>], [host=,port=6379,name=,server_type=primary,redis_connection=Redis<ConnectionPool<Connection<host=,port=6379,db=0>>>], [host=,port=6379,name=,server_type=replica,redis_connection=Redis<ConnectionPool<Connection<host=,port=6379,db=0>>>]]

Set and get some values.

rc.set("one", "goodbye")
rc.get("one")

Finally, create a loop that will lpush the current time to a list called now every two seconds.

import datetime, time
while True:
  now = datetime.datetime.now().strftime("%H:%M:%S")
  rc.lpush('now', now)
  time.sleep(2)

You will see the length of the list increase with each lpush.

>>> import datetime, time
>>> while True:
...   now = datetime.datetime.now().strftime("%H:%M:%S")
...   rc.lpush('now', now)
...   time.sleep(2)

Switch to the terminal where you ran redis-cli in the my-redis-redis-cluster-client pod, and run rpop now repeatedly.

> rpop now
"18:33:14"
> rpop now
"18:33:16"
> rpop now


Explore with Go

Finally, we will deploy a pre-built container application that will log our new items. It provides an example of how to use the go-redis/redis package, using the universal client.


We provide a pre-built container image, and you can view the full source code of the sample on GitHub. The repo includes a Dockerfile that uses a distroless base image, and helper scripts that allow you to run the same container to consume events (the default) or produce events, replicating the Python example above. You can see the code for the produce() and consume() functions in main.go, alongside a simple Set/Get example() from the original go-redis/redis quickstart.


Make sure you have the previous Python snippet above running in a terminal.


In another terminal window, we will set the REDIS_PASSWORD environment variable as before, and also get the internal endpoints for the my-redis-redis-cluster service which we will provide to the REDIS_ADDR.


The go-redis Universal Client lets us connect using a single-node Client, a ClusterClient, or a FailoverClient, depending on whether a single endpoint, multiple endpoints, or a MasterName is provided. However, unlike the Python client, this snippet does not automatically discover the endpoints, and we are using the internal endpoints of the service for testing purposes only – do not do this in production. Finally, you may wonder why we cannot use kubectl port-forward to the my-redis-redis-cluster service and use that for local testing. This is because the client will attempt to connect to each shard directly, which will not be possible as your machine cannot reach those endpoints via the port-forward.

export REDIS_PASSWORD=$(kubectl get secret --namespace "default" my-redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 --decode)

export REDIS_ADDR=$(kubectl get endpoints my-redis-redis-cluster -o=jsonpath='{.subsets[0].addresses[*].ip}')
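The jsonpath query above returns the pod IPs as a single space-separated string, while a cluster client expects a list of host:port pairs. A sketch of that split, shown in Python for brevity (the IPs are hypothetical, and port 6379 is assumed as in the chart's defaults):

```python
import os

# REDIS_ADDR example: "10.244.0.5 10.244.1.7 10.244.2.9" (hypothetical pod IPs)
raw = os.environ.get("REDIS_ADDR", "10.244.0.5 10.244.1.7 10.244.2.9")
addrs = [f"{ip}:6379" for ip in raw.split()]
print(addrs)
```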

kubectl run --namespace default go-redis-sample \
  --rm --tty -i --restart='Never' \
  --env REDIS_PASSWORD=$REDIS_PASSWORD \
  --env REDIS_ADDR="$REDIS_ADDR" \
  --image <sample-image>

To run the above as a producer, replacing the Python snippet above, you can use the helper script in the do/ directory of the repo, which also includes an example of the docker build command for the image.


Finally, if you want to continue to hack on this inside the cluster, without any other dependencies, you could run the asw101/nvgo container inside your cluster. It bundles neovim and the gopls language server on top of the official golang Docker image.



In this post you have deployed Redis Cluster to your AKS cluster with helm, and explored it interactively with the Redis CLI, Python, and a pre-built Go container.


At this stage, you may wish to build another container in the language of your choice to further explore Redis within your Kubernetes environment. Or perhaps even try it with Dapr or KEDA.


Once you have finished exploring, you should delete the my-aks resource group for your AKS cluster to avoid any further charges.

az group delete -n my-aks


by Aaron Wislang (@as_w)
