Redis is a high-performance key-value storage system that is widely used in microservices architectures. To use the advanced features provided by Redis Cluster mode, we normally have to change the client code, which makes application upgrades and maintenance difficult. With Istio and Envoy, we can implement client-transparent Redis Cluster data sharding without modifying the client code, and provide advanced traffic management features such as read/write separation and traffic mirroring.

Redis Cluster

A common use of Redis is as a data cache. By adding a Redis cache layer between the application servers and the database server, the application servers can avoid a large number of read operations hitting the database, reduce the risk of the database responding slowly or even crashing under heavy load, and significantly improve the robustness of the whole system. How Redis works as a data cache is shown below:

In a small-scale system, a single Redis node like the one shown in the figure above can serve the cache layer well. When a large amount of data needs to be cached, a single Redis server can no longer satisfy the caching requirements of all application servers; moreover, when that single Redis instance fails, a large number of read requests go directly to the back-end database, and the resulting spike in load can affect the stability of the whole system. We can use Redis Cluster to shard the cached data and place different data on different Redis shards, increasing the capacity of the Redis cache layer. Within each shard, multiple Replica nodes can also be used to share the load of cache read requests and provide high availability for Redis. A system using Redis Cluster is shown below:

As the figure shows, in Redis Cluster mode the client must send read and write operations for different keys to different Redis nodes in the cluster according to the cluster's sharding rules, so the client has to understand the topology of the Redis Cluster. This makes it impossible to smoothly migrate an application from a standalone Redis node to a Redis Cluster without modifying the client. In addition, because the client needs to know the internal topology of the Redis Cluster, the client code becomes coupled with the operation and maintenance of the Redis Cluster: for example, to implement read/write separation or traffic mirroring, every client would have to be modified and redeployed.
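
The sharding rule itself is simple: a key is mapped to one of 16384 hash slots by CRC16(key) mod 16384, and each shard owns a range of slots. The commands below are a minimal illustration of what a cluster-aware client has to handle; <any-cluster-node> is a placeholder for the address of any node in the cluster:

$ redis-cli -h <any-cluster-node> cluster keyslot b   # shows which hash slot key "b" maps to
$ redis-cli -h <any-cluster-node> set b b             # a node that does not own that slot replies with a MOVED redirection error
$ redis-cli -h <any-cluster-node> -c set b b          # with -c, redis-cli follows the MOVED redirection to the owning node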

In this scenario, we can place an Envoy proxy between the application servers and the Redis Cluster to route cache read and write requests from the application to the correct Redis node. A microservice system has a large number of application processes that need to access the cache, so to avoid a single point of failure and a performance bottleneck, we deploy an Envoy proxy as a sidecar for each application process. Meanwhile, to simplify the management of these proxies, we can use Istio as the control plane to configure all Envoy proxies uniformly, as shown in the following figure:

In the remainder of this article, we will describe how to manage a Redis Cluster through Istio and Envoy, implementing data sharding that is transparent to the client, as well as advanced routing strategies such as read/write separation and traffic mirroring.

Deploy Istio

Pilot supports the Redis protocol, but its functionality is limited: only one default route can be configured for the Redis proxy, the Redis Cluster mode is not supported, and advanced traffic management features such as data sharding, read/write separation, and traffic mirroring cannot be implemented. To enable Istio to deliver the Redis Cluster configuration to the Envoy sidecars, we modified the EnvoyFilter code to support EnvoyFilter's "REPLACE" operation. The PR Implement REPLACE Operation for EnvoyFilter Patch has been submitted to the Istio community, merged into the main branch, and will be released in a later version of Istio.

At the time of this writing, the PR has not been included in the latest Istio release, 1.7.3, so I built a Pilot image that enables EnvoyFilter's "REPLACE" operation. When installing Istio, we need to specify this Pilot image in the istioctl command, as shown below:

$ cd istio-1.7.3/bin
$ ./istioctl install --set components.pilot.hub=zhaohuabing --set components.pilot.tag=1.7.3-enable-ef-replace

Note: If you are using an Istio version later than 1.7.3 that already includes the PR, you can use the default Pilot image of that Istio release.

Deploy the Redis Cluster

Please download the code for the following example from github.com/zhaohuabing…:

$ git clone https://github.com/zhaohuabing/istio-redis-culster.git
$ cd istio-redis-culster

We create a "redis" namespace to deploy the Redis Cluster for this example.

$ kubectl create ns redis
namespace/redis created

Deploy the StatefulSet and ConfigMap for the Redis servers:

$ kubectl apply -f k8s/redis-cluster.yaml -n redis
configmap/redis-cluster created
statefulset.apps/redis-cluster created
service/redis-cluster created

Verify the Redis deployment

Verify that the Redis nodes are up and running:

$ kubectl get pod -n redis
NAME              READY   STATUS    RESTARTS   AGE
redis-cluster-0   2/2     Running   0          4m25s
redis-cluster-1   2/2     Running   0          3m56s
redis-cluster-2   2/2     Running   0          3m28s
redis-cluster-3   2/2     Running   0          2m58s
redis-cluster-4   2/2     Running   0          2m27s
redis-cluster-5   2/2     Running   0          117s

Create a Redis Cluster

In the previous step, we deployed six Redis nodes using a StatefulSet, but these six nodes are still independent of each other and do not yet form a cluster. Next we use the redis-cli --cluster create command to join them into a Redis Cluster.

$ kubectl exec -it redis-cluster-0 -n redis -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ' -n redis)
Defaulting container name to redis.
Use 'kubectl describe pod/redis-cluster-0 -n redis' to see all of the containers in this pod.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.16.0.72:6379 to 172.16.0.138:6379
Adding replica 172.16.0.201:6379 to 172.16.1.52:6379
Adding replica 172.16.0.139:6379 to 172.16.1.53:6379
M: 8fdc7aa28a6217b049a2265b87bff9723f202af0 172.16.0.138:6379
   slots:[0-5460] (5461 slots) master
M: 4dd6c1fecbbe4527e7d0de61b655e8b74b411e4c 172.16.1.52:6379
   slots:[5461-10922] (5462 slots) master
M: 0b86a0fbe76cdd4b48434b616b759936ca99d71c 172.16.1.53:6379
   slots:[10923-16383] (5461 slots) master
S: 94b139d247e9274b553c82fbbc6897bfd6d7f693 172.16.0.139:6379
   replicates 0b86a0fbe76cdd4b48434b616b759936ca99d71c
S: e293d25881c3cf6db86034cd9c26a1af29bc585a 172.16.0.72:6379
   replicates 8fdc7aa28a6217b049a2265b87bff9723f202af0
S: ab897de0eca1376558e006c5b0a49f5004252eb6 172.16.0.201:6379
   replicates 4dd6c1fecbbe4527e7d0de61b655e8b74b411e4c
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
>>> Performing Cluster Check (using node 172.16.0.138:6379)
M: 8fdc7aa28a6217b049a2265b87bff9723f202af0 172.16.0.138:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 4dd6c1fecbbe4527e7d0de61b655e8b74b411e4c 172.16.1.52:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 94b139d247e9274b553c82fbbc6897bfd6d7f693 172.16.0.139:6379
   slots: (0 slots) slave
   replicates 0b86a0fbe76cdd4b48434b616b759936ca99d71c
M: 0b86a0fbe76cdd4b48434b616b759936ca99d71c 172.16.1.53:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: ab897de0eca1376558e006c5b0a49f5004252eb6 172.16.0.201:6379
   slots: (0 slots) slave
   replicates 4dd6c1fecbbe4527e7d0de61b655e8b74b411e4c
S: e293d25881c3cf6db86034cd9c26a1af29bc585a 172.16.0.72:6379
   slots: (0 slots) slave
   replicates 8fdc7aa28a6217b049a2265b87bff9723f202af0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Verify the Redis Cluster

You can run the cluster info command to view the configuration information of the Redis Cluster and its member nodes to verify whether the cluster was created successfully.

$ kubectl exec -it redis-cluster-0 -c redis -n redis -- redis-cli cluster info 
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:206
cluster_stats_messages_pong_sent:210
cluster_stats_messages_sent:416
cluster_stats_messages_ping_received:205
cluster_stats_messages_pong_received:206
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:416
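
To see the individual member nodes and their master/replica roles, you can also run the cluster nodes command; its output (omitted here) lists one line per node with the node ID, address, role, and the slot ranges it serves:

$ kubectl exec -it redis-cluster-0 -c redis -n redis -- redis-cli cluster nodes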

Deploy the test client

We deploy a client to send test commands:

$ kubectl apply -f k8s/redis-client.yaml -n redis
deployment.apps/redis-client created

Deliver the Envoy configuration associated with the Redis Cluster through Istio

In the following steps, we will deliver the Redis Cluster-related configuration to the Envoy sidecars via Istio to enable the advanced features of Redis Cluster, including data sharding, read/write separation, and traffic mirroring, without any changes to the client.

Create Envoy Redis Cluster

Envoy provides a cluster type called "envoy.clusters.redis" for connecting to a back-end Redis Cluster. Through this cluster, Envoy retrieves the topology of the back-end Redis Cluster, including how many shards there are, which slots each shard is responsible for, and which nodes each shard contains, so that it can distribute requests from clients to the correct Redis node.

Use EnvoyFilter to create the desired Envoy Redis Cluster:

$ kubectl apply -f istio/envoyfilter-custom-redis-cluster.yaml
envoyfilter.networking.istio.io/custom-redis-cluster created

Create Envoy Redis Proxy

The default LDS configuration delivered by Istio contains a TCP proxy filter; we need to replace it with a Redis proxy filter.

Since EnvoyFilter's "REPLACE" operation is not yet supported in Istio 1.7.3, we first need to update the EnvoyFilter CRD definition before we can create this EnvoyFilter:

$ kubectl apply -f istio/envoyfilter-crd.yaml 
customresourcedefinition.apiextensions.k8s.io/envoyfilters.networking.istio.io configured

Then we apply the EnvoyFilter that replaces the TCP proxy filter with a Redis proxy filter, so that Envoy can proxy Redis operation requests from the clients:

$ sed -i .bak "s/\${REDIS_VIP}/`kubectl get svc redis-cluster -n redis -o=jsonpath='{.spec.clusterIP}'`/" istio/envoyfilter-redis-proxy.yaml
$ kubectl apply -f istio/envoyfilter-redis-proxy.yaml
envoyfilter.networking.istio.io/add-redis-proxy created

Verify Redis Cluster functionality

With everything in place, let’s verify the capabilities of Redis Cluster.

Redis data sharding

After the configuration defined in the EnvoyFilters is pushed to Envoy via Istio, Envoy automatically discovers the topology of the back-end Redis Cluster and automatically distributes requests to the correct node in the cluster based on the key in the client request.

According to the command-line output from the cluster-creation step, we can see the topology of the Redis Cluster: there are three shards, and each shard has one Master node and one Slave (Replica) node. The client accesses the Redis Cluster through the Envoy proxy deployed in the same Pod as the client, as shown below:

Master and Slave addresses of each shard in the Redis Cluster:

Shard[0]  Master[0]  redis-cluster-0  172.16.0.138:6379  replica  redis-cluster-4  172.16.0.72:6379   -> Slots 0 - 5460
Shard[1]  Master[1]  redis-cluster-1  172.16.1.52:6379   replica  redis-cluster-5  172.16.0.201:6379  -> Slots 5461 - 10922
Shard[2]  Master[2]  redis-cluster-2  172.16.1.53:6379   replica  redis-cluster-3  172.16.0.139:6379  -> Slots 10923 - 16383

Note: If you deploy this example in your own Kubernetes cluster, the IP addresses and topology of the Redis nodes may differ slightly, but the basic structure should be similar.

We send set requests with different keys to the Redis Cluster from the client:

$ kubectl exec -it `kubectl get pod -l app=redis-client -n redis -o jsonpath="{.items[0].metadata.name}"` -c redis-client -n redis -- redis-cli -h redis-cluster
redis-cluster:6379> set a a
OK
redis-cluster:6379> set b b
OK
redis-cluster:6379> set c c
OK
redis-cluster:6379> set d d
OK
redis-cluster:6379> set e e
OK
redis-cluster:6379> set f f
OK
redis-cluster:6379> set g g
OK
redis-cluster:6379> set h h
OK

From the client's point of view, all the requests succeed. We can then use the scan command on the server side to view the data on each node:

View the data in Shard[0]. The master node is redis-cluster-0 and the slave node is redis-cluster-4:

$ kubectl exec redis-cluster-0 -c redis -n redis -- redis-cli --scan
b
f
$ kubectl exec redis-cluster-4 -c redis -n redis -- redis-cli --scan
f
b

View the data in Shard[1]. The master node is redis-cluster-1 and the slave node is redis-cluster-5:

$ kubectl exec redis-cluster-1 -c redis -n redis -- redis-cli --scan
c
g
$ kubectl exec redis-cluster-5 -c redis -n redis -- redis-cli --scan
g
c

View the data in Shard[2]. The master node is redis-cluster-2 and the slave node is redis-cluster-3:

$ kubectl exec redis-cluster-2 -c redis -n redis -- redis-cli --scan
a
e
d
h
$ kubectl exec redis-cluster-3 -c redis -n redis -- redis-cli --scan
h
e
d
a

As the results above show, the data set by the client is distributed across the three shards of the Redis Cluster. The distribution is handled automatically by the Envoy Redis proxy; the client is unaware of the back-end Redis Cluster, and to the client the interaction with the cluster looks the same as interacting with a single Redis node.

Using this approach, we can seamlessly migrate Redis from a single node to cluster mode when the application's business grows and a single Redis node comes under too much pressure. In cluster mode, data with different keys is cached in different shards. We can scale a single shard by adding Replica nodes to it, or scale the whole cluster by adding shards, to cope with the increasing data pressure of a continuously growing business. Since Envoy is aware of the Redis Cluster topology and handles data distribution itself, the migration and scaling process requires no changes to the clients and does not affect online business operations.
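
As an illustration of how a shard or the whole cluster could be scaled later, the redis-cli cluster commands below show the general shape of such an operation; the addresses and the master ID are placeholders, and this is not part of the example deployment:

# Add a new node as a replica of an existing master (scales a single shard)
$ redis-cli --cluster add-node <new-node-ip>:6379 <existing-node-ip>:6379 --cluster-slave --cluster-master-id <master-node-id>

# Or add a new node as a master and then move some hash slots to it (adds a shard)
$ redis-cli --cluster add-node <new-node-ip>:6379 <existing-node-ip>:6379
$ redis-cli --cluster reshard <existing-node-ip>:6379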

Redis read/write separation

A Redis shard usually has one Master node and one or more Slave (Replica) nodes. The Master handles write operations and synchronizes data changes to the Slaves. When the read load from the application is high, more Replicas can be added to a shard to share it. Envoy Redis Proxy supports the following read policies (a configuration fragment showing where the policy is set follows this list):

  • MASTER: Read only from the Master node. Use this mode when clients require strong data consistency. It puts heavy pressure on the Master, and the other nodes in the shard cannot be used to share the read load.
  • PREFER_MASTER: Read from the Master node first; read from Replica nodes when the Master is unavailable.
  • REPLICA: Read only from Replica nodes. Because data is replicated from the Master to the Replicas asynchronously, stale data may be read in this mode, so it is suitable for scenarios where clients do not have strict data consistency requirements. In this mode, multiple Replica nodes can share the read load from clients.
  • PREFER_REPLICA: Read from Replica nodes first; read from the Master when the Replicas are unavailable.
  • ANY: Read from any node.
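
The read policy is set in the settings section of the Redis proxy filter. The fragment below is taken from the add-redis-proxy EnvoyFilter shown in full in the implementation section later in this article:

settings:
  op_timeout: 5s
  enable_redirection: true
  enable_command_stats: true
  read_policy: REPLICA               # Send read requests to replica nodes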

In the EnvoyFilter delivered earlier, we set the read policy of the Envoy Redis proxy to "REPLICA", so client reads should only be sent to Replica nodes. Let's verify the read/write separation policy with the following commands.

Initiate a series of get and set operations with the key "b" through the client:

$ kubectl exec -it `kubectl get pod -l app=redis-client -n redis -o jsonpath="{.items[0].metadata.name}"` -c redis-client -n redis -- redis-cli -h redis-cluster

redis-cluster:6379> get b
"b"
redis-cluster:6379> get b
"b"
redis-cluster:6379> get b
"b"
redis-cluster:6379> set b bb
OK
redis-cluster:6379> get b
"bb"
redis-cluster:6379> 

From the Redis Cluster topology above, we know that key "b" belongs to Shard[0]. We can run the redis-cli monitor command to watch the commands received by the Master and Replica nodes of that shard.

The Master node:

$ kubectl exec redis-cluster-0 -c redis -n redis -- redis-cli monitor

The Slave node:

$ kubectl exec redis-cluster-4 -c redis -n redis -- redis-cli monitor

As you can see in the figure below, all get requests are dispatched by Envoy to the Replica node.

Redis traffic mirroring

Envoy Redis Proxy supports traffic mirroring: requests from clients are simultaneously sent to a mirror Redis server/cluster. Traffic mirroring is a very useful feature; for example, it can be used to import online data from the production environment into a test environment, so that online data can be used to simulate the application as realistically as possible without affecting the normal use of online users.
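
In addition to mirroring every request, the Envoy Redis proxy's request_mirror_policy can also mirror only a sampled fraction of traffic through a runtime fraction. The fragment below is a sketch of that option based on the Envoy Redis proxy configuration model; it is not used in this example:

request_mirror_policy:
- cluster: outbound|6379||redis-mirror.redis.svc.cluster.local
  exclude_read_commands: true          # Mirror write commands only
  runtime_fraction:                    # Mirror roughly half of the write commands (sketch only)
    default_value:
      numerator: 50
      denominator: HUNDRED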

We create a single Redis node to be used as the mirror server:

$ kubectl apply -f k8s/redis-mirror.yaml -n redis 
deployment.apps/redis-mirror created
service/redis-mirror created

Use an EnvoyFilter to enable the mirroring policy:

$ sed -i .bak "s/\${REDIS_VIP}/`kubectl get svc redis-cluster -n redis -o=jsonpath='{.spec.clusterIP}'`/" istio/envoyfilter-redis-proxy-with-mirror.yaml
$ kubectl apply -f istio/envoyfilter-redis-proxy-with-mirror.yaml
envoyfilter.networking.istio.io/add-redis-proxy configured

Initiate a series of get and set operations with the key "b" through the client:

$ kubectl exec -it `kubectl get pod -l app=redis-client -n redis -o jsonpath="{.items[0].metadata.name}"` -c redis-client -n redis -- redis-cli -h redis-cluster
redis-cluster:6379> get b
"b"
redis-cluster:6379> get b
"b"
redis-cluster:6379> get b
"b"
redis-cluster:6379> set b bb
OK
redis-cluster:6379> get b
"bb"
redis-cluster:6379> set b bbb
OK
redis-cluster:6379> get b
"bbb"
redis-cluster:6379> get b
"bbb"

You can run the redis-cli monitor command to view commands received on the Master, Replica, and mirror nodes respectively.

The Master node:

$ kubectl exec redis-cluster-0 -c redis -n redis -- redis-cli monitor

The Slave node:

$ kubectl exec redis-cluster-4 -c redis -n redis -- redis-cli monitor

Mirror node:

$ kubectl exec -it `kubectl get pod -l app=redis-mirror -n redis -o jsonpath="{.items[0].metadata.name}"` -c redis-mirror -n redis -- redis-cli monitor

As you can see in the image below, all set requests are also dispatched by Envoy to the mirror node.

How it works

In the previous steps, we created two EnvoyFilter configuration objects in Istio. These EnvoyFilters modify the Envoy proxy configuration in two parts: the Redis Proxy network filter configuration and the Redis Cluster configuration.

The following EnvoyFilter replaces the TCP proxy network filter in the listener created by Pilot for the Redis service with a network filter of type "type.googleapis.com/envoy.config.filter.network.redis_proxy.v2.RedisProxy". The default route of this Redis proxy points to custom-redis-cluster, and the read/write separation policy and traffic mirroring policy are also configured here.

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-redis-proxy
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        name: ${REDIS_VIP}_6379             # Replace REDIS_VIP with the cluster IP of the "redis-cluster" service
        filterChain:
          filter:
            name: "envoy.filters.network.tcp_proxy"
    patch:
      operation: REPLACE
      value:
        name: envoy.redis_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.redis_proxy.v2.RedisProxy
          stat_prefix: redis_stats
          prefix_routes:
            catch_all_route:
              request_mirror_policy:            # Send requests to the mirror cluster
              - cluster: outbound|6379||redis-mirror.redis.svc.cluster.local
                exclude_read_commands: True     # Mirror write commands only:
              cluster: custom-redis-cluster
          settings:
            op_timeout: 5s
            enable_redirection: true
            enable_command_stats: true
            read_policy: REPLICA               # Send read requests to replica

The following EnvoyFilter creates a cluster of type "envoy.clusters.redis" named "custom-redis-cluster" in the CDS configuration issued by Pilot. This cluster uses the CLUSTER SLOTS command to query the cluster topology from a random node in the Redis Cluster and saves the topology locally, so that it can distribute requests from clients to the correct Redis node in the cluster.

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: custom-redis-cluster
  namespace: istio-system
spec:
  configPatches:
  - applyTo: CLUSTER
    patch:
      operation: INSERT_FIRST
      value:
        name: "custom-redis-cluster"
        connect_timeout: 0.5s
        lb_policy: CLUSTER_PROVIDED
        load_assignment:
          cluster_name: custom-redis-cluster
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-0.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-1.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-2.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-3.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-4.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-5.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
        cluster_type:
          name: envoy.clusters.redis
          typed_config:
            "@type": type.googleapis.com/google.protobuf.Struct
            value:
              cluster_refresh_rate: 5s
              cluster_refresh_timeout: 3s
              redirect_refresh_interval: 5s
              redirect_refresh_threshold: 5
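
If you want to confirm that the two EnvoyFilter patches actually took effect, one way (assuming istioctl is available locally) is to dump the Envoy configuration of a client Pod and look for the Redis proxy listener and the custom-redis-cluster cluster; the Pod name below is a placeholder:

# Check that custom-redis-cluster has been added to the client sidecar's clusters
$ istioctl proxy-config cluster <redis-client-pod>.redis | grep custom-redis-cluster

# Check that the Redis service listener now contains the Redis proxy filter
$ istioctl proxy-config listener <redis-client-pod>.redis -o json | grep redis_proxy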

Summary

This article describes how to use Envoy to provide client-transparent Redis data sharding for microservice applications, and how to manage the Redis Cluster configuration of many Envoy proxies in a system through Istio. As we can see, Istio and Envoy can greatly simplify the coding and configuration work required for clients to use a Redis Cluster, and the operation and maintenance strategy of the Redis Cluster can be changed online to achieve advanced traffic management such as read/write separation and traffic mirroring. Of course, introducing Istio and Envoy does not reduce the complexity of the whole system; rather, it centralizes the maintenance of the Redis Cluster, moving it from scattered application code into the service mesh infrastructure layer. For the vast majority of application developers, business value comes mainly from application code, and it is not cost-effective to invest a lot of effort in such infrastructure. It is recommended to use a cloud-native Service Mesh service such as TCM (Tencent Cloud Mesh) to quickly bring the traffic management and service governance capabilities of a service mesh to microservice applications, without having to install, maintain, and upgrade the service mesh infrastructure yourself.

Reference documentation

  • Rancher.com/blog/2019/d…
  • medium.com/@fr33m0nk/m…
  • Implement REPLACE operation for EnvoyFilter patch
