On May 11, 2019, the OpenResty community hosted the OpenResty × Open Talk national tour salon in Wuhan, where Shao Haiyang, chief evangelist at Upyun (Youpai Cloud), shared “Dynamic Service Routing Scheme Based on OpenResty”.

OpenResty × Open Talk is a national salon series sponsored by the OpenResty community. It invites experienced OpenResty engineers to share their hands-on experience, promotes communication and learning among OpenResty users, and advances the OpenResty open source project. The event has already been held in Shenzhen, Beijing, and Wuhan, and will come to Shanghai, Guangzhou, Hangzhou, and other cities in turn.

Shao Haiyang, chief evangelist, operations director, and senior system operations architect at Upyun (Youpai Cloud), has years of experience in CDN architecture design, operations development, and team management. He is proficient in Linux and embedded systems, high-performance Internet architecture design, CDN acceleration, KVM virtualization, and OpenStack cloud platforms. He currently focuses on container and virtualization technologies in Upyun's private cloud practice.

Here is the full text:

Today I will introduce a dynamic service routing solution based on ngx_lua. It is one component of our overall containerization effort: service routing is a major challenge in containerization. The solution is now open source, so if you run into the same problem, you can use it directly.

Zero-downtime service updates

How do you update a service without dropping requests? Failures are not allowed during an update: even if only a few requests fail, our reputation suffers, and if it escalates into an incident, money is lost. That is why we do dynamic service routing.

Service routing mainly includes the following parts:

  • Service registration: when a service provider starts up, it registers with the service discovery component, declaring the service name, IP, port, and so on;
  • Service discovery: a central place for managing services, keeping track of which services are available and where they are;
  • Load balancing: since many containers provide the same service, you must decide how to balance load among them.

There are many options for service discovery, but their application scenarios and implementation languages differ. Zookeeper is an old open source project, relatively mature but demanding on resources; it is one of the earliest solutions we used, and our Kafka and message queues still depend on it. etcd and Consul are up-and-comers: Kubernetes depends on etcd, so etcd is widely used in container orchestration. Upyun uses Consul for service registration and discovery. It is a one-stop solution that is easy to deploy, visualize, and maintain; besides KV storage, it natively supports service health checking, multiple data centers, and DNS.

There are also many load-balancing options. One advantage of LVS is that it works at layer 4, a lower level: if performance is not enough, you can simply add another LVS without disturbing the existing network structure; on the other hand, it is very hard to extend. HAProxy and Nginx each have their strengths. HAProxy consumes less CPU on HTTP header parsing: for pure forwarding, such as a WAF, HAProxy takes roughly 10% of the CPU, while Nginx doing the same pure header forwarding takes about 20% to 25%, but Nginx is more extensible. Nginx can forward and load-balance TCP, UDP, and HTTP, whereas HAProxy supports only TCP and HTTP. The biggest recent change in HAProxy is that it has been refactored with Lua and will integrate closely with Lua going forward, which adds another capability, and it is embracing the Kubernetes ecosystem. Our choice is Nginx, because it focuses on HTTP, extends well, and also supports TCP.

As shown above, we put Nginx and Consul in the same diagram; things not closely related to the services are omitted to keep the focus. We manage services with Mesos, Docker, and Marathon. One special service is Registrator, which runs as a container on each physical machine, watches the Docker API, and regularly reports container status to Consul. Nginx does the load balancing, because requests currently go from Nginx straight into the containers.

How are services in Consul updated to Nginx

In the previous figure, the paths from Nginx to the containers and from service registration to Consul are no problem; the problem is the path from Consul to Nginx. Consul holds all the information, but how do we get it into Nginx? When a new service comes up or a service goes down, Consul knows; how do we make Nginx drop the failed service and add the new ones? That is what we need to figure out.

So the issue is how services in Consul get updated into Nginx; once you solve this, the Nginx + Consul + Registrator model is complete. There are several solutions to this problem:

1. Scheme 1: consul-template

consul-template watches keys in Consul and triggers the execution of a script: when a watched service changes, the configuration is regenerated from a preconfigured template, and the script is executed as the final step.

When a key/value changes, the template is rendered into a real configuration file, then a local command, nginx -s reload, is executed to reload the configuration, and the new service takes effect.
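
To make this concrete, here is a minimal consul-template sketch of the kind of setup described above; the template syntax is consul-template's own, but the service name, file paths, and ports are hypothetical:

```
# nginx.ctmpl — render an Nginx upstream block from Consul's service catalog
upstream img {
  {{ range service "img" }}
  server {{ .Address }}:{{ .Port }};{{ end }}
}
```

Running `consul-template -template "nginx.ctmpl:/etc/nginx/conf.d/img.conf:nginx -s reload"` then re-renders the file on every change and executes the reload command as the final step.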

Of course, reload does have some disadvantages:

  • First, frequent reloads incur a performance cost;
  • Second, old worker processes keep shutting down gracefully: if connections are still open, the old processes linger, and you never know when the reload has really finished;
  • Third, in-process caches are invalidated: we cache database results and even code locally, and a reload flushes all of it;
  • Most importantly, it contradicts the design intent. The design goal is to make operations easy without affecting in-flight requests; reloading for every change is like using Docker as a virtual machine.

2. Scheme 2: Internal DNS scheme

DNS is another common solution: change the server directive from an IP address to a domain name that resolves to a batch of IPs. This sounds perfect, and Consul itself supports DNS, so we would not need to maintain a separate DNS server; we would just change the IP to a domain name.
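
For reference, Consul can answer DNS queries itself on its own DNS port (8600 by default); a quick sketch, with a hypothetical service name:

```
# An SRV query returns address and port for each healthy instance;
# a plain A query returns addresses only.
dig @127.0.0.1 -p 8600 img.service.consul SRV
dig @127.0.0.1 -p 8600 img.service.consul A
```

Only the SRV query carries port numbers, which matters for the third drawback below.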

But we felt that DNS was no better than doing reload, because:

  • First, there is an extra layer of DNS resolution, adding extra processing time;
  • Second, DNS caching, which is the main problem: because of the cache, a failing machine is not cut off immediately. To mitigate that, you have to shorten the TTL, which in turn means more lookups;
  • Third, port numbers change. Physical machines usually use a fixed port, and Docker can be configured the same way, but for applications that are not network-sensitive, such as CPU-heavy ones, we bridge the container network, and ports are then assigned randomly. Each container may get a different port, so plain A-record resolution is not feasible.

What we wanted was to modify Nginx's upstream list dynamically through an HTTP interface, and we found a ready-made solution: ngx_http_dyups_module.

3. Scheme 3: ngx_http_dyups_module

ngx_http_dyups_module lets you query current upstream information with GET, update an upstream with POST, and delete an upstream with DELETE.

Here is an example with three requests (a sketch of the commands follows the list):

  • First, a request is sent to the service on port 8080; since there is no upstream at all yet, it returns 502;
  • Second, two server addresses are added with a curl request;
  • Third, the same request as the first now produces a normal response, because the second step added the servers.
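
A sketch of what those three requests could look like, assuming dyups's documented REST interface (ports, upstream name, and server addresses here are illustrative):

```
curl 127.0.0.1:8080/                          # 1) no upstream yet -> 502

# 2) add two servers through the management interface
curl -d "server 10.0.0.1:8001;server 10.0.0.2:8001;" \
     127.0.0.1:8081/upstream/dyhost

curl 127.0.0.1:8080/                          # 3) same request, now a normal response
```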

No reload operations, no configuration file changes: it just works.

The module is very well written, but after using it for a while we dropped it. The main reason was not that it is bad, but that it did not fit our situation:

  • First, it ties you to Nginx's own load-balancing algorithms. We write most of our logic in ngx_lua, and adopting this module means depending heavily on a C module and the balancing algorithms built into it. We have special requirements, such as "local first": prefer the service instances on the same machine. That may sound like a strange balancing policy, but to implement things like it, we would have to modify the C code;
  • Second, secondary development is slow: developing in C is far less efficient than in Lua;
  • Third, a pure-Lua project cannot use it. It is not enough for one project to adopt a scheme; our other projects should be able to use it too.

Features of the dynamic load balancer Slardar

For all these reasons, we started building our own wheel.

The wheel has four parts:

  • The first part is plain Nginx, the base layer: we want to keep using its native directives and retry policies;
  • The second part is the ngx_lua module;
  • The third part is lua-resty-checkups, our Lua module that implements dynamic upstream management. It accounts for roughly 30% of the project and adds active health checks, in about 1,500 lines of code; an equivalent C module would probably need at least 10,000 lines;
  • The fourth part is luasocket, which, being blocking, must never be used while Nginx is processing requests; we use it only at startup.

1. lua-resty-checkups

A brief introduction to the lua-resty-checkups module. It has several functions (a configuration sketch follows the list):

  • First, dynamic upstream management, with synchronization between workers implemented on shared memory;
  • Second, passive health checks, a feature Nginx itself also has;
  • Third, active health checks: the module periodically sends heartbeat packets to the backend, for example every 15 seconds, to check whether the service is alive. The checks can also be customized per service;
  • Fourth, load-balancing algorithms, such as "local first", which saves intranet traffic.
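
A minimal configuration sketch for lua-resty-checkups, following the shape of the project's README (field names should be checked against the version you use; the service name and addresses are made up):

```lua
-- config.lua: global health-check settings plus one upstream cluster
local _M = {}

_M.global = {
    checkup_timer_interval = 15,     -- send a heartbeat every 15 seconds
    checkup_shd_sync_enable = true,  -- sync peer status between workers
}                                    -- through shared memory

_M.img = {                           -- one cluster, keyed by service name
    timeout = 2,
    typ = "general",                 -- plain TCP connect as the heartbeat
    cluster = {
        { servers = {
            { host = "127.0.0.1", port = 12345 },
            { host = "127.0.0.1", port = 12346 },
        } },
    },
}

return _M
```

Per the README, the config is then activated in init_worker_by_lua with checkups.prepare_checker(config) followed by checkups.create_checker(), which starts the heartbeat timer.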

2. Service differentiation

Services are distinguished by the Host header: when a request arrives, Slardar picks the upstream list that matches the request's Host, so the same curl request with a different Host header reaches a different service.
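
A hypothetical illustration (the service names are invented): the same Slardar endpoint routes to different upstream lists depending on the Host header:

```
curl -H "Host: img"   http://127.0.0.1:8080/index.html   # goes to the img service
curl -H "Host: video" http://127.0.0.1:8080/index.html   # goes to the video service
```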

3. Request process

The request process can be divided into three parts: the top part receives the request and loads the worker's code; after execution, the code finds the upstream list corresponding to the request's Host; and then the request is proxied to the chosen server.

4. Dynamic upstream updates

This part works like the dyups C module: the upstream list is updated dynamically through an HTTP interface. After adding servers, the management page shows the two added services, including the server addresses, health-check information, the time of the last status change, and the failure count. Below that is the log of an active health check.

Why active health checks? Most people use only passive checks, that is, you learn about a failure only after a request has failed. Active checking sends heartbeat packets, so you know whether a service is broken before any request is sent to it.

5. Dynamic Lua loading

Dynamic Lua loading is commonly used in game development. In our case, some Lua code runs at the front of the request path, doing parameter conversion and compatibility work for the backend. For example, when the backend is unwilling to make a small adjustment, we handle it along the existing route by rewriting the request first: since we can get at the whole request, including the request body, we can do anything with it.

We can also combine this with access control to do simple parameter checks. According to our statistics, at least 10% of our requests are duplicates, and executing them again is pointless work, so we return 304, meaning the result is the same as last time and the previous result can be used directly. If the backend had to make that judgment, we would have to collect the whole request and forward it, increasing intranet bandwidth; by deciding up front and not forwarding, that bandwidth is saved.

This is an example of dynamic loading. If you push this code into Slardar, it executes; if the code makes delete operations return 403, you can disable deletes immediately just by loading it. Anything you can imagine can be done this way. The whole process is dynamic, and once the code is loaded, it shows up on the status page.
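
A plausible reconstruction of such a snippet (not the actual code from the slide): once pushed into Slardar, it makes every DELETE request to this service fail immediately:

```lua
-- reject DELETE requests with 403
if ngx.req.get_method() == "DELETE" then
    return ngx.exit(ngx.HTTP_FORBIDDEN)
end
```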

Implementation of the dynamic load balancer Slardar

So much for Slardar's features. The implementation consists of three parts: dynamic upstream management, load balancing, and dynamic Lua code loading.

1. Dynamic upstream management

At startup, Slardar loads its configuration from Consul via luasocket. If the process dies unexpectedly, how does it know, when it comes back up, what happened while it was gone? There has to be a place where this state is kept, and we chose Consul. So on startup it loads the upstream lists from Consul, then listens on the management port to receive upstream update commands, and starts a timer that handles synchronization between workers: each worker periodically checks shared memory for updates and applies any it finds.
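
A minimal sketch of the startup load, assuming Consul's KV HTTP API and a hypothetical key; luasocket is blocking, which is exactly why it is confined to startup:

```lua
-- init_by_lua sketch: fetch the persisted upstream list from Consul
local http  = require "socket.http"  -- luasocket, blocking
local cjson = require "cjson.safe"

local body, code = http.request(
    "http://127.0.0.1:8500/v1/kv/slardar/upstreams?raw")

-- a global here is deliberate: workers inherit it after the fork
upstream_config = {}
if code == 200 and body then
    upstream_config = cjson.decode(body) or {}
end
```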

This is a simple flow chart. At the start, the configuration is loaded from Consul; after the fork, the worker processes start with that freshly initialized configuration. Each worker also starts the timer, and whenever there is an update, it goes through this synchronization flow.
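
The per-worker synchronization could look like the sketch below; this is an assumption about the mechanism, not Slardar's actual code:

```lua
-- init_worker_by_lua sketch: poll a version number in shared memory and
-- reload the upstream list whenever another worker has published an update.
local cjson = require "cjson.safe"
local state = ngx.shared.state       -- lua_shared_dict state 10m;

local upstreams, version = {}, 0

local function sync(premature)
    if premature then return end     -- nginx is shutting down
    local v = state:get("version") or 0
    if v ~= version then
        local raw = state:get("upstreams")
        local list = raw and cjson.decode(raw)
        if list then
            upstreams, version = list, v
        end
    end
    ngx.timer.at(1, sync)            -- check again in one second
end

ngx.timer.at(0, sync)
```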

2. Load balancing

For load balancing we rely on balance_by_lua_*: a request comes in and is handed over by upstream's C module to this hook. The configuration is shown here: with the balance_by_lua_* directive we intercept the request into a Lua file, and in that Lua file we can pick one peer out of the list with Lua code, which is exactly the selection that checkups implements.
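
A sketch of that hook, assuming lua-resty-core's ngx.balancer API and checkups' select_peer; the upstream name and file layout are illustrative:

```lua
-- nginx.conf (excerpt):
--   upstream dynamic {
--       server 0.0.0.1;                      -- placeholder, never used
--       balance_by_lua_file lua/balancer.lua;
--   }
--   ...
--   proxy_pass http://dynamic;

-- lua/balancer.lua:
local balancer = require "ngx.balancer"
local checkups = require "resty.checkups.api"

local peer, err = checkups.select_peer(ngx.var.host)  -- pick a peer by Host
if not peer then
    ngx.log(ngx.ERR, "no peer for ", ngx.var.host, ": ", err)
    return ngx.exit(502)
end

local ok, serr = balancer.set_current_peer(peer.host, peer.port)
if not ok then
    ngx.log(ngx.ERR, "failed to set peer: ", serr)
    return ngx.exit(502)
end
```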

The figure above shows the process. Start with the bottom part: checkups.select_peer is our module; according to the Host, it picks the current peer and hands it out, all controlled from Lua. The top part determines whether the request succeeded or failed and, on failure, feeds the status back so the peer's health state is updated.

3. Dynamic Lua loading

The three Lua functions used are loadfile, loadstring, and setfenv. loadfile loads local Lua code from a file; loadstring loads code from a string, for example from Consul or from an HTTP request body; setfenv sets the environment in which the code executes. With these three functions, the code can be loaded; the finer details of our practice are not covered here.
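
A minimal sketch of the loading mechanism itself, using the Lua 5.1 API that LuaJIT/ngx_lua provides; the sandboxing policy shown is an assumption:

```lua
local function load_chunk(src, name)
    local chunk, err = loadstring(src, name)  -- compile only, do not run yet
    if not chunk then
        return nil, "compile failed: " .. err
    end
    -- run pushed code in a controlled environment rather than the global one
    local env = setmetatable({ ngx = ngx }, { __index = _G })
    setfenv(chunk, env)
    return chunk
end

-- src could equally come from Consul or from an HTTP request body
local chunk = assert(load_chunk("ngx.log(ngx.INFO, 'dynamic code loaded')", "=dynamic"))
chunk()
```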

4. Advantages of the dynamic load balancer Slardar

This is the wheel we built, based on the lua-resty-checkups module and balance_by_lua_*. It has the following advantages:

  • It is implemented in pure Lua with no third-party C modules, so secondary development is efficient and the maintenance burden is small;
  • It can keep using Nginx's native proxy_* directives, because we intervene only at the peer-selection stage of a request; once the peer is chosen, Nginx's own directives take over and transfer the data;
  • It fits almost any ngx_lua project, whether a pure-Lua deployment or one that also uses C modules.

What Slardar can do in a microservice architecture

We are also converting some of our existing services to microservices. A microservice architecture splits a relatively large service into smaller ones. Scaling then works differently from migration: you can scale just part of the system, on demand.

We are now trying this on one workload: image processing. Image processing has many functions, such as beautification, thumbnailing, watermarking, and so on; optimizing it as a single service is very hard because it does too many things, and splitting it into separate services changes that. In the figure, above the dotted line is our current service; below it is a microservice gateway, followed by several smaller services. Beautification, for example, involves complex, CPU-heavy operations, so we pick machines with better CPUs for it; thumbnails are generated on GPUs, which improves performance by dozens of times; and the remaining regular image operations run on ordinary machines, which are good enough.

There are also some niche operations, such as gradients, where we only need to keep the service available. With this microservice routing, we can split the former service and its parameters, along the distinctions above, into three small services, so that producing an image is completed in three steps.

Of course, we ran into plenty of problems trying this scheme. For example, what one program used to do is now done by three, which is bound to increase intranet bandwidth, since the intermediate images have to be passed around. What do we do about that? Our current idea is local-first scheduling: after one step finishes, if the next step, say watermarking, is also available locally, we prefer the local instance.

Finally, to borrow the master's words: talk is cheap, show me the code. The Slardar project is now open source at github.com/upyun/slard… .

Speech video and PPT:

Dynamic service routing scheme based on OpenResty