Monolithic architecture

The following diagram shows a simplified workflow of the monolithic architecture.

The monolithic architecture gathers all the modules and functions together and deploys them to a single server. This is an all-in bet: if it pays off, you're fine, but if the request load grows beyond what one machine can handle, your only option is to scale horizontally by adding machines.

Microservice architecture

In a microservice architecture, the application is split into modules, each deployed independently as a separate service. The services communicate with each other via HTTP or RPC, so each module can be developed independently of the others.

As you can see, the functions that the previous single system provided have been broken down into many modules and deployed as separate services.

Mobile clients access the different back-end services directly through Nginx, which forces the aggregation work onto the client. For example, the data shown on the home page must be fetched from several different services and combined before it can be displayed. For this reason, we generally add a layer between the mobile client and the microservices.

What does this layer mainly do? It aggregates data from the back-end services and returns it in a uniform format. It can also tailor responses to different device types (for example, tablets and phones display content differently). With the addition of a BFF (backend for frontend), the app and the back-end APIs are decoupled from each other: the two sides can change independently without affecting one another.

Some services may return data in XML format and others in JSON.

Microservices look great, but they come with challenges. Under a microservice architecture, services are highly fragmented, which reduces coupling but makes unified service governance harder. Under the old governance model, common functions such as authentication, rate limiting, logging, and monitoring had to be implemented separately in each service, leaving system maintainers with no global view from which to manage them. Since any problem in computer science can be solved by adding another layer of indirection, we can add an API gateway layer to host these common functions and provide system extensibility on top of it.

As you can see, another gateway layer has been added. Some companies merge the BFF into the gateway; it depends on the actual situation.

The API Gateway pattern puts a gateway at the forefront of your microservices, making it the single entry point for every request from your application. This significantly simplifies how clients communicate with the microservice application.

Before the gateway, a client adding an item to a shopping cart would have to call a user service, then a merchandise service, then a shopping-cart service, and it needed to know how to consume all three together. With an API gateway, we can abstract away that complexity and create optimized endpoints that clients can call, letting the gateway fan requests out to those modules.

You can also centralize middleware capabilities in the API gateway. As you create more and more services, you will face a new problem: you need to authenticate and control traffic for some of them.

Some services are public; some are private; others are partner APIs that you offer only to a specific set of users. Sooner or later you'll find yourself writing the same code over and over in each microservice, code that could be abstracted into middleware.

This is clearly not something every microservice should have to care about. The API gateway should take it on: each microservice is only responsible for receiving a request and returning a JSON response, while the gateway handles concerns such as authentication, logging, and rate limiting.

Microservices are not all advantages; they bring a long list of issues to consider: logging, monitoring, exception handling, fault tolerance, rollback, communication, message formats, containers, service discovery, backup, testing, alerting, tracing, tooling, documentation, scaling, time zones, API versioning, network latency, health checks, load balancing, and so on. A new way of solving problems brings new problems of its own, so don't assume microservices are always the right choice. Each stage faces different problems, and we should deal with them and look at them in different ways.

Microservices bring a lot to worry about, so if your company is going to adopt them, it's important to have the capabilities to solve the problems they introduce.

Cloud native services

After microservices came cloud native services. What is a cloud native service?

Cloud native application definition: an application developed on microservice principles and packaged in containers. At run time, the containers are scheduled by a platform (such as Kubernetes) running on top of cloud infrastructure. Development adopts continuous delivery and DevOps practices.

Cloud native services still rely on microservices, but they run on a cloud native platform and use continuous delivery and DevOps practices. What does this cloud native platform do? Anyone who has used Kubernetes knows that you no longer have to worry about scaling, service discovery, load balancing, fault tolerance, rollback, updates, and so on, and that it has mature solutions for gateways, monitoring, and more. Once you use it, you'll understand.

I won't expand on this here, but if you're interested, you can read up on Kubernetes.

API Gateway selection

What the gateway needs to consider

  • Rate limiting and circuit breaking
  • Dynamic routing and load balancing
  • Path-based routing, e.g. requests to example.com/user go to the user service and requests under another prefix go to the shopping service
  • Interceptor chains
  • Log collection and metrics instrumentation
  • Response stream optimization
  • Programmable API
  • Header rewriting

Gateway comparison

| Gateway | Backing company | Implementation language | Highlights | Shortcomings |
| --- | --- | --- | --- | --- |
| Nginx (2004) | Nginx Inc | C/Lua | High performance, mature and stable | High barrier to entry, ops-oriented, weak programmability |
| Zuul 1 (2012) | Netflix/Pivotal | Java | Mature, simple, low barrier to entry | Mediocre performance, limited programmability |
| Spring Cloud Gateway (2016) | Pivotal | Java | Asynchronous, flexible configuration | Still an early product |
| Envoy (2016) | Lyft | C++ | High performance, programmable API, Service Mesh integration | High barrier to entry |
| Kong (2014) | Kong Inc | OpenResty/Lua | High performance, programmable API | High barrier to entry |
| Traefik (2015) | Containous | Golang | Cloud native, programmable API, integrates with many service discovery backends | Few production cases |

In fact, we use cloud native services, and Zuul and Spring Cloud Gateway work best when combined with the full Spring Cloud stack, so neither is a suitable choice for us right now.

We focused on Kong and Traefik. After comparison, we finally chose Traefik. Compared with Kong, Traefik has the following advantages:

  • Traefik stores its state in Kubernetes (Kong needs Postgres or Cassandra to store state), and uses Ingress to route all traffic to the corresponding services over HTTPS.
  • It is used in production environments worldwide and has undergone rigorous testing and benchmarking.
  • The Kong dashboard has been used in other projects at our company; Traefik ships with its own dashboard, which is always compatible with the latest version of Traefik, and its user interface is also better looking than Kong's.

Traefik's middlewares are convenient to use, and the project recommends upgrading to Traefik 2.

Traefik can also serve as a Kubernetes Ingress Controller, completely replacing the NGINX controller we discussed before.

How do I use Traefik as a gateway for user authentication?

Ingress route configuration:

Code:
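The original configuration is not reproduced here, but a minimal sketch of this setup using Traefik v2's Kubernetes CRDs might look like the following. The service names (auth-service, orders-service), the port, and the forwarded header are assumptions for illustration, not values from the original:

```yaml
# Middleware that forwards each request to the auth service first.
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: forward-auth
spec:
  forwardAuth:
    # Hypothetical internal auth endpoint; a 2xx response lets the request through.
    address: http://auth-service/api/auth
    # Headers from the auth response are copied onto the forwarded request.
    authResponseHeaders:
      - X-User-Id
---
# Route /api/orders traffic through the auth middleware to the orders service.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: orders
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/api/orders`)
      kind: Rule
      middlewares:
        - name: forward-auth
      services:
        - name: orders-service
          port: 80
```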

With the above configuration, when a client accesses the /api/orders API, the request is first sent to /api/auth to verify that the user is logged in. If the user is logged in, the request is forwarded to the corresponding back-end service with the appropriate HTTP headers attached, and the back-end service then authorizes the request based on those headers.