This blog introduces the concept of automation tools and their role in microservice clustering.

Foreword

Before we look at automation tools, let’s go over the concepts of microservices and clustering.

What are microservices

While this concept is somewhat broad and my knowledge is limited, I will try to describe what microservices are, what clustering is, and why we need microservice clusters. Also check out “Jack Bauer opens a restaurant – from monolithic applications to microservices”, which explains why we need microservice clusters through a comic story told in very plain language and illustrations.

Microservices

Traditional back-end services are mostly monolithic applications. For example, a simple back-end service built with Spring Boot, Node, or Gin and deployed to a server once its basic features are implemented is a monolithic application.

As business requirements increase and business code slowly accumulates, the monolithic application grows larger and larger. At the same time, the business code of different modules becomes entangled with each other, making development and maintenance especially difficult. Imagine the despair of a newcomer to the project who opens the codebase and sees tangled, logically convoluted business code.

This is where the concept of microservices comes in. If we want a monolithic application to become maintainable, scalable, and highly available, we need to split it into modules.

For example, all user-related logic can become a separate service, and orders and inventory can likewise each become separate services. The business code is thus split into several independent services, each of which only cares about and handles its own module’s business logic. As a result, the business code is logical and clear to developers. Even for developers who join the project later, code with clear business logic is easy to pick up.

Splitting microservices

I’ve actually seen a lot of articles about microservices that basically stop there, but there is one more concept worth mentioning. First, there is no standard for how microservices should be split.

The granularity at which you break down your services is highly business dependent. That does not mean a service’s code must be very small: it depends on the scale of your business. If, for example, your system has a very large user base, then a user service with thousands of lines of code is, I think, perfectly normal.

Of course, I’ve also seen very lean systems split into dozens of services for the sake of high availability and fast fault localization, despite not having many users. The problem is that with so many services, the list of back-end API addresses the front end has to maintain becomes quite large.

Leaving aside whether all the split services run on the same server (if they do, they at least listen on different ports), the front end has to maintain an API mapping table that mirrors however the back end split its services. This is where the BFF, AKA Backend For Frontend, comes in.

BFF

The BFF layer was not originally proposed for the problem described above. It was designed to provide different APIs for different devices. For example, a system’s back-end service may need to support several terminals: iOS and Android on mobile, plus the PC.

This way, APIs can be tailored to the needs of different devices without changing our existing microservices.

As a result, our underlying service cluster becomes very extensible: to support a new client, we don’t need to change the underlying service code at all; we just add a BFF layer tailored to that terminal type.

As you can see from the diagram above, none of the clients access our underlying services directly. Instead, they go through the interfaces provided by the BFF layer, which invokes different underlying services depending on the route. To summarize, the advantages of adding a BFF layer are as follows.

  • Strong scalability: the system can adapt to different clients
  • Unified API management: clients no longer need to maintain an API mapping table (see the sketch after this list)
  • Centralized authentication: every request passes through the BFF first, so the validity of each call can be verified in this one layer
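To make that concrete, here is a minimal sketch of a BFF endpoint, assuming a hypothetical Gin-based BFF sitting in front of two made-up services (user-service and order-service); the addresses, ports, and routes are purely illustrative.

```go
package main

import (
	"io"
	"net/http"

	"github.com/gin-gonic/gin"
)

// Hypothetical addresses of two underlying microservices; in a real
// cluster these would come from configuration or service discovery.
const (
	userServiceURL  = "http://user-service:8081"
	orderServiceURL = "http://order-service:8082"
)

func fetch(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	r := gin.Default()

	// One BFF endpoint aggregates several underlying services, so the
	// client only needs to know this single address.
	r.GET("/api/mobile/home", func(c *gin.Context) {
		user, _ := fetch(userServiceURL + "/users/me")
		orders, _ := fetch(orderServiceURL + "/orders/recent")
		// Passed through raw for brevity; a real BFF would parse and
		// reshape these payloads for the target device.
		c.JSON(http.StatusOK, gin.H{"user": user, "orders": orders})
	})

	r.Run(":8080")
}
```

The client now only needs to know /api/mobile/home; which underlying services actually supply the data is entirely the BFF’s concern.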

Of course, BFF has its drawbacks.

  • There’s a lot of code redundancy if you don’t handle it properly
  • The need to invoke different underlying services increases the development effort

Of course, in a real production environment, we rarely expose the BFF layer directly to the client. We usually add another gateway layer in front of the BFF. The gateway can implement authentication, rate limiting, circuit breaking, and other functions before a request reaches the BFF.
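As a toy illustration of what the gateway does before traffic reaches the BFF, here is a sketch of a reverse proxy with rate limiting, using Go’s httputil and golang.org/x/time/rate; the BFF address is a placeholder, and a real gateway (Nginx, Spring Cloud Gateway, Kong, and so on) does far more.

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"

	"golang.org/x/time/rate"
)

func main() {
	// Placeholder address of the BFF we are protecting.
	bff, _ := url.Parse("http://bff:8080")
	proxy := httputil.NewSingleHostReverseProxy(bff)

	// Allow 100 requests/second with bursts of 200; anything beyond
	// that is rejected before it ever reaches the BFF.
	limiter := rate.NewLimiter(100, 200)

	http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// A real gateway would also verify tokens here (authentication).
		if !limiter.Allow() {
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		proxy.ServeHTTP(w, r)
	}))
}
```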

Clustering

Having talked briefly about what microservices are, let’s now talk about clustering. We know that when a monolithic application becomes too big to maintain, the best way out is to break it into microservices. What do we gain from that?

  • Easy maintenance. Each microservice focuses on the business logic of its own module, and the logic of different modules no longer becomes entangled.
  • Improved availability. When a monolithic application goes down, every module of the system becomes unavailable. Splitting into microservices avoids this as much as possible: the failure of a single service does not affect the normal running of the others.
  • Easier operations. Redeploying a monolithic application makes the whole system unavailable; with microservices, the cost of redeploying a single service is significantly lower.

Concept

With that said, let’s give clusters a definition. A cluster is a set of services deployed on different servers that together provide service to the outside.

Example

Let me give a specific example: using Docker Swarm to provide a container cluster service.

Docker Swarm has the concept of a node: any host running Docker can create a new Swarm or join an existing cluster, and once joined, the host becomes a node in that cluster. Nodes in a cluster come in two types: manager nodes and worker nodes. We can use Portainer to manage Docker hosts and Swarm clusters.
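On the command line this is simply docker swarm init and docker swarm join; the Docker Go SDK exposes the same operations, as in this sketch (the advertise address is a placeholder).

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		panic(err)
	}

	// Create a new Swarm; this host becomes the cluster's first manager
	// node. The advertise address is a placeholder for the host's IP.
	nodeID, err := cli.SwarmInit(ctx, swarm.InitRequest{
		ListenAddr:    "0.0.0.0:2377",
		AdvertiseAddr: "192.168.1.10",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("manager node created:", nodeID)

	// Other hosts would join as workers (or managers) with SwarmJoin,
	// passing the join token printed by the manager.
}
```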

Let’s take an example of a request in a cluster.

A request first passes through a unified authentication system. Once authentication succeeds, it reaches our microservice gateway; if the gateway has additional authentication of its own, it authenticates again there. The gateway then dispatches the request to a container on a specific server according to the configured routing.

Automation tool

What technologies are involved in automation tools?

Java here just stands in for whatever programming language you use. Microservices don’t really care what language is used, and each service can even use a different technology stack.

So what is an automation tool? What is its role, and what part does it play in the cluster? Let’s take a quick look at a diagram.

Build

The logic is straightforward.

  • First, the automation tool assembles the parameters Jenkins needs, calls the Jenkins build API, and records the build operation in the automation tool’s database
  • Jenkins then uses the configured credentials to pull code from the corresponding branch of the project on GitLab, starts the build with the configured build script, and the build is again recorded in the automation tool’s database
  • Once the image is built, it is pushed to the Docker registry, and this too is recorded in the automation tool’s database

That is the whole logic of the build.
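As a rough sketch of that first step, triggering a parameterized Jenkins build over its remote API looks roughly like this; the host, job name, credentials, and build parameters are all placeholders.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// Placeholders: host, job, user, and API token would come from the
	// automation tool's own configuration.
	jenkins := "http://jenkins.internal:8080"
	job := "demo-service"
	params := url.Values{
		"BRANCH":    {"release/1.2.0"},
		"IMAGE_TAG": {"1.2.0"},
	}

	req, _ := http.NewRequest(http.MethodPost,
		fmt.Sprintf("%s/job/%s/buildWithParameters", jenkins, job),
		strings.NewReader(params.Encode()))
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.SetBasicAuth("ci-user", "api-token")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Jenkins answers 201 and puts the queued item's URL in the
	// Location header; this is what the tool records in its database.
	fmt.Println(resp.Status, resp.Header.Get("Location"))
}
```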

Other functions

From the project list, the automation tool also lets you view the current project’s logs directly, without having to reopen Kibana and set up the filters again each time.

In the automation tool’s project settings, we can also change a Docker container’s configuration without going into Portainer or using the command line. Normally, to get a shell into a container, we have to find the corresponding service and then the corresponding service instance before we can enter it. By using the Portainer API directly, the automation tool can offer the same function itself once the endpoint is known: a web shell connects you to the container with one click.

What are the benefits?

  • Swarm clustering stays hidden from most developers. Portainer is shielded from non-administrators of the project because its permissions are very broad; a misoperation born of unfamiliarity could directly affect online services
  • Unified permission control. Permissions and environments are controlled centrally in the automation tool
  • Low barrier to entry. A self-built automation tool is much easier to use than working with Portainer, Jenkins, and Kibana directly

Function summary

To sum up, its main functions are the following.

  • Build
  • Deploy
  • Roll back
  • View ELK logs
  • Change Docker configuration
  • Manage cluster environments, projects, and containers
  • Connect to a specific project’s containers from the command line
  • …

Looking at this list, you might wonder:

  • Build? So you’re saying Jenkins is just a decoration?
  • Deploy? Change Docker configuration? Connect to containers from the command line? Is my iTerm2 also a decoration?
  • Roll back? Is my Docker image tag useless?
  • ELK logs? Is my Kibana just for show?

Functions

Build

In fact, I personally think both the automation tool and Jenkins come in handy when it comes to building. The automation tool itself uses Jenkins: it merely calls the Jenkins API and passes in the build parameters, and in the end it really is Jenkins that does the building.

However, for testers who have just joined the project, a self-developed web UI is friendlier to newcomers, and permission control can be handled inside the automation tool.

Deploy and roll back

Deployment in the automation tool’s back end is implemented through a Docker client library. First we create the Docker client according to the configuration. Then we call the service update API if the service is already running, or create the service otherwise.

A rollback is essentially the same call, with the same arguments but an older image tag.
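Here is a condensed sketch of that update-or-create logic using the Docker Go SDK (the tool itself may use a different client library; the service name and image below are placeholders).

```go
package main

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

// deployOrRollback updates a Swarm service to run the given image tag,
// creating the service first if it does not exist. A rollback is the
// same call with an older tag.
func deployOrRollback(ctx context.Context, cli *client.Client, name, image string) error {
	services, err := cli.ServiceList(ctx, types.ServiceListOptions{
		Filters: filters.NewArgs(filters.Arg("name", name)),
	})
	if err != nil {
		return err
	}

	spec := swarm.ServiceSpec{
		Annotations: swarm.Annotations{Name: name},
		TaskTemplate: swarm.TaskSpec{
			ContainerSpec: &swarm.ContainerSpec{Image: image},
		},
	}

	if len(services) == 0 {
		// No running service yet: create one.
		_, err = cli.ServiceCreate(ctx, spec, types.ServiceCreateOptions{})
		return err
	}

	// The service exists: update it in place with the new (or old) image.
	_, err = cli.ServiceUpdate(ctx, services[0].ID,
		services[0].Version, spec, types.ServiceUpdateOptions{})
	return err
}

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		panic(err)
	}
	// Both values are placeholders.
	if err := deployOrRollback(context.Background(), cli,
		"demo-service", "registry.internal/demo-service:1.2.0"); err != nil {
		panic(err)
	}
}
```

Calling the same function again with an older tag is the rollback.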

ELK logs

First, a kibana_host and kibana_index are configured for each environment. Then, based on the system’s projectKey, the corresponding Kibana log URL is spliced together and embedded into the automation tool using an iframe. This removes the need to manually open Kibana and set the corresponding filters each time, which is especially time-consuming in a large system where filters are constantly added and removed.
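A sketch of the splicing idea follows; the exact discover-URL format varies across Kibana versions, so treat the fragment below as an assumption rather than a recipe.

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// kibana_host and kibana_index come from each environment's config;
	// projectKey identifies the system whose logs we want. All values
	// here are placeholders.
	kibanaHost := "https://kibana.internal"
	kibanaIndex := "logstash-prod-*"
	projectKey := "demo-service"

	// Splice a discover URL that filters logs by projectKey. The exact
	// fragment syntax differs between Kibana versions, so verify it
	// against the Kibana you actually run.
	query := url.QueryEscape("projectKey:" + projectKey)
	iframeSrc := fmt.Sprintf("%s/app/discover#/?_a=(index:'%s',query:(language:kuery,query:'%s'))",
		kibanaHost, kibanaIndex, query)

	// The automation tool embeds this URL in its page via an <iframe>.
	fmt.Println(iframeSrc)
}
```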

Update container configuration

This, too, is done by calling the corresponding API to update the configuration of the corresponding service, without logging into Portainer to change it.

At the same time, a different base Docker setting can be configured in the automation tool for each environment. Applications added to that environment later need no separate configuration; they directly inherit the environment’s Docker settings.
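A minimal sketch of that inheritance, with a made-up DockerSetting type standing in for the real environment configuration:

```go
package main

import "fmt"

// DockerSetting is a simplified stand-in for a per-environment base
// setting; real settings would carry networks, resource limits,
// registries, and so on.
type DockerSetting struct {
	Registry string
	Network  string
	Replicas int
}

// effectiveSetting lets an application override only the fields it
// cares about and inherit the rest from its environment's base.
func effectiveSetting(base, app DockerSetting) DockerSetting {
	out := base
	if app.Registry != "" {
		out.Registry = app.Registry
	}
	if app.Network != "" {
		out.Network = app.Network
	}
	if app.Replicas != 0 {
		out.Replicas = app.Replicas
	}
	return out
}

func main() {
	base := DockerSetting{Registry: "registry.internal", Network: "prod-net", Replicas: 2}
	app := DockerSetting{Replicas: 4} // inherits registry and network from the environment
	fmt.Printf("%+v\n", effectiveSetting(base, app))
}
```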

Manage cluster environments, projects, and containers

Automation tools can be used to create and manage environments in a unified way; there are typically three: development, test, and production. You can then create roles and users in the automation tool and assign different permissions to different roles to control access.

Connecting to a specific project’s containers from the command line

Sometimes we need to get inside a container, and normally we have two options.

  • Enter the corresponding service through Portainer, find the specific container, and click Connect
  • Open a command line on the server where the container is running, and connect to the container from there

But with the automation tool, we have a third option.

  • Click a link

How does that work? Using the endpointId, the tool retrieves all containers, then iterates through them to find the one whose ID matches the selected container, and reads its NodeName to determine which node the container is running on.

A WebSocket URL is then constructed from this information, and the front end establishes the WS connection through xterm.js, connecting directly to the running container instance.
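Putting the pieces together, here is a sketch of how such a WebSocket URL might be built. The Portainer route and query parameters follow the flow just described, but they are assumptions and should be checked against your Portainer version.

```go
package main

import (
	"fmt"
	"net/url"
)

// buildExecURL pieces together the WebSocket address the front end's
// xterm.js connects to. The /api/websocket/exec route and its query
// parameters are assumptions modeled on the flow described above.
func buildExecURL(portainerHost, jwt string, endpointID int, execID, nodeName string) string {
	q := url.Values{
		"token":      {jwt},
		"endpointId": {fmt.Sprint(endpointID)},
		"id":         {execID},
		"nodeName":   {nodeName},
	}
	return fmt.Sprintf("wss://%s/api/websocket/exec?%s", portainerHost, q.Encode())
}

func main() {
	// All values are placeholders: execID would come from creating an
	// exec instance on the matched container, and nodeName from the
	// container-list lookup described above.
	fmt.Println(buildExecURL("portainer.internal", "<jwt>", 1, "<exec-id>", "worker-03"))
}
```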

Conclusion

An automation tool is just one idea, one solution, and its benefits are listed above. Of course, it also has a clear disadvantage: it requires a dedicated investment of people and resources to develop.

That is unrealistic for a team that is short-staffed or on a tight project schedule. But if you have the energy and time, I think it’s worth a try. You can also integrate more cluster-related functionality into the automation tool based on Portainer’s API.

Previous articles:

  • What? You haven’t used these Chrome extensions yet?
  • Build a Spring Boot back-end project framework from scratch
  • Using Go Modules as the package manager to build a Go web server
  • WebAssembly in full – learn about WASM’s past and present
  • Jack Bauer opens a restaurant – from monolithic applications to microservices

Related:

  • WeChat official account: SH Full Stack Notes (or search LunhaoHu directly in WeChat’s add-official-account screen)