Preface

Spring Cloud is basically the standard framework for Java microservices development at this point, but the introduction of the Service Mesh architecture a few years ago has attracted a great deal of attention and practical adoption. For some large software companies with project-based delivery models, even bumping the version number of an intrusive framework like Spring Cloud can be very troublesome, so I think the Service Mesh architecture is especially useful for this type of company. The reason for writing this article is to explore, from a realistic perspective, an evolutionary scheme that minimizes the cost of modification.

K8s first or Service Mesh first

Alibaba shared its experience of adopting the Service Mesh early on, including an analysis of whether to adopt K8s or the Service Mesh first. The question arose because Alibaba's technology choice at the time was Istio, whose deployment relies on K8s. Migrating from a virtual machine environment to K8s is costly, and K8s itself is relatively hard to learn. Today, Istio is no longer the only Service Mesh technology. Consul, the well-known open-source software, began supporting Service Mesh about two to three years ago, and it has matured enough to be worth a try. Consul, on the other hand, eliminates the hassle of moving to K8s first, since it supports virtual machine environments.

However, it is worth noting that although Consul supports virtual machines, implementing a Service Mesh with Consul on K8s is much easier than doing so on virtual machines. After all, the infrastructure capabilities of K8s solve many problems that used to be difficult to solve in application software alone.

Practice

Technical background

Let's assume the current system uses Spring Cloud's microservices architecture and runs on virtual machines. Spring Cloud supports a variety of service registries; Eureka was popular early on, but many teams in China now use Consul. Consul is chosen as the service registry here. One additional consideration is that the topic of this article revolves around Consul; another is that, if you want to follow this scenario, switching the registry to Consul is a relatively simple matter in Spring Cloud.

The goal

Migrate a system built on the above technology stack to the Service Mesh architecture, deployed on virtual machines, at the lowest possible change cost. According to published experience (such as the Service Mesh adoption case shared by Huawei), it is often necessary to downgrade Spring Cloud to plain Spring Boot. One of the main reasons is to remove Spring Cloud's service addressing, reducing inter-service calls to direct IP-and-port communication, with the IP pointing to localhost. Both Istio and Consul on K8s provide traffic hijacking to redirect traffic to the local sidecar, but that does not work directly with Spring Cloud addressing. We could use iptables rules to redirect a local request to a remote IP and port (see the sketch below), but only if we knew every remote IP and port to redirect; that is deployment-dependent, varies with the scale of the deployment, and is practically impossible to maintain.
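To make the trade-off concrete, here is a minimal iptables sketch. It is illustrative only: the ports, addresses, and the envoy user below are assumptions, not taken from this article.

# 1) What Istio/Consul-on-K8s style traffic hijacking does: transparently redirect all
#    outbound TCP traffic from the application to a local Envoy listener (port assumed).
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner envoy -j REDIRECT --to-ports 15001

# 2) The manual alternative discussed above: rewrite a locally addressed call so it reaches
#    one known remote instance. A rule like this is needed for every upstream IP:port,
#    which is deployment-specific and impractical to maintain.
iptables -t nat -A OUTPUT -p tcp -d 192.168.0.10 --dport 8081 -j DNAT --to-destination 10.19.215.62:8081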

This article takes another approach: extend the Spring Cloud framework to adapt it to the Service Mesh, so that traffic redirection is realized at the application level.
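As a rough illustration of what application-level redirection means, here is a minimal sketch assuming Spring Cloud LoadBalancer; it is not the actual implementation of the library used later in this article. The idea is simply to resolve every logical service name to the local Envoy listener instead of the real registry entries.

import java.util.List;
import java.util.Map;

import org.springframework.cloud.client.DefaultServiceInstance;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.loadbalancer.core.ServiceInstanceListSupplier;

import reactor.core.publisher.Flux;

public class SidecarServiceInstanceListSupplier implements ServiceInstanceListSupplier {

    private final String serviceId;
    // Logical service name -> local sidecar listener port, e.g. {"server" -> 1234}
    private final Map<String, Integer> upstreamPorts;

    public SidecarServiceInstanceListSupplier(String serviceId, Map<String, Integer> upstreamPorts) {
        this.serviceId = serviceId;
        this.upstreamPorts = upstreamPorts;
    }

    @Override
    public String getServiceId() {
        return serviceId;
    }

    @Override
    public Flux<List<ServiceInstance>> get() {
        // Instead of the real instances from the registry, return a single "instance"
        // that points at the local Envoy listener for this upstream service.
        int port = upstreamPorts.getOrDefault(serviceId, 80);
        ServiceInstance local = new DefaultServiceInstance(serviceId + "-sidecar", serviceId, "127.0.0.1", port, false);
        return Flux.just(List.of(local));
    }
}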

Service Mesh technology selection

The control plane is Consul, as mentioned earlier. The sidecar is Envoy, the same as in Istio. This article uses Consul v1.10.1 and Envoy v1.18.3, with Prometheus 2.29.2 for monitoring.

Deployment architecture

Prepare at least three Linux virtual machines, as shown in the figure. One runs the Consul server; the remaining two each run a Consul client and an Envoy sidecar, plus one of the two microservices, identified as client and server.

For convenience in the description below, the three VMs are assigned the following IP addresses:

Consul server VM IP address: 10.19.215.45
Client microservice VM (VM A) IP address: 10.19.215.69
Server microservice VM (VM B) IP address: 10.19.215.62

Deploying the Consul cluster

Deploy the Consul server

For convenience with the commands, we put the consul binary in the /usr/bin directory. The same applies to the subsequent servers.

Prepare the Consul server configuration file server.hcl:

// Server node exclusive configuration
server = true
bootstrap_expect = 1
client_addr = "0.0.0.0"
ui = true
node_name = "consul-server"
connect {
  enabled = true
}
ports {
  grpc = 8502
}
config_entries {
  bootstrap = [
    {
      kind = "proxy-defaults"
      name = "global"
      config {
        protocol = "http"
        envoy_prometheus_bind_addr = "0.0.0.0:9102"
      }
    }
  ]
}
ui_config {
  enabled = true
  metrics_provider = "prometheus"
  metrics_proxy {
    base_url = "http://10.19.215.48:9090"
  }
}

To simplify the deployment, only one Consul server node is used in the Consul cluster here. In a production environment you are advised to deploy three to five Consul server nodes.
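As an illustration only (the IP addresses below are placeholders, not part of this deployment), a production server stanza would typically raise bootstrap_expect and join the other servers:

server = true
bootstrap_expect = 3
retry_join = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]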

Write another file, consul.hcl, as the common configuration for every Consul node:

datacenter = "dc1" data_dir = "/opt/consul" // encrypt = "fARB1df3e3SNcR4DGwGj5VbpTjoYWiFTVJkd4cJcB9o=" // ca_file = "/etc/consul.d/consul-agent-ca.pem" // cert_file = "/etc/consul.d/dc1-server-consul-0.pem" // key_file = "/etc/consul.d/dc1-server-consul-0-key.pem" verify_incoming = false verify_outgoing = false verify_server_hostname = False client_addr = "0.0.0.0" // Client node configuration retry_JOIN = ["10.19.215.45"]Copy the code

Store the preceding two configuration files in the /etc/consul.d directory, then run consul agent -config-dir=/etc/consul.d/ on the machine where the Consul server resides. The Consul server is now started.

Deploy the Consul client on VM A

Place the consul binary in /usr/bin and create client.hcl in /etc/consul.d:

node_name = "consul-client"
connect {
  enabled = true
}
ports {
  grpc = 8502
}

Then copy consul.hcl to /etc/consul.d and run consul agent -config-dir=/etc/consul.d/

Deploy the Consul client on VM B

The client.hcl content is the same except for node_name:

node_name = "consul-client2"
connect {
  enabled = true
}
ports {
  grpc = 8502
}

Visit http://10.19.215.45:8500/ and we can see the three nodes.
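Alternatively, assuming the agents started cleanly, membership can be checked from any of the three machines with the Consul CLI:

# List the cluster members; all three nodes should show up as alive
consul members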

Prepare the Spring Cloud microservices

I won't post the code here; download it from github.com/FunnyYish/c…

Note that these two microservices rely on an additional JAR package I wrote myself:

<dependency>
	<groupId>com.dys.consul</groupId>
	<artifactId>service-mesh</artifactId>
	<version>RELEASE</version>
</dependency>

This package has not been published to any repository yet; readers need to download the code from github.com/FunnyYish/s… and run mvn install to install it into the local repository. The following additional configuration items are required to use this JAR package:

# enable sidecars
spring.cloud.consul.discovery.sidecar=true
# The local port for the upstream microservice "server" is 1234
spring.cloud.consul.discovery.upstream.server=1234

With the above configuration, the line restTemplate.getForObject("http://server/home", String.class); will actually request 127.0.0.1:1234 at run time.
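For context, a hypothetical client-side controller issuing such a call could look like the sketch below (it is not necessarily identical to the demo project, and it assumes a RestTemplate bean is defined elsewhere); the /call endpoint is what we will hit from outside later:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class CallController {

    @Autowired
    private RestTemplate restTemplate;

    @GetMapping("/call")
    public String call() {
        // Addressed by logical service name; with the sidecar configuration above this
        // actually goes to 127.0.0.1:1234, where the local Envoy forwards the request
        // to the real "server" instance through the mesh.
        return restTemplate.getForObject("http://server/home", String.class);
    }
}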

Deploying microservices

There is not much to say about this step. After compiling and packaging the two microservices, run them on virtual machines A and B according to the deployment architecture described above. They still cannot call each other at this point, because the sidecars are not running yet.
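For example (the jar file names are placeholders, not taken from the demo repository):

# On VM A (10.19.215.69): start the client microservice
nohup java -jar client-0.0.1-SNAPSHOT.jar > client.log 2>&1 &

# On VM B (10.19.215.62): start the server microservice
nohup java -jar server-0.0.1-SNAPSHOT.jar > server.log 2>&1 &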

Deploying the sidecars

Prebuilt Envoy artifacts for various versions can be found at archive.tetratelabs.io/envoy/envoy… Download version 1.18.3, unpack it, and place the binary in /usr/bin.
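Roughly like this (the archive file name and its internal layout are assumptions; use the actual link from the page above):

# Unpack the downloaded archive and install the binary; names and paths may differ
tar -xf envoy-v1.18.3-linux-amd64.tar.xz
sudo cp envoy-v1.18.3-linux-amd64/bin/envoy /usr/bin/
envoy --version   # should report 1.18.3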

On virtual machine A, run consul connect envoy -sidecar-for client-82 -admin-bind localhost:19001. Here client-82 is the service ID that the Spring Cloud framework registered for the client microservice; it can also be specified explicitly through the Spring Cloud framework.

On virtual machine B, run consul connect envoy -sidecar-for server-81 -admin-bind localhost:19001.

Then we can test the service invocation. Calling the /call endpoint on port 82, where the client microservice resides, now succeeds.
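For example (assuming /call is exposed by the client microservice as described above):

# Call the client microservice on VM A; the request should reach the server microservice through the sidecars
curl http://10.19.215.69:82/call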

Controlling link connectivity with Consul

Open the Consul console at http://10.19.215.45:8500/ui/dc1/services/client/intentions, where a new intention can be created in the UI:
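The same can be done from the CLI on any node (the intention names are the service names; in newer Consul versions this command is superseded by service-intentions config entries):

# Allow the client service to call the server service
consul intention create -allow client server

# Check the result of intention evaluation between the two services
consul intention check client server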

Monitoring

Consul's built-in monitoring UI is very limited, so for more detail you will have to go to Grafana. Consul Service Mesh monitoring works by configuring Prometheus scrape jobs that point at each Envoy, and then configuring Consul to fetch the monitoring data from Prometheus.

Prepare another server, install Prometheus, and configure it as follows:

global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "first.rules"
  # - "second.rules"

scrape_configs:
  - job_name: client-sidecar
    static_configs:
      - targets: ["10.19.215.69:9102"]
  - job_name: server-sidecar
    static_configs:
      - targets: ["10.19.215.62:9102"]

Notice that the Consul server configuration file contains the URL http://10.19.215.48:9090, which points to the Prometheus instance just deployed; modify this configuration item to match your own IP address. It should be noted that under K8s, Consul provides tooling that lets Prometheus pick up the monitoring data from each Envoy automatically, without manual configuration. This will be covered later.