
Eureka service registration and discovery

What is Eureka?

Eureka is a sub-module of Netflix and one of its core modules. It is a REST-based service used to locate services, in order to provide mid-tier service discovery and failover in the cloud. Service registration and discovery are very important for microservices: with them, a service can be called using only its service identifier, without modifying the caller's configuration file. It plays a role similar to a Dubbo registry such as ZooKeeper.

How it works

Spring Cloud encapsulates the Eureka module developed by Netflix to implement service registration and discovery (comparable to ZooKeeper). Eureka uses a client-server (C-S) architecture: the Eureka Server acts as the server side of the registration function and is the service registry, while the other microservices in the system use the Eureka client to connect to the Eureka Server and maintain a heartbeat connection. This way, the system maintainers can monitor through the Eureka Server whether each microservice is running properly, and other Spring Cloud modules can use the Eureka Server to discover the other microservices in the system.

Eureka contains two components: the Eureka Server and the Eureka Client. The Eureka Server provides the registration service. After each service node starts, it registers with the Eureka Server, so the service registry inside the Eureka Server stores the information of all available service nodes, and this information can be seen directly in the dashboard. The Eureka Client is a Java client designed to simplify interaction with the Eureka Server; it also has a built-in load balancer that uses a round-robin algorithm. After the application starts, the client sends heartbeats to the Eureka Server periodically. If the Eureka Server does not receive a heartbeat from a node within several heartbeat periods, it removes that node from the service registry.

The three roles

There are three roles. The Eureka Server is the service registry, similar to ZooKeeper in the Dubbo world. The Service Provider registers its own service with Eureka so that consumers can find it. The Service Consumer obtains the list of registered services from Eureka and uses it to find and call the services it needs.
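To make the consumer role concrete, here is a rough sketch of what a consumer application could look like (the consumer itself is not built in this article; the class name ConsumerSketch is assumed, and the service name matches the provider registered later). A @LoadBalanced RestTemplate is one common way to use the client-side load balancer mentioned above to resolve a service name against the Eureka registry.

package com.jj;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@EnableEurekaClient
public class ConsumerSketch {

    // @LoadBalanced lets the RestTemplate resolve registered service names through Eureka
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(ConsumerSketch.class, args);
    }
}

// Somewhere in a consumer controller, the provider is then called by its service name:
// restTemplate.getForObject("http://SPRING-CLOUD-PROVIDER-DEPT-8001/dept/list", List.class);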

Create our Eureka module

Import the jar package

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-eureka-server</artifactId>
        <version>1.4.6.RELEASE</version>
    </dependency>

The configuration file

server:
  port: 7001
# Eureka configuration
eureka:
  instance:
    hostname: localhost  # instance name of the Eureka server
  client:
    fetch-registry: false  # false means this instance is the registry and does not fetch the registry
    register-with-eureka: false  # do not register this server with itself
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
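The registry also needs a main class annotated with @EnableEurekaServer, which is not shown above. A minimal sketch (the class name and package are assumed):

package com.jj;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// @EnableEurekaServer turns this application into the Eureka registry
@SpringBootApplication
@EnableEurekaServer
public class EurekaServer7001 {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServer7001.class, args);
    }
}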

Start the registry and visit http://localhost:7001 to see the Eureka dashboard.

Register our service provider module with Eureka

Import the dependency

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-eureka</artifactId>
        <version>1.4.6.RELEASE</version>
    </dependency>

Modifying a Configuration File

server:
  port: 8001
# mybatis configuration
mybatis:
  type-aliases-package: com.jj.pojo
  # config-location: classpath:mybatis/mybatis-config.xml
  mapper-locations: classpath:mybatis/mapper/*.xml
# spring configuration
spring:
  application:
    name: spring-cloud-provider-dept-8001
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://localhost:3306/test?useUnicode=true&characterEncoding=utf8&serverTimezone=GMT
    username: root
    password: 123456
    type: com.alibaba.druid.pool.DruidDataSource
eureka:
  client:
    service-url:
      defaultZone: http://localhost:7001/eureka/
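Optionally (these properties are not part of the original configuration), an instance section can be added under the eureka block above to control how the instance is displayed on the Eureka dashboard:

eureka:
  instance:
    instance-id: spring-cloud-provider-dept-8001  # text shown in the status column
    prefer-ip-address: true                       # link to the IP instead of the hostname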

Add the Eureka client annotation to the main application class

package com.jj;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

/**
 * @author fjj
 * @date 2021/4/16 17:59
 */
@SpringBootApplication
@EnableEurekaClient
public class SpringClouddept {
    public static void main(String[] args) {
        SpringApplication.run(SpringClouddept.class, args);
    }
}

The result

Note that the versions must match. When the red warning message appears on the Eureka dashboard, it means Eureka's self-protection mechanism has been activated.

Eureka's self-protection mechanism

Self-protection mechanism: by default, if the Eureka Server does not receive a heartbeat from a microservice instance within a certain period of time (90 seconds by default), it deregisters that instance. When a network partition failure occurs, however, this behavior becomes dangerous, because the microservice itself may still be healthy and should not be deregistered. Eureka solves this problem with self-protection: when an Eureka Server node loses too many clients in a short period of time (possibly because of a network partition), the node enters self-protection mode. Once in this mode, the Eureka Server protects the information in its service registry and no longer deregisters any service instance; when the network recovers and the number of heartbeats it receives rises back above the threshold, the node automatically exits self-protection mode. The design philosophy is that it is better to keep registration information that may be wrong than to blindly deregister service instances that may still be healthy. Better alive than dead, as the saying goes.

Self-protection mode is a safety measure against network anomalies. Its architectural philosophy is that it is better to keep all microservices at the same time (healthy and unhealthy alike) than to blindly deregister any healthy microservice. Self-protection mode makes an Eureka cluster more robust and stable.
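For reference, these are the configuration properties the mechanism revolves around; a small sketch of how they could be tuned (disabling self-protection is only sensible for local development):

# In the registry's application.yml
eureka:
  server:
    enable-self-preservation: false      # default is true; not recommended to disable in production
    eviction-interval-timer-in-ms: 5000  # how often expired leases are evicted

# In a client's application.yml
eureka:
  instance:
    lease-renewal-interval-in-seconds: 30     # heartbeat interval (default 30s)
    lease-expiration-duration-in-seconds: 90  # how long the server waits before expiring a lease (default 90s)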

Configuring Monitoring Information

Add the dependency

    <!-- Configure some monitoring information -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>

Then modify application.yml. I only added a little info here; during development you can add more.

server:
  port: 8001
# mybatis configuration
mybatis:
  type-aliases-package: com.jj.pojo
  # config-location: classpath:mybatis/mybatis-config.xml
  mapper-locations: classpath:mybatis/mapper/*.xml
# spring configuration
spring:
  application:
    name: spring-cloud-provider-dept-8001
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://localhost:3306/test?useUnicode=true&characterEncoding=utf8&serverTimezone=GMT
    username: root
    password: 123456
    type: com.alibaba.druid.pool.DruidDataSource
eureka:
  client:
    service-url:
      defaultZone: http://localhost:7001/eureka/
# info configuration
info:
  app.name: fjj-springcloud
  company.name: kkkkk

You can simply view some monitoring information

Next, take a look at the information that has been registered with Eureka.

Modify the controller of the service provider

package com.jj.controller;

import com.jj.pojo.Dept;
import com.jj.service.impl.DeptServiceimpl;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.*;

import java.util.List;

/**
 * @author fjj
 * @date 2021/4/16 17:57
 */
@RestController
public class DeptController {
    // inject the service
    @Autowired
    DeptServiceimpl deptServiceimpl;

    // inject the DiscoveryClient to read registration information from Eureka
    @Autowired
    DiscoveryClient discoveryClient;

    // add
    @GetMapping("/dept/add")
    public boolean addDept(Dept dept) {
        return deptServiceimpl.adddept(dept);
    }

    // get by id
    @GetMapping("/dept/get/{id}")
    public Dept get(@PathVariable("id") int id) {
        return deptServiceimpl.querybyid(id);
    }

    // query all
    @GetMapping("/dept/list")
    public List<Dept> queryAll() {
        return deptServiceimpl.queryall();
    }

    // Get some registration information from Eureka
    @RequestMapping("/kkk")
    public Object discovery() {
        // Get the list of microservices registered with Eureka
        List<String> services = discoveryClient.getServices();
        System.out.println("services = " + services);

        // Get the instances of a specific microservice
        List<ServiceInstance> instances = discoveryClient.getInstances("SPRING-CLOUD-PROVIDER-DEPT-8001");
        for (ServiceInstance instance : instances) {
            System.out.println(instance.getHost() + "\t");
        }
        return this.discoveryClient;
    }
}
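Once the provider is running, the discovery endpoint can be exercised with a plain HTTP call; a small sketch (the URL is built from the port and the /kkk mapping above):

import org.springframework.web.client.RestTemplate;

public class DiscoveryDemo {
    public static void main(String[] args) {
        // Calls the /kkk endpoint of the provider started on port 8001;
        // the service list and instance hosts are printed in the provider's console,
        // and the DiscoveryClient details come back as the response body.
        String body = new RestTemplate().getForObject("http://localhost:8001/kkk", String.class);
        System.out.println(body);
    }
}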

Access the results

Configure the Eureka cluster

I am not going to build the cluster here; my computer's CPU can't handle it. The basic idea is that the registries register with and depend on each other, so that if one of them goes down, the registered service nodes are not lost.
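For reference, a minimal sketch of what a two-node cluster configuration could look like. The hostnames eureka7001.com and eureka7002.com are assumed (they would need entries in the hosts file); each registry points its defaultZone at the other node, and clients would then list both addresses in defaultZone, separated by commas.

# application.yml of the first registry (port 7001)
server:
  port: 7001
eureka:
  instance:
    hostname: eureka7001.com
  client:
    register-with-eureka: false
    fetch-registry: false
    service-url:
      defaultZone: http://eureka7002.com:7002/eureka/

# application.yml of the second registry (port 7002)
server:
  port: 7002
eureka:
  instance:
    hostname: eureka7002.com
  client:
    register-with-eureka: false
    fetch-registry: false
    service-url:
      defaultZone: http://eureka7001.com:7001/eureka/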

CAP principles and a comparison with ZooKeeper

What is CAP?

C: strong consistency; A: availability; P: partition tolerance. CAP only allows two of the three at the same time: CA, AP, or CP.

The core of CAP theory

According to the CAP principle, NoSQL databases can be divided into three categories. CA: single-point clusters; systems that satisfy consistency and availability, but usually scale poorly. CP: systems that satisfy consistency and partition tolerance, usually at the cost of performance. AP: systems that satisfy availability and partition tolerance, possibly with weaker consistency requirements.

How is Eureka better than ZooKeeper as a registry?

The famous CAP theorem states that a distributed system cannot simultaneously satisfy C (consistency), A (availability), and P (partition tolerance). Since partition tolerance P must be guaranteed in a distributed system, we can only make a trade-off between A and C.
●Zookeeper guarantees CP;
●Eureka guarantees AP;

Zookeeper guarantees CP

When we ask the registry for a list of services, we can tolerate the registry returning registration information that is a few minutes old, but we cannot accept the registry itself going down and becoming unavailable. In other words, the registry requires availability more than consistency. However, the following can happen with ZK: when the master node loses contact with the other nodes because of a network failure, the remaining nodes re-elect a leader. The problem is that the election takes 30 to 120 seconds, and the entire ZK cluster is unavailable during the election, which means the registration service breaks down for that period. In a cloud deployment environment, it is quite likely that a ZK cluster loses its master node because of network problems. Although the service is eventually restored, the long registration outage caused by the lengthy election is not tolerable.

Eureka guarantees AP

Eureka understood this, and therefore prioritized availability in its design. All Eureka nodes are equal; the failure of a few nodes does not affect the remaining nodes, which can still provide registration and query services. In addition, if an Eureka client fails to register with one Eureka node, it automatically switches to another node. As long as one Eureka node is still up, the registration service remains available, although the information it holds may not be the latest. On top of that, Eureka has the self-protection mechanism: if more than 85% of the nodes have no normal heartbeat within 15 minutes, Eureka considers that there is a network failure between the clients and the registry, and the following situations occur:

  1. Eureka no longer removes services from the registry that should have expired because no heartbeat has been received for too long.
  2. Eureka still accepts registration and query requests for new services, but does not synchronize them to other nodes (i.e. the current node is still available).
  3. When the network becomes stable again, the new registration information of the current node is synchronized to the other nodes.

As a result, Eureka can cope with a network failure that causes some nodes to lose contact, instead of the entire registration service collapsing as it would with ZooKeeper.