
As we all know, Spring Cloud is a one-stop solution for microservices, essentially a collection of components. Since most of those components come from Netflix and have entered maintenance mode, something else is needed to replace them, and Spring Cloud Alibaba was born. In this article, we apply the Spring Cloud Alibaba stack to several concrete business scenarios to get a feel for its convenience and power.

Environment setup

Create the parent project and modify its POM file:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.wwj</groupId>
    <artifactId>springcloud-alibaba</artifactId>
    <packaging>pom</packaging>
    <version>1.0-SNAPSHOT</version>
    <modules>
        <module>shop-common</module>
        <module>shop-user</module>
        <module>shop-product</module>
        <module>shop-order</module>
    </modules>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
        <spring-cloud.version>Hoxton.SR8</spring-cloud.version>
        <spring-cloud-alibaba.version>2.2.5.RELEASE</spring-cloud-alibaba.version>
    </properties>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.2.RELEASE</version>
    </parent>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
            <dependency>
                <groupId>com.alibaba.cloud</groupId>
                <artifactId>spring-cloud-alibaba-dependencies</artifactId>
                <version>${spring-cloud-alibaba.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>

Four microservices need to be created here. Let's take the user service as an example; the other three services are created the same way. First, the dependencies:

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid-spring-boot-starter</artifactId>
    <version>1.1.13</version>
  </dependency>
  <dependency>
    <groupId>com.wwj</groupId>
    <artifactId>shop-common</artifactId>
    <version>1.0-SNAPSHOT</version>
  </dependency>
</dependencies>

Create a configuration file:

server:
  port: 8071
spring:
  application:
    name: service-user
  datasource:
    url: jdbc:mysql:///springcloudalibaba?serverTimezone=UTC
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: root
    password: 123456
    type: com.alibaba.druid.pool.DruidDataSource

mybatis-plus:
  configuration:
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl # output SQL

Create a Controller:

@RestController
public class UserController {}

Create Mapper:

public interface UserMapper extends BaseMapper<User> {}

Create a Service:

@Service
public class UserServiceImpl implements UserService {}

Create a startup class:

@SpringBootApplication
@MapperScan("com.wwj.mapper")
public class UserApplication {

    public static void main(String[] args) {
        SpringApplication.run(UserApplication.class, args);
    }
}

Finally, create the tables:

CREATE DATABASE springcloudalibaba;

USE springcloudalibaba;

DROP TABLE IF EXISTS shop_order;

CREATE TABLE shop_order (
  `oid` int(11) NOT NULL,
  `uid` int(11) DEFAULT NULL,
  `username` varchar(255) DEFAULT NULL,
  `pid` int(11) DEFAULT NULL,
  `pname` varchar(255) DEFAULT NULL,
  `pprice` double DEFAULT NULL,
  `number` int(11) DEFAULT NULL,
  PRIMARY KEY (`oid`)
) ENGINE=InnoDB DEFAULT CHARSET=gbk;

DROP TABLE IF EXISTS `shop_product`;

CREATE TABLE `shop_product` (
  `id` int(11) NOT NULL,
  `pname` varchar(255) DEFAULT NULL,
  `pprice` double DEFAULT NULL,
  `stock` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=gbk;

DROP TABLE IF EXISTS `shop_user`;

CREATE TABLE `shop_user` (
  `uid` int(11) NOT NULL,
  `username` varchar(255) DEFAULT NULL,
  `password` varchar(255) DEFAULT NULL,
  `telephone` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`uid`)
) ENGINE=InnoDB DEFAULT CHARSET=gbk;

The entity classes are created in the shop-common module, on which the other services depend.
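For reference, a minimal sketch of what those entity classes look like based on the table definitions above; the MyBatis-Plus and Lombok annotations, and the mapping of the pid field onto the id column, are assumptions:

// Product.java, in shop-common
@Data
@TableName("shop_product")
public class Product {
    @TableId(value = "id", type = IdType.AUTO) // assumed: pid maps onto the table's id column
    private Integer pid;
    private String pname;
    private Double pprice;
    private Integer stock;
}

// Order.java, in its own file in shop-common
@Data
@TableName("shop_order")
public class Order {
    @TableId(type = IdType.AUTO)
    private Integer oid;
    private Integer uid;
    private String username;
    private Integer pid;
    private String pname;
    private Double pprice;
    private Integer number;
}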

Service invocation –RestTemplate

The following simulates a business where a user places an order, first inserting test data:

insert into shop_product values(null, 'millet', 1000, 5000);
insert into shop_product values(null, 'huawei', 2000, 5000);
insert into shop_product values(null, 'apple', 3000, 5000);
insert into shop_product values(null, 'OPPO', 4000, 5000);

Write the Service:

@Service
public class ProductServiceImpl implements ProductService {

    @Autowired
    private ProductMapper productMapper;

    @Override
    public Product queryProductByPid(Integer pid) {
        return productMapper.selectById(pid);
    }
}

Write a Controller:

@RestController
@Slf4j
public class ProductController {

    @Autowired
    private ProductService productService;

    @RequestMapping("/product/{pid}")
    public Product product(@PathVariable("pid") Integer pid) {
        log.info("Call commodity service, query item {}", pid);
        Product product = productService.queryProductByPid(pid);
        log.info("Query successful, product information :{}", JSON.toJSONString(product));
        return product;
    }
}

Start the microservice and access http://localhost:8081/product/1 to verify it. Now that the product service is written, we'll write the order service, starting again with the Service layer:

@Service
public class OrderServiceImpl implements OrderService {

    @Autowired
    private OrderMapper orderMapper;

    @Override
    public void createOrder(Order order) {
        orderMapper.insert(order);
    }
}

Since the order service and the product service are two separate modules that cannot call each other directly, we use a RestTemplate to make the call, registering it in the container:

@Configuration
public class MyConfig {

    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

Then write Controller:

@RestController
@Slf4j
public class OrderController {

    @Autowired
    private RestTemplate restTemplate;
    @Autowired
    private OrderService orderService;

    @RequestMapping("/order/product/{pid}")
    public Order order(@PathVariable("pid") Integer pid) {
        log.info("Call order service, item number :{}", pid);
        Product product = restTemplate.getForObject("http://localhost:8081/product/" + pid, Product.class);
        // Create an order
        Order order = new Order();
        // Simulate data
        order.setUid(1);
        order.setUsername("Test user");
        order.setPid(pid);
        order.setPname(product.getPname());
        order.setPprice(product.getPprice());
        order.setNumber(1);
        orderService.createOrder(order);
        log.info("Order created successfully, order info :{}", JSON.toJSONString(order));
        return order;
    }
}

Start the service and access http://localhost:8091/order/product/1. The function works, but the problem is obvious: with a plain RestTemplate call, the address of the target service is hard-coded in the program. Even if it is extracted into a shared constant, any change to the service's address or name still requires a code change, and microservices usually need to run as clusters. For these scenarios we need service governance.

Service Governance –Nacos

Service governance is the core, foundational module of a microservice architecture; it implements automatic registration and discovery of each microservice. Typically all services are managed through a service registry, which provides service registration, service discovery, and service eviction. Here we use Nacos as the service registry.

Nacos is an open-source product from Alibaba that ships as a ready-to-run distribution. Let's install it; the download address is github.com/alibaba/nac…

After downloading, unzip it to get a folder. Run startup.cmd in the bin directory to start Nacos; once it reports a successful start, visit http://localhost:8848/nacos/ to reach the Nacos management console. The default username and password are both nacos; click login to enter the system. The operations in this console will be described later.

After starting Nacos, we need to register the service with Nacos, first introducing dependencies:

<dependency>
  <groupId>com.alibaba.cloud</groupId>
  <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
Copy the code

Add configuration:

spring:
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848

Finally, declare the @EnableDiscoveryClient annotation on the startup class, start the service, and check the Nacos console. Click the service list on the left; when the service-product service appears there, registration succeeded. Register the order service with Nacos in the same way. After the services are registered, our calling code needs to be modified:

@Autowired
private DiscoveryClient discoveryClient;

@RequestMapping("/order/product/{pid}")
public Order order(@PathVariable("pid") Integer pid) {
    log.info("Call order service, item number :{}", pid);
    // Get the service set
    List<ServiceInstance> instances = discoveryClient.getInstances("service-product");
    // Fetch the first service instance
    ServiceInstance serviceInstance = instances.get(0);
    String host = serviceInstance.getHost();    // Obtain the service IP address
    int port = serviceInstance.getPort();       // Get the service port
    Product product = restTemplate.getForObject("http://" + host + ":" + port + "/product/" + pid, Product.class);
    // Create an order
    Order order = new Order();
    // Simulate data
    order.setUid(1);
    order.setUsername("Test user");
    order.setPid(pid);
    order.setPname(product.getPname());
    order.setPprice(product.getPprice());
    order.setNumber(1);
    orderService.createOrder(order);
    log.info("Order created successfully, order info :{}", JSON.toJSONString(order));
    return order;
}

We inject a DiscoveryClient, obtain a service instance from it, and then make the call using that instance's IP address and port.

Load Balancing -Ribbon

To ensure high availability, we often need to build a service cluster, but how are user requests distributed among the instances in the cluster? We can quickly create a second product service instance directly in IDEA to simulate a cluster environment and start the services. First, let's implement load balancing ourselves:

// Randomly fetch the service instance
int index = new Random().nextInt(instances.size());
ServiceInstance serviceInstance = instances.get(index);

The implementation is very simple: take a random number bounded by the size of the instance list and use it as the index to pick a service instance. But such hand-rolled load strategies are very limited, so we use an open-source component, the Ribbon, to handle load balancing. First add the @LoadBalanced annotation where the RestTemplate is registered:

@Bean
@LoadBalanced
public RestTemplate restTemplate() {
    return new RestTemplate();
}
Copy the code

Then the invocation changes again:

Product product = restTemplate.getForObject("http://service-product/product/" + pid, Product.class);

The Ribbon then discovers and invokes the service automatically. By default it hands requests to each instance in turn, so every instance processes the same number of requests.

In addition to polling policies, the Ribbon provides a wealth of load policies:

  • BestAvailableRule: inspects the servers one by one, skipping any marked as circuit-tripped, and selects the one with the smallest number of active (concurrent) requests
  • AvailabilityFilteringRule: filters out back-end servers that have repeatedly failed to connect and are marked as circuit-tripped, as well as servers with high concurrency; it uses an AvailabilityPredicate that encapsulates the filtering logic and checks the recorded status of each server
  • ZoneAvoidanceRule: uses a ZoneAvoidancePredicate and an AvailabilityPredicate together to decide whether a server is selected; the former judges whether a zone's performance is acceptable and excludes unavailable zones (all of their servers), while the AvailabilityPredicate filters out servers with too many connections
  • RandomRule: selects a server at random
  • RoundRobinRule: round-robin selection; advances an index and picks the server at that index
  • RetryRule: adds a retry mechanism to a chosen load-balancing policy; if server selection fails within the configured period, it keeps trying to pick an available server via the sub-rule
  • ResponseTimeWeightedRule: same function as WeightedResponseTimeRule, which is simply its name after renaming
  • WeightedResponseTimeRule: assigns each server a weight based on response time; the longer the response time, the smaller the weight and the lower the chance of being selected

Create a new package above the startup class's package (so it is not picked up by component scanning) and add a configuration class:

@Configuration
public class MyRule {

    @Bean
    public IRule iRule() {
        return new RandomRule();
    }
}

Then declare @RibbonClient on the main startup class:

@EnableDiscoveryClient
@RibbonClient(name = "service-product",configuration = MyRule.class)
@SpringBootApplication
@MapperScan("com.wwj.mapper")
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }
}

Here name is the name of the service being load-balanced, and configuration points to the rule configuration class.
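If you prefer configuration over code, the same rule can be set per service with the Ribbon's standard configuration property; a minimal sketch in the order service's application.yml (the service name matches the example above):

service-product:
  ribbon:
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RandomRule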

Service invocation –OpenFeign

Making service calls with the Ribbon still has drawbacks. Although registering services with Nacos means we no longer have to care about IP addresses and port numbers, the address, path, and parameters still have to be concatenated by hand, which doesn't suit the style of our project. For this, Spring Cloud provides another component: OpenFeign. With OpenFeign, calling a remote service is as easy as calling a local one.

Add dependencies first:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

Then define an interface:

@FeignClient("service-product")
public interface ProductService {

    @RequestMapping("/product/{pid}")
    Product product(@PathVariable("pid") Integer pid);
}
Copy the code

The @FeignClient annotation declares a remote-service interface; its value is the name of the service to call remotely. The methods defined in the interface generally mirror the method declarations exposed by the remote service:

@RequestMapping("/product/{pid}")
public Product product(@PathVariable("pid") Integer pid) {
    log.info("Call commodity service, query item {}", pid);
    Product product = productService.queryProductByPid(pid);
    log.info("Query successful, product information :{}", JSON.toJSONString(product));
    return product;
}

Once the interface is declared, you can inject the interface directly and call the method:

@Autowired
private ProductService productService;

Product product = productService.product(pid);

Finally, declare the @EnableFeignClients annotation on the startup class.
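Putting it together, the order service's startup class now looks like this (package and class names follow the earlier examples):

@EnableFeignClients
@EnableDiscoveryClient
@SpringBootApplication
@MapperScan("com.wwj.mapper")
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }
}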

Service fault Tolerance –Sentinel

In a microservice architecture we split the business into individual services that call one another. But because of the network or the services themselves, no service can guarantee 100% availability. If a single service fails, its callers experience delays, and if requests keep flooding in at that moment, tasks pile up and can eventually paralyze the whole chain of services.

There are a number of fault-tolerant solutions to these problems. Here are some common ones:

  • Isolation: the system is divided into several service modules according to certain principles, with the modules independent of one another and no strong dependencies between them. When a fault occurs, the problem and its impact are contained within one module and do not spread to or affect the rest of the system. Common isolation methods are thread-pool isolation and semaphore isolation
  • Timeout: when an upstream service calls a downstream service, it sets a maximum response time; if the downstream has not responded within that time, the request is cut off and the thread is released
  • Traffic limiting: limiting the system's input and output traffic to protect it. To keep the system running stably, a traffic threshold is set, and once it is reached, further requests are restricted
  • Fusing (circuit breaking): when a downstream service responds slowly or fails under excessive pressure, the upstream service temporarily cuts off its calls to the downstream to protect overall availability, sacrificing a part to preserve the whole. A circuit breaker generally has three states:
    • Closed: the state when the service is healthy; calls pass through without restriction
    • Open: subsequent calls to the service interface do not go over the network but directly execute the local fallback method
    • Half-open: the breaker tentatively resumes the invocation, letting limited traffic through and monitoring the success rate; if the rate meets expectations, the service is considered recovered and the breaker closes, otherwise it opens again
  • Downgrading: providing a fallback solution for the service, used when it cannot be invoked normally

Here we use Alibaba's open-source Sentinel component to implement service fault tolerance. Sentinel has carried Alibaba's core traffic scenarios over the past ten years, such as flash sales, message peak shaving and valley filling, cluster flow control, and real-time circuit breaking of unavailable downstream applications.

First, download Sentinel from github.com/alibaba/Sen… Sentinel provides a standalone dashboard for configuring rules and monitoring services. Download the dashboard JAR; it is built with Spring Boot, so we can start it directly:

java -jar sentinel-dashboard-1.8.1.jar

The dashboard's default port is 8080; open http://localhost:8080/ and log in with the default username and password, both sentinel. Next we connect our service to Sentinel, first introducing the dependency in the order service:

<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId>
</dependency>

Configure it:

spring:
  cloud:
    sentinel:
      transport:
        port: 9999  # port for interacting with Sentinel
        dashboard: 127.0.0.1:8080 # address of the Sentinel dashboard

Here port is the port used for interacting with Sentinel: Sentinel may need to push data to the service, so it must be given a port, and any unoccupied one will do. dashboard specifies the address of the Sentinel dashboard. Now write two controller methods for testing:

@RequestMapping("/order/message")
public String message() {
    return "message";
}

@RequestMapping("/order/message2")
public String message2() {
    return "message2";
}

Sentinel loads resources lazily, so first access these two controller methods, then look at the dashboard's real-time monitoring. As a quick primer on using Sentinel: click the cluster point link in the left menu, then add a flow-control rule to the /order/message resource, for example with a QPS threshold of 2. If the resource is then accessed more than twice per second, the extra requests are blocked immediately. Next, we'll look at each kind of rule Sentinel offers.

Flow control rules

Traffic control monitors application traffic indicators, such as QPS or concurrent threads, and controls the traffic when it reaches a specified threshold to avoid being overwhelmed by instantaneous traffic peaks and ensure high availability of applications.

To set a rule, click the cluster point link in the left menu, then click the flow control button on the resource you want to restrict; a dialog box pops up. The resource name uniquely identifies the resource and defaults to its access path, though it can be set to something else. The source field restricts which calling microservice the rule applies to; its default value, default, means the rule applies regardless of source. The threshold type chooses what is limited: QPS limits by requests per second, threads by the maximum number of concurrent threads. The single-machine threshold is the maximum value for the chosen threshold type.

We have already seen the effect of a QPS rule, so what does thread-count control look like? Set the thread threshold to 2, meaning at most two threads may access the resource concurrently. This can be tested with JMeter: create a thread group with two threads, configure the request, and run it; further access from the browser is then blocked immediately.

Flow-control rules also have advanced options; taking QPS as the example, the advanced options set the flow-control mode and the flow-control effect. What we saw so far is the fast-fail effect of direct mode, so let's test association mode next. Once configured, when the associated resource exceeds its threshold (here, more than 2 requests per second), the current resource is blocked. This is useful between an order service and a payment service: payment has very high stability requirements, so we can set a rule that throttles the order service whenever the payment service's requests per second exceed a certain value, preventing users from placing more orders and preserving the payment service's availability.

The third mode is link mode, which is finer-grained than source-based control: whereas the source field configures which calling microservice is throttled, link mode configures which entry interface is throttled. First add a Service method:

@Service
public class OrderMessageService {

    /**
     * Defines a resource.
     * @return
     */
    @SentinelResource("message")
    public String message() {
        return "message";
    }
}

Add the @SentinelResource annotation to the method to mark it as a resource with the annotation value of the resource name, and then we need to call it in the control method:

@RestController
public class OrderController2 {

    @Autowired
    private OrderMessageService orderMessageService;

    @RequestMapping("/order/message")
    public String message() {
        orderMessageService.message();
        return "message";
    }

    @RequestMapping("/order/message2")
    public String message2() {
        orderMessageService.message();
        return "message2";
    }
}

Now both controller methods call the same Service method, so we can set a link rule on that resource, specifying that the /order/message entry link must not exceed a QPS of 2, beyond which it is blocked. As before, click the cluster point link in the left menu and add a flow-control rule to the message resource, choosing link mode with /order/message as the entry. Finally, don't forget one item in the configuration file, without which link-mode flow control will not take effect:

spring:
  cloud:
    sentinel:
      web-context-unify: false

With that, requests arriving via /order/message beyond the threshold are blocked while /order/message2 is unaffected. Having covered the flow-control modes in the advanced options, let's look at the flow-control effects. Fast fail means that when the request or thread count exceeds the threshold, an exception is thrown immediately. Warm Up adds a ramp-up phase from an initial threshold to the maximum: the effective threshold starts at 1/3 of the configured maximum QPS and gradually rises to the maximum. Queuing lets requests pass at an even rate: the single-machine threshold is interpreted as the number of requests allowed through per second, and a queueing timeout must be set; a request still unprocessed when the timeout expires is discarded.
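Rules created in the dashboard can also be registered programmatically through Sentinel's core API. A minimal sketch of a warm-up QPS rule for the resource above; the threshold values are illustrative:

import java.util.Collections;

import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;

public class FlowRuleDemo {

    public static void initFlowRule() {
        FlowRule rule = new FlowRule();
        rule.setResource("/order/message");                             // resource to protect
        rule.setGrade(RuleConstant.FLOW_GRADE_QPS);                     // threshold type: QPS
        rule.setCount(10);                                              // maximum QPS
        rule.setControlBehavior(RuleConstant.CONTROL_BEHAVIOR_WARM_UP); // warm-up effect
        rule.setWarmUpPeriodSec(10);                                    // ramp up over 10 seconds
        FlowRuleManager.loadRules(Collections.singletonList(rule));
    }
}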

Degradation rules

A degradation rule downgrades a service when certain conditions are met. As before, click the cluster point link in the left menu, then click the degradation button on the corresponding resource and look at what can be set. Sentinel provides three circuit-breaking strategies. The first is the slow-call ratio: you set the allowed slow-call RT (the maximum response time), and any request whose response time exceeds it counts as a slow call. If, within the statistics window, the number of requests exceeds the minimum request count and the ratio of slow calls exceeds the threshold, subsequent requests are broken for the configured fuse duration. When that duration elapses, the breaker enters the probing (half-open) state: if the next request's response time is below the configured slow-call RT, the break ends; if it is above, the breaker opens again.

For example: with a 1-second statistics window, if the number of requests in that second is greater than 1 and the slow-call ratio exceeds 0.5 (here any request taking more than 5 milliseconds counts as a slow call), then once both conditions hold, requests are broken for 5 seconds. After 5 seconds, the breaker enters the probing state; if the next request responds in under 5 milliseconds, the break ends, otherwise the breaker opens again. Note that by default Sentinel caps recorded RT at 4900 ms; anything above that is counted as 4900 ms. If you really need a larger RT, pass a parameter when starting Sentinel:

-Dcsp.sentinel.statistic.max.rt=xxx

The second strategy is the exception ratio. If the number of requests within the statistics window exceeds the minimum request count and the ratio of exceptions exceeds the threshold, subsequent requests are broken for the configured fuse duration. After that duration, the breaker enters the half-open probing state, ending the break if the next request completes successfully (without error) and opening again otherwise. The threshold range for the exception ratio is [0.0, 1.0], representing 0-100%.

For example: if within 1 second the number of requests is greater than 1 and the exception ratio exceeds 25% (that is, more than one out of four requests throws an exception), requests are broken for 5 seconds. After 5 seconds, the breaker enters the probing state; if the next request completes successfully, the break ends, otherwise the breaker opens again.

The last strategy is the exception count. When the number of exceptions within the statistics window exceeds the threshold, the circuit breaker opens automatically. After the fuse duration, the breaker enters the half-open probing state, ending the break if the next request completes successfully (without error) and opening again otherwise.

For example: if within 1 second the number of requests is greater than 1 and the number of exceptions is also greater than 1, requests are broken for 5 seconds. After 5 seconds, the breaker enters the probing state; if the next request completes successfully, the break ends, otherwise the breaker opens again.
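Degradation rules can likewise be registered in code. A minimal sketch for the exception-count strategy just described, with values mirroring the example:

import java.util.Collections;

import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.degrade.DegradeRule;
import com.alibaba.csp.sentinel.slots.block.degrade.DegradeRuleManager;

public class DegradeRuleDemo {

    public static void initDegradeRule() {
        DegradeRule rule = new DegradeRule("message");
        rule.setGrade(RuleConstant.DEGRADE_GRADE_EXCEPTION_COUNT); // strategy: exception count
        rule.setCount(1);                                          // break when exceptions exceed 1
        rule.setTimeWindow(5);                                     // fuse duration in seconds
        DegradeRuleManager.loadRules(Collections.singletonList(rule));
    }
}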

Hotspot rules

A hotspot rule is a more fine-grained flow control rule that allows rules to be specific to parameters.

Write code:

@RequestMapping("/order/message3")
@SentinelResource("message3")
public String message3(String name, Integer age) {
    return "message3:" + name + "," + age;
}

This controller method accepts two parameters and is declared as a resource with the @SentinelResource annotation; now set the hotspot rule in the Sentinel console. Be careful which resource you set it on: since the annotation value is message3, the rule belongs on the message3 resource. The parameter index refers to the method's parameters, 0 being the first and 1 the second; here the hotspot rule is set on the parameter at index 0. Parameter exceptions can be configured at the bottom of the dialog: for example, requests carrying the age parameter may not exceed 3 per second and are otherwise blocked, but when the age value is exactly 20, the limit is raised to 1000 requests per second.
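The same kind of hotspot rule can be written in code with Sentinel's parameter flow-control API. A minimal sketch, assuming the rule targets the age parameter (index 1 in the method signature) with the thresholds from the example:

import java.util.Collections;

import com.alibaba.csp.sentinel.slots.block.flow.param.ParamFlowItem;
import com.alibaba.csp.sentinel.slots.block.flow.param.ParamFlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.param.ParamFlowRuleManager;

public class HotspotRuleDemo {

    public static void initParamFlowRule() {
        // Exception item: when the parameter value equals 20, allow up to 1000 QPS
        ParamFlowItem exceptionItem = new ParamFlowItem()
                .setObject(String.valueOf(20))
                .setClassType(Integer.class.getName())
                .setCount(1000);

        ParamFlowRule rule = new ParamFlowRule("message3")
                .setParamIdx(1)  // index of age in message3(String name, Integer age)
                .setCount(3);    // default threshold: 3 requests per second per parameter value
        rule.setParamFlowItemList(Collections.singletonList(exceptionItem));

        ParamFlowRuleManager.loadRules(Collections.singletonList(rule));
    }
}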

Authorization rules

The source access control function of Sentinel can be used to determine whether a request is allowed to pass based on the source of the request.

Authorization rules need an identifier against which to match the request's origin, so we write some code:

@Component
public class RequestOriginParserDefinition implements RequestOriginParser {

    /**
     * Parses the origin of the request.
     * @param request
     * @return
     */
    @Override
    public String parseOrigin(HttpServletRequest request) {
        String serviceName = request.getParameter("serviceName");
        if (StringUtils.isEmpty(serviceName)) {
            throw new RuntimeException("serviceName must not be empty");
        }
        return serviceName;
    }
}

This method extracts a serviceName parameter from the request and returns it to Sentinel for matching, so next we set the authorization rule in Sentinel. An authorization rule can be a whitelist or a blacklist: with a whitelist, every application except the configured ones is blocked; a blacklist is the opposite. Here we set the flow-control application to A and choose whitelist, so requests carrying serviceName=A pass while all other requests are blocked.
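For reference, the equivalent rule expressed through Sentinel's authority rule API; a minimal sketch whose resource and app names follow the example:

import java.util.Collections;

import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.authority.AuthorityRule;
import com.alibaba.csp.sentinel.slots.block.authority.AuthorityRuleManager;

public class AuthorityRuleDemo {

    public static void initAuthorityRule() {
        AuthorityRule rule = new AuthorityRule();
        rule.setResource("/order/message");             // resource the rule protects
        rule.setStrategy(RuleConstant.AUTHORITY_WHITE); // whitelist mode
        rule.setLimitApp("A");                          // origins allowed through
        AuthorityRuleManager.loadRules(Collections.singletonList(rule));
    }
}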

Custom exception return page

In the previous rules we saw that whenever a resource is blocked, the same error page is returned, which makes it hard to tell what went wrong and is unfriendly to users. So we customize the exception response, as follows:

@Component
public class ExceptionHandlerPage implements BlockExceptionHandler {

    @Override
    public void handle(HttpServletRequest request, HttpServletResponse response, BlockException e) throws Exception {
        response.setContentType("application/json; charset=utf-8");
        ResponseData responseData = null;
        if (e instanceof FlowException) {
            // Flow-control exception
            responseData = new ResponseData(-1, "Flow-control exception...");
        } else if (e instanceof DegradeException) {
            // Degradation exception
            responseData = new ResponseData(-2, "Degradation exception...");
        } else if (e instanceof ParamFlowException) {
            // Hotspot parameter flow-control exception
            responseData = new ResponseData(-3, "Parameter flow-control exception");
        } else if (e instanceof AuthorityException) {
            responseData = new ResponseData(-4, "Authorization exception");
        } else if (e instanceof SystemBlockException) {
            responseData = new ResponseData(-5, "System load exception");
        }
        response.getWriter().write(JSON.toJSONString(responseData));
    }
}

@Data
@NoArgsConstructor
@AllArgsConstructor
class ResponseData {
    private int code;
    private String message;
}

You can customize handling for each exception.

@SentinelResource

We have already used the @SentinelResource annotation to declare a resource, but it can do more than that: it can also define the handling logic for exceptions that occur inside the resource:

@Service
@Slf4j
public class OrderMessageService {

    @SentinelResource(value = "message", blockHandler = "blockHandler", fallback = "fallback")
    public String message(String name) {
        return "message";
    }

    public String blockHandler(String name, BlockException e) {
        log.error("BlockException... {}", e);
        return "BlockException";
    }

    public String fallback(String name, Throwable t) {
        log.error("Throwable... {}", t);
        return "Throwable"; }}Copy the code

The @SentinelResource annotation accepts two attributes for exception handling, blockHandler and fallback. blockHandler catches BlockException, the parent class of the five rule exceptions (flow control, degradation, hotspot, authorization, and system protection), which means it catches every exception Sentinel itself throws. The attribute names a handler method, whose declaration is constrained: the return type and parameters must match the original resource method, though an extra BlockException parameter may be declared to receive the exception details.

fallback works similarly to blockHandler but has a wider scope: any exception raised inside the resource is caught and handled by the fallback method. Now if we add a flow-control rule to the /order/message resource, the flow-control exception enters our custom handler. If both attributes are defined, blockHandler takes precedence for exceptions thrown by Sentinel. If we'd rather not mix exception handling with business code, we can use another attribute, blockHandlerClass, to move it outside:

@SentinelResource( value = "message", blockHandlerClass = OrderMessageBlockHandler.class, blockHandler = "blockHandler", fallback = "fallback")
public String message(String name) {
    return "message";
}

Define the OrderMessageBlockHandler class externally:

@Slf4j
public class OrderMessageBlockHandler {

    public static String blockHandler(String name, BlockException e) {
        log.error("BlockException... {}", e);
        return "BlockException"; }}Copy the code

Note that a handler method in an external class must be declared static; the same applies to fallback handlers extracted to an external class.
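blockHandlerClass has a counterpart for fallback, the fallbackClass attribute; a minimal sketch (the OrderMessageFallback class name is illustrative):

@SentinelResource(value = "message",
        fallbackClass = OrderMessageFallback.class,
        fallback = "fallback")
public String message(String name) {
    return "message";
}

@Slf4j
public class OrderMessageFallback {

    // Must be static when extracted to an external class
    public static String fallback(String name, Throwable t) {
        log.error("Throwable... {}", t);
        return "Throwable";
    }
}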

Sentinel persistence

That covers Sentinel's rules, but one problem remains: every rule we set disappears when the application restarts and has to be reconfigured. This is because Sentinel stores rules in memory by default, so to keep them we need to persist them.

Write a class:

package com.wwj.config;

import com.alibaba.csp.sentinel.command.handler.ModifyParamFlowRulesCommandHandler;
import com.alibaba.csp.sentinel.datasource.*;
import com.alibaba.csp.sentinel.init.InitFunc;
import com.alibaba.csp.sentinel.slots.block.authority.AuthorityRule;
import com.alibaba.csp.sentinel.slots.block.authority.AuthorityRuleManager;
import com.alibaba.csp.sentinel.slots.block.degrade.DegradeRule;
import com.alibaba.csp.sentinel.slots.block.degrade.DegradeRuleManager;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;
import com.alibaba.csp.sentinel.slots.block.flow.param.ParamFlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.param.ParamFlowRuleManager;
import com.alibaba.csp.sentinel.slots.system.SystemRule;
import com.alibaba.csp.sentinel.slots.system.SystemRuleManager;
import com.alibaba.csp.sentinel.transport.util.WritableDataSourceRegistry;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.TypeReference;
import org.springframework.beans.factory.annotation.Value;
import java.io.File;
import java.io.IOException;
import java.util.List;

public class FilePersistence implements InitFunc {

    @Value("spring.application.name")
    private String applicationName;

    @Override
    public void init() throws Exception {
        String ruleDir = System.getProperty("user.home") + "/sentinel-rules/" + applicationName;
        System.out.println(ruleDir);
        String flowRulePath = ruleDir + "/flow-rule.json";
        String degradeRulePath = ruleDir + "/degrade-rule.json";
        String systemRulePath = ruleDir + "/system-rule.json";
        String authorityRulePath = ruleDir + "/authority-rule.json";
        String paramFlowRulePath = ruleDir + "/param-flow-rule.json";

        this.mkdirIfNotExists(ruleDir);
        this.createFileIfNotExists(flowRulePath);
        this.createFileIfNotExists(degradeRulePath);
        this.createFileIfNotExists(systemRulePath);
        this.createFileIfNotExists(authorityRulePath);
        this.createFileIfNotExists(paramFlowRulePath);

        // Flow control rule
        ReadableDataSource<String, List<FlowRule>> flowRuleRDS = new FileRefreshableDataSource<List<FlowRule>>(
                flowRulePath,
                flowRuleListParser
        );
        FlowRuleManager.register2Property(flowRuleRDS.getProperty());
        WritableDataSource<List<FlowRule>> flowRuleWDS = new FileWritableDataSource<List<FlowRule>>(
                flowRulePath,
                this::encodeJson
        );
        WritableDataSourceRegistry.registerFlowDataSource(flowRuleWDS);

        // Demote rule
        ReadableDataSource<String, List<DegradeRule>> degradeRuleRDS = new FileRefreshableDataSource<List<DegradeRule>>(
                degradeRulePath,
                degradeRuleListParser
        );
        DegradeRuleManager.register2Property(degradeRuleRDS.getProperty());
        WritableDataSource<List<DegradeRule>> degradeRuleWDS = new FileWritableDataSource<List<DegradeRule>>(
                degradeRulePath,
                this::encodeJson
        );
        WritableDataSourceRegistry.registerDegradeDataSource(degradeRuleWDS);

        // System rules
        ReadableDataSource<String, List<SystemRule>> systemRuleRDS = new FileRefreshableDataSource<>(
                systemRulePath,
                systemRuleListParser
        );
        SystemRuleManager.register2Property(systemRuleRDS.getProperty());
        WritableDataSource<List<SystemRule>> systemRuleWDS = new FileWritableDataSource<List<SystemRule>>(
                systemRulePath,
                this::encodeJson
        );
        WritableDataSourceRegistry.registerSystemDataSource(systemRuleWDS);

        // Authorization rules
        ReadableDataSource<String, List<AuthorityRule>> authorityRuleRDS = new FileRefreshableDataSource<List<AuthorityRule>>(
                authorityRulePath,
                authorityRuleListParser
        );
        AuthorityRuleManager.register2Property(authorityRuleRDS.getProperty());
        WritableDataSource<List<AuthorityRule>> authorityRuleWDS = new FileWritableDataSource<List<AuthorityRule>>(
                authorityRulePath,
                this::encodeJson
        );
        WritableDataSourceRegistry.registerAuthorityDataSource(authorityRuleWDS);

        // Hotspot rules
        ReadableDataSource<String, List<ParamFlowRule>> paramFlowRuleRDS = new FileRefreshableDataSource<List<ParamFlowRule>>(
                paramFlowRulePath,
                paramFlowRuleListParser
        );
        ParamFlowRuleManager.register2Property(paramFlowRuleRDS.getProperty());
        WritableDataSource<List<ParamFlowRule>> paramFlowRuleWDS = new FileWritableDataSource<List<ParamFlowRule>>(
                paramFlowRulePath,
                this::encodeJson
        );
        ModifyParamFlowRulesCommandHandler.setWritableDataSource(paramFlowRuleWDS);
    }

    private Converter<String, List<FlowRule>> flowRuleListParser = source -> JSON.parseObject(
            source,
            new TypeReference<List<FlowRule>>() {

            }
    );

    private Converter<String, List<DegradeRule>> degradeRuleListParser = source -> JSON.parseObject(
            source,
            new TypeReference<List<DegradeRule>>() {

            }
    );

    private Converter<String, List<SystemRule>> systemRuleListParser = source -> JSON.parseObject(
            source,
            new TypeReference<List<SystemRule>>() {

            }
    );

    private Converter<String, List<AuthorityRule>> authorityRuleListParser = source -> JSON.parseObject(
            source,
            new TypeReference<List<AuthorityRule>>() {

            }
    );

    private Converter<String, List<ParamFlowRule>> paramFlowRuleListParser = source -> JSON.parseObject(
            source,
            new TypeReference<List<ParamFlowRule>>() {

            }
    );

    private void mkdirIfNotExists(String filePath) throws IOException {
        File file = new File(filePath);
        if (!file.exists()) {
            file.mkdirs();
        }
    }

    private void createFileIfNotExists(String filePath) throws IOException {
        File file = new File(filePath);
        if (!file.exists()) {
            file.createNewFile();
        }
    }

    private <T> String encodeJson(T t) {
        return JSON.toJSONString(t);
    }
}

Then create a folder named META-INF/services under the resources directory, and inside it create a file whose name must be com.alibaba.csp.sentinel.init.InitFunc; write the fully qualified name of the persistence class into that file:

com.wwj.config.FilePersistence

Finally, restart the service; any flow-control rule set from now on is persisted to the local disk.

Sentinel OpenFeign integration

Next we integrate Sentinel with OpenFeign to achieve fault tolerance on remote calls. First, enable OpenFeign's Sentinel support:

feign:
  sentinel:
    enabled: true

Remember the OpenFeign interface we wrote about earlier? To implement remote calls, we need to write an interface and declare methods:

@FeignClient(value = "service-product", fallback = ProductServiceFallback.class)
public interface ProductService {

    @RequestMapping("/product/{pid}")
    Product product(@PathVariable("pid") Integer pid);
}
Copy the code

In general, the method declaration mirrors that of the remote method being called. To make this method fault tolerant, we first add the fallback attribute to the @FeignClient annotation; it specifies what happens when a call fails, so create the ProductServiceFallback class:

/**
 * Fault-tolerance class: it must implement the Feign interface
 * and provide implementations for all of its methods.
 */
@Service
public class ProductServiceFallback implements ProductService {

    @Override
    public Product product(Integer pid) {
        Product product = new Product();
        product.setPid(-1);
        product.setPname("Remote call failed, fault tolerant method executed");
        return product;
    }
}

This class implements the Feign-declared interface and all of its methods. When a remote call through the Feign interface fails, the method with the same name in this class is invoked instead, so the caller can detect that the call failed and handle it accordingly:

@RequestMapping("/order/product/{pid}")
public Order order(@PathVariable("pid") Integer pid) {
    log.info("Call order service, item number :{}", pid);
    Product product = productService.product(pid);
    if (product.getPid() == -1) {
        // The remote invocation failed and fault tolerance was performed
        Order order = new Order();
        order.setOid(-1);
        order.setPname("Order failed");
        return order;
    }
    // Create an order
    Order order = new Order();
    // Simulate data
    order.setUid(1);
    order.setUsername("Test user");
    order.setPid(pid);
    order.setPname(product.getPname());
    order.setPprice(product.getPprice());
    order.setNumber(1);
    orderService.createOrder(order);
    log.info("Order created successfully, order info :{}", JSON.toJSONString(order));
    return order;
}

Test it out: here the first order fails because of a network problem, which also demonstrates the effect of a failed remote service call. This approach has a drawback, though: even when the remote call fails, the console logs no error message, which makes troubleshooting difficult. We can use another fault-tolerance mechanism instead:

@Service
@Slf4j
public class ProductServiceFallbackFactory implements FallbackFactory<ProductService> {

    @Override
    public ProductService create(Throwable throwable) {
        return new ProductService() {
            @Override
            public Product product(Integer pid) {
                log.error("{}", throwable);
                Product product = new Product();
                product.setPid(-1);
                product.setPname("Remote call failed, fault tolerant method executed");
                return product;
            }
        };
    }
}

The factory's create method receives the exception, which we can log. Then point the Feign interface at the factory:

@FeignClient(value = "service-product", fallbackFactory = ProductServiceFallbackFactory.class)
public interface ProductService {

    @RequestMapping("/product/{pid}")
    Product product(@PathVariable("pid") Integer pid);
}

Now if the call fails, the console prints the error log.

Service Gateway –Gateway

When clients access microservices directly, they must know each service's IP address and port. Services are usually clustered, so for high availability a client would have to know the address of every instance. Moreover, features like authentication would have to be implemented identically in every service involved. These problems demand a solution, and of course there is one: the service gateway, Gateway. The service gateway is the unified entrance to the system; it encapsulates the application's internal structure and provides a single point of access to clients. Common cross-cutting logic unrelated to business functions can be implemented here, such as authentication, authorization, monitoring, and routing and forwarding.

Start by creating a gateway service and importing dependencies:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>

Because Gateway is implemented on top of Netty and WebFlux, we must not add the spring-boot-starter-web dependency to this service: it brings in Tomcat and would interfere with the Gateway. Now configure it:

server:
  port: 7000
spring:
  application:
    name: shop-gateway
  cloud:
    gateway:
      routes: # route collection
        - id: product_route # unique identifier of this route
          uri: http://localhost:8081 # final address the request is forwarded to
          order: 1 # priority of the route; the smaller the number, the higher the priority
          predicates: # conditions the request must satisfy to be forwarded
            - Path=/product-serv/** # forward when the request path matches this rule
          filters: # filters that process the request as it is forwarded
            - StripPrefix=1 # strip one path segment before forwarding

The focus here is the route configuration. id is the unique identifier of the route, and uri is the address the request is forwarded to. The rule we set up means that requests to http://localhost:7000/product-serv/** are forwarded to the product service, so the predicate checks whether the request path starts with /product-serv. However, a URL like http://localhost:7000/product-serv/product/1 would be forwarded as http://localhost:8081/product-serv/product/1, which has one more path segment (product-serv) than the real service address, so we strip it off in filters.

Test it out: accessing the gateway returns the product data, so it works. However, this route configuration is still problematic, because the uri value is hard-coded; ultimately we should obtain the service location from Nacos instead of fixing it in the configuration. How do we do that?

First introduce Nacos dependencies in gateway services:

<dependency>
  <groupId>com.alibaba.cloud</groupId>
  <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>

A few simple configurations are required:

server:
  port: 7000
spring:
  application:
    name: shop-gateway
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
    gateway:
      discovery:
        locator:
          enabled: true # let the Gateway discover services registered in Nacos
      routes: # route collection
        - id: product_route # unique identifier of this route
          uri: lb://service-product # resolve the named service from Nacos and load-balance across its instances
          order: 1 # priority of the route; the smaller the number, the higher the priority
          predicates: # conditions the request must satisfy to be forwarded
            - Path=/product-serv/** # forward when the request path matches this rule
          filters: # filters that process the request as it is forwarded
            - StripPrefix=1 # strip one path segment before forwarding

Built-in route assertion

Gateway provides many route assertion (predicate) factories for route matching, such as the Path factory used above, which matches the request path. The other built-in types are detailed below:

  • Assertion factory based on Datetime type

These assertions judge by time. There are three main factories: AfterRoutePredicateFactory takes one date parameter and checks whether the request time is after it; BeforeRoutePredicateFactory takes one date parameter and checks whether the request time is before it; BetweenRoutePredicateFactory takes two date parameters and checks whether the request time falls between them. The format is as follows:

- After=2021-03-21T23:59:59.789+08:00[Asia/Shanghai]

  • Assertion factories based on remote addresses

RemoteAddrRoutePredicateFactory: takes an IP address segment and checks whether the request's host address falls within it. The format is:

- RemoteAddr=192.168.147.1/24

  • Cookie-based assertion factory

CookieRoutePredicateFactory: takes two parameters, a cookie name and a regular expression, and checks whether the request carries a cookie with the given name whose value matches the regular expression. The format is:

- Cookie=chocolate,ch

  • A header-based assertion factory

HeaderRoutePredicateFactory: takes two parameters, a header name and a regular expression, and checks whether the request has a header with the given name whose value matches the regular expression. The format is:

- Header=X-Request-Id,\d+

  • Host-based assertion factory

HostRoutePredicateFactory: takes one parameter, a host-name pattern, and checks whether the request's Host matches it. The format is:

- Host=**.testhost.org

  • Assertion factories based on Method request methods

MethodRoutePredicateFactory: takes one parameter and checks whether the request's HTTP method matches it. The format is:

- Method=GET

  • Assertion factory based on Path request Path

PathRoutePredicateFactory: takes one parameter and checks whether the path part of the request URI matches the pattern. The format is:

- Path=/foo/{segment}

  • An assertion factory based on Query request parameters

QueryRoutePredicateFactory: takes two parameters, a request parameter name and a regular expression, and checks whether the request carries a parameter with the given name whose value matches the regular expression. The format is:

- Query=baz,ba

  • Assertion factory based on route weight

WeightRoutePredicateFactory: takes a group name and a weight, then distributes requests among the routes in the same group according to their weights. The format is as follows:

routes:
  - id: weight_route1
    uri: host1
    predicates:
      - Path=/product/**
      - Weight=group3, 1
  - id: weight_route2
    uri: host2
    predicates:
      - Path=/product/**
      - Weight=group3, 9

User-defined route assertion

For particular business scenarios, the assertions that Gateway provides often don't address our needs directly, so we need a custom assertion factory, for example one that only lets users in a certain age range through. First, configure it:

spring:
  cloud:
    gateway:
      routes: # route collection
        - id: product_route 
          uri: lb://service-product 
          order: 1 
          predicates: 
            - Path=/product-serv/** 
            - Age=18,50
          filters: 
            - StripPrefix=1 

Then customize the assertion factory to implement the assertion method:

@Component
public class AgeRoutePredicateFactory extends AbstractRoutePredicateFactory<AgeRoutePredicateFactory.Config> {

    public AgeRoutePredicateFactory() {
        super(AgeRoutePredicateFactory.Config.class);
    }

    /**
     * Reads the parameter values from the configuration file
     * and assigns them to the configuration class.
     * @return
     */
    @Override
    public List<String> shortcutFieldOrder() {
        // The order of the values must match their order in the configuration file
        return Arrays.asList("minAge", "maxAge");
    }
    }

    @Override
    public Predicate<ServerWebExchange> apply(AgeRoutePredicateFactory.Config config) {
        return new Predicate<ServerWebExchange>() {
            @Override
            public boolean test(ServerWebExchange serverWebExchange) {
                // Receive the age parameter
                String ageStr = serverWebExchange.getRequest().getQueryParams().getFirst("age");
                if (!StringUtils.isEmpty(ageStr)) {
                    int age = Integer.parseInt(ageStr);
                    return age > config.getMinAge() && age < config.getMaxAge();
                }
                return false;
            }
        };
    }

    /**
     * Configuration class used to receive the parameters from the configuration file.
     */
    @Data
    public static class Config {
        private int minAge;
        private int maxAge;
    }
}

Note that the custom assertion factory's class name must be the name used in the configuration file (Age) followed by RoutePredicateFactory.
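To see the predicate in action, two illustrative requests against the gateway (the port and route follow the configuration above; exact responses depend on your setup):

# age within (18, 50): the route matches and the request is forwarded
curl "http://localhost:7000/product-serv/product/1?age=30"

# age outside the range: no route matches and the gateway rejects the request
curl "http://localhost:7000/product-serv/product/1?age=10"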

Gateway traffic limiting

The gateway is the common entrance for all requests, so it is a natural place to apply traffic limiting, and there are many ways to do so. Here we use Sentinel to limit traffic at the gateway. Introduce the dependency:

<dependency>
  <groupId>com.alibaba.csp</groupId>
  <artifactId>sentinel-spring-cloud-gateway-adapter</artifactId>
</dependency>

Sentinel-based traffic limiting on the Gateway is implemented through a filter; to use it, we only need to register the corresponding SentinelGatewayFilter and SentinelGatewayBlockExceptionHandler instances:

package com.wwj.config;

@Configuration
public class GatewayConfiguration {

    private final List<ViewResolver> viewResolvers;
    private final ServerCodecConfigurer serverCodecConfigurer;

    public GatewayConfiguration(ObjectProvider<List<ViewResolver>> viewResolversProvider,
                                ServerCodecConfigurer serverCodecConfigurer) {
        this.viewResolvers = viewResolversProvider.getIfAvailable(Collections::emptyList);
        this.serverCodecConfigurer = serverCodecConfigurer;
    }

    /**
     * Initializes the traffic limiting filter
     */
    @Bean
    @Order(Ordered.HIGHEST_PRECEDENCE)
    public GlobalFilter sentinelGatewayFilter() {
        return new SentinelGatewayFilter();
    }

    /**
     * Configures the initial traffic limiting rules
     */
    @PostConstruct
    public void initGatewayRules() {
        Set<GatewayFlowRule> rules = new HashSet<>();
        rules.add(
                new GatewayFlowRule("product_route") // The resource name corresponds to the route ID
                        .setCount(1) // Traffic limiting threshold
                        .setIntervalSec(1) // Statistical time window, in seconds
        );
        GatewayRuleManager.loadRules(rules);
    }

    /**
     * Configures the exception handler for limited requests
     */
    @Bean
    @Order(Ordered.HIGHEST_PRECEDENCE)
    public SentinelGatewayBlockExceptionHandler sentinelGatewayBlockExceptionHandler() {
        return new SentinelGatewayBlockExceptionHandler(viewResolvers, serverCodecConfigurer);
    }

    /**
     * Customizes the response returned when a request is limited
     */
    @PostConstruct
    public void initBlockHandlers() {
        BlockRequestHandler blockRequestHandler = new BlockRequestHandler() {
            @Override
            public Mono<ServerResponse> handleRequest(ServerWebExchange serverWebExchange, Throwable throwable) {
                Map<String, Object> map = new HashMap<>();
                map.put("code", 0);
                map.put("message", "The interface is restricted...");
                return ServerResponse.status(HttpStatus.OK)
                        .contentType(MediaType.APPLICATION_JSON_UTF8)
                        .body(BodyInserters.fromObject(map));
            }
        };
        GatewayCallbackManager.setBlockHandler(blockRequestHandler);
    }
}

This implements route-level traffic limiting; the limited resource is the route with ID product_route.
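To watch the rule trip, fire a burst of requests at the gateway within a single second. A minimal sketch, assuming the gateway listens on port 7000 and routes /product-serv/product/1 to the product service (both are assumptions; adjust to your own setup):

import org.springframework.web.client.RestTemplate;

public class GatewayLimitTest {

    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();
        for (int i = 0; i < 5; i++) {
            // After the first request in the one-second window, the rest
            // should come back with the custom JSON from the block handler
            System.out.println(restTemplate.getForObject(
                    "http://localhost:7000/product-serv/product/1", String.class));
        }
    }
}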

Service configuration –Nacos

In a microservice architecture, a system is often supported by a large number of microservices, and their configuration files are scattered across the individual services, which makes them very inconvenient to manage. To solve this we need a configuration center, and Alibaba's open-source Nacos component can serve not only as a service registry but also as a configuration center for unified configuration management.

Introducing dependencies:

<dependency>
  <groupId>com.alibaba.cloud</groupId>
  <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>

At this point, delete the product microservice's application configuration file and create a new file named bootstrap.yml:

spring:
  application:
    name: service-product
  cloud:
    nacos:
      config:
        server-addr: 127.0.0.1:8848 # configuration center address
        file-extension: yaml # config format
  profiles:
    active: dev # Environment identification

Then open the configuration list in the Nacos console, click the plus sign to add a configuration, and fill in the relevant fields. The configuration format is YAML. Note that the Data ID is named according to the principle service name + environment identifier + configuration format, so the Data ID here is service-product-dev.yaml.
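For illustration, the content published under this Data ID can simply be whatever the product service previously kept in its local configuration file; a minimal sketch with placeholder values:

server:
  port: 8081 # placeholder
spring:
  datasource:
    url: jdbc:mysql:///springcloudalibaba?serverTimezone=UTC
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: root
    password: 123456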

Run the product service to test:

The logs show that the configuration has taken effect.

The current configuration works, but the microservice does not immediately notice when the configuration is changed in the Nacos console. For this, Nacos provides dynamic refresh: simply declare the @RefreshScope annotation on the controller:

@RestController
@RefreshScope
public class NacosConfigController {

    @Value("${spring.application.name}")
    private String name;

    @GetMapping("/nacos-config")
    public String nacosConfig(a) {
        return name;
    }
}

As configurations multiply, we can extract the duplicated parts and share them. Within a single microservice, just use service name.yaml (without the environment identifier) as the Data ID; it becomes a common configuration, and environment-specific configurations created later, such as service-product-dev.yaml, service-product-test.yaml, and service-product-prod.yaml, will all share the service-product.yaml configuration. This only applies to sharing within the same microservice. To share configuration across microservices, create a common configuration such as all-service.yaml in the Nacos console, and reference it in every microservice that needs it:

spring:
  application:
    name: service-product
  cloud:
    nacos:
      config:
        server-addr: 127.0.0.1:8848 # configuration center address
        file-extension: yaml # File format
        shared-dataids: all-service.yaml # Public configuration ID to import
        refreshable-dataids: all-service.yaml # Dynamically refresh the specified configuration
  profiles:
    active: dev # Environment identification
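What goes into all-service.yaml is simply whatever every service has in common; an illustrative sketch:

# shared by every microservice that imports this Data ID
logging:
  level:
    com.wwj: info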

Nacos further divides the configuration:

  • Namespace
  • Group
  • Configuration set

They make the relationships between microservice configurations much clearer. Namespaces are used to separate the configurations of different environments. To add a namespace, click Namespace in the left menu, then click New Namespace in the upper right corner; here three namespaces have been added, for the test, production, and development environments. For a microservice to read configuration from a namespace, it must declare that namespace:

spring:
  application:
    name: service-product
  cloud:
    nacos:
      config:
        server-addr: 127.0.0.1:8848 # configuration center address
        file-extension: yaml # File format
        namespace: 07099085-e140-4db7-b342-5450d6d934a0

The namespace value is the ID string shown after the namespace name in the console. Groups classify the configurations of different microservices into the same group, for example, putting all the configurations of one project's microservices together; simply specify the Group when creating a configuration. The last concept is the configuration set, which is essentially a single configuration file, the smallest unit into which configuration is divided.

The relationship is as follows: a namespace (environment) contains multiple groups (projects), and each group (project) contains multiple configuration sets (configuration files).
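Putting the three levels together, a bootstrap.yml that pins configuration down to one environment and one project might look like the following sketch (the namespace ID and the SHOP_GROUP group name are illustrative):

spring:
  application:
    name: service-product
  cloud:
    nacos:
      config:
        server-addr: 127.0.0.1:8848
        file-extension: yaml
        namespace: 07099085-e140-4db7-b342-5450d6d934a0 # namespace = environment
        group: SHOP_GROUP # group = project
        # the configuration set (Data ID) is derived from the service name,
        # the active profile and the file extension
  profiles:
    active: dev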

Distributed transaction

Transactions are a familiar topic, so let's look directly at transaction problems in a distributed architecture, taking a transfer as an example: reduce Zhang San's account balance and increase Li Si's account balance, managing the two operations as a single transaction. As a local transaction this is very simple, but in a distributed system Zhang San's account and Li Si's account may not be in the same database, or even on the same server. In this situation, how do we guarantee the transaction?
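For contrast, here is a minimal sketch of the local-transaction case, where a single @Transactional annotation is enough because both updates hit the same database; AccountMapper and its two methods are hypothetical:

import java.math.BigDecimal;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical mapper for the two account updates
interface AccountMapper {
    void decreaseBalance(String account, BigDecimal amount);
    void increaseBalance(String account, BigDecimal amount);
}

@Service
public class TransferService {

    @Autowired
    private AccountMapper accountMapper;

    @Transactional
    public void transfer(String from, String to, BigDecimal amount) {
        accountMapper.decreaseBalance(from, amount); // debit Zhang San
        accountMapper.increaseBalance(to, amount);   // credit Li Si
        // any exception thrown here rolls back both updates together
    }
}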

For distributed transactions, there are four solutions:

  1. Global transaction
  2. Reliable message service
  3. Best effort notice
  4. TCC transaction

Each is introduced below:

Global transaction

A global transaction involves three roles:

  1. AP: Application, the application system
  2. TM: Transaction Manager, the transaction manager
  3. RM: Resource Manager, the resource manager

It divides the entire transaction into two phases:

  • Voting phase: All participants pre-commit the local transaction execution and provide feedback to the coordinator on its success
  • Execution phase: Based on feedback from all participants, the coordinator notifies all participants and synchronously performs commit or rollback

Take the place-order flow as an example: when generating an order, the order service first reports to the transaction manager that the order can be created normally, and the inventory service likewise reports to the transaction manager before reducing inventory. Only when the transaction manager has received correct feedback from both microservices does it notify them to execute their operations simultaneously; if either microservice reports an error, it notifies both to undo the operation. This approach only reduces the probability of distributed transaction problems; it does not eliminate them, because a failure can still occur at the moment the transaction manager tells the microservices to commit.
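The voting and execution phases can be condensed into a few lines of Java. This is a minimal sketch of the protocol itself, with hypothetical interfaces, not tied to any library used in this article:

import java.util.List;

// Hypothetical participant: pre-commits in the voting phase,
// then commits or rolls back in the execution phase
interface TransactionParticipant {
    boolean prepare();
    void commit();
    void rollback();
}

class TwoPhaseTransactionManager {

    public void execute(List<TransactionParticipant> participants) {
        // Voting phase: collect feedback from every participant
        boolean allPrepared = true;
        for (TransactionParticipant p : participants) {
            if (!p.prepare()) {
                allPrepared = false;
                break;
            }
        }
        // Execution phase: commit only if everyone voted yes
        for (TransactionParticipant p : participants) {
            if (allPrepared) {
                p.commit();
            } else {
                p.rollback();
            }
        }
    }
}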

Reliable message service

Reliable messaging services are implemented on top of message middleware, in the following flow (a code sketch follows the list):

  1. A message is sent to the messaging middleware before the order service inserts the order data
  2. The message middleware will persist the message after receiving it, but will not deliver it. When the persistence succeeds, it will give feedback to the order service
  3. After receiving feedback, the order service starts inserting the order data
  4. After the order data has been successfully inserted, a Commit or Rollback request is sent to the messaging middleware, which completes processing of the order service
  5. If the message middleware receives a Commit request, it delivers the message to the inventory service; if it receives a Rollback request, it discards the message directly. If neither request arrives, the message check-back mechanism is triggered: the middleware actively calls the transaction-status interface exposed by the order service to query the current state. That interface may return three results, which the middleware handles differently:
    1. Commit: Posts the message to the inventory service
    2. Rollback: The message is discarded directly
    3. Processing: Continue to wait
  6. After the message is delivered to the inventory service, the message-oriented middleware will enter the blocked waiting state, and the inventory service will immediately perform the inventory reduction operation, and give feedback to the message-oriented middleware after the operation is completed
  7. If the messaging middleware receives correct feedback, the transaction is considered complete
  8. If the messaging middleware times out while waiting for confirmation feedback, it reposts the message until the inventory service returns the correct feedback
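The order-service side of steps 1 to 5 can be sketched as follows; MessageMiddlewareClient is a hypothetical stand-in for whatever half-message API the middleware exposes:

// Hypothetical middleware API: a half message is persisted but not delivered
interface MessageMiddlewareClient {
    String sendHalfMessage(String topic, Object payload);
    void commit(String messageId);   // deliver the message to the inventory service
    void rollback(String messageId); // discard the message
}

class ReliableOrderService {

    private final MessageMiddlewareClient mq;

    ReliableOrderService(MessageMiddlewareClient mq) {
        this.mq = mq;
    }

    public void createOrder(Object order) {
        // Steps 1-2: send the half message; the middleware persists it
        // and acknowledges before any business data is written
        String msgId = mq.sendHalfMessage("order-topic", order);
        try {
            insertOrder(order); // step 3: insert the order data
            mq.commit(msgId);   // steps 4-5: middleware delivers downstream
        } catch (Exception e) {
            mq.rollback(msgId); // steps 4-5: middleware discards the message
            throw e;
        }
    }

    private void insertOrder(Object order) {
        // the actual database insert is omitted in this sketch
    }
}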

Best effort notice

Best effort notification is a further refinement of the reliable messaging service: it introduces a local message table to record messages alongside the business operation, and adds periodic checking of failed messages to further ensure that every message is eventually consumed by the downstream service.

The process is as follows (a possible message-table schema follows the list):

  1. In the same transaction that processes a service, a record is written to the local message table. That is, the service is processed and the record is written in the same transaction. When a record is written to the local message table, it indicates that the service is successfully processed
  2. Prepare a dedicated message sender to continually send messages from the local message table to the message middleware, retrying if a failure occurs
  3. After receiving the message, the message-oriented middleware delivers the message synchronously to the corresponding downstream service and triggers the service processing of the downstream service
  4. When the downstream service processes successfully, it reports back to the message-oriented middleware, which then removes the message and completes the transaction
  5. If the message fails to be delivered, the system uses the retry mechanism to retry the message. If the message fails to be retried, the system writes the message to the error table
  6. The message middleware needs to provide an interface to query failure messages. Downstream services periodically query failure messages and consume them
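A possible shape for the local message table from step 1, in the same style as the other tables in this article; the columns are illustrative, not mandated by any framework:

CREATE TABLE `local_message` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `topic` varchar(64) NOT NULL,
  `payload` varchar(2000) NOT NULL,
  `status` tinyint NOT NULL COMMENT '0 pending, 1 sent, 2 failed',
  `retry_count` int NOT NULL DEFAULT 0,
  `gmt_create` datetime NOT NULL,
  `gmt_modified` datetime NOT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_status` (`status`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;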

TCC transaction

TCC is a compensation-style distributed transaction; it completes a distributed transaction in three steps (a sketch follows the list):

  1. Try: this step does not execute the business operation itself, but completes all consistency checks and reserves all the resources required for execution
  2. Confirm: this step performs no further business checks and uses only the resources reserved in the Try step. TCC generally assumes that Confirm cannot fail, that is, as long as Try succeeds, Confirm will succeed; if it does fail, a retry mechanism or human intervention is needed
  3. Cancel: releases the business resources reserved in the Try step. TCC likewise assumes this step cannot fail; if an error occurs, a retry mechanism or human intervention is needed
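The three steps map naturally onto an interface; a sketch for the inventory example, where the frozen-stock reservation is an illustrative mechanism:

// Hypothetical TCC action for the inventory-reduction example
public interface InventoryTccAction {

    // Try: check that enough stock exists and reserve it
    // (e.g. move it into a frozen-stock column) without selling it
    boolean tryReduce(Integer pid, Integer number);

    // Confirm: consume only the stock reserved in the Try step;
    // assumed not to fail, otherwise retry or intervene manually
    void confirmReduce(Integer pid, Integer number);

    // Cancel: release the stock reserved in the Try step;
    // likewise assumed not to fail
    void cancelReduce(Integer pid, Integer number);
}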

Seata

Seata is an open source component of Alibaba, which is used to solve the problem of distributed transactions. Its vision is to make the use of distributed transactions as simple and efficient as local transactions, and gradually solve all the problems of distributed transactions encountered by developers.

Here we still use the place-order operation to simulate the whole process. First write the controller method:

@RestController
@Slf4j
public class OrderController {

    @Autowired
    private OrderService orderService;

    @RequestMapping("/order/product/{pid}")
    public Order order(@PathVariable("pid") Integer pid) {
        return orderService.createOrder(pid);
    }
}

Provide this method in a Service:

@Service
@Slf4j
public class OrderServiceImpl implements OrderService {

    @Autowired
    private OrderMapper orderMapper;
    @Autowired
    private ProductService productService;
    @Autowired
    private RocketMQTemplate rocketMQTemplate;

    @Override
    public Order createOrder(Integer pid) {
        log.info("Call order service, item number :{}", pid);
        Product product = productService.product(pid);
        // Create an order
        Order order = new Order();
        // Simulate data
        order.setUid(1);
        order.setUsername("Test user");
        order.setPid(pid);
        order.setPname(product.getPname());
        order.setPprice(product.getPprice());
        order.setNumber(1);
        orderMapper.insert(order);
        log.info("Order created successfully, order info :{}", JSON.toJSONString(order));
        // Inventory reduction
        productService.reduceInventory(pid, order.getNumber());
        // Post an order success message to RocketMQ
        rocketMQTemplate.convertAndSend("order-topic",order);
        return order;
    }
}

Before delivering the message to the middleware, this method calls the remote product service to reduce inventory, so next write the remote interface:

@FeignClient(value = "service-product")
public interface ProductService {

    @RequestMapping("/product/{pid}")
    Product product(@PathVariable("pid") Integer pid);

    @RequestMapping("/product/reduceInventory")
    void reduceInventory(@RequestParam("pid") Integer pid, @RequestParam("number") Integer number);
}

Accordingly, the product service needs to provide this interface:

@RestController
@Slf4j
public class ProductController {

    @Autowired
    private ProductService productService;

    @RequestMapping("/product/{pid}")
    public Product product(@PathVariable("pid") Integer pid) {
        log.info("Call commodity service, query item {}", pid);
        Product product = productService.queryProductByPid(pid);
        log.info("Query successful, product information :{}", JSON.toJSONString(product));
        return product;
    }

    @RequestMapping("/product/reduceInventory")
    public void reduceInventory(@RequestParam("pid") Integer pid, @RequestParam("number") Integer number) {
        productService.reduceInventory(pid, number);
    }
}

Here we call the Service method, so we write the Service:

@Service
public class ProductServiceImpl implements ProductService {

    @Autowired
    private ProductMapper productMapper;

    @Override
    public Product queryProductByPid(Integer pid) {
        return productMapper.selectById(pid);
    }

    @Override
    public void reduceInventory(Integer pid, Integer number) {
        Product product = productMapper.selectById(pid);
        product.setStock(product.getStock() - number);
        // Update the existing row rather than inserting a new one
        productMapper.updateById(product);
    }
}

Now that the entire process is written, raise an exception to verify the transaction:

@Transactional
@Override
public void reduceInventory(Integer pid, Integer number) {
    Product product = productMapper.selectById(pid);
    product.setStock(product.getStock() - number);
    // Simulate an exception
    int i = 1 / 0;
    productMapper.updateById(product);
}

An exception is thrown before the inventory reduction is saved, yet the order creation has already been executed, so we end up with an order created successfully while the inventory is never reduced. This is a very dangerous situation: imagine a merchant who sets aside 100 phones for a promotion, but because of this transaction problem users can keep placing orders like crazy while the recorded inventory remains ample.

Let's use Seata to solve this problem. First download Seata from github.com/seata/seata… and modify the registry.conf file. Next, the Seata configuration has to be imported into Nacos with the nacos-config.sh script, but version 1.4.1 of Seata does not ship with it, so fetch the script yourself: github.com/seata/seata… You also need the config.txt file, which is likewise missing from Seata 1.4.1: github.com/seata/seata… In config.txt, change the database username and password to your own, then delete the previous store configuration and add the two new entries. Note that config.txt must be placed in Seata's root directory.
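The two entries mentioned above are presumably the transaction service-group mappings that must match the tx-service-group values used later in bootstrap.yml; under that assumption, the relevant part of config.txt would look roughly like this:

service.vgroupMapping.service-order=default
service.vgroupMapping.service-product=default
store.mode=db
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true
store.db.user=root
store.db.password=123456

With config.txt in place, import everything into Nacos: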

nacos-config.sh -h localhost -p 8848 -g SEATA_GROUP -t 48285e4a-d213-47c1-81a3-b9ff577de2b8 -u nacos -w nacos

-h specifies the Nacos IP, -p the port, -g the group, and -t the namespace; here I created a dedicated seata namespace. -u and -w are the username and password for logging in to Nacos. After the script finishes, check the Nacos console: the configuration has all been imported, and Seata can now be started:

cd bin
seata-server.bat -p 9000 -m file

After startup you can see a new service (seata-server) in the Nacos console. Seata commits or rolls back transactions based on transaction logs, so first create the log table:

CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

Create this table in each business database, and then create the following tables in the seata database:

-- the table to store GlobalSession data
drop table if exists `global_table`;
create table `global_table` (
  `xid` varchar(128)  not null,
  `transaction_id` bigint,
  `status` tinyint not null,
  `application_id` varchar(32),
  `transaction_service_group` varchar(32),
  `transaction_name` varchar(128),
  `timeout` int,
  `begin_time` bigint,
  `application_data` varchar(2000),
  `gmt_create` datetime,
  `gmt_modified` datetime,
  primary key (`xid`),
  key `idx_gmt_modified_status` (`gmt_modified`, `status`),
  key `idx_transaction_id` (`transaction_id`)
);

-- the table to store BranchSession data
drop table if exists `branch_table`;
create table `branch_table` (
  `branch_id` bigint not null,
  `xid` varchar(128) not null,
  `transaction_id` bigint ,
  `resource_group_id` varchar(32),
  `resource_id` varchar(256) ,
  `lock_key` varchar(128) ,
  `branch_type` varchar(8) ,
  `status` tinyint,
  `client_id` varchar(64),
  `application_data` varchar(2000),
  `gmt_create` datetime,
  `gmt_modified` datetime,
  primary key (`branch_id`),
  key `idx_xid` (`xid`)
);

-- the table to store lock data
drop table if exists `lock_table`;
create table `lock_table` (
  `row_key` varchar(128) not null,
  `xid` varchar(96),
  `transaction_id` bigint,
  `branch_id` bigint,
  `resource_id` varchar(256) ,
  `table_name` varchar(32) ,
  `pk` varchar(36) ,
  `gmt_create` datetime ,
  `gmt_modified` datetime,
  primary key(`row_key`)
);

Now you can write the business code to introduce dependencies in the order service and goods service:

<dependency>
  <groupId>com.alibaba.cloud</groupId>
  <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
</dependency>

Then register a proxy data source with the container in both microservices:

@Configuration
public class DataSourceProxyConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DruidDataSource druidDataSource() {
        return new DruidDataSource();
    }

    @Primary
    @Bean
    public DataSourceProxy dataSource(DruidDataSource druidDataSource) {
        return new DataSourceProxy(druidDataSource);
    }
}

Then add a registry.conf file to each microservice's resources directory:

registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"
  loadBalance = "RandomLoadBalance"
  loadBalanceVirtualNodes = 10

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "SEATA_GROUP"
    namespace = "public"
    cluster = "default"
    username = "nacos"
    password = "nacos"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "nacos"

  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = "public"
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
  }
}

Create the bootstrap.yml file:

spring:
  application:
    name: service-product
  cloud:
    nacos:
      config:
        server-addr: 127.0.0.1:8848
        namespace: 48285e4a-d213-47c1-81a3-b9ff577de2b8 # Seata namespace ID
        group: SEATA_GROUP
    alibaba:
      seata:
        tx-service-group: ${spring.application.name}

We need to make sure that tx-service-group is consistent with the mapping in config.txt. Now that all the preparation is complete, all we need to do to solve the distributed transaction problem is annotate the method with @GlobalTransactional:

@GlobalTransactional // Global transaction control
@Override
public Order createOrder(Integer pid) {
    log.info("Call order service, item number :{}", pid);
    Product product = productService.product(pid);
    // Create an order
    Order order = new Order();
    // Simulate data
    order.setUid(1);
    order.setUsername("Test user");
    order.setPid(pid);
    order.setPname(product.getPname());
    order.setPprice(product.getPprice());
    order.setNumber(1);
    orderMapper.insert(order);
    log.info("Order created successfully, order info :{}", JSON.toJSONString(order));
    // Inventory reduction
    productService.reduceInventory(pid, order.getNumber());
    // Post an order success message to RocketMQ
    rocketMQTemplate.convertAndSend("order-topic",order);
    return order;
}

That’s all about SpringCloud Alibaba’s technology stack.