This is the ninth article in the Spring Cloud series; reading the first eight will help you follow along:

  1. Spring Cloud Part 1 | Overview of Spring Cloud and its common components

  2. Spring Cloud Part 2 | Using and understanding the Eureka registry

  3. Spring Cloud Part 3 | Building a highly available Eureka registry

  4. Spring Cloud Part 4 | Client-side load balancing with Ribbon

  5. Spring Cloud Part 5 | Service circuit breaking with Hystrix

  6. Spring Cloud Part 6 | Monitoring Hystrix with Hystrix Dashboard

  7. Spring Cloud Part 7 | Declarative service invocation with Feign

  8. Spring Cloud Part 8 | Hystrix cluster monitoring with Turbine

  9. Spring Cloud Part 9 | Distributed service tracing with Sleuth

I. Introduction to Sleuth

As a business grows, the system becomes larger and the invocation relationships between microservices become more and more complex. A single client request is typically handled by many different microservices collaborating in the backend before a result is produced. In a complex microservice architecture, almost every frontend request forms a long distributed call chain, and high latency or an error in any one of the dependent services can ultimately cause the request to fail. At that point, end-to-end tracing of each request becomes increasingly important: it helps us quickly locate the root cause of errors and monitor and analyze performance bottlenecks at each link of the request chain.

Spring Cloud Sleuth provides a complete solution for this within the Spring Cloud ecosystem. Its usage is described below.

II. Sleuth Quick Start

1. To keep the other modules clean, build a new consumer (springcloud-consumer-sleuth) and provider (springcloud-provider-sleuth). Both are the same as the ones used previously, and the registry is the one from the earlier articles (springcloud-eureka-server, port 8700); see the case source code for details.

2. Next, add tracing capability to the service provider and the service consumer. Thanks to Spring Cloud Sleuth's encapsulation, this is very easy: just add the spring-cloud-starter-sleuth dependency to both the provider and the consumer.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

3. Access the consumer interface, then check the console logs.

Logs printed by the consumer (springcloud-consumer-sleuth):

2019-12-05 12:30:20.178  INFO [springcloud-consumer-sleuth,f6fb983680aab32b,f6fb983680aab32b,false] 8992 --- [nio-9090-exec-1] c.s.controller.SleuthConsumerController  : === consumer hello ===

Logs printed by the provider (springcloud-provider-sleuth):

2019-12-05 12:30:20.972  INFO [springcloud-provider-sleuth,f6fb983680aab32b,c70932279d3b3a54,false] 788 --- [nio-8080-exec-1] c.s.controller.SleuthProviderController  : === provider hello ===

In the console output above we can see an extra segment of the form [springcloud-consumer-sleuth,f6fb983680aab32b,c70932279d3b3a54,false]. These elements are essential to distributed service tracing; each value means the following:

  • The first value, springcloud-consumer-sleuth, records the application name, i.e. the spring.application.name property configured in application.properties.

  • The second value, f6fb983680aab32b, is an ID generated by Spring Cloud Sleuth called the Trace ID, which identifies one request link; a request link contains one Trace ID and multiple Span IDs.

  • The third value, c70932279d3b3a54, is another ID generated by Spring Cloud Sleuth called the Span ID, which represents a basic unit of work, such as sending an HTTP request.

  • The fourth value, false, indicates whether this information should be exported to a service such as Zipkin for collection and display.

Of these four values, the Trace ID and Span ID are the core of Spring Cloud Sleuth's distributed service tracing. The same Trace ID is maintained and passed along throughout the call chain of a service request, tying together the trace information scattered across different microservice processes. Taking the output above as an example: springcloud-consumer-sleuth and springcloud-provider-sleuth serve the same frontend request, so their Trace IDs are identical and they sit on the same request chain.
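As a side note, the bracketed segment is simple to pull apart programmatically. The sketch below (plain Java; the class name is ours and not part of Sleuth) splits it into the four fields described above:

```java
// Illustrative only: splits a Sleuth log prefix of the form
// "[app,traceId,spanId,exported]" into its four fields.
public class SleuthLogPrefix {
    public final String app;
    public final String traceId;
    public final String spanId;
    public final boolean exported;

    public SleuthLogPrefix(String bracketed) {
        // Drop the surrounding "[" and "]", then split on commas.
        String[] parts = bracketed.substring(1, bracketed.length() - 1).split(",");
        this.app = parts[0].trim();
        this.traceId = parts[1].trim();
        this.spanId = parts[2].trim();
        this.exported = Boolean.parseBoolean(parts[3].trim());
    }
}
```

Feeding it the consumer line above yields app springcloud-consumer-sleuth, equal Trace and Span IDs (it is the root span), and exported = false.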

III. Tracing principle

Service tracing in distributed systems is theoretically not complex and boils down to the following two key points:

  1. To track a request, the tracing framework only needs to create a unique identifier for the request when it enters the distributed system at its entry endpoint, and keep that identifier as the request circulates through the system, until it is returned to the requester. This unique identifier is the Trace ID mentioned earlier. By recording the Trace ID we can correlate the logs of all processes involved in the request.

  2. To measure the latency of each processing unit, a unique identifier is also attached when a request arrives at each service component, or when the processing logic reaches a certain state; this is the Span ID mentioned earlier. Every Span has a start and an end, and by recording the timestamps of both we can compute the Span's latency. Besides timestamps, a Span can also carry other metadata such as event names and request information.
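The latency arithmetic behind point 2 is straightforward. As a minimal sketch (plain Java; the SimpleSpan class below is our illustration, not Sleuth's actual Span type), the latency of a unit of work is simply the difference between its two recorded timestamps:

```java
// Illustrative span record: latency falls out of the start and end
// timestamps recorded for the unit of work (microseconds here, as in
// Zipkin's span format).
public class SimpleSpan {
    public final String traceId;
    public final String spanId;
    public long startMicros;
    public long endMicros;

    public SimpleSpan(String traceId, String spanId, long startMicros) {
        this.traceId = traceId;
        this.spanId = spanId;
        this.startMicros = startMicros;
    }

    // Mark the span as finished by recording its end timestamp.
    public void finish(long endMicros) {
        this.endMicros = endMicros;
    }

    // Latency of this unit of work.
    public long durationMicros() {
        return endMicros - startMicros;
    }
}
```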

In the [II. Sleuth Quick Start] example we implemented log-level tracing this easily thanks to the spring-cloud-starter-sleuth component. Once the spring-cloud-starter-sleuth dependency is introduced into a Spring Boot application, it automatically instruments the communication channels of the application, for example:

  • Requests passed via RabbitMQ or Kafka (or any other messaging middleware with a Spring Cloud Stream binder)

  • Requests passed through the Zuul proxy

  • Requests made via RestTemplate

In the [II. Sleuth Quick Start] example, the requests from springcloud-consumer-sleuth to springcloud-provider-sleuth go through RestTemplate, so they are handled by the spring-cloud-starter-sleuth component. Before the request is sent to springcloud-provider-sleuth, Sleuth adds the information required for tracing to the request headers, mainly the following (more header definitions can be found in the source of org.springframework.cloud.sleuth.Span):

  • X-B3-TraceId: uniquely identifies a trace (request link); required.

  • X-B3-SpanId: uniquely identifies a unit of work (Span); required.

  • X-B3-ParentSpanId: identifies the parent unit of work of the current one; it is empty for the Root Span (the first unit of work in the request link).

  • X-B3-Sampled: whether the span is sampled; 1 means it should be exported, 0 means it should not.

  • X-B3-Name: the name of the unit of work.

You can output these headers by making a small change to springcloud-provider-sleuth, as follows:

private final Logger logger = Logger.getLogger(SleuthProviderController.class.getName());

    @RequestMapping("/hello")
    public String hello(HttpServletRequest request){
        logger.info("=== provider hello ===,Traced={" + request.getHeader("X-B3-TraceId") + "},SpanId={" + request.getHeader("X-B3-SpanId") + "}");
        return "Trace";
    }

After this modification, restart the applications, access the consumer again, and check the logs; you can see that the provider now outputs the TraceId and SpanId it is processing.

Logs printed by the consumer (springcloud-consumer-sleuth):

2019-12-05 13:15:01.457  INFO [springcloud-consumer-sleuth,41697d7fa118c150,41697d7fa118c150,false] 10036 --- [nio-9090-exec-2] c.s.controller.SleuthConsumerController  : === consumer hello ===

Logs printed by the provider (springcloud-provider-sleuth):

2019-12-05 13:15:01.865  INFO [springcloud-provider-sleuth,41697d7fa118c150,863a1245c86b580e,false] 11088 --- [nio-8080-exec-1] c.s.controller.SleuthProviderController  : === provider hello ===,Traced={41697d7fa118c150},SpanId={863a1245c86b580e}

IV. Sampling collection

With the Trace ID and Span ID we can already track requests through a distributed system. The recorded trace information is then gathered by an analysis system for monitoring and analyzing the distributed system, for example to alert on request chains with high latency or to query the call details of a request chain. This raises a question when integrating with the analysis system: how much trace information should be collected?

In theory, the more trace data we collect, the more accurately we can reflect the actual behavior of the system and the more precise our alerts and analysis can be. However, in a high-concurrency distributed system, the huge volume of requests and calls produces a massive amount of trace logs; collecting all of it would affect the performance of the whole system to some extent, and storing it would require significant storage overhead. Sleuth therefore uses sampling: trace information is tagged with a sampling flag, the fourth boolean value we saw earlier in the log output, which indicates whether the information should be picked up and stored by a downstream trace collector.

public abstract class Sampler {
  /**
   * Returns true if the trace ID should be measured.
   *
   * @param traceId The trace ID to be decided on, can be ignored
   */
  public abstract boolean isSampled(long traceId);
}

Spring Cloud Sleuth calls the isSampled method when trace information is generated to produce the flag that says whether the trace should be collected. Note that even when isSampled returns false, it only means that the trace information is not exported to a downstream remote analysis system (such as Zipkin); tracing still takes place for the request, so we still see records flagged false in the logs.

By default, Sleuth uses the percentage-based sampling strategy implemented via SamplerProperties, which collects trace information for a percentage of requests. The percentage can be set with the following parameter in application.yml; it defaults to 0.1, i.e. 10% of request traces are collected.

spring:
  sleuth:
    sampler:
      probability: 0.1

During development and debugging we usually want all trace information collected and output to the remote repository. For that, either set the value to 1 or inject the ALWAYS_SAMPLE Sampler bean in place of the SamplerProperties strategy, for example:

@Bean
  public Sampler defaultSampler() {
    return Sampler.ALWAYS_SAMPLE;
  }

Because trace log data is usually valuable only for a recent period of time, such as one week, a sampling strategy should mainly follow the principle of making full use of the storage available within the log retention window, while not causing a noticeable performance impact on the system.
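The idea of rate-based sampling can be sketched without Spring at all. The class below is purely illustrative (it is not Sleuth's percentage sampler): it keeps one out of every N traces, which bounds storage use at roughly 1/N of the full trace volume:

```java
// Illustrative sampler: exports 1 out of every n traces.
// Not Sleuth's actual implementation; shown only to make the
// sampling trade-off concrete.
public class EveryNthSampler {
    private final int n;
    private long counter = 0;

    public EveryNthSampler(int n) {
        this.n = n;
    }

    // Mirrors Sampler.isSampled: decide whether this trace is exported.
    public synchronized boolean isSampled(long traceId) {
        return (counter++ % n) == 0; // true for the 1st, (n+1)th, (2n+1)th ... call
    }
}
```

With n = 10 this behaves like a 10% sampling rate, matching the default probability of 0.1 mentioned above.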

V. Integration with Zipkin

Since log files are stored discretely in the file system of each service instance, piecing together a request chain just by reading log files is still quite cumbersome, so we need tools that collect, store, and search trace information centrally, such as the ELK logging platform. With the powerful collection, storage, and search features that ELK provides, managing and using trace information becomes very convenient. However, ELK's analysis dimensions pay little attention to the latency of each stage of a request chain, and often we trace a request chain precisely to find the source of high latency in the overall call chain, or to meet latency-related monitoring requirements for the distributed system; here a log analysis platform like ELK falls a bit short. Introducing Zipkin solves these problems easily.

Zipkin is an open-source Twitter project based on Google's Dapper paper. It collects request-chain trace data from each server and provides a REST API to query that data, which lets us build monitoring for the distributed system, promptly detect rising latency, and find the root cause of performance bottlenecks. Besides the developer-facing API, it also offers a convenient UI to search trace information intuitively and analyze request-chain details, for example querying the processing time of each user request within a certain period.

The following diagram shows Zipkin's basic architecture, which consists of four core components:

  • Collector: the collector component, which processes trace information sent from external systems and converts it into Zipkin's internal Span format for subsequent storage, analysis, and presentation.

  • Storage: the storage component, which handles the trace information received by the collector and keeps it in memory by default. The storage strategy can be changed to persist trace information to a database by using other storage components.

  • RESTful API: the API component, which provides external access interfaces, for example for clients to display trace information or for external systems to implement monitoring.

  • Web UI: the UI component, an upper-layer application built on the API component that lets users query and analyze trace information easily and intuitively.

1. Build the Zipkin server

Since the F release (Finchley) of Spring Cloud, you no longer need to build a Zipkin Server project yourself; just download and run the official jar. Download address: dl.bintray.com/openzipkin/…

Zipkin's GitHub address is github.com/openzipkin

2. After downloading the jar, run it as follows:

java -jar zipkin-server-2.10.1-exec.jar

Visit http://localhost:9411 and you can see Zipkin's admin interface.

3. Introduce and configure the Zipkin client in your applications.

To output trace information to the Zipkin Server we need some configuration. Taking the consumer (springcloud-consumer-sleuth) and provider (springcloud-provider-sleuth) from [II. Sleuth Quick Start] as examples, modify both to integrate the Zipkin dependency:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>

4. Add the Zipkin Server configuration to both the consumer (springcloud-consumer-sleuth) and the provider (springcloud-provider-sleuth), as shown below. The default connection address is http://localhost:9411.

spring:
  zipkin:
    base-url: http://localhost:9411

5. Test and analysis

With that, all the basic Zipkin Server configuration is complete. Now access the consumer interface http://localhost:9090/consumer/hello a few times; whenever the last value of the trace information in the log is true, that trace has been output to the Zipkin Server, as in the log below.

2019-12-05 15:47:25.600  INFO [springcloud-consumer-sleuth,cbdbbebaf32355ab,cbdbbebaf32355ab,false] 8564 --- [nio-9090-exec-9] c.s.controller.SleuthConsumerController  : === consumer hello ===
2019-12-05 15:47:27.483  INFO [springcloud-consumer-sleuth,8f332a4da3c05f62,8f332a4da3c05f62,false] 8564 --- [nio-9090-exec-6] c.s.controller.SleuthConsumerController  : === consumer hello ===
2019-12-05 15:47:42.127  INFO [springcloud-consumer-sleuth,61b922906800ac60,61b922906800ac60,true] 8564 --- [nio-9090-exec-2] c.s.controller.SleuthConsumerController  : === consumer hello ===
2019-12-05 15:47:42.457  INFO [springcloud-consumer-sleuth,1acae9ebecc4d36d,1acae9ebecc4d36d,false] 8564 --- [nio-9090-exec-4] c.s.controller.SleuthConsumerController  : === consumer hello ===
2019-12-05 15:47:42.920  INFO [springcloud-consumer-sleuth,b2db9e00014ceb88,b2db9e00014ceb88,false] 8564 --- [nio-9090-exec-7] c.s.controller.SleuthConsumerController  : === consumer hello ===
2019-12-05 15:47:43.457  INFO [springcloud-consumer-sleuth,ade4d5a7d97ca16b,ade4d5a7d97ca16b,false] 8564 --- [nio-9090-exec-9] c.s.controller.SleuthConsumerController  : === consumer hello ===

In the Zipkin Server admin interface, choose suitable query criteria and click the Find Traces button to query the trace information that appeared in the logs (you can also search by Trace ID in the upper right corner of the page), as shown below:

Click the springcloud-consumer-sleuth trace entry below to see the details of that trace, including the request time consumption we care about, etc.

Click the “Dependency Analysis” menu in the navigation bar to view the system's request-chain dependency diagram that the Zipkin Server generates from the trace data, as shown below.

VI. Storing Zipkin data in Elasticsearch

In [V. Integration with Zipkin], the collected chain data is stored in the memory of the Zipkin service by default, so it is lost whenever the Zipkin service restarts. Zipkin also supports persistent backends such as MySQL and Elasticsearch; given the volume of trace data, MySQL is not a good choice, so here we use Elasticsearch for storage.

1. The steps above used zipkin-server-2.10.1-exec.jar; here we switch to Zipkin server version 2.19.2. Download address: dl.bintray.com/openzipkin/…

2. zipkin-server-2.19.2-exec.jar only supports Elasticsearch versions 5 through 7.x, so install an Elasticsearch in that range to go with it.

3. Start the Zipkin service as follows:

java -DSTORAGE_TYPE=elasticsearch -DES_HOSTS=http://47.112.11.147:9200 -jar zipkin-server-2.19.2-exec.jar

There are other configurable parameters as well; see github.com/openzipkin/…

* `ES_HOSTS`: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
              Defaults to "http://localhost:9200".
* `ES_PIPELINE`: Indicates the ingest pipeline used before spans are indexed. No default.
* `ES_TIMEOUT`: Controls the connect, read and write socket timeouts (in milliseconds) for
                Elasticsearch Api. Defaults to 10000 (10 seconds)
* `ES_INDEX`: The index prefix to use when generating daily index names. Defaults to zipkin.
* `ES_DATE_SEPARATOR`: The date separator to use when generating daily index names. Defaults to '-'.
* `ES_INDEX_SHARDS`: The number of shards to split the index into. Each shard and its replicas
                     are assigned to a machine in the cluster. Increasing the number of shards
                     and machines in the cluster will improve read and write performance. Number
                     of shards cannot be changed for existing indices, but new daily indices
                     will pick up changes to the setting. Defaults to 5.
* `ES_INDEX_REPLICAS`: The number of replica copies of each shard in the index. Each shard and
                       its replicas are assigned to a machine in the cluster. Increasing the
                       number of replicas and machines in the cluster will improve read
                       performance, but not write performance. Number of replicas can be changed
                       for existing indices. Defaults to 1. It is highly discouraged to set this
                       to 0 as it would mean a machine failure results in data loss.
* `ES_USERNAME` and `ES_PASSWORD`: Elasticsearch basic authentication, which defaults to empty string.
                                   Use when X-Pack security (formerly Shield) is in place.
* `ES_HTTP_LOGGING`: When set, controls the volume of HTTP logging of the Elasticsearch Api.
                     Options are BASIC, HEADERS, BODY

4. For convenient testing, modify the application.yml of springcloud-provider-sleuth and springcloud-consumer-sleuth to change the sampling probability to 1:

spring:
  sleuth:
    sampler:
      probability: 1

5. Access the http://localhost:9090/consumer/hello interface a few times, then open Kibana and you can see that the indices have been created.

6. You can see that data has been stored in them.

7. Open Zipkin and you can see the trace information.

However, there is nothing under Dependencies.

Zipkin creates indices in ES whose names are prefixed with zipkin and, by default, are split by day. When the ES storage mode is used, the dependency information in Zipkin cannot be displayed directly; it has to be generated separately.

VII. Generating dependencies with zipkin-dependencies

zipkin-dependencies generates the global call-chain dependency links via a Spark job. Download it as follows.

We use zipkin-dependencies version 2.4.1.

GitHub address: github.com/openzipkin/…

Download address: dl.bintray.com/openzipkin/…

11. Start it after downloading.

Do not bother trying to start this jar on Windows; it will not start no matter how long you try. Run it on Linux.

The official documentation gives a Linux example:

STORAGE_TYPE=cassandra3 java -jar zipkin-dependencies.jar `date -u -d '1 day ago' +%F`

In our case the STORAGE_TYPE should be elasticsearch, and the backticked date command supplies the day to analyze: `date -u -d '1 day ago' +%F` prints yesterday's date in YYYY-MM-DD format.

Our startup command is:

ZIPKIN_LOG_LEVEL=DEBUG ES_NODES_WAN_ONLY=true STORAGE_TYPE=elasticsearch ES_HOSTS=http://47.112.11.147:9200 java -Xms256m -Xmx1024m -jar zipkin-dependencies-2.4.1.jar `date -u -d '1 day ago' +%F`

Note that zipkin-dependencies analyzes only one day's worth of data per run (here we pass yesterday's date), writes the result to a daily dependency index such as zipkin:dependency-2019-12-17, and then exits; it does not keep running as a service. Therefore, to keep the dependency view up to date, you need to execute zipkin-dependencies periodically, for example by scheduling it with crontab on Linux.
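For example, a crontab entry along the following lines would run the job nightly (the install path /usr/local/zipkin is an assumption for illustration; note that % must be escaped as \% inside crontab):

```shell
# Hypothetical crontab entry: run zipkin-dependencies at 01:00 every day
# for the previous day's index. Adjust the jar path to your environment.
0 1 * * * ES_NODES_WAN_ONLY=true STORAGE_TYPE=elasticsearch ES_HOSTS=http://47.112.11.147:9200 java -Xms256m -Xmx1024m -jar /usr/local/zipkin/zipkin-dependencies-2.4.1.jar $(date -u -d '1 day ago' +\%F)
```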

The log output is as follows:

[root@VM_0_8_centos local]# ZIPKIN_LOG_LEVEL=DEBUG ES_NODES_WAN_ONLY=true STORAGE_TYPE=elasticsearch ES_HOSTS=http://47.112.11.147:9200 java -Xms256m -Xmx1024m -jar zipkin-dependencies-2.4.1.jar `date -u -d '1 day ago' +%F`
19/12/18 21:44:10 WARN Utils: Your hostname, VM_0_8_centos resolves to a loopback address; using 172.21.0.8 instead (on interface eth0)
19/12/18 21:44:10 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: spark.ui.enabled=false
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.index.read.missing.as.empty=true
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.nodes.wan.only=true
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.keystore.location=
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.keystore.pass=
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.truststore.location=
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.truststore.pass=
19/12/18 21:44:10 INFO ElasticsearchDependenciesJob: Processing spans from zipkin:span-2019-12-17/span
19/12/18 21:44:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/12/18 21:44:12 WARN Java7Support: Unable to load JDK7 types (annotations, java.nio.file.Path): no Java7 support added
19/12/18 21:44:13 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:13 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:13 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:18 DEBUG DependencyLinker: building trace tree: traceId=a5253479e359638b
19/12/18 21:44:18 DEBUG DependencyLinker: traversing trace tree, breadth-first
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"a5253479e359638b","id":"a5253479e359638b","kind":"SERVER","name":"get /consumer/hello","timestamp":1576591155280041,"duration":6191,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv6":"::1","port":62085},"tags":{"http.method":"GET","http.path":"/consumer/hello","mvc.controller.class":"SleuthConsumerController","mvc.controller.method":"hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: root's client is unknown; skipping
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"a5253479e359638b","parentId":"a5253479e359638b","id":"8d6b8fb1bbb4f48c","kind":"CLIENT","name":"get","timestamp":1576591155281192,"duration":3999,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"a5253479e359638b","parentId":"a5253479e359638b","id":"8d6b8fb1bbb4f48c","kind":"SERVER","name":"get /provider/hello","timestamp":1576591155284040,"duration":1432,"localEndpoint":{"serviceName":"springcloud-provider-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv4":"192.168.0.104","port":62182},"tags":{"http.method":"GET","http.path":"/provider/hello","mvc.controller.class":"SleuthProviderController","mvc.controller.method":"hello"},"shared":true}
19/12/18 21:44:18 DEBUG DependencyLinker: found remote ancestor {"traceId":"a5253479e359638b","parentId":"a5253479e359638b","id":"8d6b8fb1bbb4f48c","kind":"CLIENT","name":"get","timestamp":1576591155281192,"duration":3999,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: incrementing link springcloud-consumer-sleuth -> springcloud-provider-sleuth
19/12/18 21:44:18 DEBUG DependencyLinker: building trace tree: traceId=54af196ac59ee13e
19/12/18 21:44:18 DEBUG DependencyLinker: traversing trace tree, breadth-first
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"54af196ac59ee13e","id":"54af196ac59ee13e","kind":"SERVER","name":"get /consumer/hello","timestamp":1576591134958091,"duration":139490,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv6":"::1","port":62085},"tags":{"http.method":"GET","http.path":"/consumer/hello","mvc.controller.class":"SleuthConsumerController","mvc.controller.method":"hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: root's client is unknown; skipping
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"54af196ac59ee13e","parentId":"54af196ac59ee13e","id":"1a827ae864bd2399","kind":"CLIENT","name":"get","timestamp":1576591134962066,"duration":133718,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"54af196ac59ee13e","parentId":"54af196ac59ee13e","id":"1a827ae864bd2399","kind":"SERVER","name":"get /provider/hello","timestamp":1576591135064214,"duration":37707,"localEndpoint":{"serviceName":"springcloud-provider-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv4":"192.168.0.104","port":62089},"tags":{"http.method":"GET","http.path":"/provider/hello","mvc.controller.class":"SleuthProviderController","mvc.controller.method":"hello"},"shared":true}
19/12/18 21:44:18 DEBUG DependencyLinker: found remote ancestor {"traceId":"54af196ac59ee13e","parentId":"54af196ac59ee13e","id":"1a827ae864bd2399","kind":"CLIENT","name":"get","timestamp":1576591134962066,"duration":133718,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: incrementing link springcloud-consumer-sleuth -> springcloud-provider-sleuth
19/12/18 21:44:18 INFO ElasticsearchDependenciesJob: Saving dependency links to zipkin:dependency-2019-12-17/dependency
19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 INFO ElasticsearchDependenciesJob: Processing spans from zipkin-span-2019-12-17
19/12/18 21:44:20 INFO ElasticsearchDependenciesJob: No dependency links could be processed from spans in index zipkin-span-2019-12-17
19/12/18 21:44:20 INFO ElasticsearchDependenciesJob: Done

12. There is one more configuration worth noting: ES_NODES_WAN_ONLY=true. The official description reads:

Whether the connector is used against an Elasticsearch instance in a cloud/restricted environment over the WAN, such as Amazon Web Services. In this mode, the connector disables discovery and only connects through the declared es.nodes during all operations, including reads and writes. Note that in this mode, performance is highly affected.

This setting is meant for ES instances reachable over a public cloud or otherwise restricted network, such as AWS. Declaring it disables discovery of other nodes, and all subsequent reads and writes go only through the declared nodes. Adding this property makes it possible to reach ES instances in a cloud or restricted network, but because all reads and writes go through those nodes, performance suffers significantly. The zipkin-dependencies GitHub page simply states that when it is true, only the values set in ES_HOSTS are used, e.g. for an ES cluster inside Docker.

13. View the indices generated in Kibana.

14. Then check the Dependencies page in Zipkin again, and the dependency information is now visible.

The detailed case source code is available at: gitee.com/coding-farm…