Table of contents:

  • 1. Gateway request process
  • 2. Eureka service governance
  • 3. Configuration center
  • 4. Hystrix monitoring
  • 5. Service invocation link
  • 6. ELK log link
  • 7. Unified response format

Java microservices framework selection (Dubbo or Spring Cloud?)

The Spring Cloud components we currently use at our company are basically the ones shown above, and I have to say that the whole Spring Cloud ecosystem is really powerful and easy to use.

I will write about each component in detail when time allows. This article mainly walks through the Spring Cloud architecture diagram and organizes my thoughts for reference.

1. Gateway request process

Spring Cloud Zuul is one of the most easily overlooked yet most important components in the whole Spring Cloud stack. It integrates with the Eureka registry. We currently use Spring Cloud Zuul for the following:

  • Filter (request filtering)
  • Router (routing)
  • Ribbon (load balancing)
  • Hystrix (circuit breaking)
  • Retry

Some of these features come with Spring Cloud Zuul itself, such as Filter and Router; others come from combining it with other Spring Cloud components, such as Ribbon and Hystrix.

Filters are the main topic here; they fall into four types:

  • pre: executed before Zuul forwards the request. Our current implementation is AccessTokenFilter, which performs OAuth 2.0 JWT authorization checks (a minimal sketch of such a filter follows this list).
  • route: executed while Zuul routes the request; we do not use this type in the current project.
  • post: executed after Zuul has forwarded the request, i.e. the backend service has already responded successfully. Our current implementation is CustomResponseFilter, which wraps responses in a unified format (code/msg/data, etc.).
  • error: executed when any of the above filters throws an error. Our current implementation is CustomErrorFilter. Note that the error filter does not seem to catch errors thrown during the backend service's own execution.
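To make the pre filter concrete, here is a minimal sketch of what a filter along the lines of our AccessTokenFilter could look like (simplified for readability; the real implementation parses the JWT and matches the request against the authorization map shown further below):

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import javax.servlet.http.HttpServletRequest;

// Minimal sketch of a pre filter: runs before Zuul forwards the request,
// checks the Authorization header and rejects the request with 401 if it is missing.
public class AccessTokenFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre";              // executed before routing
    }

    @Override
    public int filterOrder() {
        return 0;                  // relative order among pre filters
    }

    @Override
    public boolean shouldFilter() {
        return true;               // apply to every request
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        HttpServletRequest request = ctx.getRequest();
        String token = request.getHeader("Authorization");
        if (token == null || token.isEmpty()) {
            // stop routing and return 401 directly from the gateway
            ctx.setSendZuulResponse(false);
            ctx.setResponseStatusCode(401);
            ctx.setResponseBody("{\"code\":401,\"msg\":\"unauthorized\"}");
        }
        // the real implementation would parse the JWT, extract the user's
        // scope/role, and match them against the authorization map
        return null;
    }
}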

In addition, there are two ways to implement OAuth 2.0 JWT authorization:

  • Authorization configured in the backend services: each service is configured as a Resource Server with the public key, and interface-level authorization is configured in annotations; Zuul only forwards the token and does not verify it.
  • Authorization configured in Zuul: Zuul acts as the Resource Server and the backend services do nothing. The concrete implementation is AccessTokenFilter: it manually parses the JWT, checks that it is valid, extracts the user information/scope/role, and matches the current request API against the authorization map. If the match fails, a 401 authorization error is thrown directly.

We currently use the second approach. Both have pros and cons, and the choice is up to you. Why the second one? The goal is to make full use of Zuul and perform unified authorization checks at the external gateway.

The authorization map stores the authorization configuration for all service interfaces. Sample configuration:

private static final Map<String, String> ROUTE_MAPS;
static
{
    // key: service name + API path, value: required scope and role
    ROUTE_MAPS = new HashMap<String, String>();
    ROUTE_MAPS.put("eureka-client/home", "read:ROLE_ADMIN");
    ROUTE_MAPS.put("eureka-client/user", "read:ROLE_ADMIN");
    ROUTE_MAPS.put("eureka-client/error", "read:ROLE_ADMIN");
}

This is our current configuration. It is a static Map that will be stored in the Spring Cloud Config configuration center later. It will be loaded when Zuul starts up and refreshed dynamically using Spring Cloud Bus.

There is much more to be said about the Zuul gateway, which will be explained at a later opportunity.

2. Eureka service governance

In terms of the CAP theorem, Eureka favors AP (availability and partition tolerance), which makes it well suited for service governance in a distributed system.

Nodes in a Eureka cluster are peers; unlike Consul, there is no master/worker distinction, and the nodes register with each other in pairs. It is therefore advisable to deploy at least three nodes in a Eureka cluster, which is also our current deployment.

In addition, for Eureka's self-preservation mechanism you can refer to this earlier article.

For calls between services, client-side load balancing can be done in two ways:

  • Feign: declarative. As the name implies, you define an interface and call it just like an ordinary object method (see the interface sketch after this list).
  • Ribbon: client-side (soft) load balancing. A load-balancing handler is injected into the RestTemplate, and an instance registered in Eureka is selected by the load-balancing algorithm for each call.
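As an illustration of the declarative style, here is a minimal Feign client sketch. The interface name is made up, and it assumes @EnableFeignClients is enabled on the application:

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// Declarative client: calling hello() issues an HTTP GET against the
// eureka-client service, resolved through Eureka and load balanced by Ribbon.
// (In older Spring Cloud releases the annotation lives in
// org.springframework.cloud.netflix.feign.FeignClient.)
@FeignClient(name = "eureka-client")
public interface HelloClient {

    @RequestMapping(value = "/hello", method = RequestMethod.GET)
    String hello();
}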

We currently plan to use Ribbon for load balancing. Why? Take a look at the following code:

restTemplate.getForObject("http://eureka-client/hello", String.class);
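For the service name eureka-client in this URL to be resolved through Eureka, the RestTemplate has to be declared with @LoadBalanced. A minimal sketch of that wiring, assuming a standard Spring Boot configuration class:

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    // @LoadBalanced tells Spring Cloud to resolve the host part of the URL
    // (eureka-client) against the Eureka registry and pick an instance
    // via the Ribbon load-balancing algorithm.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}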

3. Configuration center

We currently use Spring Cloud Config as the configuration center. You could also use Apollo (open-sourced by Ctrip), but Config meets our needs, and we use Git as the backing repository.

The Config configuration center supports encryption of configuration values. With RSA encryption, what is stored in Git is ciphertext; when a Config Client requests an encrypted property, the Config Server decrypts it automatically and returns the plaintext.

We currently use the configuration center in two main places:

  • Configuration needed at project startup, such as database connection strings.
  • Business configuration, that is, configuration related to the service itself.

Also note that, by default, the Config Client does not pick up changes when the configuration in Git is updated. Our current solution is to use Spring Cloud Bus (configured on the Config Server) to refresh configurations dynamically. The process is as follows:

  • Add a webhook to the Git repository, for example curl -X POST http://manager1:8180/bus/refresh, which is executed automatically whenever the configuration in the Git repository is updated.
  • Configure Spring Cloud Bus on the Config Server to accept the refresh request from Git, then broadcast it through RabbitMQ to all subscribed Config Clients so that they refresh their configuration (a client-side sketch follows this list).
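On the client side, beans that should pick up refreshed values can be annotated with @RefreshScope. A minimal sketch, with a made-up property name:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope   // bean is rebuilt when a refresh event arrives over Spring Cloud Bus
public class ConfigDemoController {

    // hypothetical property stored in the Git-backed Config repository
    @Value("${demo.message:default}")
    private String message;

    @GetMapping("/config/message")
    public String message() {
        return message;   // reflects the new value after /bus/refresh is triggered
    }
}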

4. Hystrix monitoring

Hystrix is mainly used for circuit breaking, degradation, and isolation of services. Hystrix is configured on the caller side: when the called service is unavailable, the circuit breaker is triggered and the specified fallback method is executed instead.

I used to think that the Hystrix circuit breaker was only triggered when a service was unavailable, i.e. when the request timed out (for example because the service was down), but after testing it myself I found that a 500 error from the service also triggers the circuit breaker, and in that case the Hystrix timeout setting is simply ignored.

We currently use Hystrix in two main places:

  • Internal service invocation: a circuit breaker can be applied to a single API (see the sketch after this list).
  • In the Zuul gateway: the circuit breaker is applied when Zuul routes a request, but with the limitation that it can only break at the service level, not for an individual API.
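For the internal-service case, here is a minimal sketch of protecting a single API call with the @HystrixCommand annotation. It assumes circuit breaking is enabled on the application (e.g. via @EnableCircuitBreaker) and reuses the eureka-client example from above:

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class HelloService {

    @Autowired
    private RestTemplate restTemplate;

    // If the call times out or the downstream service returns an error,
    // Hystrix trips the circuit and executes the fallback method instead.
    @HystrixCommand(fallbackMethod = "helloFallback")
    public String hello() {
        return restTemplate.getForObject("http://eureka-client/hello", String.class);
    }

    // The fallback must have a signature compatible with the protected method.
    public String helloFallback() {
        return "hello service is unavailable, please try again later";
    }
}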

The figure above mainly shows the Hystrix monitoring flow. We currently use RabbitMQ to collect and transport the metrics, a Turbine server to aggregate the data streams, and Hystrix Dashboard for graphical display.

5. Service invocation link

The idea of a service invocation link (distributed tracing) is that when a request is made, data about the entire call chain is recorded and can be queried later.

Almost all service-tracing implementations on the market today are based on Google's Dapper paper, and the most important concepts in it are traceId and spanId.

  • traceId: the ID of the entire call chain; it is created by the first requester and is unique within the chain.
  • spanId: the ID of the current unit of work (span), created by the current service.
  • parentId: the spanId of the previous (calling) service.

Here I describe our current service invocation link process:

  • H5 initiates a request to the Zuul gateway. Zuul creates a global traceId and its own spanId, carries them along to business service A, and reports the data to RabbitMQ via Spring Cloud Sleuth.
  • Business service A receives the traceId and spanId from Zuul, uses Zuul's spanId as its parentId, generates its own spanId, carries this data on to business service B, and reports it to RabbitMQ via Spring Cloud Sleuth (a small sketch of the propagated headers follows this list).
  • … and so on down the chain.
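Sleuth propagates the trace context between services as B3 HTTP headers, so services normally never touch it themselves. Purely as an illustration, a downstream endpoint could make the propagated headers visible like this (the endpoint path is made up):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TraceDemoController {

    // Sleuth adds these B3 headers to outgoing requests; a downstream service
    // normally never reads them itself, this endpoint only makes them visible.
    @GetMapping("/trace/demo")
    public String traceDemo(
            @RequestHeader(value = "X-B3-TraceId", required = false) String traceId,
            @RequestHeader(value = "X-B3-SpanId", required = false) String spanId,
            @RequestHeader(value = "X-B3-ParentSpanId", required = false) String parentSpanId) {
        return "traceId=" + traceId + ", spanId=" + spanId + ", parentId=" + parentSpanId;
    }
}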

The figure above describes the whole tracing process in detail. Here is the technology stack we use:

  • Spring Cloud Sleuth: similar in concept to SkyWalking's probe. Each service is instrumented to collect request data (traceId and spanId), and the sleuth-stream and binder-rabbit components are used to ship the data to RabbitMQ.
  • Spring Cloud Zipkin: UI for viewing request chains. Zipkin reads the request data from RabbitMQ, stores it in ElasticSearch, and reads it back from ElasticSearch for subsequent display.
  • Kibana: Kibana can also display the request data stored in ElasticSearch, but not as a trace graph, and an index has to be configured first.

6. ELK log link

For ELK, refer to my previous posts:

  • Install Elasticsearch and Kibana for ELK
  • ELK Logstash and Filebeat setup
  • ELK Logstash and Filebeat configuration (Collection and filtering)
  • ELK: Elasticsearch, Kibana, Logstash and Filebeat (version 6.2.4)

The ELK process has been described in detail in the figure above. The ELK stack does not include Filebeat by default; using Logstash directly for log collection consumes a lot of CPU and memory, so we use the lightweight Filebeat instead. Filebeat is deployed on each service server and ships the collected log data to Logstash (which can be deployed on two or three servers) for filtering and analysis; the processed log data is then sent to ElasticSearch for storage.

7. Unified response format

The unified response format has been explained in detail in the figure above; if you have a better solution, feel free to discuss it.
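Purely as an illustration of the code/msg/data envelope mentioned in the gateway section, here is a minimal sketch of such a wrapper class (the class name is made up):

// Minimal sketch of a unified response envelope with code/msg/data fields.
public class ApiResult<T> {

    private int code;     // business status code, e.g. 0 for success, 401 for unauthorized
    private String msg;   // human-readable message
    private T data;       // actual payload

    public static <T> ApiResult<T> success(T data) {
        ApiResult<T> result = new ApiResult<>();
        result.code = 0;
        result.msg = "success";
        result.data = data;
        return result;
    }

    public static <T> ApiResult<T> error(int code, String msg) {
        ApiResult<T> result = new ApiResult<>();
        result.code = code;
        result.msg = msg;
        return result;
    }

    public int getCode() { return code; }
    public String getMsg() { return msg; }
    public T getData() { return data; }
}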



Hello architecture

www.cnblogs.com/xishuai/