This is the fourth article in a series that focuses on Sentinel usage scenarios, technology comparisons and implementations, and developer practices.

  • First article:

The defense weapon of Dubbo | How Sentinel keeps services highly available through flow limiting – portal

  • Second article:

The circuit breaker of RocketMQ | How Sentinel guarantees service stability through uniform request pacing and cold start – portal

  • Third article:

Technology selection: Sentinel vs. Hystrix – portal

Sentinel is a lightweight, highly available flow control component for distributed service architectures, developed by Alibaba's middleware team and officially open-sourced in July this year. Sentinel focuses primarily on traffic, helping users improve service stability across multiple dimensions such as flow control, circuit breaking and degradation, and system load protection.

Sentinel Project Address:

Github.com/alibaba/Sen…

The Sentinel console, a companion tool to Sentinel, provides multi-dimensional monitoring and rule configuration. The Sentinel client is already production-ready, but the Sentinel console needs some modification before it can be used in production. This article describes how to adapt the Sentinel console for production use.

Using the Sentinel console in a production environment requires only two steps:

  1. Modify the rule push logic to push rules to a rule data source
  2. Modify the monitoring logic to persist monitoring data

Dynamic rule data source

Sentinel's dynamic rule data sources are used to read rules from, and write rules to, external storage. Starting with version 0.2.0, Sentinel divides dynamic rule data sources into two types: ReadableDataSource and WritableDataSource:

  • A read data source is only responsible for listening to or polling remote storage for rule changes.
  • A write data source is only responsible for writing rule changes to the rule store.
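To make this split concrete, here is a simplified sketch of the two roles with a toy in-memory implementation. The real Sentinel interfaces (in com.alibaba.csp.sentinel.datasource) carry more methods, such as getProperty() and close(), and an extra type parameter for the raw source format; everything below is illustrative only.

```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified shapes of the two data source roles; the actual Sentinel
// interfaces are richer. This toy version only illustrates the split.
interface SimpleReadableDataSource<T> {
    T loadConfig() throws Exception;
}

interface SimpleWritableDataSource<T> {
    void write(T value) throws Exception;
}

// A data source backed by in-process memory, both readable and writable.
public class InMemoryRuleStore
        implements SimpleReadableDataSource<String>, SimpleWritableDataSource<String> {
    private final AtomicReference<String> store = new AtomicReference<>("[]");

    @Override
    public String loadConfig() {
        return store.get();
    }

    @Override
    public void write(String value) {
        store.set(value);
    }
}
```

A real read data source would wrap a file, database, or configuration center instead of an AtomicReference, but the read/write responsibilities stay separated the same way.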

Common implementations of read data sources fall into two modes:

  • Pull mode: the client periodically polls a rule management center (such as an RDBMS or a file) to pull rules. This approach is simple to implement, but the downsides are that changes may not be picked up promptly, and overly frequent polling can cause performance problems.
  • Push mode: the rule center pushes rules out, and the client listens for changes by registering listeners, for example with a configuration center such as Nacos or ZooKeeper. This mode offers better timeliness and consistency guarantees.

In practice, different storage types correspond to different data source types. Push-mode data sources generally do not support writes, while pull-mode data sources are writable.

Let's look at their usage scenarios in conjunction with the Sentinel console, along with the corresponding points that need modification.

| The original situation

If the application does not register any data sources, the process of pushing rules directly from the Sentinel console is very simple:

The Sentinel console pushes the rules to the client through the API, and the client updates them directly in memory. In this case, the rules disappear when the application restarts, so this setup is only suitable for simple testing, not for production. In a production environment, we typically need to configure a rule data source on the application side.

| Pull-mode data source

Pull-mode data sources (such as local files or an RDBMS) are generally writable. To use one, register it on the client side: register the read data source with the corresponding RuleManager, and register the write data source with the WritableDataSourceRegistry of the transport module. Take a local file data source as an example:

public class FileDataSourceInit implements InitFunc {

    @Override
    public void init() throws Exception {
        String flowRulePath = "xxx";

        ReadableDataSource<String, List<FlowRule>> ds = new FileRefreshableDataSource<>(
            flowRulePath, source -> JSON.parseObject(source, new TypeReference<List<FlowRule>>() {}));
        // Register the readable data source with FlowRuleManager.
        FlowRuleManager.register2Property(ds.getProperty());

        WritableDataSource<List<FlowRule>> wds = new FileWritableDataSource<>(flowRulePath, this::encodeJson);
        // Register the writable data source with the transport module's WritableDataSourceRegistry.
        // When the console pushes rules, Sentinel updates them in memory and then
        // writes them to the file.
        WritableDataSourceRegistry.registerFlowDataSource(wds);
    }

    private <T> String encodeJson(T t) {
        return JSON.toJSONString(t);
    }
}

The local file data source periodically polls the file for changes and reads the rules, so we can update rules either by modifying the file directly or by pushing them through the Sentinel console. Taking the local file data source as an example, the push process is shown in the figure below:

The Sentinel console first pushes the rules to the client via the API, and the client updates them in memory. The registered write data source then saves the new rules to the local file. When using a pull-mode data source, the Sentinel console generally does not need to be modified.

| Push-mode data source

For push-mode data sources (such as a remote configuration center), the push operation should not be performed by the Sentinel data source itself but by the console. The data source is only responsible for receiving the configuration pushed by the configuration center and updating it locally.
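The receive-only role of a push-mode data source can be sketched with a toy config center. All class names here are illustrative; in a real setup you would register e.g. a ZookeeperDataSource from the sentinel-datasource-zookeeper module, which listens to the configuration center and updates rules in memory.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy config center: pushes rule JSON to all registered listeners.
class ToyConfigCenter {
    private final List<Consumer<String>> listeners = new ArrayList<>();

    void addListener(Consumer<String> listener) {
        listeners.add(listener);
    }

    void publish(String rulesJson) {
        listeners.forEach(l -> l.accept(rulesJson));
    }
}

// Push-mode data source: it only listens and applies what the config
// center pushes; it never writes rules back.
public class PushModeDataSource {
    private volatile String currentRules = "[]";

    public PushModeDataSource(ToyConfigCenter center) {
        center.addListener(json -> currentRules = json);
    }

    public String currentRules() {
        return currentRules;
    }
}
```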

Suppose writes were also handled by the data source: the Sentinel client would receive a rule pushed by the console, update the new rule in memory, and then push the rule to the remote configuration center. The data source would then observe the same rule being pushed back from the configuration center and update it in memory a second time. In other words, once an application has updated a rule locally and pushed it to the remote center, it should not receive and apply that same change again. Therefore, the correct push path is: configuration center console / Sentinel console → configuration center → Sentinel data source → Sentinel, rather than pushing to the configuration center through the Sentinel data source. The process is shown below:

Note that since different production environments may use different data sources, the push from the Sentinel console to the configuration center must be adapted by the user. Taking ZooKeeper as an example, we can proceed with the following steps (assuming rules are pushed at the application dimension):

  1. Implement a shared ZooKeeper client in the Sentinel console for pushing rules. The ZooKeeper address is specified in the console configuration, and the client is created on startup.
  2. Assign a separate path to each rule type of each application (appName); the path can be made configurable, or follow convention over configuration. For example, with the path pattern /sentinel_rules/{appName}/{ruleType}, the flow rules of application appA would live at /sentinel_rules/appA/flowRule.
  3. The rule configuration pages need corresponding changes so that rules are configured directly at the application dimension. When rules for multiple resources of an application are modified, they can be pushed in batch or separately. The Sentinel console caches rules in memory (e.g. InMemFlowRuleStore); it can be modified to cache rules per application (keyed by appName). Whenever rules are added, modified, or deleted, first update the in-memory rule cache, then fetch the full rule set from the cache when a push is needed and push it to ZooKeeper through the client implemented above.
  4. On the application side, the client needs to register the corresponding read data source to listen for changes; refer to the related documentation.
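Steps 1–3 can be sketched as follows. The path convention matches the one above; the class name and the publish method are hypothetical, and the actual ZooKeeper write (e.g. via Apache Curator) is only outlined in comments because it depends on your environment.

```java
// Console-side sketch: build the conventional rule path and publish the
// serialized rules to it. Only rulePath() is concrete; the ZooKeeper
// interaction is environment-specific and shown as comments.
public class RulePathPublisher {
    private static final String RULE_ROOT = "/sentinel_rules";

    // /sentinel_rules/{appName}/{ruleType}, e.g. /sentinel_rules/appA/flowRule
    public static String rulePath(String appName, String ruleType) {
        return RULE_ROOT + "/" + appName + "/" + ruleType;
    }

    public void publish(String appName, String ruleType, String rulesJson) {
        String path = rulePath(appName, ruleType);
        // With Apache Curator this would be roughly:
        //   if (client.checkExists().forPath(path) == null) {
        //       client.create().creatingParentContainersIfNeeded().forPath(path);
        //   }
        //   client.setData().forPath(path,
        //       rulesJson.getBytes(StandardCharsets.UTF_8));
    }
}
```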

Monitor data persistence

Sentinel records second-level metrics for resource access (resources that are not accessed are not recorded) and saves them in local logs; see the second-level monitoring log documentation for details. The Sentinel console pulls monitoring data from these second-level logs through an API exposed by the Sentinel client and aggregates it. Currently, the console stores aggregated monitoring data directly in memory without persistence and retains only the last five minutes of data. To persist monitoring data, you can implement the MetricsRepository interface (available since version 0.2.0), register it as a Spring bean, and reference it with the @Qualifier annotation in the appropriate places. The MetricsRepository interface defines the following methods:

  • save / saveAll: store monitoring data
  • queryByAppAndResourceBetween: query the monitoring data of a given resource of an application within a time range
  • listResourcesOfApp: query all resources of an application

The default monitoring data type is MetricEntity, which contains the application name, timestamp, resource name, exception count, passed request count, blocked request count, and average response time.
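As a hypothetical illustration of how such per-second entries might be aggregated across machines before persistence (the field subset and the pass-weighted response-time average below are assumptions for the sketch, not the dashboard's actual code):

```java
import java.util.List;

// Toy aggregation over a MetricEntity-like subset of fields. Counts are
// summed across machines; the average RT is weighted by pass count.
public class MetricAggregator {

    public static class Metric {
        final long passQps;
        final long blockQps;
        final long exceptionQps;
        final double avgRt;

        public Metric(long passQps, long blockQps, long exceptionQps, double avgRt) {
            this.passQps = passQps;
            this.blockQps = blockQps;
            this.exceptionQps = exceptionQps;
            this.avgRt = avgRt;
        }
    }

    public static Metric merge(List<Metric> metrics) {
        long pass = 0, block = 0, exception = 0;
        double weightedRt = 0;
        for (Metric m : metrics) {
            pass += m.passQps;
            block += m.blockQps;
            exception += m.exceptionQps;
            weightedRt += m.avgRt * m.passQps;
        }
        double avgRt = pass == 0 ? 0 : weightedRt / pass;
        return new Metric(pass, block, exception, avgRt);
    }
}
```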

Users can also extend this to integrate with Grafana and other visualization platforms for better visualization of monitoring data.

Other

There are also issues to consider when using the Sentinel console in a production environment:

  • Permission control: in a production environment, permission control is essential. In principle, only AppOps or administrators should be allowed to modify an application's rules. The Sentinel console does not provide permission control; developers need to add it themselves.

Also check out Awesome Sentinel for some of the community’s extensions and solutions, and feel free to add some of the best implementations.


The original link

This article is original content of the Yunqi community and may not be reproduced without permission.