I. Project background

In 2017, vivo's Internet R&D team decided that a call chain system could deliver real value to the business, so development work began. Over the past three years, the overall framework of the call chain system has kept evolving. This article introduces the principles and practical experience behind the Agent technology of the vivo call chain system.

Development of the vivo call chain system started from studying Google's classic paper "Dapper, a Large-Scale Distributed Systems Tracing Infrastructure". We investigated the relevant systems in the industry: EagleEye, SGM (a distributed service tracing system), CAT (a real-time application monitoring platform), Zipkin, Pinpoint, SkyWalking, and so on. Through this research and analysis, we focused on learning from SkyWalking's approach to instrumentation (burying points). Next, I will step through some of the key technologies used in the Agent.

II. Call chain introduction

1. Overall architecture

To give readers an overall picture, let's first look at the current overall architecture of vivo's call chain system in the figure below. The Agent is responsible for instrumentation and the collection of call chain data. Note that this is the latest architecture, which differs somewhat from what we started with at the beginning of the project.

2. Core domain concepts

There are two core concepts in a call chain, Trace and Span, both of which derive from Google's original paper introducing Dapper. Whether in the call chain products of large companies or in open source implementations, domain models generally borrow these two concepts. So if you want to understand call chains well, you first need a clear understanding of these two concepts.

The image above simulates a simple scenario:

A request is initiated from a mobile client and routed to the back end, where Nginx first forwards it to service A for processing. Service A queries the database, does some simple processing, and then calls service B. Service B returns its result to A, and finally the mobile client receives a successful response.

Based on the simulated scenario above, I give the definition:

Trace: the complete link through the distributed system traversed by invocations of the same piece of business logic.

We use a traceId to mark one specific request invocation; the traceId is unique across the distributed system and ties the whole link together (its generation rules are introduced later). Note that "invocations of the same business logic" can be read as requests originating from the same interface. Because program logic contains branch structures such as if/else, a single call cannot fully reflect a trace link; only by merging the links reached by multiple calls of the same business logic do we obtain the complete trace link.

Span: one local request invocation.

A single call produces multiple spans, which together form one (possibly incomplete) trace. Each span must be marked with the invocation link it belongs to (that is, span data must carry the traceId) and with its level within that link: at the same level, spanIds increment atomically, and across levels a "." and a child sequence number are appended. For example, in the figure above, spans 1.1 and 1.2 belong to the same level, while span 1 is their parent one level above. The communication between B and D is an RPC call with four steps: B initiates the call, D receives the request, D returns the result to B, and B receives the response from D. These four steps make up one full span, so B and D each hold half of that span, which is why the spanId must be passed across processes; this is explained later.
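To make the spanId rule concrete, here is a minimal sketch of how such hierarchical IDs can be generated; the method is illustrative, not the Agent's actual API:

import java.util.concurrent.atomic.AtomicInteger;

class SpanIdGenerator {
    // A child span's id is the parent's id plus "." plus an atomically
    // incremented sequence number, so siblings at the same level become
    // 1.1, 1.2, ... and their children 1.1.1, 1.1.2, ...
    static String nextChildSpanId(String parentSpanId, AtomicInteger nextId) {
        return parentSpanId + "." + nextId.incrementAndGet();
    }
}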

3. Basic logic of call chain data collection

The vivo call chain system is positioned as service-layer monitoring and is an important part of vivo's Internet monitoring system. The basic monitoring points include service exceptions, RPC call latency, slow SQL, and so on. Since the collected data must support latency monitoring, at least in the RPC call and slow SQL scenarios, data collection has to be implemented in an AOP style. Apart from JVM metrics, which are collected directly through java.lang.management.ManagementFactory, everything the vivo call chain Agent collects follows a similar AOP form. The following is pseudocode:

beginDataCollection(BizRequest req);
try {
    runBusiness();
} catch (Throwable t) {
    recordBizRunError(t);
    throw t;
} finally {
    endDataCollection(BizResponse resp);
}

III. Basic technical principles

Developing the call chain Agent involves a large number of technical points; below I select some key ones to introduce briefly.

1. Rules for generating distributed IDs (traceId)

The traceId plays a critical role in the call chain. As mentioned in the previous section, it concatenates the spans generated across multiple processes. Agent sampling control, entry-service identification, key-metric computation in the Flink backend, user queries over a complete call link, correlation of service logs across the whole chain, and data sharding in Kafka, HBase, and ES all depend on it. In the vivo call chain system, the traceId is a string of length 30; in the figure below I have colored the segments that carry special meaning.

  • 0e34:

The PID of the process on the Linux operating system, in hexadecimal. It distinguishes multiple processes on a single machine, so that different processes on the same machine can never produce the same traceId.

  • c0a80001:

The IPv4 address of the machine that generated the traceId, in hexadecimal; for example, 127.0.0.1 → 7f 00 00 01 → 7f000001.

  • D:

Represents the operating environment of vivo's internal business. In general we distinguish offline and online environments; the offline environment is further divided into development, testing, stress testing, and so on. Here D represents an online environment.

  • 1603075418361:

A millisecond timestamp. It increases uniqueness and also lets us read, directly from the traceId, the time at which the request entered the entry service.

  • 0001:

An atomically incrementing ID, mainly used to strengthen the uniqueness of the distributed ID. The current design can tolerate 10,000 × 1,000 = 10,000,000 traceIds per second within a single process.
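Putting the five segments above together, the generation logic can be sketched as follows. This is only my reading of the layout described above: the helper names are hypothetical, the code assumes an IPv4 address, and the real Agent's implementation may differ in its details:

import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicInteger;

class TraceIdGenerator {
    private static final String HEX_PID = String.format("%04x", currentPid() % 0x10000); // e.g. "0e34"
    private static final String HEX_IP = hexIp();  // e.g. 192.168.0.1 -> "c0a80001"
    private static final char ENV_FLAG = 'd';      // hypothetical flag for the online environment
    private static final AtomicInteger SEQ = new AtomicInteger();

    // 4 (pid) + 8 (ip) + 1 (env) + 13 (millis) + 4 (seq) = 30 characters
    static String nextTraceId() {
        // wraps at 10,000; combined with the millisecond timestamp this
        // allows 10,000 x 1,000 ids per second in one process
        int seq = Math.floorMod(SEQ.getAndIncrement(), 10000);
        return HEX_PID + HEX_IP + ENV_FLAG + System.currentTimeMillis() + String.format("%04d", seq);
    }

    private static long currentPid() {
        // pre-JDK 9 trick: the runtime name is conventionally "pid@hostname"
        return Long.parseLong(ManagementFactory.getRuntimeMXBean().getName().split("@")[0]);
    }

    private static String hexIp() {
        try {
            byte[] a = java.net.InetAddress.getLocalHost().getAddress();
            return String.format("%02x%02x%02x%02x", a[0] & 0xff, a[1] & 0xff, a[2] & 0xff, a[3] & 0xff);
        } catch (java.net.UnknownHostException e) {
            return "7f000001"; // fall back to 127.0.0.1
        }
    }
}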

2. Full-link data transmission capability

Full-link data transmission capability is the cornerstone of the functional integrity of the vivo call chain system and the Agent's most important piece of infrastructure. The spanId, traceId, and link markers mentioned above all depend on it. Midway through development, as the positioning of the call chain system became more specific, no actual feature ended up depending on link markers, so they are not covered in this article. At the beginning of the project, full-link data transmission was used only to pass the Agent's internal data across threads and processes; it has since been opened up to the business side as well.

Most Java developers know the JDK's ThreadLocal utility class, which provides safe data isolation in multi-threaded scenarios and is used frequently. Far fewer have used InheritableThreadLocal, which has been in the JDK since 1.2; I had never used it myself.

InheritableThreadLocal copies data from the parent's ThreadLocalMap into a child thread when that thread is created via new Thread(). But we rarely create threads directly with new Thread(); instead we use the ThreadPoolExecutor thread pool introduced in JDK 1.5, and InheritableThreadLocal does not work in a thread pool scenario. As you can imagine, once execution crosses a thread or a thread pool, important data such as the traceId and spanId would be lost and could not be passed on. The link of a request would be broken and could no longer be connected by its traceId, which would be a heavy blow to a call chain system. So the problem had to be solved.

Passing data across processes is easy: HTTP requests carry it in HTTP headers, Dubbo calls in the RpcContext, and MQ scenarios in message headers. However, passing data across a thread pool without intruding on business code is harder. The vivo call chain Agent achieves it by intercepting the loading of ThreadPoolExecutor and modifying its bytecode with a bytecode tool, a capability that typical open source call chain systems do not have.
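The effect of that bytecode enhancement is roughly equivalent to wrapping every task submitted to the pool, as in the hand-written sketch below. To be clear, this is only an illustration of the technique: the Agent does this transparently at the bytecode level, and the class names here are made up:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class TraceContext {
    static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();
}

// Captures the submitter's context at submit time and restores it at run time.
class TracedRunnable implements Runnable {
    private final Runnable delegate;
    private final String traceId = TraceContext.TRACE_ID.get(); // snapshot taken on the submitting thread

    TracedRunnable(Runnable delegate) { this.delegate = delegate; }

    @Override public void run() {
        TraceContext.TRACE_ID.set(traceId); // restore inside the pool thread
        try {
            delegate.run();
        } finally {
            TraceContext.TRACE_ID.remove(); // do not leak into reused pool threads
        }
    }
}

// What the enhanced ThreadPoolExecutor.execute(...) effectively does:
class TracedExecutor extends ThreadPoolExecutor {
    TracedExecutor(int core, int max) {
        super(core, max, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    }
    @Override public void execute(Runnable command) {
        super.execute(new TracedRunnable(command));
    }
}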

3. Introduction to JavaAgent

At the beginning of this year, call chain adoption across vivo's Internet businesses reached 94%, a figure we can be proud of. At the start of the project we consoled ourselves with the belief that a call chain, being a big-data system, did not need to serve all Internet businesses, or that serving a few core business systems would be enough.

In my opinion, there are at least two core underlying logic to achieve such a high access rate:

  • First, the Agent uses JavaAgent technology, so business sides get access that is non-intrusive and imperceptible.

  • Second, the Agent's stability has earned the trust of the Internet business lines: from the project's start in 2017 to the end of 2019, only one business-side incident was attributed to it.

However, things were not so smooth at the beginning. At first the Agent's instrumentation module had to intrude into business logic: in the first version, SpringMVC and Dubbo were instrumented by requiring users to configure an MVC filter and a Dubbo filter in their code, which was extremely inefficient to roll out. I am still grateful to the colleagues on the business lines who trialed that first version, before we resolutely switched to the JavaAgent approach. Below I introduce JavaAgent technology.

javaagent is a JVM startup parameter. The call chain uses it to intercept class loading, modify the bytecode of matched classes, and insert the data collection logic.

Developing JavaAgent applications requires the following knowledge:

  • Javaagent parameter usage;

  • Understanding of the JDK Instrumentation mechanism (the premain method and the ClassFileTransformer interface) and of the Premain-Class entry in the MANIFEST.MF file;

  • Use of bytecode tools;

  • Principle and application of class loading isolation technology.

Let me explain one by one:

(1) The configuration example of JavaAgent is as follows:

java -javaagent:/test/path/my-agent.jar -jar myApp.jar

The JAR configured via the javaagent parameter (my-agent.jar here) is loaded by the AppClassLoader, which is discussed in later sections.

(2) The Instrumentation mechanism refers to using the JDK's java.lang.instrument.Instrumentation and java.lang.instrument.ClassFileTransformer interfaces together to replace class bytecode; the entry point of the replacement logic is the interception of class loading. A Java JAR has a standard configuration file, META-INF/MANIFEST.MF, to which key-value entries can be added. The key we need here is Premain-Class, and its value is the fully qualified name of a Java class that must declare a method public static void premain(String agentOps, Instrumentation inst). When you start an executable JAR with the java command, this method is executed first; in it we register the bytecode transformation logic, and whenever a class matching our rules is loaded, the transformer runs and injects the instrumentation logic.
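A minimal premain skeleton looks like this; the package, class name, and matched prefix are illustrative:

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class AgentBootstrap {
    // Invoked by the JVM before the application's main method runs.
    public static void premain(String agentOps, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // className uses '/' separators, e.g. "org/apache/dubbo/..."
                if (className != null && className.startsWith("com/example/target")) {
                    // return the modified bytecode produced by a bytecode tool here
                }
                return null; // null means "leave this class unchanged"
            }
        });
    }
}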

(3) Configuration in the MANIFEST.MF file

The Can-Retransform-Classes attribute declares whether the JVM is allowed to retransform classes already loaded by this agent; the JavaDoc of java.lang.instrument explains it in detail. The Boot-Class-Path attribute appends the specified JARs to the bootstrap class path, so that their classes are loaded by the bootstrap class loader.
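For example, the agent JAR's META-INF/MANIFEST.MF might contain entries like these (the class name and JAR name are illustrative):

Manifest-Version: 1.0
Premain-Class: com.example.agent.AgentBootstrap
Can-Retransform-Classes: true
Boot-Class-Path: agent-core.jar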

(4) As for the use of a bytecode tool, the vivo call chain Agent performs the following operations (a sketch of the first one follows the list):

  • Modify the logic of the specified method (embed AOP-like logic);

  • Add instance fields to a class;

  • Make a class implement a particular interface;

  • Get the values of a class's instance fields and static fields, read its parent classes and interfaces, and so on.
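With Byte Buddy, which the Agent actually uses (see the selection report below), the first of these operations can be sketched roughly as follows. The matched class and method are placeholders, and the transformer lambda assumes a Byte Buddy 1.10-era API (newer versions add a ProtectionDomain parameter to the callback):

import java.lang.instrument.Instrumentation;
import java.lang.reflect.Method;
import java.util.concurrent.Callable;
import net.bytebuddy.agent.builder.AgentBuilder;
import net.bytebuddy.implementation.MethodDelegation;
import net.bytebuddy.implementation.bind.annotation.Origin;
import net.bytebuddy.implementation.bind.annotation.RuntimeType;
import net.bytebuddy.implementation.bind.annotation.SuperCall;
import net.bytebuddy.matcher.ElementMatchers;

public class TimingAgent {
    public static void premain(String agentOps, Instrumentation inst) {
        new AgentBuilder.Default()
            .type(ElementMatchers.named("com.example.TargetService"))    // class to enhance
            .transform((builder, type, classLoader, module) -> builder
                .method(ElementMatchers.named("handle"))                 // pointcut method
                .intercept(MethodDelegation.to(TimingInterceptor.class)))
            .installOn(inst);
    }

    public static class TimingInterceptor {
        @RuntimeType
        public static Object intercept(@Origin Method method,
                                       @SuperCall Callable<?> original) throws Exception {
            long start = System.currentTimeMillis();    // openSpan(...) would happen here
            try {
                return original.call();                 // run the original method body
            } finally {
                long costMs = System.currentTimeMillis() - start;
                System.out.println(method + " took " + costMs + " ms"); // closeSpan(...) would record this
            }
        }
    }
}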

4. Core model data structure

As mentioned above, a span represents one local call, and each side of a service invocation produces half a span of data. The in-memory definition of half a span (as defined at the end of 2017) is as follows:

public class Span {
    final transient AtomicInteger nextId = new AtomicInteger(0); // spanId increments atomically at the same level
    String traceId;
    String spanId;
    long start;
    long end;
    SpanKind type;                // Client, Server, Consumer, Producer
    Component component;          // DUBBO, HTTP, REDIS...
    ResponseStatus status = ResponseStatus.SUCCESS;
    int size;                     // size of the call result
    Endpoint endpoint;            // records IP, port, HTTP interface, redis command
    List<Annotation> annotations; // log events, such as SQL, uncaught exceptions, exception logs
    Map<String, String> tags;     // tag key-value pairs
}

From the definition above you can get a general idea of how the various features of the call chain are computed.

5. Instrumentation details of each component

Here I list the components covered by the vivo call chain Agent's instrumentation as of the end of 2019, along with the specific instrumentation locations. As I understand it, the vivo call chain system has moved to version 3.0 this year, adding more than eight newly instrumented components, and the collected data keeps getting richer.

6. Semi-automatic instrumentation capability

After deep encapsulation of the instrumentation capability, adding instrumentation for a new component to the Agent is highly efficient. The general steps are as follows and can be understood together with the figure above:

  • Debug the core execution flow of the third-party framework/component to be instrumented, understand how it executes, and select an appropriate pointcut for the AOP logic; the pointcut should make it easy to obtain the data for each span field;

  • Create an instrumentation aspect class: inherit a specific parent class, implement its abstract methods, mark in those methods which target methods to cut into, and provide the interceptor that implements the AOP logic;

  • Implement the interceptor logic: obtain part of the data in the openSpan method and complete the rest of the data collection in closeSpan;

  • Configure which class loader (for example, Thread.currentThread().getContextClassLoader()) should load the class containing the interceptor logic, then enable the component's instrumentation logic.

As you can see, adding instrumentation for a new component is now very easy. The mid-term goal of the 2.0 project in 2018 was full automation: with as little code as possible, some classes would be generated automatically from configuration, making new instrumentation even more efficient. A minimal sketch of what such a plugin definition might look like is shown below.
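The base types and method names here are invented for illustration, not the Agent's real internal API:

// Hypothetical base class: one subclass per instrumented component.
abstract class InstrumentPlugin {
    abstract String targetClass();       // the class to enhance
    abstract String targetMethod();      // the pointcut method
    abstract SpanInterceptor interceptor();
}

// Hypothetical interceptor contract mirroring the openSpan/closeSpan steps above.
interface SpanInterceptor {
    void openSpan(Object target, Object[] args);        // collect part of the span data before the call
    void closeSpan(Object result, Throwable error);     // complete the span data after the call
}

class HttpClientPlugin extends InstrumentPlugin {
    String targetClass()  { return "org.apache.http.impl.client.CloseableHttpClient"; }
    String targetMethod() { return "execute"; }
    SpanInterceptor interceptor() {
        return new SpanInterceptor() {
            public void openSpan(Object target, Object[] args) { /* record start time and endpoint */ }
            public void closeSpan(Object result, Throwable error) { /* record status and cost */ }
        };
    }
}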

7. Span data flow diagram

Let's take a look at the full life cycle of a span, from creation to delivery to Kafka.

As the figure shows, once a full half-span has been generated (that is, closeSpan() has completed; see the core domain concepts section in the call chain introduction), it is first cached in ThreadLocal space. After the thread finishes all of its logical processing, finish() dumps the spans into the Disruptor, from which a consumer thread periodically flushes them into the Kafka client's cache, and they are finally sent to the Kafka queue.

Two questions are worth asking here: why introduce a Disruptor instead of a thread-safe JDK queue, and why not write into the Kafka client's buffer directly? The reason for the first is simple: the Disruptor is lock-free, never blocks, and has a fixed capacity, while the thread-safe queues in the JDK either block or cannot cap their capacity, and the Kafka client buffer does not meet these conditions either; a business thread must never be blocked. The second point to explain is how the Agent knows a thread has finished: a LinkedList is used as a per-thread stack. The thread pushes in openSpan at the first instrumented pointcut, pops in closeSpan, and when the stack is empty, finish() is invoked.
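A sketch of that per-thread stack, simplified to the essentials (the real Agent batches the finished spans into the Disruptor at the point marked below):

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

class SpanStack {
    private static final ThreadLocal<LinkedList<Span>> STACK =
            ThreadLocal.withInitial(LinkedList::new);
    private static final ThreadLocal<List<Span>> FINISHED =
            ThreadLocal.withInitial(ArrayList::new);

    static void openSpan(Span span) {
        STACK.get().push(span);  // entered an instrumented pointcut
    }

    static void closeSpan() {
        LinkedList<Span> stack = STACK.get();
        FINISHED.get().add(stack.pop());  // left the pointcut: this half-span is complete
        if (stack.isEmpty()) {
            finish();                     // outermost pointcut done for this thread
        }
    }

    private static void finish() {
        List<Span> batch = FINISHED.get();
        // publish the whole batch to the Disruptor in one shot, then reset
        FINISHED.remove();
    }
}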

8. Rich internal governance strategies

At the beginning of the project, the main goals were business adoption and the applicability of the product's capabilities, without much thought given to internal governance. But as data volumes grew large, governance of the Agent had to be taken far more seriously. The figure above shows the Agent's main internal governance capabilities as of the end of 2018. Let me give some background on each of them.

(1) Configuration broadcast:

The ability to pull and push configuration is the cornerstone of the other governance capabilities. When the premain method executes, the Agent actively pulls configuration from the vivo configuration center, and when configuration changes, the center pushes it down to the Agent. Many internal behaviors of the Agent depend on configuration; internally, configuration changes are propagated with the Observer (listener) mechanism from the JDK.

(2) Log policy:

In 2017, vivo's Internet business was just emerging and the unified log center was still weak, so a flood of exception logs could overwhelm it; exception flow control was therefore needed. In addition, the Agent can collect business logs of a specified level on demand: for example, because log printing practices were inconsistent and messy, some services wanted the call chain system to collect WARN- or INFO-level logs of a particular class for troubleshooting. The Agent itself also needs to print logs, and after bytecode enhancement its log printing code is embedded inside third-party frameworks; in other words, it runs while business logic is executing in those frameworks, where slow execution would hurt business performance, so asynchronous log output is required. Finally, log printing is implemented by the Agent itself: to avoid class conflicts with whatever logging framework the business side uses, no third-party logging framework can be used.

(3) Sampling strategy:

At the beginning of 2018, with fewer than 200 services connected, the collected span data already filled ten Kafka physical machines; flow control was unavoidable, with sampling as the focus. However, the original sampling logic created a new problem: business TPS figures became inaccurate, so data such as TPS was later collected independently.

(4) Downgrade:

This one is easy to understand: it supports dynamically turning off data collection for a service or for a component, and lets the business side proactively switch off the call chain.

(5) Exception flow control:

The call chain instruments the logging component and can also intercept exceptions not caught by the business side; this data is collected and stored in the call chain system. If there are too many exceptions, the system itself cannot keep up, so exception flow control here means limiting how often the same exception is transmitted to the backend.
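A minimal sketch of this kind of flow control; the key derivation, window length, and unbounded map are all simplifications of what a real implementation would need:

import java.util.concurrent.ConcurrentHashMap;

class ExceptionThrottle {
    private static final long WINDOW_MS = 2 * 60 * 1000; // e.g. at most one report per 2 minutes
    private static final ConcurrentHashMap<String, Long> LAST_SENT = new ConcurrentHashMap<>();

    // Returns true if this exception should be forwarded to the backend.
    static boolean shouldReport(Throwable t) {
        String key = t.getClass().getName() + "|" + topFrame(t); // identity: type + throw site
        long now = System.currentTimeMillis();
        Long prev = LAST_SENT.get(key);
        if (prev != null && now - prev < WINDOW_MS) {
            return false;            // same exception seen recently: drop it
        }
        LAST_SENT.put(key, now);     // a benign race may let an occasional duplicate through
        return true;
    }

    private static String topFrame(Throwable t) {
        StackTraceElement[] st = t.getStackTrace();
        return st.length > 0 ? st[0].toString() : "";
    }
}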

(6) Full-process span flow monitoring:

The Agent monitors and counts the span flow at every stage (generation, enqueue, dequeue, Kafka send success/failure, and data loss). When data loss is detected, the capacity of the in-memory lock-free queue can be increased or the Kafka send interval decreased; when Kafka sends fail, it usually indicates a network problem or a Kafka cluster problem.

(7) Data aggregation frequency control:

In 2018 we estimated that raw span data would eventually grow to 150 billion records per day, and the call chain system did not have the resources to process such a volume. We therefore quickly implemented data aggregation on the Agent side and handed the pre-aggregated data to Flink for the final computation, reducing the pressure on Kafka and the big data cluster.

(8) JVM sampling and Kafka send frequency control:

The Agent regularly samples JVM metrics such as GC, CPU, memory used by the JVM, and the number of threads in each state; after Flink computation, they are displayed as line charts on a page. The sampling interval is strictly 5 s. In addition, the span data generated by the Agent is first cached in the in-memory lock-free queue and then sent to Kafka periodically in batches; to balance the real-time needs of alerting against the Agent's CPU cost, the default send interval is 200 ms, and it can also be controlled remotely.
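For reference, the JVM side of this sampling uses the standard java.lang.management APIs mentioned earlier; a sketch of one sampling pass:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

class JvmSampler {
    // Called periodically (every 5 s in the Agent) by a scheduled agent thread.
    static void sampleOnce() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        long heapUsed = memory.getHeapMemoryUsage().getUsed();

        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int threadCount = threads.getThreadCount(); // per-state counts come from getThreadInfo(...)

        long gcCount = 0, gcTimeMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            gcCount += gc.getCollectionCount();   // cumulative; diff against the previous sample
            gcTimeMs += gc.getCollectionTime();
        }
        // package heapUsed, threadCount, gcCount, gcTimeMs into a metrics record for Kafka
    }
}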

IV. Agent stability guarantee

As mentioned above, the Agent's adoption rate has reached 94% across thousands of applications, and I believe one important reason is that its stability is trusted by the business side. To guarantee its own stability without affecting the business, the call chain Agent must first minimize interference with the execution of business threads, and second, think through boundary conditions as thoroughly as possible.

1. Non-blocking throughout

The starting point for minimizing interference with business threads is never to block them. Let's walk through the potential choke points for a business thread and discuss how each is handled.

(1) Thread choke point 1 — log printing:

Handled with the Disruptor: logs are cached without blocking, following the principle that a log may be discarded but must never block the thread.

(2) Thread choke point 2 — instrumentation logic:

  • Measure 1: generated spans are cached in ThreadLocal and dumped to the Disruptor in batches, avoiding repeated contention on the Disruptor's producer barrier;

  • Measure 2: instrumentation inevitably uses reflection, but reflection has pitfalls; we analyzed the JDK reflection source code and corrected how we use it;

  • Measure 3: avoid reflection where possible; bytecode enhancement makes instrumented classes implement custom interfaces so that object instance data is fetched through ordinary method calls;

  • Measure 4: pool large collection objects to avoid excessive large-memory allocation during the data dumps from ThreadLocal to the Disruptor and from the Disruptor to Kafka.

(3) Thread choke point 3 — span data sending:

Likewise, the Disruptor is used here to avoid blocking the thread.

2. Robustness

How well boundary conditions are considered and handled depends heavily on the developer's experience and technical ability. Below I list a few key questions, which are also what the business side cares about.

(1) What if the Agent has logic problems?

The whole flow is wrapped in try-catch, and the same exception is logged at most once every 2 minutes.

(2) What if a business thread would still be blocked?

Downgrade, and exit that single instrumentation flow directly.

(3) What if the business is too busy and CPU consumption is too high?

  • Sampling control + frequency control + degradation;

  • Discard data directly;

  • Customize the Disruptor's consumer wait strategy to balance performance against CPU consumption.

(4) What if it consumes too much memory?

Strictly limit the number of in-memory data objects;

Use SoftReference for large, hard-to-control memory consumption points along the data flow.

(5) What if Kafka cannot be connected, or the connection drops?

In addition to supporting downgrade, the Agent can exit directly if it fails to connect to Kafka at startup, and can discard data directly if the connection drops at runtime.

V. Difficult technologies and key implementations

The following briefly introduces some key and difficult technologies in the Agent. One of the hardest things to control is which class loader loads each class inside the Agent; get this wrong and you will certainly face all kinds of ClassNotFoundExceptions.

1. Startup process

The Agent's startup process looks simple; it is posted here to help colleagues read the source code. Note that the entry point is the premain method, whose class is loaded by the AppClassLoader. During startup we must control which classes or modules in the Agent are loaded by which class loader, and some classes are actively loaded by a custom class loader. Bridging the logical execution spaces of different class loaders is solved with the JDK's proxy pattern (InvocationHandler), introduced below.
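That bridging technique can be sketched like this: the caller holds only an interface visible to its own loader, and an InvocationHandler forwards calls by reflection to an object created inside the agent's loader. This is a generic illustration of the pattern, not the Agent's actual code:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

class CrossLoaderBridge {
    // `iface` is visible to the caller's loader; `target` was instantiated by another loader.
    @SuppressWarnings("unchecked")
    static <T> T bridge(Class<T> iface, Object target) {
        InvocationHandler handler = (proxy, method, args) -> {
            // Look the method up on the target's own class, which may live in a
            // different loader. This works cleanly when the signature uses only
            // JDK types, since those are shared by all class loaders.
            Method m = target.getClass().getMethod(method.getName(), method.getParameterTypes());
            return m.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }
}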

2. Microkernel application architecture

The Agent's main responsibilities are instrumentation and data collection, so instrumentation is rightly the most core logic in the Agent. Below is a brief introduction to each function block around this core, together with its class diagram. Apart from the isolation block, any other function block can be removed without affecting the rest, so the application follows the microkernel architecture pattern.

Logging: Custom implementation

  • Adapts to the environment, with different behavior in different environments;

  • Adapts to slf4j;

  • Log levels are dynamically controllable.

  • Automatically identify the same error logs to avoid impact on the log center.

Monitoring: the cornerstone of reliability

  • Monitor the full life cycle of instrumentation data (generation, enqueue, dequeue, Kafka send success/failure, memory queue consumption, data loss);

  • Monitor JVM sampling latency.

Policy control function block:

  • Broadcast configuration change events based on the observer pattern;

  • Control sampling, log level, business log interception level, downgrade, exception flow control, monitoring frequency, JVM sampling frequency, and data aggregation.

Bytecode conversion control function block:

  • Component enhancement plug-in (configurable);

  • Enhancement logic for different components is isolated from each other;

  • Enhancement logic is highly encapsulated and driven by configuration;

  • The core process mimics Spring's class inheritance design and is highly extensible.

Process control function block:

  • Highly modular within the application;

  • Highly extensible via the SPI mechanism.

Class isolation control function block:

  • Customize multiple class loaders to load classes in different locations and JARS;

  • Stay compatible with the Tomcat and JDK class loader hierarchies, actively letting specific Tomcat or JDK class loaders explicitly load certain classes;

  • Interfere with the parent delegation model, controlling how a particular class's parent class or interfaces are loaded. A sketch of such a loader follows.
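The delegation tweak involved can look like this: a loader that loads the Agent's own packages child-first and falls back to standard parent delegation for everything else (the package prefix is illustrative, and the byte-loading part is omitted):

class AgentClassLoader extends ClassLoader {
    AgentClassLoader(ClassLoader parent) { super(parent); }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null && name.startsWith("com.example.agent.")) {
                try {
                    c = findClass(name);              // child-first for the Agent's own classes
                } catch (ClassNotFoundException ignored) {
                    // fall through to the parent
                }
            }
            if (c == null) {
                c = super.loadClass(name, false);     // standard parent delegation otherwise
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // A real implementation reads the class bytes from the agent JAR here
        // and calls defineClass(...); omitted in this sketch.
        throw new ClassNotFoundException(name);
    }
}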

3. Core technology stack

The arrows in the figure indicate that, from bottom to top, the techniques become harder to use and require more time for research and tuning. The Java probe technique is the JavaAgent introduced above. The selection report and background for ByteBuddy are given below. The Disruptor requires considerable time to understand its background, read its source code, and tune it, as also described below. Class loading was the biggest headache at the start of the project; I still remember the despair of dealing with ClassNotFoundException at the end of 2017. It went far beyond knowing how to customize a class loader and parent delegation. Back then I bought several books that introduced the relevant knowledge; even when the table of contents suggested less than one page of useful material, I still spent hundreds of yuan on them.

4. Class loading and isolation control

Note that the goal of class loading isolation is twofold: to ensure that the third-party packages used by the Agent never conflict, by version, with the business side's third-party packages, and to ensure that no class-not-found problems occur while the Agent's logic executes. The class loading isolation inside the Agent is sketched here and can be understood together with the section above.

Here I try to list the things you need to know:

  • The four major occasions on which classes are loaded;

  • The loading and execution logic of the premain class;

  • The JDK's parent delegation model, how to implement a custom class loader, and how to change the loading order;

  • A study of all the class loaders in the JDK (at the beginning I was misguided and anxiously studied parts of the JVM's C++ source code that I did not understand);

  • Tomcat's class loading architecture, with the corresponding source code read;

  • How execution jumps between class loader spaces.

VI. Partial selection report

The development of the whole call chain system involved many key technology selections; only the two most relevant to the Agent are given here.

1. ByteBuddy, a bytecode manipulation tool

Bytecode programming is one of the finest pieces of "dark technology" available to an ordinary Java programmer. What is bytecode programming? You are surely familiar with bytecode editing libraries such as Javassist and ASM, which dynamically modify or generate Java bytecode. Dubbo, for example, uses Javassist to dynamically generate bytecode for some classes. The main reason we chose ByteBuddy was that the project started from SkyWalking's instrumentation logic, which is based on ByteBuddy. If I had to choose again today, I would prefer Javassist. Below is my personal understanding of the pros and cons of each framework.

(1) ByteBuddy

A wrapper built on top of ASM. Open source projects using it: Hibernate, Jackson.

Advantages:

  • It is very convenient to use in specific scenarios;

  • As of 2017 the framework's author was very active, supporting almost all new features of the latest JDKs;

  • Easy to customize extensions.

Disadvantages:

  • The domain model is confusing and the class diagram is complex; inner classes can nest 8 levels deep, Eclipse fails to decompile several of its classes, and the source is hard to debug and read, which is extremely unfriendly to deep users. In everyday development we rarely nest inner classes more than 3 levels; imagine what 8 levels looks like!

(2) ASM

Open source projects using it: the Groovy/Kotlin compilers, CGLIB, Spring.

Advantages:

  • The resulting code is extremely fast; bytecode-oriented programming is black technology at the Java language level;

  • Its vision is performance and compactness, with only 28 classes in the entire codebase.

Disadvantages:

  • It is complicated to use, with low coding efficiency;

  • You need to be familiar with the Java bytecode instruction set and with the layout of the class file format.

(3) Javassist

Open source projects: Dubbo, MyBatis.

Advantages:

  • Simple to use, fast and easy to use;

  • Programmers write plain Java source as strings, which Javassist then compiles into bytecode; this is easy to understand;

  • The official documentation is rich and its examples are easy to understand.

Disadvantages:

  • Its built-in compiler lags behind javac, making complex features and new JDK language features difficult to implement.

2. Disruptor, a lock-free in-memory queue

The main reason for using the Disruptor is its high performance and its ability to cap capacity without blocking, requirements that the thread-safe collections in the JDK simply cannot satisfy.

(1) Main features: non-blocking, low latency, high throughput on the consumer side.

(2) Usage Scenarios:

  • High-concurrency, non-blocking, low-latency systems;

  • Segmented event-driven architecture.

(3) Why so fast?

  • Volatile and CAS lockless operations are used.

  • Cache line padding is used to avoid false sharing.

  • The ring array preallocates memory, reducing allocation latency and garbage collection.

  • Fast pointer operations that turn modulo into a bitwise AND (m % 2^n = m & (2^n - 1)).

(4) Precautions for use:

The consumer wait strategy must be chosen by weighing business-thread blocking, CPU consumption, and data loss together.
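To make the trade-off concrete, here is a sketch of the non-blocking publish pattern implied above, using the LMAX Disruptor 3.x API; the event type, ring size, and wait strategy are illustrative:

import com.lmax.disruptor.InsufficientCapacityException;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.SleepingWaitStrategy;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.dsl.ProducerType;
import com.lmax.disruptor.util.DaemonThreadFactory;

class SpanPipeline {
    static class SpanEvent { Object payload; }

    private final RingBuffer<SpanEvent> ringBuffer;

    SpanPipeline() {
        // Fixed capacity, a power of two, so index % size becomes index & (size - 1).
        Disruptor<SpanEvent> disruptor = new Disruptor<>(
                SpanEvent::new, 8192, DaemonThreadFactory.INSTANCE,
                ProducerType.MULTI, new SleepingWaitStrategy()); // wait strategy trades CPU for latency
        disruptor.handleEventsWith((event, sequence, endOfBatch) -> {
            // consumer thread: hand the span over to the Kafka producer here
        });
        this.ringBuffer = disruptor.start();
    }

    // Never blocks the business thread: if the ring is full, the span is dropped.
    boolean tryPublish(Object span) {
        try {
            long seq = ringBuffer.tryNext();
            try {
                ringBuffer.get(seq).payload = span;
            } finally {
                ringBuffer.publish(seq);
            }
            return true;
        } catch (InsufficientCapacityException full) {
            return false; // count the loss; dropping beats blocking
        }
    }
}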

VII. Summary

Doing call chain system development well is clearly a hard job. The difficulty lies not only in solving the Agent's technical challenges, but also in deciding on and discovering product capabilities, in meeting product requirements with minimal resources, and, even more, in Java developers who initially knew nothing about big data having to perform massive data computation under tight resource constraints.

I hope this article can serve as a reference for companies and teams that are, or will be, developing call chain systems. Thanks for reading.

Author: Shi Zhengxing