I'm 3y, a markdown programmer who has stretched one year of CRUD experience across ten 👨🏻‍💻, long known as a quality interview-boilerplate (bagu) player.

Today we'll talk about something every system could use: data link tracing. System monitoring was introduced to quickly locate application and operating system problems, but what about business-level problems? In this article you'll see an annotation-based way to print logs, and how a simplified version of full-link tracing is implemented.

Enough rambling, let's get into it.

01. Annotation-based log printing

I've mentioned this in the first few Austin posts: I'd been waiting for my buddy @Mansandajiang to publish his log component library to the Maven repository so I could use it. He recently released two versions and uploaded them, so here I am integrating it.

This component library does one thing: print log information via annotations, with support for SpEL parsing, custom contexts, and custom functions. The feature list sounds fancy, but essentially it just makes logging a bit more elegant: we get to write cleaner-looking code and show off a little. Who wouldn't want that?!

With the annotation defined on the method, calling the method prints the following log:
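To make it concrete, here's a minimal sketch of what the usage might look like. The annotation name @OperationLog and its bizType/bizId attributes are inferred from the log fields shown further down, not confirmed against the component's actual API:

```java
// Sketch only: annotation and attribute names are assumptions inferred from
// the log output below (bizType/bizId), not the library's confirmed API.
@OperationLog(bizType = "SendService#send", bizId = "#sendRequest.messageTemplateId")
public SendResponse send(SendRequest sendRequest) {
    // normal business logic only; no logging code needed here
    return doSend(sendRequest); // doSend is a hypothetical helper
}
```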

Looks good, doesn't it? With one annotation I can print the method's input parameters, and with our custom bizType and bizId we can easily locate where a log was printed. It even kindly prints the method's return value into the log.

At least for this interface, it fits the requirements of my scenario nicely. Let's use a picture to quickly review what the send interface does:

Printing input parameters and return values at the interface level can help locate plenty of problems (those who know, know), and being able to print such good log information via annotations, without intruding on our normal business code, is a proper flex.

Its implementation principle is not complicated; interested readers can pull the code and take a look. Read the README first!!

GitHub: github.com/qqxx6661/lo…

In general, it uses SpEL expressions to read information from the #sendRequest input object, and parses the annotation via Spring AOP. As for custom contexts and custom functions, I don't use them here, at least not in the Austin project's scenario. Oh, and it can also ship logs to other channels (such as MQ), though I don't need that in this scenario either.
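For intuition, here is a minimal sketch of how such a component can be built with Spring AOP plus SpEL. The class and annotation names are illustrative, not the library's actual code:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.reflect.MethodSignature;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.DefaultParameterNameDiscoverer;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;
import org.springframework.stereotype.Component;

// Illustrative annotation; the real component's annotation will differ.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface OperationLog {
    String bizType();
    String bizId();
}

@Aspect
@Component
class OperationLogAspect {

    private static final Logger log = LoggerFactory.getLogger(OperationLogAspect.class);

    private final ExpressionParser parser = new SpelExpressionParser();
    private final DefaultParameterNameDiscoverer discoverer = new DefaultParameterNameDiscoverer();

    // Intercept every method annotated with @OperationLog
    @Around("@annotation(operationLog)")
    public Object printLog(ProceedingJoinPoint joinPoint, OperationLog operationLog) throws Throwable {
        // Expose each method parameter to SpEL as #paramName (e.g. #sendRequest)
        StandardEvaluationContext context = new StandardEvaluationContext();
        MethodSignature signature = (MethodSignature) joinPoint.getSignature();
        String[] paramNames = discoverer.getParameterNames(signature.getMethod());
        Object[] args = joinPoint.getArgs();
        if (paramNames != null) {
            for (int i = 0; i < paramNames.length; i++) {
                context.setVariable(paramNames[i], args[i]);
            }
        }
        // Evaluate the SpEL expression from the annotation against the live arguments
        String bizId = parser.parseExpression(operationLog.bizId()).getValue(context, String.class);
        Object returnValue = joinPoint.proceed();
        log.info("bizType:{}, bizId:{}, returnStr:{}", operationLog.bizType(), bizId, returnValue);
        return returnValue;
    }
}
```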

In the current implementation, this is the only interface where I can use the component, and I admit it works well in some scenarios.

But it has its limitations: the printed log information is strongly tied to method parameters. To print variables that aren't method parameters, you need a Context or a custom function. Custom functions can only be used in limited ways; we can't extract every variable involved in logging into a function. And if you use a Context, you still have to embed it in the business code, so at that point why not just assemble the log object yourself?
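To see why the Context route feels unsatisfying, here's a self-contained sketch; the LogContext name and its putVariable method stand in for the component's real custom-context API, which may look different:

```java
import java.util.HashMap;
import java.util.Map;

public class ContextExample {

    // Illustrative stand-in for the component's custom-context API;
    // the real class and method names may differ.
    static class LogContext {
        private static final ThreadLocal<Map<String, Object>> VARS =
                ThreadLocal.withInitial(HashMap::new);

        static void putVariable(String name, Object value) {
            VARS.get().put(name, value);
        }
    }

    public String send(String sendRequest) {
        // A value computed inside the method, not available as a parameter...
        String channelAccount = "account-66";
        // ...can only reach the annotation's log via an explicit context call,
        // which puts logging concerns right back into the business code.
        LogContext.putVariable("channelAccount", channelAccount);
        return "ok";
    }
}
```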

I once wondered whether I was just using it the wrong way. After discussing it with my buddy, I ended up extracting my own LogUtils for log printing in my application scenario.

02. Data link tracing

The logs printed by the interface above let us quickly troubleshoot problems at the access layer, but the bulk of the work actually happens in the processing layer. Here's a quick review of what the processing layer does:

The processing layer applies many platform filtering rules, most of which target not the message template but the userIds (receivers). During this process, it is especially important to record how each user fares under each message template, for two reasons:

1. Locating and troubleshooting problems. If a customer reports that a user isn't receiving SMS messages, the cause usually lies in the processing stage (the message may have been filtered out, or the interface call may have gone wrong).

2. Analyzing a template's overall link data: how many times a message template was sent in a day, how many times it was filtered by each rule along the way, how many times it was delivered successfully, and how many times the message was ultimately clicked. All of this data except the click data comes from the processing layer.

Based on this background, I designed a set of tracking-point (埋点) rules and marked the corresponding points 📝 on the key links of the processing flow.

At present the tracking-point information is incomplete. As the system improves and more channels are connected, the information recorded here will keep growing: whenever we decide something is worth recording, we can add a new point.

This may feel a little abstract so far, so let me call the interface and print the logs; they're easy to follow:

// 1. The access layer prints the request log (msg records the input parameters, returnStr records the processing result)
2022-01-08 15:44:53.512 [http-nio-8080-exec-7] INFO  com.java3y.austin.utils.LogUtils -
{"bizId":"1","bizType":"SendService#send","logId":"34df87fc-0489-46c1-b39f-cafd7652f55b","msg":"{"code":"send","messageParam":{"extra":null,"receiver":"13288888888","variables":{"title":"yyyyyy","contentValue":"66661641627893157"}},"messageTemplateId":1}","operateDate":1641627893512,"returnStr":"{"code":"00000","msg":"Operation is successful"}","success":true,"tag":"operation"}

// 2. The processing layer prints the consumption tracking log (state=10 means the message was consumed from Kafka successfully)
2022-01-08 15:44:53.622 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO  com.java3y.austin.utils.LogUtils -
{"businessId":1000000120220108,"ids":["13288888888"],"state":10,"timestamp":1641627893622}

// 3. The processing layer prints the raw message consumed from Kafka
2022-01-08 15:44:53.622 [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO  com.java3y.austin.utils.LogUtils -
{"bizType":"Receiver#consumer","object":{"businessId":1000000120220108,"contentModel":{"content":"66661641627893157"},"deduplicationTime":1,"idType":30,"isNightShield":0,"messageTemplateId":1,"msgType":10,"receiver":["13288888888"],"sendAccount":66,"sendChannel":30,"templateType":10},"timestamp":1641627893622}

// 4. The processing layer prints the business-filtering tracking log (state=20 means this message was discarded because discarding was configured for it)
2022-01-08 15:44:53.623 [pool-8-thread-3] INFO  com.java3y.austin.utils.LogUtils -
{"businessId":1000000120220108,"ids":["13288888888"],"state":20,"timestamp":1641627893622}

My core logic for printing logs is as follows:

  • At the entry points (both the interface entry and the Kafka consumption entry), the raw message information needs to be printed. With the raw information we can locate and troubleshoot problems, or at the very least reproduce them.
  • During processing, use a state identifier to mark each step (10 means the message was consumed from Kafka successfully, 20 means the message has been discarded, and so on), and keep the log format uniform so the log information can be cleaned uniformly later; a sketch of such state codes follows this list.
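A minimal sketch of those state codes as an enum; the values 10 and 20 come from the logs above, while the constant names and the rest of the shape are my assumptions:

```java
// Illustrative enum for the tracking-point states. Only 10 and 20 appear
// in the logs above; the constant names are assumptions.
public enum AnchorState {

    RECEIVE(10, "message consumed from Kafka successfully"),
    DISCARD(20, "message discarded because discarding was configured");

    private final int code;
    private final String description;

    AnchorState(int code, String description) {
        this.code = code;
        this.description = description;
    }

    public int getCode() {
        return code;
    }

    public String getDescription() {
        return description;
    }
}
```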

As for the logging process itself, it's very simple: just extract a LogUtils class and you're done:
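Here's a minimal sketch of what such a LogUtils could look like, assuming an AnchorInfo DTO carrying the fields seen in the logs above (businessId, ids, state, timestamp); the real class lives in the Austin repo, and the choice of JSON serializer is incidental:

```java
import java.util.Set;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.alibaba.fastjson.JSON;

public class LogUtils {

    private static final Logger log = LoggerFactory.getLogger(LogUtils.class);

    // Print a tracking-point log in one uniform JSON format so it can be
    // collected and cleaned uniformly later (e.g. shipped to ELK).
    public static void print(AnchorInfo anchorInfo) {
        anchorInfo.setTimestamp(System.currentTimeMillis());
        log.info(JSON.toJSONString(anchorInfo));
    }
}

// Assumed DTO matching the fields in the logs above (lombok for brevity).
@lombok.Data
class AnchorInfo {
    private Long businessId;
    private Set<String> ids;
    private Integer state;
    private Long timestamp;
}
```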

What about tracking clicks? That's fine too: just splice the businessId onto the link as a parameter. As long as we have the click data, we can look for the track_code_bid parameter on the link and work out which user clicked which template message.

The businessId follows the message through its whole life, in both the tracking-point logs and the raw logs. The businessId is composed from the message template information plus the time.
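The exact layout isn't spelled out here, but the value 1000000120220108 in the logs above reads like templateType + zero-padded templateId + yyyyMMdd, so here's a sketch under that assumption:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public final class BusinessIdUtils {

    // Assumed layout, reverse-engineered from 1000000120220108 in the logs:
    // templateType (2 digits) + templateId (zero-padded to 6 digits) + yyyyMMdd.
    public static Long generate(Integer templateType, Long templateId) {
        String today = LocalDate.now().format(DateTimeFormatter.ofPattern("yyyyMMdd"));
        return Long.parseLong(templateType + String.format("%06d", templateId) + today);
    }
}

// e.g. generate(10, 1L) on 2022-01-08 yields 1000000120220108L
```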

03. What's next

The data link information is now being printed, but that's not enough: so far it is only written to the local server's logs. We still need to consider the following:

1. Application servers usually run as a cluster, so log data is scattered across different machines; troubleshooting would mean logging in to each server one by one.

2. The link data needs to be available in real time, with a web console so the business side can quickly check the whole process by themselves.

3. The link data must be stored offline for data analysis and backup (local logs are usually retained for less than 30 days).

These functions will be implemented one by one. First, ELK will be connected to provide a unified entry point for querying log information, along with the related service monitoring and alerting. Stay tuned.

Giving this a like isn't too much to ask, right? I'm 3y. See you in the next post.

Follow my WeChat official account [Java3y]. Besides technology, I also talk about everyday life there; some things can only be said quietly ~ [Sparring with Interviewers + Writing a Java Project from Scratch] is being continuously updated at high intensity! Give it a star!!!!! Original content isn't easy!! Hit the triple (like, comment, share)!!

Austin project source code Gitee link: gitee.com/austin

Austin project source code on GitHub: github.com/austin