Introduction

Because it covers a lot of ground, the buried point automatic collection scheme is split into three articles:

  • Buried Point Automatic Collection Scheme – Overview
  • Buried Point Automatic Collection Scheme – Route Dependency Analysis
  • Buried Point Automatic Collection Scheme – Buried Point Extraction

For the other two articles, see the posts from the Dazhuan FE official account on November 5 and 9, 2020

The QR code of the Dazhuan FE official account is at the end of this article

This is a long article, so if you are curious about the final result, check out the screenshots in the backend preview section first

Pain points

Buried points are the core means of tracking how a business performs after it goes live.

The buried point documentation matters just as much, yet in many companies or teams it is still maintained in the most primitive way.

Some teams do have a unified buried point platform, but the overall operating cost is high.

1) Maintenance costs

Every time a requirement is raised, the PM has to spend a lot of time maintaining the buried point document

Adding a new buried point is fine: just append it.

But if an old buried point is deleted or modified, the change has to be confirmed with the developers first, and only then can the document be updated

If nobody wants the hassle, entries only ever get added, never deleted. After a while, well, you know how that ends

2) Who maintains the buried point document?

As mentioned above, additions are fine; the problem is deletions and updates. Who pushes for the document to be updated?

Do the developers push the PM?

Does the PM ask the developers for a change list?

Do the developers update the document directly?

Do the developers keep a separate change list of buried points as they go?

Or do the developers and PMs just end up arguing with each other?

3) No blocking step in the process

Across the whole process, it is really hard to reach the goal with just one role.

In the end, whoever does the maintenance, without a blocking step in the process it all comes down to self-discipline.

And anything that relies on self-discipline alone is doomed in the long run ~

4) The buried point document is inaccurate

As a result of all the above, the buried point document drifts further and further away from the actual reporting code.

After a while nobody can make sense of it, especially after personnel changes or business adjustments.

The new colleague who takes over can only stare at it in despair.

5) Developers strongly resist

PM: “Help me check the buried points of project XX, I need it urgently”

PM: “Could you please help me check the buried points of the XX activity from before?”

PM: “For the last requirement iteration, update the buried point list and send me a copy”

…

Especially for projects delivered long ago.

As an FE, do you feel the same way?

The thinking this triggered

Either way, online data comes back through buried points, and business adjustments are made based on that data

This is an objective fact.

So the pain points around buried point documentation must be solved.

Some readers may ask why we don’t simply use a third-party tracking solution (GrowingIO, Sensors Data, etc.).

Because, no matter which product you adopt, there are two mainstream ways of reporting buried points:

  1. Full (codeless) tracking: everything is reported, including page regions
  2. Active SDK reporting: individual buried points, reporting exactly what you instrument

With full tracking, the PM has to configure the reports against the page structure (so the PM becomes the main maintainer), and whenever the page structure changes, the configuration has to be redone.

In addition, our company already has its own data platform; full tracking would put a heavy burden on the data analysis side, so each business line reports actively from code instead.

And active SDK reporting runs into exactly the problems described earlier.

Some readers might say: “That is the PM’s job; I only take care of my part.”

There is nothing wrong with that, but what we are really trying to do is make the whole team more effective.

While working on this project, we also talked with the PMs of many teams, and maintaining a usable buried point document really does take considerable effort.

It also made me more determined!

Solution ideas

All the problems above boil down to two core issues:

  • There is no unified buried platform
  • There is no scientific model for buried collaboration

After many discussions within the group, we subdivided the problem; the corresponding solutions are as follows:

The problem → the solution:

  • Ambiguous maintenance role → the PM only specifies which buried points need to be counted; updates are maintained automatically at the code level
  • High maintenance cost → buried points and their descriptions are extracted from the source files at compile time via JSDoc code comments
  • Document availability → once extracted, buried points are reported directly to the buried point backend
  • Old project access → a unified tool adds the buried point annotations automatically
  • New project access → the solution is integrated into the scaffolding, together with an inspection loader that hard-checks annotation completeness
  • Business effect → a buried point backend provides buried point and data previews

The flow may not be clear from the text alone, so let’s look at the big picture first

Let me explain the process:

  1. The PM writes the buried point documentation (new buried points only)
  2. The FE writes the reporting logic according to the documentation
  3. When the code goes live, the build runs, which triggers the webpack buried point extraction plugin
  4. The extraction plugin collects all buried points, page by page
  5. After collection, they are reported to the buried point service
  6. The PM can view the buried points and the corresponding data on the buried point platform

The process is clear enough, but there are definitely some open questions, so let’s go through them one by one:

Question 1: How to collect buried points

On Vue.prototype, we have a common method for reporting buried points:

Vue.prototype.$log = function (actionType, backup = null) { ... }

  • actionType: identifies the behavior buried point
  • backup: optional extra parameters

For example, when the PV of a page is reported with a channel number as the extra parameter:

this.$log('PAGE_SHOW', { channel: 'xxx' })
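For context, here is a minimal sketch of what such a $log method might look like. The payload fields and the send callback are illustrative assumptions, not the team’s actual implementation:

```javascript
// Hypothetical sketch of a buried point reporting method.
// `send` stands in for the real transport (e.g. an image beacon or
// navigator.sendBeacon); the pageType/ts fields are illustrative.
function createLogger({ pageType, send }) {
  return function $log(actionType, backup = null) {
    const payload = {
      pageType,     // page identifier, resolved from the current route
      actionType,   // behavior identifier, e.g. 'PAGE_SHOW'
      backup,       // optional extra parameters, e.g. { channel: 'xxx' }
      ts: Date.now()
    };
    send(payload);
    return payload;
  };
}
```

In a Vue project this would be mounted once, e.g. `Vue.prototype.$log = createLogger({ ... })`, so every component can call `this.$log(...)`.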

Solution: Collect annotations using custom JSDoc plug-ins and custom tags

/**
 * @log page display
 * @backup channel: channel number
 */
this.$log('PAGE_SHOW', { channel: 'xxx' })

Here @log and @backup are custom JSDoc tags. JSDoc also needs to be told explicitly to monitor only the $log method.

So, to keep the behavior of the whole plugin family consistent, a common configuration file, js_doc.conf.js, is introduced

js_doc.conf.js

module.exports = {
  // let JSDoc recognize the @log tag
  tag: 'log',
  // tell the plugin that this.$log is the buried point reporting method;
  // defined as an array because some projects may have several
  method: ['$log'],
  // pageType prefix, used to separate different projects
  prefix: 'H5BOOK',
  // pageType generation rule; different projects may have different rules
  pageTypeGen: function ({ prefix, routeName, dir, path, fileName }) {
    return (prefix + routeName).toUpperCase();
  },
  // public backup description; the final backup of a single buried point
  // is the combination of the public and private parts
  backup: 'uid: user ID',
  // project information, reported together with every buried point
  projectInfo: {
    projectId: 'project id, normally projectName',
    projectName: 'Chinese name',
    projectDesc: 'project description',
    projectIsShowOldMark: false
  }
}

This configuration file is shared by the js_doc plugin, by the file dependency analysis plugin mentioned below, and by the tool that automatically adds buried point annotations

The implementation of the js_doc plugin will be covered in detail in “Buried Point Automatic Collection Scheme – Buried Point Extraction”; this article focuses on the principles of the overall scheme

With this step, we can collect every behavior buried point together with its annotation and parameter description, i.e. each actionType and its corresponding comment

Of course, two core factors are needed to uniquely identify a buried point:

  • actionType: the action identifier
  • pageType: the page identifier (some teams call it pageId), used to locate the specific page

pageType is usually derived from the current page route at runtime, and that is exactly the problem

Question 2: How do we get pageType at compile time?

There is also a very common case: shared components, as shown below

Component A is referenced by multiple pages.

Or component A is referenced by a secondary component B, which in turn is imported, directly or indirectly, into the pages.

Either way, the buried points in component A must be attributed to the corresponding pages 1 and 2.

How do we handle this?

The question becomes:

  • How to determine the page route (i.e. derive pageType)
  • How to analyze file dependencies (i.e. attribute the buried points in shared components)

Solution:

  • Write a file dependency analysis plugin to analyze the reference relationships between files
  • Analyze the routing file via AST parsing to obtain all page routes, i.e. the root component of each page
  • Use the webpack compilation to analyze the file dependencies of the whole project; in the end, you know exactly which components each page references
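Attributing buried points in shared components then boils down to a transitive closure over the dependency graph. A simplified sketch with a hand-written graph (the real plugin walks webpack’s module graph instead; file names are illustrative):

```javascript
// Collect every file a page depends on, directly or indirectly.
// `graph` maps a file to the files it imports; `entry` is the page's
// root component as obtained from the route analysis.
function collectPageDeps(graph, entry) {
  const seen = new Set();
  const stack = [entry];
  while (stack.length) {
    const file = stack.pop();
    if (seen.has(file)) continue;
    seen.add(file);
    for (const dep of graph[file] || []) stack.push(dep);
  }
  return seen; // buried points found in any of these files belong to this page
}

// Illustrative graph: page1 uses B, which uses the shared A; page2 uses A directly.
const graph = {
  'views/page1/index.vue': ['components/B.vue'],
  'views/page2/index.vue': ['components/A.vue'],
  'components/B.vue': ['components/A.vue']
};
```

collectPageDeps(graph, 'views/page1/index.vue') includes components/A.vue, so A’s buried points are attributed to page 1 as well.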

The specific implementation scheme of this part will be specially described in “Buried Point Automatic Collection Scheme – Route Dependency Analysis”

Sample:

export default {
  routes: [
    // home page
    {
      path: '/page1',
      name: 'page1',
      meta: { title: 'page1', desc: 'xx homepage' },
      component: () => import('../views/page1/index.vue')
    },
    ...
  ]
}

Of course, reality can be more complicated:

  • The routing entry file may include multiple child routing files
  • Components can be imported in several ways (static import, dynamic import)
  • Imported components can be written in a variety of styles
  • …

With AST analysis, even the complex cases can be handled compatibly, but it does take patience ~

At this point, core information collection is complete.

The earlier JSDoc step gives us all actionTypes and their comments for each file.

This step gives us all the page routes of the project and the corresponding component file dependencies.

For the page description we take: meta.desc || meta.title || name || path

Since the page description is not necessarily the same as the real title, a separate desc field is supported
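That fallback chain can be written as a one-liner (the route shape follows the vue-router sample earlier; the helper name is ours, not part of the scheme):

```javascript
// Pick the most descriptive label available for a page route:
// meta.desc, then meta.title, then the route name, then the path.
const getPageDesc = (route) =>
  (route.meta && (route.meta.desc || route.meta.title)) || route.name || route.path;
```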

Think about it: with all of this, we can organize the behavior buried point data page by page!

During rollout we ran into one more problem:

different businesses generate pageType with different rules

With this in mind, we define the following in the configuration file:

module.exports = {
  ...
  // pageType prefix, used to distinguish different projects
  prefix: 'H5BOOK',
  // pageType generation rule; different projects may have different rules
  pageTypeGen: function ({ prefix, routeName, dir, path, fileName }) {
    return (prefix + routeName).toUpperCase();
  },
  ...
}

prefix: a per-project prefix (mainly used to distinguish projects, since different projects may define identical routes)

pageTypeGen: generates pageType according to a rule defined by the business itself. We hand prefix, routeName, dir, path and fileName to the user; with these parameters you can cover almost every route generation scheme.
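To make this concrete, here is how the configured hook might be invoked by the plugin. The rule and the context values below are just one possible example, not the rule every project uses:

```javascript
// A sample configuration with one possible pageType rule.
const config = {
  prefix: 'H5BOOK',
  pageTypeGen({ prefix, routeName, dir, path, fileName }) {
    return (prefix + '_' + routeName).toUpperCase();
  }
};

// The plugin calls the hook with the route/file context it has gathered.
const pageType = config.pageTypeGen({
  prefix: config.prefix,
  routeName: 'page1',
  dir: 'src/views/page1',   // illustrative values
  path: '/page1',
  fileName: 'index.vue'
});
```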

OK, the final data structure looks like this:

{
  // all pages of this project
  pageList: [
    // a single page
    {
      // routing information of this page
      routeInfo: {
        routeName: 'page1',
        description: '...'
      },
      // all buried points of this page
      actionList: [
        {
          actionType: 'PAGE_SHOW',
          pageType: 'H5BOOK_page1',
          backup: 'channel: channel number',
          description: 'page display'
        },
        ...
      ]
    },
    ...
  ],
  // project information
  projectInfo: {
    projectId: 'project id',
    projectName: 'project name',
    projectDesc: 'project description',
    projectMark: 'project mark',
    projectLogsMark: 'buried point mark',
    projectType: 'project type',
    projectIsShowOldMark: false, // whether to display old buried point data, default false
    projectDefBackup: 'default parameters'
  }
}

After the buried points are collected, the project information is attached, and everything is reported in batches to the buried point backend for storage via its API.

The buried point backend

We built the backend service with EggJS + Mongoose,

and the admin system with React + AntD + BizCharts (a chart library).

Backend preview

  • A list of all projects connected to automatic buried point collection
  • Click a project to see all of its pages (routes)
  • Click a page to see all of its buried points
  • View the data trend of a single buried point over the last seven days

The above is the core of the solution. But the whole idea is, after all, technology-driven,

so we definitely have to ask: how do we keep the whole thing running sustainably?

Question 3: How do we ensure buried points are updated in time?

This one is relatively easy. We already have the webpack plugin, so:

Solution: update the build command so that collection and reporting run on every release

During webpack compilation, the buried point extraction plugin collects the buried points and reports them automatically.

This guarantees the buried point document is refreshed on every release, keeping documentation and code in sync.
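The reporting hook can be sketched as a tiny webpack plugin skeleton. The hook name is standard webpack API, while the collector and the report callback stand in for the real extraction logic:

```javascript
// Skeleton of a plugin that reports collected buried points once the build
// finishes. `report` would POST the data to the buried point backend.
class LogCollectPlugin {
  constructor(report) {
    this.report = report;
    this.collected = []; // filled during compilation by the extraction logic
  }
  apply(compiler) {
    compiler.hooks.done.tap('LogCollectPlugin', () => {
      this.report(this.collected);
    });
  }
}
```

In webpack.config.js it would be registered under plugins, e.g. `plugins: [new LogCollectPlugin(sendToBackend)]`.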

Question 4: How can old projects quickly adopt the buried point solution?

Getting buried points into old projects is one of the biggest headaches for any developer, especially backfilling missing ones

To this end, we also put forward a set of convenient solutions

Solution: use a command-line tool to fill in the comments automatically

We wrote a command-line tool called AutoComment and installed it globally

Run the command in the project root directory, and the tool automatically annotates all .vue, .js and .ts files in the src directory

The command-line tool shares the same configuration file, js_doc.conf.js, with the earlier plugins

module.exports = {
  // let JSDoc recognize the @log tag
  tag: 'log',
  // the buried point reporting method(s), e.g. this.$log
  method: ['$log'],
  ...
}

It reads the tag and method attributes from that file.

The tool then finds every this.$log('xxxx', { ... }) call that does not yet have an annotation and adds one automatically.

Before:

this.$log('PAGE_CLOSE')

After:

/**
 * @log autocomment-PAGE_CLOSE
 */
this.$log('PAGE_CLOSE')

The logic for adding comments is simple:

  • The actionType argument of this.$log() is extracted via AST analysis
  • The fixed prefix ‘autocomment-’ is prepended to it, forming the final default comment
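As a rough illustration of the idea (the real tool uses AST analysis and skips calls that already have comments; this regex version is a simplification for the article):

```javascript
// Insert an `@log autocomment-<actionType>` comment above each bare
// this.$log('...') call, preserving the call's indentation.
function addAutoComments(source) {
  return source.replace(
    /^([ \t]*)this\.\$log\(\s*'([^']+)'/gm,
    (match, indent, actionType) =>
      `${indent}/**\n${indent} * @log autocomment-${actionType}\n${indent} */\n${match}`
  );
}
```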

Why a fixed prefix?

Because it lets developers quickly tell which comments were added automatically by the tool:

just search the project globally for ‘autocomment-’

At the same time, to remove developers’ psychological barrier (after all, the tool modifies the source code directly, which is unsettling),

we also built a summary page: after the tool adds its comments, the page pops up automatically

and shows the changes in a git-history style, so developers can see exactly what was modified.

Question 5: How do we ensure the annotation specification is followed during development?

Since collection is comment-based, developers can easily forget to write the comments, so:

Solution: provide a monitoring loader that checks the code in real time during development

During development, the loader inspects every $log() call; any call that is missing its comment triggers a warning immediately.

The implementation again relies on AST analysis; we can largely reuse the earlier algorithm.
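A minimal sketch of the check the loader performs (a regex stand-in for the AST logic; the webpack loader would wrap this helper and emit the warnings):

```javascript
// Return a warning for every this.$log(...) call whose preceding line
// does not close a JSDoc comment (a simplification of the AST-based check).
function findUncommentedLogs(source) {
  const issues = [];
  const lines = source.split('\n');
  lines.forEach((line, i) => {
    if (/this\.\$log\(/.test(line)) {
      const prev = (lines[i - 1] || '').trim();
      if (!prev.endsWith('*/')) {
        issues.push(`line ${i + 1}: $log call without a buried point comment`);
      }
    }
  });
  return issues;
}
```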

A note on the buried point specification

Careful readers may have noticed that this scheme does have a few constraints:

actionType must be a string literal

If it is a variable or an expression, the AST analysis cannot collect it properly.

So if you need to report content returned by an interface, put the value into backup and document it in the parameter description.

Unsupported scenarios:

// not supported: actionType comes from a variable
const resp = { ... }
this.$log(resp.actionType)

// not supported: actionType comes from an expression
this.$log(type === 1 ? 'actionType1' : 'actionType2')

If you run into these scenarios, you have to rewrite the code

// improvement for case 1: move the dynamic value into backup
const resp = { ... }
this.$log('respData', { type: resp.actionType })

// improvement for case 2: branch explicitly
if (type === 1) {
  this.$log('actionType1')
} else {
  this.$log('actionType2')
}

PS: aesthetically, the resulting code is not the prettiest… but function matters most, after all

In addition, the scheme does not support dynamically added routes.

Of course, supporting them is not impossible; if necessary, they can be handled separately.

For ToC projects, however, the common static route definitions are sufficient.

Ok, this is the core of the buried point automatic collection solution.

Conclusion

At the beginning, I also discussed this plan in depth with many business developers

The most notable concerns were:

  • The buried point document should be maintained by the PM; doesn’t this shift the PM’s work onto developers?
  • Writing the comments reduces development efficiency; is that acceptable to the business?
  • If something goes wrong, who is responsible?

There are so many different team situations and modes of cooperation that it is impossible to generalize.

But our original intention is to improve the overall efficiency of collaboration.

At the very least, it saves the team the effort of maintaining documents by hand and frees people to think about more valuable things, right?

This solution has been well received by the teams that have been connected, and it does solve the core pain points.

We believe: technology is efficiency, technology and products should promote each other!