Introduction

Code specifications are an enduring topic in software development, and almost every engineer has encountered and thought about them at some point. As front-end applications grow larger and more complex, more and more front-end engineers and teams are paying attention to JavaScript code specifications. Thanks to the flourishing front-end open source community, there are several mature JavaScript code checking tools, including JSLint, JSHint, ESLint, FECS, and more. This article focuses on the most common solution today, ESLint: a pluggable static analysis tool for JavaScript whose core approach is to parse code into an AST (Abstract Syntax Tree) and run pattern matching against it to locate code that does not conform to the agreed specification.

ESLint is not complicated to use. Install the dependencies according to the ESLint documentation, configure the rules according to the individual or team code style, and statically check project code to find and fix nonconforming code, either through the command-line tool or through the editor's ESLint integration. If you want to reduce configuration costs, you can also use open source configuration presets such as eslint-config-airbnb or eslint-config-standard.
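
For reference, a minimal project-level setup might look like the sketch below. This is only an illustration: it starts from ESLint's built-in recommended rule set, and a shared open source preset such as eslint-config-standard could be extended instead.

    // .eslintrc.js -- a minimal sketch of a project-level ESLint configuration
    // (assumes eslint has been installed via npm)
    module.exports = {
      root: true,                      // stop ESLint from searching parent directories for config
      extends: ['eslint:recommended'], // start from ESLint's built-in recommended rule set
      env: {
        browser: true,                 // enable browser globals such as window and document
        es6: true
      },
      parserOptions: { ecmaVersion: 2018, sourceType: 'module' },
      rules: {
        'no-console': 'warn'           // example project-level override
      }
    };

Running npx eslint src/ (or relying on the editor's ESLint integration) then reports any code that violates the combined rule set.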

For independent developers, or small teams with strong execution and a relatively uniform technology stack, JavaScript code specifications can be put in place at a fairly low cost by directly adopting the standard solutions provided by ESLint and its ecosystem. The process runs even more smoothly with helper tools such as husky and lint-staged. But for a large front-end team of dozens of people, applying a uniform JavaScript code specification across hundreds of front-end projects is considerably more complicated, and simply reusing existing open source configurations yields limited returns.

Problem analysis

Scaling up to a unified ESLint code specification raises a variety of problems that stem from the differences between large teams and small teams (or independent developers):

On the technical level:

  • Broader technology scenarios: For large teams, development is not limited to the traditional Web domain; it often also covers Node.js, React Native, mini programs, desktop applications (such as Electron), and other scenarios.
  • Technology choices are more decentralized: technology selection is often not uniform across projects and sub-teams, e.g. React vs. Vue, JavaScript vs. TypeScript.
  • The growing number and dispersion of projects increases the complexity of any ESLint solution, which in turn drives up project access costs, upgrade costs, and solution maintenance costs.

At the team level, as head count grows and the organizational structure becomes more complex:

  • Personnel styles are more diverse, and communication and coordination costs are higher.
  • It is harder to reach consensus on and promote a single scheme, and harder to ensure the specification is actually enforced.
  • Implementation status and effectiveness are difficult to measure and analyze.

Because of these differences, more issues must be considered and solved when designing a concrete solution that guarantees the specification is enforced. Based on the analysis above, we identified the following problems to solve:

  • How to develop a unified code specification and the corresponding ESLint configuration?

    • Scenario support: How do we support differences between scenarios while keeping the parts that should be consistent (such as basic JavaScript syntax) consistent across scenarios?
    • Technology selection support: How do we ensure consistency of the underlying rules (e.g. indentation) while supporting different technology choices?
    • Maintainability: Specifically in terms of rule configuration, can reusability be improved? Is the cost controllable during solution upgrade iterations?
  • How can code specifications be enforced?

    • How can we ensure the quality of code specification enforcement when growing head count and a more complex organizational structure make purely management-driven enforcement unreliable?
  • How to reduce application cost?

    • As the number of projects grows and engineering setups diverge, reducing the cost of access, upgrade, and execution of the solution saves a great deal of manpower and makes adoption much easier.
  • How can we learn, in a timely manner, how widely the specification is applied and how well it works?

The solution

To achieve unified JavaScript code specifications within the team, we designed a complete technical solution after analyzing the problems that arise when applying ESLint at scale. The solution consists of four modules: unified ESLint rule configuration for multiple scenarios, code integration delivery checks, automated access tools, and execution status monitoring and analysis. Working together, these modules solve the problems described above: they reduce maintenance cost, improve execution efficiency, and ensure that code specifications stay unified.

The design of the overall solution is shown in the figure below:

  1. Unified JavaScript specification for multiple scenarios: This module is the core of the whole solution. With the help of ESLint's features and a layered, categorized configuration structure, it ensures the consistency of basic rules while supporting different scenarios and technology choices.
  2. Code integration delivery check: This module guarantees that the solution is enforced by integrating static code checks into the continuous delivery workflow. A customized integration check tool keeps the developer's cost of running the checks low while ensuring delivery quality.
  3. Automated access and upgrade solutions: Provide “one-click” access/upgrade capabilities through command line tools while being integrated into team scaffolding, significantly reducing project access and maintenance costs.
  4. Execution monitoring and analysis: Understand the adoption status and effect of the solution by instrumenting the tools and the code integration delivery checks, then collecting and analyzing the check results.

Solution implementation

The problems mentioned above can be solved effectively through the coordination of these modules, but each module still requires further thought to arrive at a reasonable implementation. This section introduces the four core modules of the solution in detail.

Generic ESLint configuration scheme

This module relies on basic ESLint features and adopts a layered, categorized structure to provide a general configuration scheme for multiple scenarios and technology stacks, while keeping the scheme easy to maintain and extend.

Introduction to ESLint features

Before designing the ESLint configuration scheme, let’s take a look at some of the features of ESLint.

1. Pluggable design

The following diagram briefly describes how ESLint works:

ESLint itself works more like an engine: it drives the code inspection process by providing the basic checking capability and the constraints of the rule schema. The source code is parsed by a parser, checked rule by rule through the pipeline, and all nonconforming code is finally reported. Thanks to the plugin design, every rule can be switched on and off independently, and new custom rules can be introduced. ESLint is not strongly bound to any particular parser either, so different parsers can be used to parse the source code: for example, babel-eslint supports newer and stage-x ES syntax as well as special syntax such as JSX, and @typescript-eslint/parser even makes it possible to check TypeScript.
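
As an illustration of the plugin mechanism, the following sketch shows what a minimal custom rule might look like; the plugin path and rule name here are made up for the example.

    // eslint-plugin-example/rules/no-alert-in-prod.js -- a hypothetical custom rule
    // A rule declares which AST node types it wants to visit and reports
    // the nodes that violate the convention.
    module.exports = {
      meta: {
        type: 'problem',
        docs: { description: 'disallow calls to alert()' },
        schema: []            // this rule takes no options
      },
      create(context) {
        return {
          // visit every CallExpression node produced by the parser
          CallExpression(node) {
            if (node.callee.type === 'Identifier' && node.callee.name === 'alert') {
              context.report({ node, message: 'Unexpected alert() call.' });
            }
          }
        };
      }
    };

The plugin's entry point then exports this rule in its rules map, and projects enable it in their configuration (e.g. "example/no-alert-in-prod": "error"). Swapping the parser works the same way at the configuration level, for instance parser: 'babel-eslint' or parser: '@typescript-eslint/parser' in .eslintrc.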

2. Comprehensive, cascading, and shareable configuration

ESLint provides comprehensive, flexible configuration capabilities for parsers, rules, environments, global variables, and more. An existing configuration can be quickly extended and combined with the current one to produce a new configuration, and a configured rule set can be published as an npm package and applied to projects in one step.
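
A shareable config is simply an npm package whose entry file exports a configuration object; the sketch below uses a made-up package name to show the idea.

    // index.js of a hypothetical shareable config package "eslint-config-myteam"
    module.exports = {
      parserOptions: { ecmaVersion: 2018, sourceType: 'module' },
      env: { browser: true, node: true },
      rules: {
        indent: ['error', 2],                          // 2-space indentation
        'comma-dangle': ['error', 'always-multiline']  // trailing commas on multiline literals
      }
    };

    // In a consuming project, .eslintrc.js only needs to extend it:
    // module.exports = { extends: ['myteam'] };  // resolves to eslint-config-myteam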

3. Mature community ecosystem

There are many ESLint-based projects in the open source community, including plugins for various scenarios and frameworks as well as ready-made ESLint rule configurations, covering almost every scenario in front-end development.

Standard configuration scheme design

Based on ESLint’s plug-in and cascading configuration features, as well as open source solutions for various scenarios and frameworks, we designed the ESLint configuration architecture as shown below:

The configuration architecture adopts a layered, categorized structure, in which:

  • Base layer: defines the common basic syntax and formatting specification and provides shared code style and syntax rule configurations, such as indentation and trailing commas.
  • Framework support layer (optional): provides support for common technical scenarios and frameworks, including Node.js, React, Vue, React Native, and so on. This layer is configured with plugins from the open source community, with the rules of each framework tuned accordingly.
  • TypeScript layer (optional): provides support for TypeScript through @typescript-eslint.
  • Adaptation layer (optional): provides customization for special scenarios, such as MRN (Meituan's internal React Native customization), use together with Prettier, or special rule requirements of individual teams.

In a specific project, the layers and branches can be combined flexibly to obtain the ESLint rule set that matches the project. For example, a React project written in TypeScript can stack the base layer, the React branch of the framework layer, and the React branch of the TypeScript support layer to form its ESLint configuration. If the project later drops TypeScript, it simply removes the TS-React layer.

As a result, the ESLint configuration set looks like this:

Considering maintenance, upgrade, and application costs, we ultimately chose to publish all configurations in one npm package rather than one package per type. Using the React-with-TypeScript project as an example, you only need the following configuration in your project:

    // Install typescript, eslint-plugin-react, @typescript-eslint, etc. first
    module.exports = {
      root: true,
      extends: [
        // Because the base layer is mandatory, the framework layer introduces the
        // corresponding base layer by default; there is no need to introduce
        // eslintrc.base.js separately
        'eslint-config-xxx/eslintrc.react.js',
        'eslint-config-xxx/eslintrc.typescript-react.js'
      ]
    }

This layered, categorized structure makes later maintenance easier:

  • Changes to the base layer only need to be made in one place to take effect globally.
  • Adjustments to one non-base layer do not affect the others.
  • If you want to extend support for a type, just focus on the special rule configuration for that type.

Using TSLint for TypeScript projects is, admittedly, also a simple and convenient option. However, we still chose the combination of ESLint + @typescript-eslint/parser + @typescript-eslint/eslint-plugin for this scenario, mainly for the following reasons:

  • ESLint’s rule configuration is more detailed and comprehensive, providing more coverage.
  • The layered, categorized architecture keeps basic syntax and style rules consistent even when the framework or language differs, which helps unify style across projects with different technology choices.
  • The @typescript-eslint project iterates continuously, responds quickly to issues, and provides essentially equivalent implementations of the TSLint rules.

In its 2019 roadmap, the TypeScript team also made it clear that its lint tooling efforts will focus on ESLint support.

Code integration check

Building on the team's engineering infrastructure, static code specification checks are integrated into the development workflow to guarantee that the specification is enforced.

In general, a project that has adopted ESLint can use the editor's ESLint integration (such as the VSCode ESLint extension) during development to find and fix syntax errors and style issues in real time. However, this alone does not prevent the specification from being enforced poorly due to subjective factors or simple omissions, so we also run automated static checks at specific points in the workflow to block the submission or delivery of nonconforming code.

There are many points in the development workflow where static checks can be integrated; we focus mainly on the following two:

  • Code commit checks: ESLint checks are triggered through a Git hook when code is committed. The advantage is that the developer gets immediate feedback and can locate and fix problems quickly; the drawback is that developers can deliberately skip the check.
  • Code delivery checks: when code is delivered (with the help of the CI system's delivery workflow), ESLint checks run on a code inspection platform and block the delivery if they fail. The advantage is that the check can be enforced and its reports tracked online; the drawback is that it is far removed from the developer's local environment, so responding to and fixing problems costs more.

Combining the two gives the best of both, as shown in the figure below:

The commonly used approach to commit-time checking is husky plus lint-staged: when a commit is made, the Git hook runs checks against the files in the Git staging area. However, given the team's large number of projects and the volume of existing legacy code, the lint-staged strategy reduces some of the cost but still falls short. We therefore optimized this approach with a customized local commit checking tool, precommit-ESLint, whose core features are listed below (a sketch of the standard husky + lint-staged setup follows the list, for comparison):

  • Incremental checks at the granularity of changed lines, distinguishing warn and error levels.
  • Simply install the tool as a project dependency without any configuration.
  • Reduces the intrusiveness of embedding scripts in pre-commit hooks.
  • Execution status is instrumented and collected for later analysis.
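
For comparison, the husky + lint-staged combination mentioned above is typically wired up as in the following sketch. This shows the open-source baseline (husky v4-style config), not the in-house precommit-ESLint tool.

    // lint-staged.config.js -- run ESLint only on staged files
    module.exports = {
      '*.{js,jsx,ts,tsx}': ['eslint --max-warnings 0']
    };

    // .huskyrc.js -- trigger lint-staged from the pre-commit Git hook
    module.exports = {
      hooks: {
        'pre-commit': 'lint-staged'
      }
    };

The in-house tool replaces this per-project configuration with line-level incremental checking and built-in reporting, so projects only need to install it as a dependency.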

The effect is shown in the picture below:

At Meituan, we use an internally developed CI system and have customized the corresponding rules on an independently deployed Sonar instance, which basically meets our requirements, so we will not go into the details here. For other teams, ESLint's Node.js API makes it easy to build a code inspection service or platform quickly.
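
A minimal sketch of such a check step, using CLIEngine (ESLint's programmatic Node.js API at the time), might look like this; the file name and glob pattern are assumptions for the example.

    // check.js -- a minimal sketch of running ESLint programmatically,
    // e.g. inside a CI job or a code inspection service
    const { CLIEngine } = require('eslint'); // CLIEngine is the pre-v7 programmatic API

    function runCheck(patterns) {
      const cli = new CLIEngine({ useEslintrc: true });   // respect the project's own config
      const report = cli.executeOnFiles(patterns);        // lint the given files/globs
      const formatter = cli.getFormatter('stylish');      // human-readable output
      console.log(formatter(report.results));
      // fail the delivery pipeline when any error-level problem is found
      return report.errorCount === 0;
    }

    if (!runCheck(['src/**/*.js'])) {
      process.exit(1);
    }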

Automated access tool

This module uses a CLI tool to automate solution access and reduce project access and upgrade costs. Without automation, accessing the scheme described above in a project still involves a fair amount of work and complexity; the general steps are:

  1. Install ESLint.
  2. Install the corresponding ESLint rule configuration npm package based on the project type.
  3. Install relevant plug-ins, parsers, and so on, depending on the project type.
  4. Configure.eslintrc files based on the project type.
  5. Install the code submission inspection tool.
  6. Configure package.json.
  7. Test and fix problems.

In this process, particular attention needs to be paid to dependency versions: compatibility between dependencies, such as typescript and @typescript-eslint/parser, and compatibility between plugin versions and rules. For example, if a plugin version does not support a rule that is nevertheless configured, that rule silently becomes invalid. For developers unfamiliar with ESLint, compatibility issues, parsing exceptions, rule invalidation, and the like can waste a great deal of energy during configuration.

Therefore, when designing and developing the automated access tool, we took the operation steps, dependency versions, rule sets, and compatibility with engineering setups into account, and designed the following workflow:

Whatever the development scenario and framework choice, the tedious access process is reduced to a single command, and the same applies when the project's solution needs to be upgraded. As shown in the figure below, the project ends up with ESLint installed, a unified rule specification applied, and automatic incremental checks on every commit:

Instrumentation and statistical analysis

The main purpose of statistical analysis is to understand how widely the solution is applied and how well it works; in principle it should support both a per-project view and an overall view, as shown in the figure below:

Execution analysis is not complicated; the core is information collection and analysis. In this scenario, information is collected by the precommit-ESLint tool: after a git commit triggers the local code check, the script reports the check result (whether the check passed, the number and level of errors or warnings, and so on). The collected information is then analyzed through a log reporting and analysis platform; at Meituan we use the CAT platform (teams or companies without such a platform can build this capability into the code inspection service mentioned above). To make aggregate analysis easier, we grade each commit check by the number of problems it finds (a small classification sketch follows the list):

  • Check passed: no code specification errors detected.
  • Error level 1: fewer than 10 code specification errors detected.
  • Error level 2: between 10 and 100 code specification errors detected.
  • Error level 3: between 100 and 1,000 code specification errors detected.
  • Error level 4: more than 1,000 code specification errors detected.
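
A sketch of this classification as it might be applied to a reported error count; the level labels are just names for the buckets above.

    // classify an ESLint error count into the levels used for aggregate analysis
    function classifyCheckResult(errorCount) {
      if (errorCount === 0) return 'passed';     // check passed
      if (errorCount < 10) return 'level-1';     // fewer than 10 errors
      if (errorCount <= 100) return 'level-2';   // 10-100 errors
      if (errorCount <= 1000) return 'level-3';  // 100-1,000 errors
      return 'level-4';                          // more than 1,000 errors
    }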

For example, the figure below shows the statistics of commit check results in the first week of March 2019 (overall sampling rate 0.2). Among all failed checks, those with fewer than 10 errors clearly account for the largest share, and their repair cost is not high.

1. Distribution of failed commit checks (only checks that did not pass are included)

Effect analysis mainly looks at changes in engineering quality. On the one hand, the trend of the check pass rate and the distribution of check results show whether code quality improves as development continues. On the other hand, because the checks run in incremental mode, the whole project codebase also needs to be analyzed to obtain the proportion of nonconforming code and its trend, so that quality can be judged at the project level (due to permission-related constraints, this project-level analysis has not yet been adopted within the team). Concrete analysis is presented in the application results below.

Supporting work and application results

In addition to the overall scheme above, we also did some supporting work to make the solution more convenient for developers to use:

  • Continuous maintenance and update: Iterate on a monthly basis to resolve application issues, rule disputes, and support new rules or solutions.
  • Engineering integration: the entire solution can be seamlessly integrated into each team's scaffolding tool, automatically becoming the team's default solution applied during project initialization.
  • Documentation site: provides detailed usage documents, including rule information and access instructions, plus detailed descriptions of rules, environment dependencies, and the changelog for each version.
  • FAQ: an updated and maintained FAQ helps new users quickly locate and solve common problems.

At present, the solution has been adopted by the Meituan Takeout, Catering Platform, Flash Purchase, Hazelnut, Finance, and other teams. Based on statistical analysis of the instrumentation data (data from the last week of February 2019, overall sampling rate 0.2), we obtained the following figures:

  • As of the end of February 2019, the program has been connected to more than 200 front-end projects.
  • Integration checks (increments) are performed close to 1000 times per day.
  • Integration checks (increments) detect an average of 20,000-25,000 errors per day.
  • Integration check quality: the average pass rate was 75.562%; error level 1 accounted for 15.644% of all checks, or 64.015% of the checks that failed.

Meanwhile, we keep tracking the trends of the figures above to follow how code quality improves. Take the data from December 2018 to March 2019 as an example (up to the first week of March 2019, aggregated by week):

As the figure shows, the pass rate of integration checks trended upward overall in those three months, but dropped noticeably in the second and third weeks of January 2019. Analysis of the project information revealed that a batch of new projects was brought in during the second week of January 2019, in which dozens of specification errors were detected. On the whole, the integration check pass rate is now stable at roughly 75%-80%, and the trend suggests there is still room for improvement.

After the solution was rolled out, a user survey showed that it is playing a positive role. On the one hand, it improves the efficiency of multi-person collaboration: whether maintaining a project together or switching between projects, developers avoid the readability cost and formatting churn caused by inconsistent code styles. On the other hand, it helps developers find and avoid simple syntax mistakes.

Planning and reflections

The solution is now running stably. Beyond the existing features, we are still considering further optimizations and richer capabilities. Some directions not yet covered:

  • Extending to HTML and CSS style checks: although front-end frameworks and component libraries have reduced the amount of hand-written HTML and CSS in business development in recent years (especially in back-office applications), it is still worth standardizing HTML and CSS code style. With that in mind, static checks for HTML and CSS could be integrated into the current solution, rather than covering JavaScript (or TypeScript) only.
  • Further encapsulation: the current solution exposes all dependencies and configuration to the project. Fully encapsulating them inside a tool would make it even easier to use; the difficulty lies in balancing flexibility, editor support, and similar concerns.
  • Project-level code quality trend analysis: the current checking strategy is incremental, so connected projects could additionally be checked in full on a regular schedule, and their code quality trends analyzed over time.
  • Further analyze the inspection results and statistical data to find some potential problems and provide assistance for promoting development quality improvement, such as:
    • Count the rules that developers turn off or adjust in their projects, analyze why the most frequently disabled rules are turned off, and then either adjust those rules or push harder for their adoption.
    • Analyze the distribution of rules violated in check results, identify the most frequently broken rules, and publish corresponding best practices or handbooks.

The above are some of the practices of the Meituan Takeout team in applying ESLint at scale. Suggestions and discussion are welcome.

About the author

Peng Song, Terminal R&D engineer of Meituan Takeout Business Division.

About the team

The terminal team of the Meituan Takeout business division is responsible for the terminals and platforms that directly serve hundreds of millions of users, millions of merchants, and tens of thousands of operations and sales staff. Its goal is to keep improving user experience and R&D efficiency while ensuring high business stability and availability.

On the user side, the team has built a full-link high-availability system, with availability across clients, Web front end, and mini programs at about 99%. A multi-terminal, highly reusable local dynamic framework has been applied to core paths such as the home page, advertising, and marketing, improving R&D efficiency by 30%.

On the merchant side, the team built customized keep-alive capabilities covering process priority, VoIP push, and Doze mode, and provided multiple reach channels such as Shark, short links, and push; the order arrival rate increased to more than 98%.

On the operations side, R&D efficiency was improved by 10% by standardizing the R&D process and building a component library, Node services, front-end application management, page configuration, and more.

The team has many open positions and welcomes you to join; contact [email protected], noting "Takeout terminal team".