Author: guangsheng, Ant Financial · Data Experience Technology team

I recently worked on a React + TypeScript component project that will be open-sourced later and therefore has high requirements on code quality and engineering, so engineering governance was needed. Through this process, I summarized the engineering infrastructure that a React + TypeScript third-party component needs, which I share here.

This engineering governance mainly falls into the following areas:

  • Static checking: TypeScript + ESLint
  • Development experience: packaging tools and mono-repo management
  • Code quality: testing

Static checking

Tools such as TS and ESLint essentially perform static checks on code to find hidden bugs early. TS brought type checking that ESLint lacks, along with the syntax-checking capabilities ESLint already had, so we now use ESLint mainly to standardize code style, taking advantage of the community's large body of lint rules to enforce best practices with tooling. TS is mainly responsible for statically checking the code for syntactic and semantic errors. In addition, TS is a language in its own right, and using it gives access to language features that JS does not have.

From AnyScript to TypeScript

With TS, one important fork in the road is whether strict is turned on in the configuration. If not, you are effectively writing AnyScript, which has almost no type constraints and is not much different from JS. If a project is migrating from JS to TS, this option can stay off at first, because the old JS code has no type annotations. But for a new, pure TS project, strict must be turned on. Scaffolds like CRA now enable strict mode by default.

Turning on strict is easy; the hard part is writing elegant TS code under it. Here are some common problems in strict mode, along with some typing tips:

noImplicitAny

The most common scenario for this error is function arguments. If you are used to writing JS, you will likely forget to annotate function parameters. TS can infer a function's return type, but it cannot infer its parameter types. If a parameter type is omitted, the parameter implicitly becomes any, and a noImplicitAny error is reported, because TS's strict mode does not allow implicit any.

To solve this, get into the habit of adding types to function arguments — and not just an explicit any 😂. Define new types as needed, and reference existing ones where they are already defined. any should be reserved for unusual scenarios (more on that later). any is an escape hatch that bypasses type checking, and every use of it causes type checking to be bypassed at that point, reducing the value of using TS at all.

On the other hand, if you have written statically typed languages before, you will never forget to add types. This problem is more common among front-end developers used to the weak typing of JS; it is mostly a matter of building the habit and learning to think in types.
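As a minimal sketch, the fix is simply explicit annotations (the names below are made up for illustration):

```typescript
// Illustrative only -- User and greet are hypothetical names.
interface User {
  name: string;
  age: number;
}

// Without the annotations below, `user` and `prefix` would be
// implicit `any` and fail to compile under noImplicitAny.
function greet(user: User, prefix: string): string {
  return `${prefix}, ${user.name} (${user.age})`;
}

console.log(greet({ name: "Ada", age: 36 }, "Hello"));
```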

Let’s look at another scenario:

const props = {
  foo: "bar"
};

props["foo"] = "baz"; // Element implicitly has an 'any' type because expression of type 'string' can't be used to index type

The noImplicitAny error is also reported in this scenario, because we have not explicitly declared the index signature of this object. The solution:

interface Props {
  foo: string;
  [key: string]: Props[keyof Props];
}

const props: Props = {
  foo: "bar"
};

props["foo"] = "baz"; // ok
props["bar"] = "baz"; // error

The simplest index signature is [key: string]: any;. But when the keys are known, you can use keyof to get the union type of all keys of an interface, and an indexed access type to get the type of the values. This constrains the value type far more than any would.

strictNullChecks

By default (non-strict mode), undefined and null can be assigned to any type, so it is permissible to call a method on a property that may be undefined. In strict mode, null and undefined are treated as distinct types and cannot be assigned to other types. So if a value might be null or undefined, we must give the type checker enough information.

For example, there are the following scenarios:

class Component extends React.Component<{}, {}> {
  graph?: Graph;

  componentDidMount() {
    this.graph = new Graph();
    this.init();
  }

  init() {
    this.graph.on("click", () => {}); // Object is possibly 'undefined'
  }

  render() {
    return <div>foo</div>;
  }
}

In GUI scenarios, many member variables have no value until the component is initialized, such as this.graph here. Adding a ? after a property or parameter name marks it as optional, indicating that the property or parameter may be undefined.

To get past this error, we can use a type guard:

init() {
  if (this.graph) this.graph.on("click", () => {}); // ok
}

But if this.graph is used in many places, writing if everywhere becomes cumbersome and hinders readability. You can use a non-null assertion to tell the compiler that this.graph has a value:

init() {
  this.graph!.on("click", () => {}); // ok
}

Optional Chaining, which was recently released in TS 3.7, is a better solution:

init() {
  this.graph?.on("click", () => {}); // ok
}

The ! and ? operators should feel familiar if you have written Swift. TS now has a fairly complete set of tools for working with nullable values.

To summarize: under strictNullChecks, we can use type guards, non-null assertions, and optional chaining to tell the compiler that a nullable value is safe to use at a given point.

The point is that optional values are common in GUI programming; we need to learn to deal with them and treat undefined and null as distinct types.
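To make the three techniques concrete, here is a self-contained sketch. The Graph class below is a toy stand-in for a real chart library, an assumption for illustration only:

```typescript
// Toy stand-in for a chart library's Graph class (illustrative only).
type Handler = () => void;

class Graph {
  private handlers: { [event: string]: Handler[] } = {};
  on(event: string, handler: Handler): void {
    (this.handlers[event] || (this.handlers[event] = [])).push(handler);
  }
  emit(event: string): number {
    const hs = this.handlers[event] || [];
    hs.forEach(h => h());
    return hs.length; // how many handlers fired
  }
}

let graph: Graph | undefined;

// 1) Type guard: the if narrows `Graph | undefined` down to `Graph`.
if (graph) graph.on("click", () => {});

graph = new Graph();

// 2) Optional chaining: a no-op when graph is undefined, a call otherwise.
graph?.on("click", () => {});

// 3) Non-null assertion: we promise the compiler graph is set by now.
console.log(graph!.emit("click")); // → 1
```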

Type definitions for third-party libraries

When introducing a third-party library, we need to check whether the library provides a type definition file. The definition file is typically referenced in the types field of the package's package.json. For JS libraries, the type definitions may also live in a separate package, such as @types/react.

With a type definition file, we get type checking and code completion when calling the library's APIs, which improves the efficiency of using third-party libraries and surfaces potential bugs in advance.

If there is none, consider whether to maintain a definition file yourself — but the cost is significant. So for a third-party library without type definitions, we have to think carefully about whether to use it in a TS project at all.

Advanced types

The TS documentation has a chapter called "Advanced Types" covering the advanced type features. Besides the type guards, intersection types, union types, and nullable types mentioned above, the key ones are:

  • Mapped types
  • Conditional Types
  • Index types

When combined with generics, these techniques let us "program" at the type level — think of them as ternary expressions or Array.prototype.map, but operating on types.
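For instance, a mapped type plus keyof lets us derive one type from another — a minimal sketch (MyPartial is a hand-rolled version of the built-in Partial, written out to show the mechanics):

```typescript
interface Point {
  x: number;
  y: number;
}

// Mapped type: iterate over keyof T and make every property optional.
type MyPartial<T> = { [K in keyof T]?: T[K] };

const p: MyPartial<Point> = { x: 1 }; // y may be omitted

// Indexed access type: Point["x"] resolves to number.
const x: Point["x"] = p.x ?? 0;
console.log(x); // → 1
```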

For example, conditional types are used like this:

T extends U ? X : Y

If T is compatible with U (T contains all attributes that U has, and T can be assigned to U), the type is X, otherwise Y.

Take a look at a practical use of conditional types. For example, the following function may return string or null:

function process(text: string | null): string | null {
  return text && text.replace(/f/g, "p");
}

This is problematic because the return value may be null and there is no toUpperCase method.

// ⌄ Type Error! :(
process("foo").toUpperCase();

At this point we can use conditional types to solve the problem:

function process<T extends string | null>(
  text: T
): T extends string ? string : null {
  ...
}

process("foo").toUpperCase(); // ok
process(null).toUpperCase();  // error
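The body elided above could be filled in as follows — a sketch, not the original author's code. Note the cast, an assumption of this sketch: TS cannot relate the runtime branch back to the unresolved conditional return type.

```typescript
// Make this file a module so `process` does not clash with Node's global.
export {};

function process<T extends string | null>(
  text: T
): T extends string ? string : null {
  // The `as any` cast is needed because TS cannot prove the runtime
  // branch satisfies the conditional return type.
  return (text == null ? null : (text as string).replace(/f/g, "p")) as any;
}

process("foo").toUpperCase(); // ok: the return type resolves to string
// process(null).toUpperCase(); // type error: the return type is null
```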

When writing TS code, we can use these advanced typing techniques to make type checking more robust, avoid repeating type definitions, and write more elegant code.

Since this is not a dedicated TS article, some TypeScript techniques, such as mapped types, are not covered in depth here. See: Working with TypeScript, Working with TypeScript (2), and TS Learning Summary: Compiling Options && Typing Tips.

ESLint and Prettier

ESLint and Prettier are the more popular and ubiquitous tools. I won’t go into too much detail here, but will focus on how ESLint supports TypeScript.

ESLint + TypeScript: A new alternative to TSLint

TSLint announced in 2019 that the project would be deprecated, and TS officially recommends ESLint as the linter. @typescript-eslint/parser makes ESLint able to parse TS files, and @typescript-eslint/eslint-plugin provides TS-specific lint rules for ESLint.

Using ESLint and Prettier in a TypeScript Project this article explains how to migrate from TSLint to ESLint.

The following articles also explain the ESLint + TS configuration:

  • Integrating Prettier + ESLint + Airbnb Style Guide in VSCode
  • Setting up ESLint with Prettier, TypeScript, and Visual Studio Code
  • From ESLint to TSLint and Back Again

For background on the TSLint-to-ESLint switch, see the typescript-eslint project README, which goes into great detail.

The advantage of using ESLint is that rulesets like Airbnb's can be applied directly to TS projects, backed by the ESLint ecosystem. The blogs listed above show how to configure the Airbnb + typescript-eslint + prettier rule sets: typescript-eslint regulates TS code (TS-specific lint rules), Airbnb regulates React and JS code (TS is a superset of JS), and the Prettier config turns off rules that conflict with Prettier's formatting. Together, these three currently form the most complete and usable lint setup.

Some Airbnb rules, such as requiring React components to declare propTypes, do not apply to TS projects and need to be turned off in the ESLint configuration file. There are plenty of configurations like this, so instead of following a ruleset rigidly, we turn off inappropriate rules and keep only the best of them.

JS + TS hybrid project ESLint configuration

In projects with both JS and TS files, ESLint should apply TS-specific rules only to TS files; otherwise many TS rules also fire while checking JS files, causing confusion.

The solution is to use ESLint's overrides:

"overrides": [
  {
    "files": "**/*.ts",
    "extends": [
      "eslint-config-airbnb",
      "plugin:@typescript-eslint/recommended",
      "prettier/@typescript-eslint",
      "prettier",
      "prettier/react"
    ]
  }
]

This applies the TS rules only when linting TS files.

There is a related issue discussing this in more detail.

In addition, some JS rules can cause problems on TS files, such as github.com/eslint/esli… . The solution is again overrides.

Pre-commit hook

A pre-commit hook is a Git hook that runs before each commit. Front-end projects typically use this opportunity to run static checks and code formatting, such as ESLint and Prettier. You can also run tests, the TS compiler, and so on.

The commit hook is called out here because it is essential: without it, nothing guarantees that ESLint and Prettier actually run before code lands.

The setup process is described in Configuring Pre-commit Hooks for Prettier and Linting on a TypeScript Project.
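As a sketch of what such a setup might look like in package.json — husky v4-style configuration shown here, and the exact fields vary across husky versions, so treat this as an assumption to adapt:

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "*.{ts,tsx}": ["eslint --fix", "prettier --write", "git add"]
  }
}
```

On each commit, husky triggers lint-staged, which runs ESLint and Prettier only over the staged TS files.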

Commit hooks can be skipped with -n, so you should also run ESLint in CI to ensure that non-conforming code is caught immediately.


Development experience

Whether it’s packaging tools or Mono-repo, these infrastructures actually enhance the developer experience. Save worry and effort in development, convenient and quick, one-click configuration, one-click upgrade, which is now the direction of front-end development experience upgrade. The development experience is an important consideration when choosing a build chain for the React component.

Packaging tools

For module formats, we've all heard of AMD, CommonJS, UMD, ES Module, and so on. Since modules were first adopted at scale as CommonJS in Node.js, we wrote JS modules in CommonJS format for years, and packaging tools such as webpack were originally only compatible with CommonJS modules. Later, ES 2015 introduced ES Module, the standard that browsers and Node.js support going forward. Functionally, ES Module has concise syntax, supports multiple exports, and allows build tools to do static dependency analysis, making tree-shaking possible.

Current build tools support consuming the native ES Module format (previously everything was converted to CommonJS with Babel), and the component source we write is ES Module. On the output side, JS libraries today generally also provide an ES Module build. So we need to pick the right way to package.

Build ES Module: Rollup/Babel

We simply point the module field of package.json to the bundled ES Module file, and build tools will use the module field instead of the main field.
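A sketch of the relevant package.json fields (the package name and file paths are illustrative):

```json
{
  "name": "my-component",
  "main": "dist/index.js",
  "module": "dist/index.esm.js",
  "types": "dist/index.d.ts"
}
```

Bundlers that understand ES Module (webpack, Rollup) resolve the module entry and can tree-shake it, while older CommonJS consumers fall back to main.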

Next we need to choose a bundler. Webpack currently cannot output ES Module (support may come in webpack 5), so webpack is out.

Rollup and Babel are two possible solutions.

Rollup is currently the most popular bundler for JS libraries; open-source projects such as React and Vue use it. Rollup supports output in the mainstream formats — CommonJS, UMD, ES Module — and supports static resources such as CSS through plugins. The main difference from webpack is that Rollup centers on bundling JS and was based on ES Modules from the start, with plugins introduced to handle CommonJS code, while webpack focuses on building all resources for web applications, with an emphasis on code splitting. Rollup is lighter and more focused, and supports ES Module output, so it is preferred for JS library packaging.

Babel itself is just a transpiler. Through plugins it supports transforming TS code as well as JSX, so a simple TS library can be transpiled directly with Babel, and the output remains ES Module (Babel does not process modules at all — it purely transforms syntax). Note that Babel's TS support is transpilation, not compilation: no type errors are reported, so an additional tsc run is required to type-check the TS code. Other static resources likewise need separate tasks.

Packaging a JS library with father

Either tool works, but I won't go into configuration here, because the trend is to push the build toolchain down and encapsulate it behind a unified entry point: one command builds the project, with only the necessary parameters configured, and the next toolchain upgrade means updating only the entry tool rather than maintaining the whole build. UmiJS, Create React App, and Vue CLI are examples of this.

Here I'd like to introduce a tool focused on JS library packaging: father. father can be understood, roughly, as the CRA or Umi of the JS library world; it encapsulates the Rollup and Babel toolchains.

In the simplest case, we just need to tell Father what format output is required to build successfully, for example:

father build --esm --cjs --umd --file bar src/foo.js

So I used father to package the React component in this project. If you're interested in the Rollup and Babel build process, check out the father source code — it's easy to read.

Mono-repo management: Lerna

Lerna is a tool for managing a mono repo containing multiple npm packages. Mono repo means the source code of multiple projects is managed in the same repository.

Simply put, Lerna runs commands across multiple packages with one invocation, ordering them at runtime according to the dependency topology between the packages. In addition, Lerna's bootstrap command automatically links packages' mutual dependencies into their node_modules — arguably its biggest selling point. Before Lerna, developing multiple interdependent npm packages locally required a pile of npm links, which was error-prone.
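A minimal lerna.json sketch (the values are illustrative assumptions):

```json
{
  "packages": ["packages/*"],
  "version": "independent"
}
```

With this in place, lerna bootstrap links the packages under packages/ into each other's node_modules, and lerna run build runs each package's build script in dependency order.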

The mono-repo approach itself exists to improve the efficiency of managing source code and sharing infrastructure across multiple npm packages, so Lerna genuinely improves the experience of developing mono-repo front-end projects.

For a front-end component-library project, Lerna is a very good fit.


Code quality

In fact, the static checks mentioned earlier also help ensure code quality; here, though, quality assurance refers to testing.

React component test technology selection

There are many test frameworks for React components. I chose Jest: it is FB's own tool and a very popular testing framework. In addition to the test framework, we also need a DOM util for rendering components and manipulating the DOM; the popular ones are Enzyme and React Testing Library.

With React 16, Enzyme has some issues, such as not supporting useEffect in shallow rendering. See: github.com/airbnb/enzy… . React Testing Library is a lighter-weight React test util. Its FAQ states its view of Enzyme:

What about enzyme is "bloated with complexity and features" that "encourage poor testing practices"? Most of the damaging features have to do with encouraging testing implementation details. Primarily, these are shallow rendering, APIs which allow selecting rendered elements by component constructors, and APIs which allow you to get and interact with component instances (and their state/properties) (most of enzyme's wrapper APIs allow this). The guiding principle for this library is:

The more your tests resemble the way your software is used, the more confidence they can give you. – 17 Feb 2018

The author's view is that tests should mimic how users actually use your product, and should not encourage testing implementation details.

Weighing these factors, I chose React Testing Library. In general there is not much difference between these util libraries — whichever makes tests easier to write is fine.

If you’re not familiar with the React Testing Library, check out the react Testing Library website and this tutorial.

TestingJavaScript.com is a testing tutorial site that gives a bigger-picture view of the techniques involved, if you are unsure about the categories of tests and what each is good for.

Common testing techniques

React Testing Library test routines

Using React Testing Library is easy — we just call render with the component:

const { asFragment, queryByText, rerender } = render(
  <Graphin data={data} layout={layout}>
    <div>foo</div>
  </Graphin>
);
expect(queryByText(/foo/)).toBeTruthy();

Interestingly, render returns a result object containing DOM query utils and other helpers. queryByText, for example, finds DOM elements using the element's text as a selector. The other DOM query APIs can be seen in the docs. One of the most widely used is queryByTestId, which finds elements by the data-testid attribute added in the React component.

As you can see, React Testing Library encourages assertions based on attributes like the element's text. This is the library's philosophy: it wants developers to test from the perspective of how users will use the product, rather than implementation details like DOM structure.

In addition to testing the rendered UI, we also need to trigger events, using APIs like act and fireEvent:

act(() => {
  fireEvent.click(getByText(/Click Me/));
});

The reason for the act wrapper is that React rendering in the browser is periodic, with batched updates. Wrapping state-changing calls in act ensures the rendering cycle completes before assertions run.

To update the component's props, we use the rerender function returned from render:

data = { id: "1" }; // update props.data
rerender(<Graphin data={data} layout={layout}></Graphin>);

You can then continue to assert using the same functions returned from the first call.

The last function is asFragment, which returns the component's DOM structure, letting us test the component with Jest snapshots:

expect(asFragment()).toMatchSnapshot();

Mock browser events

In testing, it’s not unusual to encounter situations where you need mock functions or other objects. Mock Functions can use Jest’s mock Functions. More troublesome are the mocks of some browser events. Because Jest’s DOM implementation uses JSDom, it’s not a real browser environment. As an example, if you need to simulate a browser resize event, you can do this:

act(() => {
  // Change the viewport to 500px.
  (window as any).innerWidth = 500;
  (window as any).innerHeight = 500;
});
fireEvent(window, new Event("resize"));

Canvas test

If there is Canvas in the test target, there are two situations:

  • The content on the Canvas is the result of a rendering of the diagram library used by the component, and has nothing to do with the correctness of the component itself
  • The content on the Canvas is the test target itself, such as when writing tests for a chart library

If it is the former, we can mock the Canvas away; jest-canvas-mock makes this a one-line setup.

If it is the latter, we can run a real browser using jest-electron to test the Canvas drawing results.

When using jest-canvas-mock, we can also use the APIs attached to the mocked canvas context to inspect the drawing calls made on the canvas:

let canvas = getByTestId("custom-element").firstChild as HTMLCanvasElement;
let ctx = canvas.getContext("2d") as any;
ctx.__getPath(); // Obtain path information
ctx.__getEvents(); // Get event records
ctx.__getDrawCalls(); // Get draw call information

In this way, we can use this information to see whether the diagram drawing interface is called and whether the logic of the React component that calls the diagram rendering API is correct.

Coverage

Configuring collectCoverage: true in Jest generates a test coverage report locally. Serving the report directory with http-server lets you browse a coverage table broken down by file.

Coverage is divided into four metrics: statements, lines, branches, and functions. When we talk about coverage we usually mean line coverage — what percentage of the code is exercised by tests. But branch coverage also matters: it tells us whether all cases have been tested. Whether coverage should reach 100% depends on the project: a library like Lodash should hold itself to that bar, while for a complex React component, what matters is ensuring the core paths are covered.
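For teams that do want to hold a bar, Jest can enforce minimum coverage in its configuration — a sketch, with thresholds that are arbitrary examples:

```json
{
  "collectCoverage": true,
  "coverageThreshold": {
    "global": {
      "statements": 80,
      "branches": 80,
      "functions": 80,
      "lines": 80
    }
  }
}
```

With coverageThreshold set, the test run fails when any global metric drops below the configured percentage.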

High coverage means little if the tests themselves are poorly written. For example, tests that merely execute the code without verifying any results will pass even when the logic is broken — coverage will be high, but such tests are useless.

To sum up, don't blindly chase good numbers. Coverage reports play an auxiliary role, helping us see whether our tests are missing functions, branches, and so on that should be tested. Ultimately, the criterion for evaluating tests is whether they help us catch regressions every time we commit code later.


Conclusion

This article summarizes an engineering governance process for a React + TypeScript component. If your project is also a React + TypeScript component that will be published as an npm package for others to use, this article should provide some ideas for its engineering.

For reasons of length, some of the details of the process require readers to read the tutorials and blogs linked to it, which are more focused and in-depth. This article focuses on the main aspects of React + TypeScript component engineering (static checking, development experience, and code quality) and some of the issues that need to be addressed.

Github blog post link