This article is reprinted from the Tmall front-end blog. For more articles, please visit the Tmall front-end blog.

preface

Why test

I didn't like writing tests in the past, mainly because I thought writing and maintaining test cases was a waste of time. But after actually writing base components and base tools for a while, I found that automated testing brings many benefits. Testing is, of course, primarily about improving code quality. Code with test cases is not guaranteed to be 100% bug free, but at least the scenarios covered by those cases are known to work. And when the test cases are run before every release, all kinds of functional bugs caused by carelessness can be caught.

Another important benefit of automated testing is fast feedback, and faster feedback means more efficient development. Take a UI component for example: the development loop used to be opening the browser and refreshing the page to confirm the component behaves as expected. Once automated tests are in place, those manual clicks are replaced by scripts; add a code watch, and every time you save a file you quickly know whether your change broke anything, which saves a lot of time. After all, machines do this much faster than people.

With automated testing, developers can also trust their code more. They are no longer afraid to hand their code over to someone else to maintain, and they don't have to worry about other developers "breaking" it. Whoever inherits a piece of code with test cases can modify it with much more confidence, and the test cases clearly document what developers and users expect from the code, which greatly helps knowledge transfer.

Consider the input-output ratio when deciding what to test

Listing all these benefits does not mean you should immediately write test cases covering 100% of scenarios. I have always insisted on deciding what to test based on the input-output ratio. Maintaining test cases has a cost too (after all, few QA teams write business test cases for the front end, and the front end's own process-automation tools are rarely covered by tests either). For parts that change infrequently and are reused heavily, such as base components and base libraries, it is worth writing test cases to guarantee quality. Personally, I prefer writing a small number of test cases that cover 80%+ of scenarios, making sure the main flows are covered; bugs found in edge-case scenarios can be turned into new test cases as iterations go on, so scenario coverage gradually approaches 100%. But for business logic that iterates quickly, or campaign pages that won't live long, don't spend time writing test cases; maintaining them takes too long and costs too much.

Testing the Node.js module

For Node.js modules, testing is relatively convenient; after all, the source code and dependencies are local, visible, and tangible.

Testing tools

The main tools used for testing are test frameworks, assertion libraries, and code coverage tools:

  1. Test frameworks: Mocha, Jasmine, etc. Test frameworks mainly provide clear, concise syntax for describing and grouping test cases. The framework catches the AssertionError thrown by the code and attaches a lot of extra information, such as which case failed and why. Test frameworks usually offer TDD (test-driven development) or BDD (behavior-driven development) style syntax for writing test cases; for a comparison of the two, see the well-known article The Difference Between TDD and BDD. Different frameworks support different syntaxes; for example, Mocha supports both TDD and BDD, while Jasmine only supports BDD. Mocha's BDD syntax is used in the examples below.

  2. Assertion libraries: should.js, chai, expect.js, etc. Assertion libraries provide semantic methods for making various judgments about values. If you'd rather not use a third-party assertion library, you can use Node.js's built-in assert module instead. should.js is used in the examples here (a short comparison sketch follows this list).

  3. Code coverage: Istanbul, etc., which instruments the statements, branches, functions, and lines of the code, then computes how much of the source the current test cases cover from the information collected while the tests run.
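To give a feel for the difference, here is a minimal sketch (my own illustration, not from the original article) comparing Node.js's built-in assert module with should.js:

'use strict';
const assert = require('assert');
require('should');

const total = 1 + 1;

// Native assert: available without extra dependencies, but terse.
assert.strictEqual(total, 2);

// should.js: reads like a sentence and gives richer failure messages.
total.should.be.a.Number().and.be.exactly(2);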

A simple example

Take the following Node.js project structure as an example:

.
├── LICENSE
├── index.js
├── package.json
└── test
    └── test.js

Install Mocha and should.js as dev dependencies first: npm install --save-dev mocha should. Once that's done, you can start your testing journey.

For example, suppose index.js currently contains the following code:

'use strict';
module.exports = () => 'Hello Tmall';

For such a function, we first need to design a test case: obviously, if running the function returns the string Hello Tmall, the test passes. We can then write this test case the way Mocha expects, so create the test code in test/test.js:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('should get "Hello Tmall"', () => {
    mylib().should.be.eql('Hello Tmall');
  });
});

Once the test case is written, how do you know the test results?

Mocha's command-line tool _mocha can be found at ./node_modules/.bin/_mocha; run it directly to execute the tests:

You can now see the test results. We can also make the test fail on purpose by modifying test/test.js to:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('should get "Hello Taobao"', () => {
    mylib().should.be.eql('Hello Taobao');
  });
});

You can see the following image:

PS: if you start Mocha with ./node_modules/.bin/_mocha --require should, Mocha loads should.js itself when the tests start, so test/test.js no longer needs to call require('should') manually. More parameters can be found in the official Mocha documentation.

So what do these test codes mean?

Here we first pull in the assertion library should.js, then require our own code. The it() function defines a test case, and with the API provided by should.js we can describe the test case very semantically. So what does describe() do?

describe() groups test cases. To cover as many situations as possible, there are often many test cases, and grouping them makes them much easier to manage (describe can also be nested). Another very important feature is that each group can have its own pre-processing (before, beforeEach) and post-processing (after, afterEach).

If you change the index.js source code to:

'use strict';
module.exports = bu => `Hello ${bu}`;

To test different BUs, the test cases change as well:

'use strict';
require('should');
const mylib = require('../index');
let bu = 'none';

describe('My First Test', () => {
  describe('Welcome to Tmall', () => {
    before(() => bu = 'Tmall');
    after(() => bu = 'none');
    it('should get "Hello Tmall"', () => {
      mylib(bu).should.be.eql('Hello Tmall');
    });
  });
  describe('Welcome to Taobao', () => {
    before(() => bu = 'Taobao');
    after(() => bu = 'none');
    it('should get "Hello Taobao"', () => {
      mylib(bu).should.be.eql('Hello Taobao');
    });
  });
});

Run ./node_modules/.bin/_mocha again:

before is executed once before all the test cases in its group, and after once after all of them. For finer granularity, use beforeEach and afterEach, which run before and after every test case in the group respectively. Since a lot of code needs a simulated environment, you can do that preparation in before or beforeEach and clean it up in after or afterEach.
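As a small, hypothetical illustration (the names here are made up for the example), the sketch below prepares a fresh fake environment before every case and tears it down afterwards, so no case can pollute the next:

'use strict';
require('should');

describe('per-case setup and teardown', () => {
  let env;
  // Runs before every single test case in this group.
  beforeEach(() => {
    env = { user: 'tmall-fe' }; // simulate whatever environment a case needs
  });
  // Runs after every single test case, so state never leaks between cases.
  afterEach(() => {
    env = null;
  });
  it('sees a fresh environment', () => {
    env.user.should.be.eql('tmall-fe');
  });
  it('cannot be polluted by the previous case', () => {
    env.should.have.property('user');
  });
});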

Testing asynchronous code

Callbacks

The code so far is obviously synchronous, but a lot of our code executes asynchronously. So how do we test asynchronous code?

For example, here the index.js code becomes asynchronous:

'use strict';
module.exports = (bu, callback) => process.nextTick(() => callback(`Hello ${bu}`));

As the source code becomes asynchronous, the test case has to be modified:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('Welcome to Tmall', done => {
    mylib('Tmall', rst => {
      rst.should.be.eql('Hello Tmall');
      done();
    });
  });
});

Here the function passed as the second argument to it() takes a done parameter. When this parameter is present, the test case is treated as asynchronous, and it only finishes when done() is called. What if done() is never called? Mocha's timeout mechanism kicks in: after a certain time (2s by default, configurable with the --timeout parameter) it terminates the test and marks it as failed.

Of course, before, beforeEach, after, and afterEach also support asynchrony, in the same way as it(): take done as the first parameter of the function you pass in and call it when the work is finished.
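A minimal sketch of an asynchronous before() hook; it also uses this.timeout() to raise Mocha's 2s default because the fake setup below is deliberately slow (the delays are invented for illustration):

'use strict';
require('should');

describe('async hooks', function() {
  // Asynchronous preparation: Mocha waits here until done() is called.
  before(function(done) {
    this.timeout(5000); // raise the 2s default because this setup is slow
    setTimeout(done, 3000); // pretend to warm up a cache or start a server
  });
  it('runs only after the asynchronous before() has finished', done => {
    process.nextTick(() => {
      'ready'.should.be.eql('ready');
      done();
    });
  });
});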

Promise

Writing raw callbacks usually feels clumsy and easily leads to the callback pyramid, so we often use Promises for asynchronous control. How do we test asynchronous code driven by Promises?

First make index.js return a Promise object:

'use strict';
module.exports = bu => new Promise(resolve => resolve(`Hello ${bu}`));

Of course, if you are a co fan, you can also use the co package directly:

'use strict';
const co = require('co');
module.exports = co.wrap(function* (bu) {
  return `Hello ${bu}`;
});

The test is modified accordingly:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('Welcome to Tmall', () => {
    return mylib('Tmall').should.be.fulfilledWith('Hello Tmall');
  });
});

Since version 8.x.x, should.js has built-in Promise support; you can test Promise objects with APIs such as fulfilled(), rejected(), fulfilledWith(), and rejectedWith().

Note: when testing a Promise with should, be sure to return the assertion, otherwise it has no effect.
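To make the pitfall concrete, here is a small sketch against the index.js above; the first case is deliberately broken to show what forgetting the return looks like:

'use strict';
require('should');
const mylib = require('../index');

describe('returning the Promise assertion', () => {
  // Wrong: without return, Mocha finishes the case before the Promise
  // settles, so this passes even though the expected value is wrong.
  it('silently passes when return is forgotten', () => {
    mylib('Tmall').should.be.fulfilledWith('Hello Taobao');
  });
  // Right: returning the assertion makes Mocha wait for the Promise,
  // so a wrong value would actually fail the test.
  it('really verifies the result', () => {
    return mylib('Tmall').should.be.fulfilledWith('Hello Tmall');
  });
});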

Run tests asynchronously

Sometimes it is not just a single test case that needs to be asynchronous, but the whole test run. For example, one way to test a Gulp plug-in is to run the Gulp task first and then check whether the generated files match expectations. So how do you start the whole test run asynchronously?

Mocha actually supports starting tests asynchronously: just add --delay to the command that starts Mocha. In that case we need to tell Mocha when to start running the test cases, which is done by calling the run() method. Change test/test.js to the following:

'use strict';
require('should');
const mylib = require('../index');

setTimeout(() => {
  describe('My First Test', () => {
    it('Welcome to Tmall', () => {
      return mylib('Tmall').should.be.fulfilledWith('Hello Tmall');
    });
  });
  run();
}, 1000);

Running ./node_modules/.bin/_mocha directly now gives the following:

So add --delay and try again:

The familiar green is back!

Code coverage

Now that unit testing works, it's time to try code coverage. First install Istanbul: npm install --save-dev istanbul. Istanbul also has a command-line tool, found at ./node_modules/.bin/istanbul. Getting code coverage on the Node.js side is easy: just start Mocha through Istanbul. For the test case above, run ./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay

This is the code coverage result; because the code in index.js is quite simple, coverage is 100%. Now make index.js a little more complex:

'use strict';
module.exports = bu => new Promise(resolve => {
  if (bu === 'Tmall') return resolve(`Welcome to Tmall`);
  resolve(`Hello ${bu}`);
});

The test case also changed:

'use strict';
require('should');
const mylib = require('../index');

setTimeout(() => {
  describe('My First Test', () => {
    it('Welcome to Tmall', () => {
      return mylib('Tmall').should.be.fulfilledWith('Welcome to Tmall');
    });
  });
  run();
}, 1000);

Run ./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay again:

When Mocha is run through Istanbul, Istanbul's own parameters go before the -- separator, and the parameters to pass to Mocha go after it.

As expected, coverage is no longer 100%. So how do I see which code ran and which didn't?

After the run you will find a new folder named coverage in the project; this is where the coverage results live. It looks like this:

.
├── coverage.json
├── lcov-report
│   ├── base.css
│   ├── index.html
│   ├── prettify.css
│   ├── prettify.js
│   ├── sort-arrow-sprite.png
│   ├── sorter.js
│   └── index.js.html
└── lcov.info
  • coverage.json and lcov.info: machine-readable descriptions of the coverage results that tools can consume to generate visual reports; they will come up again later when we talk about continuous integration.

  • lcov-report: the coverage report page generated from the two files above; open it to inspect code coverage very intuitively.

Open coverage/lcov-report/index.html to see the file list, click a file to enter its details, and you can see the coverage of index.js as shown in the figure:

Here are four metrics by which code coverage can be quantified:

  • Statements: whether each executable statement was executed

  • Branches: whether each branch was executed; an if, for example, produces two branches, and here we only ran one of them

  • Functions: whether each function was executed

  • Lines: whether each line was executed

In the code view below, statements that were never executed are marked in red. Red code is a breeding ground for bugs, so we want to eliminate it as much as possible. To do that, add another test case:

'use strict';
require('should');
const mylib = require('../index');

setTimeout(() => {
  describe('My First Test', () => {
    it('Welcome to Tmall', () => {
      return mylib('Tmall').should.be.fulfilledWith('Welcome to Tmall');
    });
    it('Hello Taobao', () => {
      return mylib('Taobao').should.be.fulfilledWith('Hello Taobao');
    });
  });
  run();
}, 1000);

Then run ./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay again. Goal accomplished; you can sleep soundly.

Integrating into package.json

Now that a simple Node.js test is done, these tests can be written into the package.json scripts field, for example:

{
  "scripts": {
    "test": "NODE_ENV=test ./node_modules/.bin/_mocha --require should",
    "cov": "NODE_ENV=test ./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay"
  }
}

npm run test now runs the unit tests, and npm run cov runs the coverage test.

Test multiple files separately

Our projects usually have many files, and the recommended approach is to test each file separately. For example, if the code lives in ./lib/, each file in ./lib/ should have a corresponding xxx_spec.js test file in ./test/.

Why is that? Can’t you run the index.js entry file directly for testing?

Testing only through the entry file is essentially black-box testing: we don't know what happens inside the code, we just check that a particular input produces the expected output. That usually covers the main scenarios, but some edge cases inside the code are hard to trigger just by feeding specific data to the entry point. For example, suppose the code sends a request and the entry only takes a URL. Whether the URL itself is correct is only one aspect; the network and server conditions at that moment are unpredictable, so passing the same URL could still throw a request-failure error because the server is down or the network jitters, and if that error is not handled it could cause a failure. So we need to open up the black box and white-box test every little piece, as in the sketch below.
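As a sketch of what this looks like in practice (the module name and its behaviour below are hypothetical, only to show the per-file layout), every internal file gets its own spec that pokes at the branches the entry point cannot easily reach:

// test/request_spec.js -- a hypothetical white-box spec for one internal
// file (lib/request.js); the module and behaviour are invented for illustration.
'use strict';
require('should');
const request = require('../lib/request');

describe('lib/request', () => {
  it('should reject when the server is unreachable', () => {
    // Point at a port nothing listens on to exercise the error branch
    // that is hard to reach from the package entry point.
    return request('http://127.0.0.1:1').should.be.rejected();
  });
});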

Of course, not all modules are that easy to test. The front end often uses Node.js to write build plug-ins and automation tools, typically Gulp plug-ins and command-line tools.

Test the Gulp plug-in

Gulp is currently the most widely used front-end build tool; its concise API, streaming build philosophy, and in-memory operation have made it very popular. Although newcomers like Webpack have appeared, Gulp remains the mainstream front-end build tool thanks to its thriving ecosystem. Tmall's front end currently uses Gulp as its build tool.

When Gulp is the build tool, it is inevitable to develop Gulp plug-ins to meet business-specific build requirements. Building is essentially rewriting the source code, and a bug introduced in that rewriting can directly cause an online failure. So Gulp plug-ins, especially those that modify source code, must be tested carefully to guarantee quality.

Another simple example

For example, here is a Gulp plug-in whose job is to prepend the comment // Tmall is hiring; send your resume to [email protected] to all JS code. The plug-in's code looks like this:

'use strict';
const _ = require('lodash');
const through = require('through2');
const PluginError = require('gulp-util').PluginError;
const DEFAULT_CONFIG = {};

module.exports = config => {
  config = _.defaults(config || {}, DEFAULT_CONFIG);
  return through.obj((file, encoding, callback) => {
    if (file.isStream()) return callback(new PluginError('gulp-welcome-to-tmall', `Stream is not supported`));
    file.contents = new Buffer(`// Tmall is hiring; send your resume to [email protected]\n${file.contents.toString()}`);
    callback(null, file);
  });
};

How do you test a piece of code like this?

One way is to simply fake a file and pass it in. Internally, Gulp reads files from the file system via vinyl-fs, turns them into virtual file (vinyl) objects, and hands them to the Transform stream created by through2 inside the plug-in for processing; the orchestrator only controls the execution order of the outer tasks. A plug-in doesn't need to care about Gulp's task-management mechanism at all; it only needs to handle the vinyl object passed in correctly. So all we need to do is forge a vinyl file object and pass it to our Gulp plug-in.
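As a bare-bones sketch of that idea (the article itself uses vinyl-fs below; this just shows the principle, assuming the vinyl package is installed):

'use strict';
const File = require('vinyl'); // the virtual file class Gulp uses internally
const welcome = require('../index');

// Forge a vinyl object in Buffer format by hand...
const fakeFile = new File({
  path: '/fake/testfile.js',
  contents: new Buffer("console.log('hello world');\n")
});

// ...and write it straight into the plug-in's transform stream.
const stream = welcome();
stream.on('data', vf => console.log(vf.contents.toString()));
stream.write(fakeFile);
stream.end();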

Design test cases first, considering two main scenarios:

  1. The virtual file object is in stream format and should throw an error

  2. The virtual file object is in Buffer format; the file content should be processed normally, with // Tmall is hiring; send your resume to [email protected] prepended

For the first test case we need to construct a vinyl object in stream format; for the second, a vinyl object in Buffer format.

First prepare a source file for the plug-in to process, test/src/testfile.js:

'use strict';
console.log('hello world');

The source file is deliberately simple. The next task is to wrap it into a vinyl object in stream format and a vinyl object in Buffer format, respectively.

Construct a virtual file object in Buffer format

You can use vinyl-fs to read files on the operating system and generate vinyl objects. Gulp also uses it internally. Buffer is used by default:

'use strict';
require('should');
const path = require('path');
const vfs = require('vinyl-fs');
const welcome = require('../index');

describe('welcome to Tmall', function() {
  it('should work when buffer', done => {
    vfs.src(path.join(__dirname, 'src', 'testfile.js'))
      .pipe(welcome())
      .on('data', function(vf) {
        vf.contents.toString().should.be.eql(`// Tmall is hiring; send your resume to [email protected]\n'use strict';\nconsole.log('hello world');\n`);
        done();
      });
  });
});

After testing the Buffer format, we have tested the main function. How do we test the stream format?

Build a virtual file object in stream format

Solution 1: use vinyl-fs as above, but pass the option buffer: false:

Change the code to look like this:

'use strict';
require('should');
const path = require('path');
const vfs = require('vinyl-fs');
const PluginError = require('gulp-util').PluginError;
const welcome = require('../index');

describe('welcome to Tmall', function() {
  it('should work when buffer', done => {
    // blabla
  });
  it('should throw PluginError when stream', done => {
    vfs.src(path.join(__dirname, 'src', 'testfile.js'), { buffer: false })
      .pipe(welcome())
      .on('error', e => {
        e.should.be.instanceOf(PluginError);
        done();
      });
  });
});

This way vinyl-fs reads the file from the file system and produces a vinyl object in stream format.

If the content doesn’t come from a file system, but from an existing readable stream, how do you wrap it into a vinyl object in a stream format?

Such requirements can be met with vinyl-source-stream:

'use strict';
require('should');
const fs = require('fs');
const path = require('path');
const source = require('vinyl-source-stream');
const vfs = require('vinyl-fs');
const PluginError = require('gulp-util').PluginError;
const welcome = require('../index');

describe('welcome to Tmall', function() {
  it('should work when buffer', done => {
    // blabla
  });
  it('should throw PluginError when stream', done => {
    fs.createReadStream(path.join(__dirname, 'src', 'testfile.js'))
      .pipe(source())
      .pipe(welcome())
      .on('error', e => {
        e.should.be.instanceOf(PluginError);
        done();
      });
  });
});

Here we first create a readable stream with fs.createReadStream, then wrap it into a vinyl object in stream format via vinyl-source-stream, and hand it to our plug-in for processing.

PS: when a Gulp plug-in hits an error it should throw a PluginError, so that plug-ins such as gulp-plumber can manage the error and prevent it from terminating the build process. This is especially useful with gulp watch.
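A small sketch of why that matters, assuming gulp-plumber is installed: with plumber in the pipeline, a file the plug-in cannot handle just logs the error instead of killing a running watch task.

'use strict';
const gulp = require('gulp');
const plumber = require('gulp-plumber'); // assumed to be installed separately
const welcome = require('../index');

gulp.task('build', () => {
  return gulp.src('src/**/*.js')
    // plumber catches the PluginError thrown downstream and keeps the
    // (watched) build alive instead of crashing the whole process.
    .pipe(plumber({ errorHandler: e => console.error(e.message) }))
    .pipe(welcome())
    .pipe(gulp.dest('build'));
});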

Simulate Gulp running

The forged object passes the functional test, but the data was faked by us; it is not the way users consume the plug-in day to day. If the test mimics the way users actually use it, the result is more reliable and authentic. So how do we simulate a real Gulp run to test the plug-in?

First, simulate our project structure:

.
├── build
│   └── testfile.js
├── gulpfile.js
└── src
    └── testfile.js

A simple project structure: source code in src, a gulpfile to define the tasks, and build results in build. Set up this scaffold under the test directory the way we normally would, and write gulpfile.js:

'use strict';
const gulp = require('gulp');
const welcome = require('../index');
const del = require('del');

gulp.task('clean', cb => del('build', cb));

gulp.task('default', ['clean'], () => {
  return gulp.src('src/**/*')
    .pipe(welcome())
    .pipe(gulp.dest('build'));
});

To simulate Gulp running in the test code, there are two options:

  1. Run the gulp command directly with spawn or exec from the child_process module, then check whether the build directory contains the expected result

  2. Obtain the Gulp instance from the gulpfile in the current process and run the Gulp task directly, then check whether the build directory contains the expected result

Testing in a child process has several pitfalls. Istanbul cannot collect coverage across processes, so the child process would first have to run its command through Istanbul, you would then have to collect the coverage data by hand, and if several child processes are spawned you would also have to merge the coverage results yourself. Very troublesome.

So what do you do without spawning a child process? The trick is to run Gulp in the current process: add module.exports = gulp to the gulpfile if needed, require the gulpfile to obtain the Gulp instance with its tasks registered, and then run the tasks on that instance, for example by handing it to run-sequence, which drives them through the internal, undocumented API gulp.run. The run-gulp-task module used below packages this approach up.

Without a child process, we simply run Gulp inside the before hook, and the test code looks like this:

'use strict';
require('should');
const path = require('path');
const run = require('run-gulp-task');
const CWD = process.cwd();
const fs = require('fs');

describe('welcome to Tmall', () => {
  before(done => {
    process.chdir(__dirname);
    run('default', path.join(__dirname, 'gulpfile.js'))
      .catch(e => e)
      .then(e => {
        process.chdir(CWD);
        done(e);
      });
  });
  it('should work', function() {
    fs.readFileSync(path.join(__dirname, 'build', 'testfile.js')).toString().should.be.eql(`// Tmall is hiring; send your resume to [email protected]\n'use strict';\nconsole.log('hello world');\n`);
  });
});

Since no child process is spawned, code coverage can be measured exactly as for a regular Node.js module.

Test the command line output

Yet another simple example

Of course, front-end tooling is not limited to Gulp plug-ins; there are also small helper commands that run directly in the terminal and print their results there. For example, a simple command-line tool implemented with Commander:

// in index.js
'use strict';
const program = require('commander');
const path = require('path');
const pkg = require(path.join(__dirname, 'package.json'));
program.version(pkg.version)
  .usage('[options] <file>')
  .option('-t, --test', 'Run test')
  .action((file, prog) => {
    if (prog.test) console.log('test');
  });
module.exports = program;

// in bin/cli
#!/usr/bin/env node
'use strict';
const program = require('../index.js');
program.parse(process.argv);
!program.args[0] && program.help();

// in package.json
{
  "bin": {
    "cli-test": "./bin/cli"
  }
}

Intercept the output

To test a command-line tool, we need to simulate the user typing the command. This time we fake a process.argv and hand it to program.parse. But the result is printed with console.log; how do we intercept it?

We can use Sinon to stub console.log, and Sinon conveniently provides mocha-sinon for use in tests, so test.js looks something like this:

'use strict';
require('should');
require('mocha-sinon');
const program = require('../index');
const uncolor = require('uncolor');

describe('cli-test', () => {
  let rst;
  beforeEach(function() {
    this.sinon.stub(console, 'log', function() {
      rst = arguments[0];
    });
  });
  it('should print "test"', () => {
    program.parse([
      'node',
      './bin/cli',
      '-t',
      'file.js'
    ]);
    return uncolor(rst).trim().should.be.eql('test');
  });
});

PS: since command-line output often uses a library such as colors to add color, remember to strip the color codes (for example with uncolor) before asserting on the output.

summary

That's it for Node.js-related unit testing; scenarios such as server testing are not covered here. Of course, the front end's main job is building pages, so next let's talk about how to test components on a page.

Test page

Testing front-end code that runs in the browser is much harder than testing Node.js modules. Node.js modules are pure JS run locally on V8, and all the dependencies and tools can be installed quickly, while front-end code involves not only JS but also CSS and so on, and, worse, it has to be verified in all kinds of browsers. Common approaches to testing front-end code include:

  1. Build a test page and open it in real browsers on virtual machines (such as the company's F2etest). The drawbacks are that code coverage is hard to measure, continuous integration is hard to set up, and it takes a lot of manual work

  2. Use PhantomJS to build a fake browser environment and run the unit tests there. This solves code coverage and continuous integration, but PhantomJS is, after all, Qt's WebKit rather than a real browser environment, and it has its share of compatibility pitfalls

  3. Use Karma to drive the local browsers for testing. This allows cross-browser testing as well as coverage. For continuous integration, note that only PhantomJS can run there, since the CI Linux environment has no real browsers. This is arguably the best front-end testing approach I have seen so far

The following walks through these approaches step by step, and finally tests a React component built with Webpack.

Still another simple example

Front-end code is still JS, so Mocha + should.js still works for unit testing. Look inside node_modules and you will find that these open-source tools already ship builds that run directly in the browser: simply include mocha/mocha.js and should/should.min.js with script tags (Mocha also needs its stylesheet mocha/mocha.css).

First look at our front-end project structure:

.
├── gulpfile.js
├── package.json
├── src
│   └── index.js
└── test
    ├── test.html
    └── test.js

For example, src/index.js defines a global function:

window.render = function() {
  var ctn = document.createElement('div');
  ctn.setAttribute('id', 'tmall');
  ctn.appendChild(document.createTextNode('Tmall front-end is hiring; send your resume to [email protected]'));
  document.body.appendChild(ctn);
}

The test page test/test.html looks something like this:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <link rel="stylesheet" href="../node_modules/mocha/mocha.css"/>
  <script src="../node_modules/mocha/mocha.js"></script>
  <script src="../node_modules/should/should.js"></script>
</head>
<body>
  <div id="mocha"></div>
  <script src="../src/index.js"></script>
  <script src="test.js"></script>
</body>
</html>

The head introduces the test framework Mocha and the assertion library should.js. The test results will be displayed in the div#mocha container, and test/test.js is our test code.

Testing on a front-end page is not much different from testing in Node.js, except that you have to specify the UI Mocha uses and call mocha.run() manually:

mocha.ui('bdd');
describe('Welcome to Tmall', function() {
  before(function() {
    window.render();
  });
  it('Hello', function() {
    document.getElementById('tmall').textContent.should.be.eql('Tmall front-end is hiring; send your resume to [email protected]');
  });
});
mocha.run();

Open the test/test.html page in your browser and you can see the result:

Open this page in different browsers to see each browser's test results. This approach is compatible with the largest number of browsers; just remember, before testing across machines, to upload the resources somewhere all test machines can reach, such as a CDN.

Now that the test page is available, try PhantomJS

Use PhantomJS for the tests

PhantomJS is a "mock browser" that can execute JS and even has a WebKit rendering engine; it just has no interface to display the rendered result. We can use it for many things, such as taking screenshots of web pages, writing crawlers for asynchronously rendered pages, and, next, testing pages.

Of course, we don't drive PhantomJS directly; we use mocha-phantomjs for testing. Install it with npm install --save-dev mocha-phantomjs, then run ./node_modules/.bin/mocha-phantomjs ./test/test.html:

Unit testing now works; next up is code coverage.

Instrumenting for coverage

First step, rewrite our gulpfile.js:

'use strict';
const gulp = require('gulp');
const istanbul = require('gulp-istanbul');

gulp.task('test', function() {
  return gulp.src(['src/**/*.js'])
    .pipe(istanbul({
      coverageVariable: '__coverage__'
    }))
    .pipe(gulp.dest('build-test'));
});

This task instruments the source so that coverage data is collected into the variable __coverage__, and writes the instrumented code to the build-test directory. For example, after running gulp test, the generated version of src/index.js looks like this:

var __cov_WzFiasMcIh_mBvAjOuQiQg = (Function('return this'))();
if (!__cov_WzFiasMcIh_mBvAjOuQiQg.__coverage__) { __cov_WzFiasMcIh_mBvAjOuQiQg.__coverage__ = {}; }
__cov_WzFiasMcIh_mBvAjOuQiQg = __cov_WzFiasMcIh_mBvAjOuQiQg.__coverage__;
if (!(__cov_WzFiasMcIh_mBvAjOuQiQg['/Users/lingyu/gitlab/dev/mui/test-page/src/index.js'])) {
  __cov_WzFiasMcIh_mBvAjOuQiQg['/Users/lingyu/gitlab/dev/mui/test-page/src/index.js'] = {"path":"/Users/lingyu/gitlab/dev/mui/test-page/src/index.js","s":{"1":0,"2":0,"3":0,"4":0,"5":0},"b":{},"f":{"1":0},"fnMap":{"1":{"name":"(anonymous_1)","line":1,"loc":{"start":{"line":1,"column":16},"end":{"line":1,"column":27}}}},"statementMap":{"1":{"start":{"line":1,"column":0},"end":{"line":6,"column":1}},"2":{"start":{"line":2,"column":2},"end":{"line":2,"column":42}},"3":{"start":{"line":3,"column":2},"end":{"line":3,"column":34}},"4":{"start":{"line":4,"column":2},"end":{"line":4,"column":85}},"5":{"start":{"line":5,"column":2},"end":{"line":5,"column":33}}},"branchMap":{}};
}
__cov_WzFiasMcIh_mBvAjOuQiQg = __cov_WzFiasMcIh_mBvAjOuQiQg['/Users/lingyu/gitlab/dev/mui/test-page/src/index.js'];
__cov_WzFiasMcIh_mBvAjOuQiQg.s['1']++;
window.render = function() {
  __cov_WzFiasMcIh_mBvAjOuQiQg.f['1']++;
  __cov_WzFiasMcIh_mBvAjOuQiQg.s['2']++;
  var ctn = document.createElement('div');
  __cov_WzFiasMcIh_mBvAjOuQiQg.s['3']++;
  ctn.setAttribute('id', 'tmall');
  __cov_WzFiasMcIh_mBvAjOuQiQg.s['4']++;
  ctn.appendChild(document.createTextNode('Tmall front-end is hiring; send your resume to [email protected]'));
  __cov_WzFiasMcIh_mBvAjOuQiQg.s['5']++;
  document.body.appendChild(ctn);
};

What on earth is this?! Never mind, just run it. Change the script that test/test.html loads from src/index.js to build-test/index.js, so the instrumented code is what actually runs.

Write a hook

The coverage data ends up in the __coverage__ variable, but we need some hook code to pull it out after the unit tests finish. Put the hook code in test/hook.js:

'use strict';
var fs = require('fs');
module.exports = {
  afterEnd: function(runner) {
    var coverage = runner.page.evaluate(function() {
      return window.__coverage__;
    });
    if (coverage) {
      console.log('Writing coverage to coverage/coverage.json');
      fs.write('coverage/coverage.json', JSON.stringify(coverage), 'w');
    } else {
      console.log('No coverage data generated');
    }
  }
};

Then run ./node_modules/.bin/mocha-phantomjs ./test/test.html --hooks ./test/hook.js, and the coverage result is written to coverage/coverage.json.

Generating the report page

With the coverage data in hand you can now generate reports. Start with a summary on the command line: ./node_modules/.bin/istanbul report --root coverage text-summary

Same recipe, familiar taste. Then generate the full report with ./node_modules/.bin/istanbul report --root coverage lcov, open coverage/lcov-report/index.html, and click through to src/index.js:

Excellent! Now we can run coverage tests on front-end code as well.

Hooking up Karma

Karma is a test integration framework that conveniently integrates test frameworks, test environments, coverage tools, and more through plug-ins. Karma already has a fairly complete plug-in ecosystem; here we'll try PhantomJS, Chrome, and Firefox, which requires installing a few dependencies with npm:

  1. karma: the framework itself

  2. karma-mocha: adapter for the Mocha test framework

  3. karma-coverage: coverage testing

  4. karma-spec-reporter: test result output

  5. karma-phantomjs-launcher: PhantomJS environment

  6. phantomjs-prebuilt: the latest version of PhantomJS

  7. karma-chrome-launcher: Chrome environment

  8. karma-firefox-launcher: Firefox environment

Once everything is installed, we can start our Karma journey. Using the earlier project, we clear out the extra files, keeping only the source file and the test file, and add a karma.conf.js:

.
├── karma.conf.js
├── package.json
├── src
│   └── index.js
└── test
    └── test.js

karma.conf.js is the configuration file for the Karma framework; in this example it looks something like this:

'use strict';

module.exports = function(config) {
  config.set({
    frameworks: ['mocha'],
    files: [
      './node_modules/should/should.js',
      'src/**/*.js',
      'test/**/*.js'
    ],
    preprocessors: {
      'src/**/*.js': ['coverage']
    },
    plugins: ['karma-mocha', 'karma-phantomjs-launcher', 'karma-chrome-launcher', 'karma-firefox-launcher', 'karma-coverage', 'karma-spec-reporter'],
    browsers: ['PhantomJS', 'Firefox', 'Chrome'],
    reporters: ['spec', 'coverage'],
    coverageReporter: {
      dir: 'coverage',
      reporters: [{
        type: 'json',
        subdir: '.',
        file: 'coverage.json',
      }, {
        type: 'lcov',
        subdir: '.'
      }, {
        type: 'text-summary'
      }]
    }
  });
});

What do these configurations mean? Here’s a list of them:

  • frameworks: the test frameworks to use; here it is still our old friend Mocha

  • files: the files the browser needs to load. There is no test.html page any more, so everything that must be loaded is listed here. CDN URLs are allowed, but local resources are recommended so tests run faster and still work without a network. In this example the first line loads the assertion library should.js, the second loads all the code under src, and the third loads the test code

  • preprocessors: files configured here are processed first, and the processed result is what gets loaded. In this example we add coverage instrumentation to everything under src (previously this was done with gulp-istanbul; karma-coverage is much more convenient and needs no hook). Webpack preprocessing will also be configured here when we test the React component

  • plugins: the list of installed plug-ins

  • browsers: the browsers to test in; here we choose PhantomJS, Firefox, and Chrome

  • reporters: which reports to generate

  • coverageReporter: how to generate the coverage report; here we want the same output as before, including the coverage page, lcov.info, coverage.json, and a summary on the command line

Run ./node_modules/karma/bin/karma start --single-run and you can see the following output:

As you can see, Karma first starts a local server on port 9876, then launches PhantomJS, Firefox, and Chrome to load the page, and collects and reports the test results from each, so cross-browser testing is solved. To add another browser, just install the corresponding launcher plug-in and add the browser to browsers. Very flexible.

So what if I'm on a Mac with no Internet Explorer and want to test IE? Run ./node_modules/karma/bin/karma start to start the local server, then open the corresponding browser on another machine and point it at port 9876 (the port is configurable, of course); the same method works for testing on mobile devices. This approach combines the advantages of the previous two and makes up for their shortcomings; it is the best front-end testing solution I have seen so far.
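For reference, a minimal sketch of where the port lives in karma.conf.js (9876 is Karma's default; change it only if it clashes with something else):

module.exports = function(config) {
  config.set({
    // ...everything else stays the same as the configuration above
    port: 9876 // the port that other machines or phones will visit
  });
};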

React Component Test

React took the world by storm last year, and Tmall has of course kept up with the times: a React component system has taken shape, almost all new business is developed with React, and old business is gradually migrating to it. Here we take the React + Webpack stack as an example of component testing.

PS: this refers to React on the web, not React Native.

In fact, Tmall doesn't currently use Webpack; Gulp + Babel compiles the React CommonJS code into AMD modules so that both new and old business can use it flexibly. Of course, some projects do use Webpack and have shipped with it.

One more simple example

Create a React component with a directory structure that looks like this:

.
├── demo
├── karma.conf.js
├── package.json
├── src
│   └── index.jsx
├── test
│   └── index_spec.jsx
├── webpack.dev.js
└── webpack.pub.js

The React component source, src/index.jsx:

import React from 'react';
class Welcome extends React.Component {
  constructor() {
    super();
  }
  render() {
    return <div>{this.props.content}</div>;
  }
}
Welcome.displayName = 'Welcome';
Welcome.propTypes = {
  /**
   * content of element
   */
  content: React.PropTypes.string
};
Welcome.defaultProps = {
  content: 'Hello Tmall'
};
module.exports = Welcome;

The test code, test/index_spec.jsx, is also written in JSX:

import 'should';
import Welcome from '../src/index.jsx';
import ReactDOM from 'react-dom';
import React from 'react';
import TestUtils from 'react-addons-test-utils';

describe('test', function() {
  const container = document.createElement('div');
  document.body.appendChild(container);
  afterEach(() => {
    ReactDOM.unmountComponentAtNode(container);
  });
  it('Hello Tmall', function() {
    let cp = ReactDOM.render(<Welcome/>, container);
    let welcome = TestUtils.findRenderedComponentWithType(cp, Welcome);
    ReactDOM.findDOMNode(welcome).textContent.should.be.eql('Hello Tmall');
  });
});

React provides TestUtils as a testing aid: it has many methods for finding nodes and components in the rendered tree and, most importantly, an API for simulating events, which is one of the most important capabilities in UI testing. See the React documentation for more about TestUtils.
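A sketch of TestUtils.Simulate (not from the original article), assuming a hypothetical <Counter/> component in src/counter.jsx that starts at 0 and shows the incremented count after a click:

import 'should';
import React from 'react';
import ReactDOM from 'react-dom';
import TestUtils from 'react-addons-test-utils';
import Counter from '../src/counter.jsx'; // hypothetical component

describe('Counter', () => {
  it('increments when clicked', () => {
    const container = document.createElement('div');
    const cp = ReactDOM.render(<Counter/>, container);
    const node = ReactDOM.findDOMNode(cp);
    TestUtils.Simulate.click(node);      // fire a synthetic click event
    node.textContent.should.be.eql('1'); // hypothetical expected output
  });
});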

With the code and the test cases ready, the next step is running them. The karma.conf.js here differs from the one above. First, it needs the karma-webpack plug-in, because the React component has to be packed by Webpack or the code won't run at all. Also note that coverage testing changes: Babel compiles ES6/ES7 source and generates a fair amount of polyfill code, so measuring coverage on the compiled output includes that polyfill code and the resulting numbers are clearly unreliable. This is the problem isparta-loader solves. The React component's karma.conf.js looks something like this:

'use strict';
const path = require('path');

module.exports = function(config) {
  config.set({
    frameworks: ['mocha'],
    files: [
      './node_modules/phantomjs-polyfill/bind-polyfill.js',
      'test/**/*_spec.jsx'
    ],
    plugins: ['karma-webpack', 'karma-mocha', 'karma-chrome-launcher', 'karma-firefox-launcher', 'karma-phantomjs-launcher', 'karma-coverage', 'karma-spec-reporter'],
    browsers: ['PhantomJS', 'Firefox', 'Chrome'],
    preprocessors: {
      'test/**/*_spec.jsx': ['webpack']
    },
    reporters: ['spec', 'coverage'],
    coverageReporter: {
      dir: 'coverage',
      reporters: [{
        type: 'json',
        subdir: '.',
        file: 'coverage.json',
      }, {
        type: 'lcov',
        subdir: '.'
      }, {
        type: 'text-summary'
      }]
    },
    webpack: {
      module: {
        loaders: [{
          test: /\.jsx?/,
          loaders: ['babel']
        }],
        preLoaders: [{
          test: /\.jsx?$/,
          include: [path.resolve('src/')],
          loader: 'isparta'
        }]
      }
    },
    webpackMiddleware: {
      noInfo: true
    }
  });
};

Here are the main differences from the previous karma.conf.js:

  1. Thanks to Webpack's bundling, the component code is imported directly in the test code, so files no longer needs to list the component source manually

  2. Each test file is preprocessed with webpack

  3. A webpack compiler configuration is added; preLoaders are defined for compiling the source, and coverage instrumentation is done by isparta-loader

  4. A webpackMiddleware configuration is added; noInfo means there is no need to print a pile of Webpack compilation messages

Run ./node_modules/karma/bin/karma start --single-run:

Very nice, the results match expectations. Now open coverage/lcov-report/index.html:

Amazing!! Coverage measured directly on the JSX source! The React component test is now essentially complete.

summary

The main difficulty in testing front-end code is simulating all the different browsers. Karma gives us a good way to drive local browsers automatically, and browsers that aren't local can simply visit the test page it serves. There are many browsers on the front end, especially on mobile, so perfection is impossible, but this way we can cover the mainstream browsers and make sure most users are unaffected by each release.

Continuous integration

With test results available, the next step is plugging them into continuous integration. Continuous integration is a very good practice for multi-person development: every code push triggers hooks that automatically compile, test, and so on. Once it is in place, every push and every Merge Request produces a test result, so the other members of the project can clearly see whether the new code affects existing functionality. With automatic alerts added, errors are found quickly at the commit stage, improving the efficiency of development iterations.

On each run, the continuous integration service provides an almost blank virtual machine, copies the submitted code onto it, reads the CI configuration in the project, automatically installs the environment and dependencies, compiles and tests, generates a report, and releases the virtual machine resources after a while.

Open source continuous integration

The best-known open-source continuous integration service is Travis, and code coverage hosting is provided by Coveralls. With a GitHub account you can log in to both easily; after enabling the repositories you want integrated, every push triggers the automated tests, and both sites generate badges showing the latest results.

Travis reads the .travis.yml file in the project. A simple example:

language: node_js
node_js:
  - "stable"
  - "4.0.0"
  - "5.0.0"
script: "npm run test"
after_script: "npm install coveralls && cat ./coverage/lcov.info | coveralls"

node_js defines which versions of Node.js to test against; this definition means the tests run on the latest stable version, 4.0.0, and 5.0.0.

script is the command used to run the tests. In general, every command a project needs during development should live in the scripts field of package.json; for example, our test command ./node_modules/karma/bin/karma start --single-run would be written like this:

{
  "scripts": {
    "test": "./node_modules/karma/bin/karma start --single-run"
  }
}

after_script is a command that runs after the tests finish; here we install the coveralls library and pipe lcov.info into it to upload the coverage results to Coveralls.

See the Travis website for more configurations

With this configuration in place, the build and coverage results of every push can be viewed on Travis and Coveralls.

summary

Hooking a project up to continuous integration is extremely useful when several people develop in the same repository: every push automatically triggers the tests, and a failing test raises an alarm. If, in addition, requirements are managed with Issues + Merge Requests, each requirement gets one Issue and one branch, a Merge Request is opened when development is finished, and the project owner is responsible for merging, then project quality is even better guaranteed.

conclusion

What's covered here is only a small part of front-end testing; there is much more worth digging into, and testing itself is only one part of front-end process automation. With front-end technology developing so fast, front-end projects are no longer the slash-and-burn affairs they once were; more and more software engineering experience is being absorbed, and front-end projects are moving quickly toward engineering, standardized processes, and automation. There are plenty more automation schemes out there waiting to be explored to improve development efficiency and guarantee development quality.