Based on my experience with various products and projects over the years, I have formed some opinions on how a front-end engineering system should be designed:

  • Front-end development should be “self-contained”, covering operations, deployment, log monitoring, and so on
  • Different scenarios call for different solutions rather than one large, all-encompassing framework. For example, sites that mainly promote and showcase a product are built in multi-page mode with responsive design, while applications that focus on user interaction and have no strong SEO requirements are built in single-page mode
  • Products are componentized. To improve reusability, component granularity is subdivided as much as possible, aiming for low coupling and high cohesion
  • Avoid reinventing the wheel; bring in excellent open source resources to complement our own work

Based on the above thinking, a front-end engineering system can be roughly divided into three layers:

  • Node service layer: handles data proxying and mocking, URL routing, and template rendering
  • Web application development layer: focuses on the Web interaction experience
  • Front-end operations layer: build and deployment, log monitoring, etc.

The Node service layer

Data proxying

Generally, data sources in Web applications fall into two categories:

  • Ajax requests from user interactions (client-initiated)
  • Initial data required by server-side template rendering

For the former, the traditional approach is for the back end to expose APIs directly to the client. But now that microservices have become mainstream, back-end systems tend to be split into many services, each providing different APIs, and calling them directly raises problems such as request authentication and cross-domain restrictions. Node can act as a relay station: using http-proxy, it forwards HTTP requests and responses between the front and back ends, serving as a bridge.

Wouldn’t it be better to request the back-end API directly? Cross-domain problems can be solved with CORS (Cross-Origin Resource Sharing).

First, CORS has compatibility issues (IE10 and below are not supported). Second, it comes with some restrictions:

Note that if cookies are to be sent, Access-Control-Allow-Origin cannot be set to an asterisk; it must specify an explicit domain consistent with the requesting page. Meanwhile, cookies still follow the same-origin policy: only cookies set under the server’s domain will be uploaded, cookies of other domains will not, and `document.cookie` in the (cross-origin) page cannot read cookies under the server’s domain.
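To make the restriction concrete, here is a minimal sketch of the response headers a server must send for cookie-carrying cross-origin requests, written as a plain Express-style middleware function (the origin value is a hypothetical example):

```javascript
// Sketch: CORS headers for credentialed (cookie-carrying) requests.
function corsWithCookies(req, res, next) {
  // Must echo an explicit origin -- browsers reject '*' when
  // credentials are included.
  res.setHeader('Access-Control-Allow-Origin', 'https://app.example.com');
  res.setHeader('Access-Control-Allow-Credentials', 'true');
  next();
}
```

Even with these headers set correctly, the cookie-scoping limits described above still apply.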

As for performance, I took an API from the company’s project and ran some tests. They are not very precise, but enough for a general idea. Under both No throttling and Regular 4G conditions, I compared the latency of GET and POST requests issued three ways: Proxy (via http-proxy), Direct (requesting the back-end address directly), and Node (via the request module). Before comparing, I already expected that requesting the back end directly would have the lowest latency; after all, adding a Node layer costs some performance. Here are the results:

No throttling GET:

No throttling POST:

Regular 4G GET:

Regular 4G POST:

The results show that Direct is the fastest, followed by Proxy and then the Node request module, for both GET and POST. However, the latency gap is only around 30ms; whether that is acceptable depends on how demanding your architecture’s performance requirements are. For our company’s current user base, I think it can be ignored.

But why use Node to proxy HTTP requests? My reasons:

  1. Mocking allows a smooth transition between early front-end development and the later back-end interfaces, so that the client is unaware of the switch. You can even flip back and forth between mock data and real data, which is very flexible
  2. As the number of back-end services grows, especially for internal systems, cross-domain invocation becomes a problem. Node proxy forwarding solves the cross-domain problem at its root, with no worries about Cookie loss and similar issues. Moreover, the Node layer can further intercept and customize HTTP requests and responses according to business requirements, making it suitable for a wider range of scenarios

When I worked on the company’s simulated stock trading project, the proxy setup looked roughly like this:

The account system → /api/account
The market system → /api/stock
The trading system → /api/trade

var url = require('url');
var proxy = require('express-http-proxy');

app.use('/api/account', proxy(config.api.account, {
  forwardPath: function (req, res) {
    return url.parse(req.url).path;
  }
}));
app.use('/api/stock', proxy(config.api.stock, {
  forwardPath: function (req, res) {
    return url.parse(req.url).path;
  }
}));
app.use('/api/trade', proxy(config.api.trade, {
  forwardPath: function (req, res) {
    return url.parse(req.url).path;
  }
}));

As for server-side rendering, it has traditionally been done with back-end templates such as JSP or PHP. Node itself is a server, and a template is essentially the combination of the view and the data layer. With a reasonable separation of front and back ends, front-end engineers are better suited to writing the server-side templates, because the Node service layer should not contain complex data operations (CPU-intensive work is not Node’s scene); the data sources there are direct and clear, requiring at most light business processing. And for the front end, understanding the business logic is a must anyway, and something we advocate.

Another point: in the Web application development described in the following sections, the development and build tools are themselves based on Node, and the front-end engineer knows best how to wire the built static resources into the template page. So although templates belong to the server side, they are more closely tied to the front end.

Mock data

Generally speaking, in the early stage of a project, while the back-end interfaces are not yet implemented, the front end can only write pages and then wait. At that point, the back end could throw together some fake data so that the front end can continue developing and debugging. But we encourage front-end engineers to mock the data themselves. Why?

  • It helps the front end better understand the business
  • As the first consumer of the data interface, the front end has the best sense of what data structure suits the page presentation. For example, whether a field is better as an array or as a concatenated string
  • Mocking is efficient and easy to set up: json-server, Mock.js

However, neither of these simple mock tools suits our company’s current projects, because most of our interfaces are not RESTful, which causes two problems:

  • json-server only supports RESTful routes and only serves JSON files, which is not flexible enough
  • Mock.js is flexible and efficient, but does not persist data

An ideal solution would combine the two and add support for non-RESTful mocks; it may need to be custom-built down the road. PS: I recently came across a mock service built on Service Workers on GitHub: service-mocker. Whether or not it pans out, it deserves a thumbs up.
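As an illustration of what such a custom, non-RESTful mock layer might look like in the Node service layer (the paths and payloads below are hypothetical, not from our actual project):

```javascript
// Map arbitrary (non-RESTful) URL paths to canned JSON, so the client
// cannot tell whether data is mocked or real.
const mocks = {
  '/api/account/balance': { balance: 10000, currency: 'CNY' },
  '/api/stock/quote': { symbol: '600000', price: 12.34 }
};

function mockHandler(req, res) {
  const data = mocks[req.url];
  if (!data) return false; // not mocked: fall through to the real proxy
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify(data));
  return true;
}
```

Because the handler falls through when a path has no mock entry, mock data and real proxied data can coexist and be switched per endpoint.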

URL routing

Having the front end own the routing lets front-end engineers understand the business more comprehensively and deeply. Routing is design work, and it takes experience.

A specific route is basically a business-logic controller:

router.route('/:catalogId')
  .get(catalogController.findOne)
  .put(validate(paramSchema.updateCatalog), catalogController.update)
  .delete(catalogController.remove);

There are also some simple routes responsible only for page rendering (especially in single-page applications, where navigation is mostly handled by the client-side router):

app.get('/', (req, res) => {
  res.render('index', {
    ip: req.connection.remoteAddress
  });
});

Server-side template rendering

A server template is just a shell for a single-page application:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8"/>
  <title>Single Page Application</title>
  <script>window.serverData = { foo: 'data from server' }</script>
</head>
<body>
  <div id="app"></div>
  <script src="//cdn/file-5917b08e4c7569d461b1.js"></script>
</body>
</html>

As you can see, simple server-side data is passed to the page only through window.serverData.

For multi-page applications, we introduce layout, include, and other template mechanisms to extract common page fragments and maximize reuse:

./views
├── layout
│   ├── default.jade
│   ├── bootstrap.jade
│   └── ...
├── include
│   ├── topnav.jade
│   ├── header.jade
│   ├── footer.jade
│   └── ...
├── customs
│   └── ...
└── index.jade

As mentioned above, one advantage of doing server-side template rendering in Node is easy integration with built static resources. How? Through a JSON static-resource mapping table, asset-manifest.json:

{
  "main.css": "static/css/main.ad87bbd6.css",
  "main.css.map": "static/css/main.ad87bbd6.css.map",
  "main.js": "static/js/main.a3907cec.js",
  "main.js.map": "static/js/main.a3907cec.js.map",
  "static/media/yay.jpg": "static/media/yay.44dd3333.jpg"
}
const manifest = require('./asset-manifest.json');

app.locals.assets = {
  mainCss: manifest['main.css'],
  mainJs: manifest['main.js'],
  ...
};
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8"/>
  <title>Single Page Application</title>
  <link rel="stylesheet" href="<%= assets.mainCss %>">
  <script>window.serverData = { foo: 'data from server' }</script>
</head>
<body>
  <div id="app"></div>
  <script src="<%= assets.mainJs %>"></script>
</body>
</html>

Web application development layer

This part is the front-end engineer’s specialty, and the embodiment of their core value.

Componentization and engineering

Web application development consists of three main parts: HTML, CSS, and JS. Except for JS, which has become modularized in recent years, HTML and CSS traditionally require manual maintenance, which is difficult. With constant copy-and-paste, code redundancy grows over time like the C drive on Windows: it just keeps getting fatter.

So modern Web application development follows this mindset: everything in JS. It is the modularity of JS that forms the cornerstone of componentizing Web functional modules (interface plus interaction). You can write HTML and CSS inside JS (JSX), or keep all three together in one file as a template (Vue):

JSX writing:

import React, { PropTypes } from 'react';
import styles from './Preview.css';
import Profile from './Profile'; // the nested component used below

const Preview = ({url, user}) => {
  return (
    <div className={styles.card}>
      <img src={url} alt="" className={styles.normal}/>
      <Profile user={user} />
    </div>
  );
};

Preview.propTypes = {
  url: PropTypes.string,
  user: PropTypes.object
};

export default Preview;

Vue in single-file component form:

<template>
  <div id="app">
    <img src="./assets/logo.png">
    <h1>{{ msg }}</h1>
  </div>
</template>

<script>
export default {
  name: 'app',
  data () {
    return {
      msg: 'Welcome to Your Vue.js App'
    }
  }
}
</script>

<style>
#app {
  font-family: 'Avenir', Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  text-align: center;
  color: #2c3e50;
  margin-top: 60px;
}
h1 {
  font-weight: normal;
}
</style>

In either case, components can’t be used directly in a browser (future Web Components will be able to, but built components still have the advantage). If we think of the browser as the runtime environment of a Web application, analogous to the JVM as Java’s runtime, then front-end component code is the Java-level source code, compiled and packaged into something like a WAR.

So, to improve development efficiency and code quality, we usually do not directly write the HTML, CSS, and JS the browser can read. Instead, modular, “advanced” JS strings all the resources together into a series of business and non-business components, which inject dependencies and export modules to one another; after transformation, merging, and recombination, the result is code the browser can parse.

How to best “string things together” is an engineering problem, and an essential skill for the modern front-end engineer.

MDV

MDV = Model-Driven View. From Google’s launch of Angular in 2010, to Facebook’s launch of React in 2013, to the recent small-and-beautiful Vue created by a Chinese developer, one new idea runs through Web development: the transition from manual DOM manipulation to data binding plus a virtual DOM. This is “everything in JS” taken to the extreme: direct DOM manipulation is eliminated in favor of a virtual DOM built from plain JS objects:

var a = React.createElement('a', {
  className: 'link',
  href: 'https://github.com/facebook/react'
}, 'React');

React pioneered the virtual DOM with two killer features that greatly mitigate the inefficiency of native DOM manipulation (especially on mobile): batching and diff. Batching collects all DOM operations and commits them to the real DOM in one pass; the diff algorithm reduces the time complexity from the O(n³) of the standard tree-diff algorithm to O(n).
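As a toy illustration of the O(n) heuristic (not React’s actual implementation): nodes are compared only at the same position and depth, rather than running a general tree-edit-distance algorithm:

```javascript
// Toy same-position diff over plain-object virtual nodes.
// Real libraries add keys, component types, batching, and more.
function diff(oldNode, newNode, patches) {
  patches = patches || [];
  if (!oldNode || !newNode || oldNode.type !== newNode.type) {
    // Different (or missing) node type: replace the whole subtree.
    patches.push({ op: 'replace', node: newNode });
  } else if (oldNode.text !== newNode.text) {
    patches.push({ op: 'text', text: newNode.text });
  } else {
    // Same type and text: recurse into children pairwise -- this
    // positional matching is what keeps the walk linear in n.
    var oldKids = oldNode.children || [];
    var newKids = newNode.children || [];
    var len = Math.max(oldKids.length, newKids.length);
    for (var i = 0; i < len; i++) {
      diff(oldKids[i], newKids[i], patches);
    }
  }
  return patches;
}
```

The collected patches can then be applied to the real DOM in a single batch.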

Beyond the performance improvement, there is also a big gain in development efficiency, thanks to a change in thinking: in MDV mode it is natural to regard a web page as a state machine, UI = f(state). Changes on the interface are caused by state changes, and the source of state changes is always M, the data model. We leave manual DOM manipulation to the MVVM framework’s data binding, so interface changes follow automatically from data changes. This is not only very efficient but also gives us more control over the flow of data. By introducing strict functional programming and immutable data, it also makes results predictable and easy to unit test.
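The UI = f(state) idea can be sketched in a few lines; here a plain string stands in for a real virtual-DOM tree, and the state shape is a made-up example:

```javascript
// The view is a pure function of the data model: same state in,
// same UI out. Interface changes are driven only by state changes.
function render(state) {
  return `<h1>Hello, ${state.name}! You have ${state.unread} messages.</h1>`;
}

const ui = render({ name: 'Ada', unread: 3 });
```

Because `render` has no side effects, it is trivially unit-testable, which is exactly the predictability the paragraph above describes.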

Of course, besides the many new concepts to learn, there are challenges such as how components communicate with each other and how asynchronous data is managed.

The wheel

Here, wheels refer to components. A front-end component library must be tailored to the company’s business characteristics; it can contain non-business UI components, functional modules, or both. Its purpose is to ease reuse and management. We treat components tied to company business as private, while those free of business logic can be open-sourced if done well. What’s the benefit of open source? Everyone benefits.

In addition, we embrace excellent open source solutions from the industry, such as Ant Financial’s Ant Design and Material-UI (an implementation of Google’s Material Design). Standing on the shoulders of giants, we can develop projects and iterate products more efficiently.

Multi-terminal and cross-platform

It’s important to emphasize that our client, the browser, is the environment in which our front-end code runs. Consider that JS, a language created in 10 days, owes its popularity and endurance entirely to the browser: the world’s largest, most widely distributed natural JS interpreter. It’s everywhere, and you have to use it. So understanding the language’s host environment is, of course, also crucial.

Grouped by browser kernel, the terminal environments the front end should focus on include:

  • Mobile: WeChat’s X5 kernel, iOS WebKit, Android WebKit
  • PC: Trident (IE), Gecko (Firefox), WebKit (Safari), Blink (Chrome), Presto (Opera, discontinued)

Clearly the main battlefield is WebKit (on mobile, WebKit mostly comes with the system SDK). In China, because of WeChat, we also need to pay extra attention to X5 (especially with the recent launch of WeChat Mini Programs).

Grouped by platform:

  • Mobile: Cordova, React Native, Weex
  • Desktop: node-webkit (NW.js), Electron
  • Wearable devices: WebVR

As for the challenges: terminal adaptation mainly means compatibility problems. Following the engineering approach above, HTML, CSS, and JS are treated as “compiled” resources; after the build, tooling can automatically generate the appropriate adaptation code for different kernels, reducing development cost. Cross-platform work demands more of front-end engineers, requiring a broad combination of knowledge and skills, but it brings huge benefits. It is worth mentioning that React aims for “learn once, write anywhere”, and it is steadily moving in that direction.

In general, the main technical characteristics of Web application development are:

Front-end operations layer

Testing

There are two things to do before code build:

  • Lint at the code level
  • Unit Test at the functional level

Lint enforces code-style rules, and different scenarios have different rules: for example, a Node environment versus a browser environment, or CommonJS versus AMD modules, should be configured separately. The main purpose is to ensure the code has no obvious errors and a consistent style.

Unit tests are not common in traditional Web projects that focus on the interface rather than interaction (page state management), but in slightly more complex front-end projects they become more important. My previous company demanded fast project delivery, so I seldom did this part. However, some mainstream front-end scaffolds ship with test libraries and simple examples, such as Create React App, which comes with Jest.

CI/CD

Front-end deployment is mainly divided into static and dynamic parts:

  • The static part refers to static resources; their deployment is relatively simple: push them to a CDN server
  • The dynamic part is the Node service layer; for deploying Node apps, refer to an earlier article of mine

There are many options for automating this to improve efficiency; a common approach is to use Git hooks to notify the CI server. Since my company uses GitLab, I have only tried GitLab CI; combined with Docker, it is quite pleasant to use.

Other common ones are Jenkins, Travis CI, etc.

Scaling and stability

How do we deal with program crashes? There are already plenty of process-restart modules for Node, such as PM2, forever, etc.

To make better use of CPU cores, we also run multiple application instances on a single server. As with nginx or HAProxy setups, there is one master and many workers: the master’s port forwards requests to the workers through a load-balancing algorithm. The same principle applies to creating multiple instances on one machine and managing them as a cluster.

Here, take PM2 as an example:

Then, if we manually trigger a crash, we can see that the first process’s restart count becomes 1, meaning that when the process crashed, PM2 automatically started a new one to continue serving.

Performance and log monitoring

Monitoring can be done at any scale: at the small end a single tool, at the large end a full system that not only has visual dashboards but also supports a set of alarm rules, such as hosted services like Sentry.io. These services are powerful, cover almost every requirement, and are easy to integrate. Of course, anything that works well is basically paid.

Final words

The private NPM and private Docker Registry, not covered in this article, are marked with dotted lines (in the diagram) because they are hard to realize at present: our code base lives on the Intranet, and the company is a heavy user of Alibaba Cloud. Putting Docker into real use probably wouldn’t work without our own IaaS infrastructure to support it. Of course, these are not things a small company can easily play with; we may try when the opportunity arises. But as engineers, our primary task is to implement solutions that solve actual business problems.
