Author/Zhihu: Kang Dongyang

Current front-end code deployment process

The development process of front-end engineering is as follows:

  • Local business development, with packaging and build configuration (Webpack);

  • An online build machine packages and builds the code;

  • The deployment system deploys the new code to the online server;

The process is much the same as in most teams today: the front-end code is repackaged, the hash in the file name changes, and the client therefore fetches the latest code instead of a cached copy.

Looking for pain points

The front-end is an area of software development with fewer technical problems and more engineering problems.

From the perspective of iterating and maintaining a single application, there is nothing wrong with this process. At Cloud Music, however, besides the main-site application there are also various business-line applications and H5 applications, which together add up to hundreds.

Now, let’s do some simple math. Assuming the minimum time to go live, from approval through build and packaging to deployment, is 30 minutes, the time it takes to go live for all 100 applications is:

100 * 30 = 3000 minutes = 50 hours ≈ 2.08 days

However, many of these releases are not worth that much time, such as changing a value in the configuration center, upgrading the Web APM SDK, or tweaking copy in the help manual.

If there were a way to ship such changes quickly instead of going through the full, rigorous release process, the cumulative savings would be a significant benefit.


New mode: Direct code delivery

Facing this pain point, we tried a new model: make a script dynamic, so that developers can change the script’s content through an interface and update the code online directly.

Let’s look at the following two images, which show the steps for connecting an application to the Sentry error platform through the delivery system:

  • Fill in the fields the Sentry platform requires for an application on the interface; these values are used to initialize the Sentry SDK.

  • Click the Publish code button on the platform to update the code directly.

If the version or configuration is updated later, you can reconfigure it on the system and deliver it.

This mode is straightforward to operate, and most importantly, the time for code updates is significantly reduced, down to 5 minutes or even real-time.

At present, 5 minutes is the value we use in practice, and it is a number we chose ourselves; the specific details are explained later. Now let’s demystify how web front-end code can be delivered this way.


Take the bold step: script API-ization

When the browser requests a script from the server, all it receives is text. The server, however, tells the browser in the response that this text is a script, so the browser compiles and executes the returned result as a script.

With this in mind, when a script tag references a script, we can fill in an API URL as its src.

The back end then defines a route for that URL and, in the controller, organizes the script content and returns it.
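A minimal sketch of both pieces, assuming a Koa server with koa-router; the route path and port here are hypothetical, not necessarily what the real service uses:

```javascript
const Koa = require('koa');
const Router = require('koa-router');

const app = new Koa();
const router = new Router();

// The page references this route directly with a script tag, e.g.
// <script src="https://example.com/api/script/app.js"></script>
router.get('/api/script/:name', async (ctx) => {
  ctx.set('cache-control', 'max-age=0');          // don't let the browser cache the result
  ctx.type = 'application/javascript';            // treat the returned text as a script
  ctx.body = 'console.log("Hello Code Puzzle");'; // organize the script content here
});

app.use(router.routes());
app.listen(3000);
```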

There are a few points to understand here:

ctx.set('cache-control', 'max-age=0'): tells the browser not to cache the returned result, so that every access fetches the latest code;

ctx.body = 'console.log("Hello Code Puzzle");': this is where the code is organized. In real business the code is not this simple; we explain how we organize it a little later.

ctx.type = 'application/javascript': with this header, the browser treats the returned result as a script rather than as an ordinary string.

The principle is simple and the implementation is not difficult, but real production is rarely this simple, and there is a problem we have been ignoring: stability.

The approach above turns the script into an API, but it also exposes the service directly to users. Under heavy traffic or malicious attacks, the service is weak and helpless, and may simply crash.

So, we need a stability solution.

Stability scheme

To keep the service from facing clients directly, we can set up a line of defense in front of it. Common practices include:

  • APIs: access them through a gateway;

  • Static resources: Add them to the CDN.

Although we have turned the script into an API, in essence it still returns a script, and accessing it requires neither login nor permissions, so a CDN is the better fit here. The architecture is as follows:

The two key features of a CDN are caching and back-to-origin. Using the back-to-origin mechanism, we can place the CDN in front of the script service. The main flow can be divided into three parts:

Client accesses the script: the script address is no longer the API address but a script URL under the CDN domain. When the client requests the script from the CDN, a cache hit is returned directly; a cache miss triggers back-to-origin.

Back-to-origin mechanism: the CDN is pre-configured to go back to a service of ours, where we organize the script content, consistent with the controller code above;

Obtaining the script content: code is organized according to the user’s operations on the platform, and the script content is stored in a MySQL database. To speed up reads when the CDN goes back to origin, we also keep a copy of the script content in Redis; MySQL serves as the fallback.

One point to note in this scheme is that the script address accessed by the client never changes, so the returned script cannot be cached permanently; its cache time has to be set fairly short.

This is where the 5 minutes mentioned above comes from; 5 minutes is an acceptable value in practice.
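Continuing the earlier Koa sketch, the back-to-origin controller might look roughly like this; redis and db stand for hypothetical, pre-configured Redis and MySQL clients, and the table and key names are assumptions.

```javascript
router.get('/api/script/:name', async (ctx) => {
  const key = `script:${ctx.params.name}`;

  // Read from Redis first so back-to-origin requests stay fast...
  let content = await redis.get(key);
  if (!content) {
    // ...and fall back to MySQL when the cache is cold.
    const rows = await db.query('SELECT content FROM scripts WHERE name = ?', [
      ctx.params.name,
    ]);
    content = rows.length ? rows[0].content : '';
    if (content) await redis.set(key, content);
  }

  // The script URL never changes, so the CDN may only cache it briefly:
  // 5 minutes here, matching the practice value mentioned above.
  ctx.set('cache-control', 'max-age=300');
  ctx.type = 'application/javascript';
  ctx.body = content;
});
```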

After connecting to the CDN, we can directly enjoy its benefits:

  • Thanks to caching and back-to-origin, most traffic is absorbed at the edge, relieving pressure on the origin;

  • Because the CDN has nodes all over the country, it effectively solves cross-region access problems and reduces access latency;

  • The CDN’s strong computing capability can intercept malicious attacks and reduce the impact of a “broadcast storm”.

Organizing and managing the delivered code

Interface operations = schema operations

Configuring a module on the interface is equivalent to operating on a schema behind the scenes, as shown in the following figure:
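As a concrete illustration, a module schema for the Sentry example above might look something like this; the field names and values are assumptions, not the real system’s definition:

```javascript
// Hypothetical schema stored behind the interface for a Sentry module.
const sentryModule = {
  name: 'sentry',
  type: 'script',                            // processing type: insert a script
  url: 'https://example.com/sentry-sdk.js',  // SDK script to load
  vars: {
    dsn: 'https://xxx@sentry.example.com/1', // filled in by the developer on the platform
  },
  onload: 'Sentry.init({ dsn: "${dsn}" });', // code to run after the script loads
};
```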

Introduce the dynamic script into your application

Finally, the schemas of all modules are integrated into an SDK, which performs the specific logic for each kind of module data. The code structure of the SDK is as follows:
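A minimal sketch of that structure, with assumed names; the function body is identical for every application, and only the modules array injected by the platform differs:

```javascript
// The delivery platform injects the `modules` array; everything else is fixed.
(function (modules) {
  var handlers = {
    script: function (mod) { /* insert a <script> tag, sketched below */ },
    var: function (mod) { /* define global variables, sketched below */ },
    html: function (mod) { /* insert an HTML fragment, sketched below */ },
  };

  modules.forEach(function (mod) {
    var handler = handlers[mod.type];
    if (handler) handler(mod);
  });
})([
  /* module schemas generated from the user's configuration on the platform */
]);
```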

In this code, the function body of every application’s SDK is the same; only the modules data passed in changes based on user actions.

Finally, add this SDK script to your application:

Different processing types

In the SDK, we perform different operations based on each module’s processing type, such as inserting scripts, inserting HTML, or defining variables; the set of types can be extended to match different business requirements.

Insert script:

The schema is:
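A hypothetical “insert script” schema might look like this (the field names are assumptions):

```javascript
const scriptModule = {
  type: 'script',
  url: 'https://example.com/some-sdk.js',     // script to insert
  onload: 'SomeSdk.init({ key: "${key}" });', // code to run once it has loaded
};
```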

The corresponding SDK processing function is:
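A sketch of that handler under the same assumptions: create a script element, run the configured onload code after it loads, and append it to the page.

```javascript
function insertScript(mod) {
  var script = document.createElement('script');
  script.src = mod.url;
  script.onload = function () {
    // user-filled variables are assumed to have been substituted into mod.onload already
    new Function(mod.onload)();
  };
  document.head.appendChild(script);
}
```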

Of course, this omits the substitution of user-filled variables into the onload function; with that in place, the SDKs of common technical modules can finish initializing as soon as their scripts are loaded.

Define variables:

The schema is:
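A hypothetical “define variables” schema (field names are assumptions):

```javascript
const varModule = {
  type: 'var',
  vars: {
    APP_NAME: 'my-app',
    SENTRY_DSN: 'https://xxx@sentry.example.com/1',
  },
};
```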

The corresponding SDK processing function is:
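A sketch of that handler, assuming the configured values are exposed on a global object for business code to read:

```javascript
function defineVariables(mod) {
  window.__DYNAMIC_VARS__ = window.__DYNAMIC_VARS__ || {};
  Object.keys(mod.vars).forEach(function (key) {
    window.__DYNAMIC_VARS__[key] = mod.vars[key];
  });
}
```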

Insert HTML:

The corresponding SDK processing function is:
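A sketch of an “insert HTML” handler, assuming the schema carries an html fragment and an optional selector for the target element:

```javascript
function insertHtml(mod) {
  var target = document.querySelector(mod.selector || 'body');
  if (target) {
    target.insertAdjacentHTML('beforeend', mod.html);
  }
}
```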

By abstracting these different behaviors, we can drive a variety of operations from a single schema and achieve our goals.

System optimization

So far, we’ve managed to deliver code dynamically, but there’s still room for improvement with the current design.

Hierarchical configuration

We no longer have to go through the full release process for every change. However, when a common module changes, for example when the Sentry domain name changes, we still have to reconfigure every application in the delivery system. For 100 applications that is 100 operations, and since they are all identical, the work is redundant.

To address this, we added support for hierarchical configuration:

We introduced the concept of a project in the system; you can think of a project as a folder containing many applications. When a common configuration needs to change, you only operate at the project level, and the system automatically merges the configuration into each application and completes the code delivery. You can picture the configuration merge roughly as in the sketch below.
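A minimal sketch of the merge, with hypothetical project-level and application-level configurations; the assumption here is that application-level values override project-level defaults:

```javascript
const projectConfig = {
  sentry: { dsn: 'https://xxx@sentry.example.com/1' }, // shared by every app in the project
};
const appConfig = {
  sentry: { sampleRate: 0.5 },                         // app-specific override
  wapm: { key: 'app-key' },
};

// Application-level values win over project-level defaults.
const merged = {
  ...projectConfig,
  ...appConfig,
  sentry: { ...projectConfig.sentry, ...appConfig.sentry },
};
// => { sentry: { dsn: ..., sampleRate: 0.5 }, wapm: { key: 'app-key' } }
```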

Connect the deployment system and integrate common modules

Even with this delivery system, there are still inconveniences when we use the various public platforms. Cloud Music’s public technology platforms mainly include:

  • Sentry: error platform

  • WAPM: front-end monitoring platform

  • Hawk-eye: code detection platform

  • Front-end deployment system

  • Delivery system

Each public technology platform is independent of the others and has its own way of being used, so creating a new application requires a developer to:

  • Create an application in the deployment system

  • Create an application on the Sentry platform and get the key

  • Create an application on the WAPM platform and get the key

  • Create an application in the delivery system and configure the Sentry, WAPM, and other modules

We have zero tolerance for this kind of tedious, repetitive work. At Cloud Music, the deployment system is the gathering point for all applications, so we can use it to do a few things:

  • When users create applications, the same applications are automatically created on various public platforms.

  • The deployment system sends the initial configuration of each public platform to the delivery system.

  • The deployment system automatically injects dynamic scripts.

Conclusion

The front-end dynamic module delivery system is a solution for improving efficiency once the number of Cloud Music applications reached a certain scale. It may move toward open source in the future; if you have good ideas, you are welcome to build it with us.