Front-End Early Chat Conference, a new starting point for front-end growth, held jointly with Juejin. Add the WeChat account Codingdreamer to join the conference's exclusive push group and win at the new starting line.


Session 14 | Front-end growth and promotion, live on 8-29 with 9 speakers (Ant Financial, Shuiyou, and others). Click to get on board 👉 (registration link):


The lineup of talks is as follows:

  • Team founded 3 years | Xiao Jue – how to drive infrastructure projects to land
  • Team founded 4 years | Hall Master – how to push forward front-end team infrastructure
  • Team founded 5 years | Scott – how to drive infrastructure when people are stretched thin
  • Team founded 6 years | Taro – how infrastructure value breaks through in a big front-end team
  • Team founded 12 years | Chongzhi – how to design the infrastructure roadmap for a large front-end team

This is the text version of the talk by the fifth lecturer, Chongzhi. Let's see what he has to say. The video and slides will be released later on the public account.

Good afternoon, everyone. The theme I am sharing today is "The Road to Integration of Infrastructure for a Large Front-end Team". I will mainly share the results and insights our team has accumulated over the years on front-end infrastructure.

Self-introduction

My name is Chongzhi. I am a front-end technology expert at Alimama. I have been working for more than 8 years and now focus on the team's front-end engineering.

Business introduction

Our main business is back-office management site pages for advertisers, such as Zuanzhan (diamond display), Zhitongche (through train), Damopan (DMP), and so on. These sites are SPA single-page applications with fairly complex interactions, such as very involved campaign-creation flows and report displays. In addition, these applications have been maintained for many years with frequent iterations, so keeping such projects maintainable over the long term and keeping the code robust are things our team has focused on and optimized continuously. There is also a knowledge-site category, which is basically static documentation content, and a category of marketing and activity pages, such as the Double Eleven and New Year shopping festival pages, which only stay online for a short time.

Our road to front-end engineering

In the early days, our front end basically developed page demos and handed them to the back end to be turned into Java VM templates, so front-end and back-end code were heavily coupled. In addition, the front end basically had no local development environment of its own; we had to ask back-end engineers to help install a back-end environment such as JBoss or Tomcat before we could debug anything, which was very troublesome.

Now, after going through full front-end engineering, we have basically achieved separation of the front and back ends, which is mainly the result of the SPA single-page application architecture: the back end basically only needs to provide interfaces to the front end. We have also accumulated scaffolding tools, including best-practice configurations, so new projects are almost ready out of the box. There is now a very complete local development environment, mainly a command-line tool built with Node.js, which provides mock interfaces during local development and can connect to real back-end interfaces during joint debugging. We also built a separate interface management platform, so that when developing a project, back-end developers only need to enter the interface information on the platform at the beginning, and the front end and back end can then develop in parallel without interfering with each other. In addition, we built a dedicated statistics platform for tracking data, so operations or product staff can see effect data directly. Finally, all of our current projects are built and released in the cloud. One benefit is that there is no longer any built output, such as a build directory, in the project repository; the other is that it guarantees consistency of the build environment.

Throughout this process of front-end engineering, a lot of front-end infrastructure has been accumulated, and I will briefly introduce each piece later.

Relationship between front-end infrastructure and engineering

First of all, there are many BU front-end teams in our group, and over the years they have accumulated a lot of excellent single-point infrastructure, a lot of wheels so to speak. So our approach to engineering is to first sort out our own team's front-end development process, and for each step investigate whether the group or the community already has a tool we can use. If there is, we see whether we can do secondary development on it to better fit our team's needs; if not, we try to build it ourselves. Then we integrate all of these infrastructure tools together to cover the entire front-end development chain, and finally achieve the goal of improving development efficiency.

The full-link front-end development process

It basically includes the following stages: project initialization, local development, interface joint debugging, data tracking (burying points), build and release, and performance monitoring. Our idea is to develop a command-line tool, RMX-CLI, to string the whole front-end development chain together, and to use it to integrate the scattered pieces of front-end infrastructure to achieve a 1 + 1 > 2 effect.

Next, I will walk through the various pieces of infrastructure along our full front-end development chain, centered on the RMX-CLI command-line tool.

Infrastructure along the front-end development chain

For project initialization, local development, and interface joint debugging, we have the following infrastructure. The most basic is project code management, which is handled by GitLab. For the front-end framework, we developed our own Magix framework, and some projects use React, which is very mature in the community. On top of the Magix framework we have our own component library, magix-gallery. We also provide a visual scaffolding plug-in, Magix-Desiger, for local development. Then there is a self-developed local development and joint-debugging service plug-in called MAT, which mainly provides local development serving, interface mocking, interface proxying, and other functions. For interface management there is an interface service platform called RAP, which uses mock.js under the hood. For font icons we use Iconfont, which everyone should be fairly familiar with: a font icon management platform developed by Alimama. For charts we use our self-developed chart management platform, Chartpark, which, similar to a component library, provides common types of business charts.

For data tracking, we have our self-developed Dataplus platform, which provides automatic burying points for single-page applications and lets you view effect data after the points are buried. For performance monitoring, we connect to the group's Clue monitoring platform through the command-line tool, and the connection is set up automatically during project initialization. In addition, we have a self-developed, TMS-like content management system called ALP, which is mainly used to quickly publish things such as activity pages. Finally, for build and release, magix-combine in the Magix ecosystem is responsible for code building, and we also connect to the group's DEF platform through the command-line tool to get cloud build and release capability for front-end resources.

The command-line tool RMX-CLI

In the early days, when our team members developed projects, everyone had their own way of handling local development serving; some people used Nginx, others used a Java server, so environments differed a lot, and setting up the environment to work on each other's projects was tedious. We wondered whether we could build a Node.js-based command-line tool to unify the whole development process, and that is how the RMX-CLI command-line tool came about. The tool has been iterated through a number of versions, from supporting only Magix at first to the current suite architecture that supports different frameworks and is more general-purpose.

Now take a look at the basic architecture of the RMX-CLI command-line tool. RMX-CLI is mainly composed of three layers. The first is the CLI command-line entry, which mainly provides system commands such as login, logout, install, and uninstall. The second is the core layer, which provides the concrete implementation logic of the system commands and the default suite commands. The third is the outermost suite and plug-in layer, which is responsible for the command logic specific to different framework architectures. The more basic, generic suite commands are built directly into the command-line tool, such as init for initialization, build, and publish; commands with special logic in different frameworks can then be customized by overriding or adding suite commands.
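To make the three layers concrete, here is a minimal sketch of how such a CLI might resolve commands, with plugins taking precedence over suite overrides, and suite overrides over core defaults. The command names and the registration shape are hypothetical illustrations, not the actual RMX-CLI code.

```js
#!/usr/bin/env node
// Hypothetical sketch of a three-layer CLI: entry -> core -> suites/plugins.

// Core layer: default implementations of generic suite commands.
const coreCommands = {
  init: (args) => console.log('default init', args),
  build: (args) => console.log('default build', args),
  publish: (args) => console.log('default publish', args),
};

// Suite layer: a framework-specific suite can override or add commands.
const magixSuite = {
  name: 'magix',
  commands: {
    build: (args) => console.log('framework-specific build', args), // override
    add: (args) => console.log('generate a view from a template', args), // suite-specific
  },
};

// Plugin layer: global, framework-independent commands.
const clearPlugin = {
  name: 'clear',
  command: () => console.log('clear Chrome HSTS / DNS cache'),
};

// Entry layer: resolve the command by priority plugin > suite > core.
function run(argv) {
  const [cmd, ...args] = argv;
  if (cmd === clearPlugin.name) return clearPlugin.command();
  const handler = magixSuite.commands[cmd] || coreCommands[cmd];
  if (!handler) return console.error(`unknown command: ${cmd}`);
  return handler(args);
}

run(process.argv.slice(2)); // e.g. `node cli.js build --daily`
```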

Plug-in commands are global, framework-independent generic commands. For example, we have a clear plug-in designed to clear Chrome's HSTS and DNS caches. The plug-in system is also extensible: anyone can develop a plug-in package according to our plug-in specification and apply to have it added to our plug-in system.

In addition, we also built a companion WebUI workbench, which is basically a visual version of the CLI commands. The core logic behind them is the same code, which can be invoked both from the CLI and from the WebUI. With the WebUI workbench we can run development commands, manage projects, and install or remove suites and plug-ins visually.

Front-end development framework Magix

Magix is a view-block management framework developed by Alimama. Its main application scenario is SPA single-page applications (it also supports traditional multi-page applications). Magix has a long history at Alimama, even longer than React, and is a framework we have grown out of years of developing back-office management sites; after many iterations it is now basically mature and stable. Magix mainly organizes development around views as blocks, and the components in the component library are also written as views. It also provides two-way binding capability, and we developed the magix-combine offline processing plug-in to improve performance, handling things such as merging HTML, CSS, and JS, and style isolation. In addition, with the help of the RMX-CLI command-line tool, we also implemented live hot updating during local development.



Front-end scaffolding RMX init

In fact, over years of business development, many different types of front-end scaffolds containing best practices have naturally been deposited, and they can be used directly when developing new projects. We mainly run scaffold initialization through the RMX init command of the command-line tool; you can see the general effect of running it. This command initializes the project environment in one step: type in a project name and it creates the project both locally and on GitLab, connects it to the various platforms, and so on.

First we select a suite such as Magix or React, then choose a scaffold (different site types such as back-office administration or marketing pages are supported), and then enter a project name. The CLI can automatically create the corresponding project on GitLab. In addition, by connecting with the various development platforms, such as RAP, Iconfont, and Chartpark, the project is also created on each platform, and the project IDs are written into the project's package.json, where the CLI can use them later during local development. Basically, with the suite system and multiple scaffold options we can cover most types of project initialization needs out of the box, which greatly improves development efficiency. A rough sketch of this flow follows.
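As an illustration, the sketch below prompts for a suite, a scaffold, and a project name, then writes the platform project IDs into package.json. It assumes the inquirer prompt library (CommonJS, v8 style); the rmxConfig field names and placeholder IDs are made up for illustration and are not the real rmx init implementation.

```js
// Hypothetical sketch of a scaffold-init flow (not the actual rmx init code).
const fs = require('fs');
const path = require('path');
const inquirer = require('inquirer');

async function init() {
  const answers = await inquirer.prompt([
    { type: 'list', name: 'suite', message: 'Choose a suite', choices: ['magix', 'react'] },
    { type: 'list', name: 'scaffold', message: 'Choose a scaffold', choices: ['admin-site', 'marketing-page'] },
    { type: 'input', name: 'project', message: 'Project name' },
  ]);

  const dir = path.join(process.cwd(), answers.project);
  fs.mkdirSync(dir, { recursive: true });

  // In the real tool this step would also create the GitLab repo and register
  // the project on RAP / Iconfont / Chartpark; here we only record the IDs.
  const pkg = {
    name: answers.project,
    rmxConfig: {            // hypothetical config block read later during local development
      suite: answers.suite,
      scaffold: answers.scaffold,
      rapProjectId: 0,      // filled in once the RAP project is created
      iconfontProjectId: 0,
      chartparkProjectId: 0,
    },
  };
  fs.writeFileSync(path.join(dir, 'package.json'), JSON.stringify(pkg, null, 2));
  console.log(`created ${answers.project}`);
}

init();
```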



The local development service command RMX dev

This is the most central link in our whole front-end development chain. RMX dev starts a local Koa server; based on the RAP project ID configured in the project, it can fetch mock data for interfaces from the RAP platform during local development, and with RMX dev -d plus the interface IP address provided by the back end, it can connect directly to the real back-end interfaces. Using the RAP platform data we can also verify interfaces locally, improving interface accuracy. If you want to develop against online interfaces locally, you normally need to set up an HTTPS service locally; anyone who has done this knows it is very troublesome, since you need to install a signing certificate and so on. So we found a rather clever approach: keep the local service on HTTP, and forward the interfaces through our MAT plug-in to the online HTTPS interfaces, which bypasses the problem.
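The mock-versus-joint-debugging switch can be illustrated with a small Koa sketch: requests under /api are forwarded either to a mock platform or to the real back-end host passed on the command line. The host names, the mock URL shape, and the argument handling are placeholders, not the actual MAT plug-in, and the sketch assumes Node 18+ for the global fetch.

```js
// Minimal sketch of a local dev server that serves mock data or proxies to a
// real back end. URLs are placeholders; requires Node 18+ for global fetch.
const Koa = require('koa');
const app = new Koa();

const RAP_MOCK_HOST = 'http://rap.example.com';   // placeholder mock platform
const rapProjectId = 123;                          // read from package.json in practice
const realBackend = process.argv[2];               // e.g. `node dev.js 10.0.0.8` for joint debugging

app.use(async (ctx) => {
  if (!ctx.path.startsWith('/api/')) {
    ctx.body = 'static pages / dev assets would be served here';
    return;
  }
  // Joint debugging: forward to the real back-end host, letting the dev server
  // handle the HTTPS hop so the browser never needs a local certificate.
  const base = realBackend
    ? `https://${realBackend}`
    : `${RAP_MOCK_HOST}/mock/${rapProjectId}`;
  const res = await fetch(base + ctx.path + ctx.search);
  ctx.status = res.status;
  ctx.body = await res.text();
});

app.listen(8000, () => console.log('dev server on http://localhost:8000'));
```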

We have also added many efficiency aids to local development, such as magix-HMR support for real-time hot updates in Magix projects. We inject a help-documentation floating layer into the page during local development for easy reference, and we also inject the Magix-Desiger visual building system so that pages can be created quickly from specific templates. In addition, our local development service plugs directly into magix-combine for real-time offline compilation.


The development helper floating layer of RMX dev

Projects we develop now generally have many configurations, such as the project IDs of the associated platforms and the interface IP address of the development environment. These configurations are injected into the local development page in the form of a floating layer, which makes it very convenient to view, modify, and save them visually. The floating layer also provides links to help documentation, such as the RMX-CLI usage docs and the component library docs, for easy reference during local development.



Micro front-end

Micro front-ends are a very popular concept at the moment, and we have also made some attempts in this direction within the Magix system.

Take a look at the business scenario: at the beginning we may only have one Damopan project; after a while, the product team wants a second version of Damopan and a service version. These are sites on different domains, but some module pages, such as the crowd (audience) list, are the same. So we wondered whether the generic crowd-list module could be loaded into other projects as a view. However, these three projects live in different repositories, so it cannot be loaded directly, and we had to do some work to make it possible.

Of course, iframe loading could also solve the scenario just mentioned, but the drawbacks of iframes are obvious. One is that an iframe is completely isolated and loads its HTML, JS, and CSS from scratch, so loading is much slower and the page feels visibly fragmented. The other is that an iframe's width and height cannot expand automatically like a div's, and handling adaptive width and height for an iframe is a tedious matter.

Magix uses vframes to manage views, so if we want to load another project's views across projects, we just need to configure the cross-project front-end resource address and the cross-project interface host. Our magix-combine offline compilation plug-in provides scoped style isolation, preventing a loaded module from affecting the host project's styles. Common styles are treated as host styles: when your view is loaded into another project, that project's common styles apply to it directly. And since these are all Magix projects, the component libraries are shared, avoiding the performance cost of loading them multiple times. We also added an xSite plug-in to load resources on demand, meaning the related resources are only loaded when a page actually loads the view.
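To give a feel for what configuring the cross-project resource address and interface host might look like, here is a hypothetical configuration module; the field names are illustrative only and are not the actual Magix or vframe configuration keys.

```js
// Hypothetical host-project config for loading a sub-view from another repo.
module.exports = {
  crossProjects: {
    // the shared "crowd list" module lives in another project's repository
    'dmp2/views/crowd-list': {
      assetsBase: 'https://g.example.com/dmp2/1.2.3/', // where its built JS/CSS is published
      apiHost: 'https://dmp2.example.com',             // its interfaces still hit its own host
    },
  },
  // shared libraries are loaded once by the host to avoid duplicate downloads
  shared: ['magix', 'magix-gallery'],
};
```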

In this way, modules from different projects can basically be loaded into one another, greatly improving cross-project module reuse.

Then, based on this ability to load views across projects, we also developed a companion Chrome extension to assist local development; it configures the front-end resource address and interface host of the child view being loaded across projects. In other words, we can start two projects locally at the same time, modify the configuration in the Chrome extension, and then debug and develop the code of both projects simultaneously without touching the projects' own configuration code. The extension can also modify the configuration of a cross-project child view while visiting the online site, so the child view can be debugged and developed locally directly against production.

Visual page building with Magix-Desiger

In the early days, without any tools, you had to create a page by hand: copy some existing page code and then develop on top of it. Later, with the RMX-CLI command-line tool, the RMX add command could directly generate pages from preset template views, such as tables, forms, and other common page types, and could also generate pages wired to an interface based on the RAP platform's interface definitions. But this was still a fairly fixed, template-based approach and not very flexible.

To solve this efficiency problem we developed the Magix-Desiger visual scaffolding plug-in, which plugs into our RMX-CLI architecture. It mainly works by writing Schema descriptions of templates and combining them with interface data from the RAP platform to provide a visual operating interface during local development for building pages quickly in real time. Later we went a step further and added AI-based page generation from designs: upload a design image, and the system automatically recognizes form elements in the picture such as drop-downs, forms, and input boxes and generates a page; developers then continue development on top of the generated page.

By now, the Magix-Desiger visual building system can even let back-end developers build some common form or table pages themselves, taking a lot of work off the front end.

Let's take a look at the general implementation:

The Magix-Desiger plug-in starts a WebSocket service that reads and writes local files. Relying on this plug-in, you can then operate directly on the page: you can create a new page, or select the current page for editing.
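A minimal sketch of such a file read/write channel, assuming the ws package; the port and the message protocol are invented for illustration and are not the actual Magix-Desiger protocol.

```js
// Minimal sketch of a local WebSocket service that writes generated view files.
const fs = require('fs');
const path = require('path');
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8765 });

wss.on('connection', (socket) => {
  socket.on('message', (raw) => {
    // e.g. { type: 'writeView', file: 'src/views/list/index.js', content: '...' }
    const msg = JSON.parse(raw);
    if (msg.type === 'writeView') {
      const target = path.join(process.cwd(), msg.file);
      fs.mkdirSync(path.dirname(target), { recursive: true });
      fs.writeFileSync(target, msg.content);
      socket.send(JSON.stringify({ ok: true, file: msg.file }));
    } else if (msg.type === 'readView') {
      const content = fs.readFileSync(path.join(process.cwd(), msg.file), 'utf8');
      socket.send(JSON.stringify({ ok: true, content }));
    }
  });
});
```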

Interface Management RAP

We developed a platform for managing interface mock data: RAP.

At present, our main development model is front-end/back-end separation, and the most important way we interact with the back end is through interfaces, so we developed the RAP platform specifically for maintaining interface information. The platform mainly provides interface information entry, documentation, and interface mock data.

Take a look at the evolution of interface management in our team over the years:

Mock.js is a JS library that one of our colleagues developed for generating mock data. In the early days we introduced mock.js directly on specific business pages to address the initial pain point of having to hand-write simulated interface data. One problem with this is that mock.js has to be removed before going live, and the mock rule files are redundant code in the project.
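For readers who have not used mock.js, the early in-page usage looked roughly like this: a data template bound to a URL so that any request to it gets fabricated data. The URL and fields below are made up, and this is exactly the kind of code that had to be stripped out before release.

```js
// Early approach: mock.js bundled into the page during development
// (must be removed before going live). URL and fields are illustrative.
const Mock = require('mockjs');

Mock.mock('/api/campaign/list', {
  'list|5': [{          // generate 5 rows
    'id|+1': 1,         // auto-increment id
    'name': '@ctitle',  // random title text
    'pv|1000-50000': 1, // random integer in range
  }],
  success: true,
});

// Any subsequent XHR to /api/campaign/list in the page now receives the fake data.
```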

To solve these problems, we developed the RAP interface management platform. The mock rules for each interface are maintained centrally on the platform, and mock data no longer needs to be stored locally. However, a rap.js file still had to be included in the local development environment to forward interface requests to the RAP platform to fetch mock data, and the project code had to check whether it was running in the local development environment, which means some redundant local-development helper code still ended up in production code.

Now, during local development, we forward interface requests to the RAP platform to fetch mock data through the RMX-CLI command-line tool, so the rap.js file is no longer needed for interception and forwarding at the front-end code level. We also provide a command to synchronize the interface information configured on the RAP platform directly into the project, so the interface list no longer needs to be maintained by hand in the project. Furthermore, since we now have both the RAP platform's interface definitions and the interface requests made during local development, we can verify interfaces locally, that is, effectively check whether the real interface data returned by the back end matches the interface definition on RAP.
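The local interface verification idea can be illustrated with a small field-level comparison between a sample response defined on RAP and the real response returned by the back end. The helper below is a generic sketch, not the actual RMX-CLI check.

```js
// Generic sketch of verifying a real response against a RAP sample response.
function collectKeys(obj, prefix = '', out = new Set()) {
  if (Array.isArray(obj) && obj.length) {
    collectKeys(obj[0], prefix, out);          // use the first array element as the row shape
  } else if (obj && typeof obj === 'object') {
    for (const k of Object.keys(obj)) {
      out.add(prefix + k);
      collectKeys(obj[k], prefix + k + '.', out);
    }
  }
  return out;
}

function verify(rapSample, realResponse) {
  const expected = collectKeys(rapSample);
  const actual = collectKeys(realResponse);
  const missing = [...expected].filter((k) => !actual.has(k));
  if (missing.length) {
    console.warn('fields defined on RAP but missing in the real response:', missing);
  } else {
    console.log('real response matches the RAP definition');
  }
}

// usage: verify(mockDataFromRap, jsonFromBackend)
```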

With this solution there is no redundant, intrusive code in the production environment. At the same time, the front end no longer has to maintain interface definitions: basically the back-end students record their interface information on the RAP platform, and RAP also takes on the role of interface documentation.

Component system Magix-Gallery

Our component library based on the Magix framework is magix-gallery. In a relatively large team, the component system is basically self-developed: only a self-developed component library can respond quickly to highly customized visual-interaction requirements on components, and it also helps unify the front-end interaction behavior of common components across product lines. By now our component library has accumulated more than 60 common and business components, basically covering most interaction types, and it provides two-way binding at the component level to improve the daily business development experience. In fact, once the component library became mature and stable, daily business development is basically assembling various components and adding some logic code, which gives us very high development efficiency for daily business.

We also integrated the RMX-CLI command-line tool with the component library to provide one-click synchronization of component code into a project, synchronization of a specific component version, and upgrade prompts on the command line when a new component version is available.

Because we have many product lines with different theme colors, our component platform provides a quick theme-color editing function: choose a main theme color and all the auxiliary colors are generated automatically; each auxiliary color can then be fine-tuned individually, and finally the theme can be downloaded locally and applied directly to the project.

Icon management Iconfont

Our team developed the icon management platform Iconfont, which everyone should be fairly familiar with.

It currently supports both monochrome webfont icons and multi-color SVG icons, and designers can upload SVG icons and adjust them online.

Now take a look at how our team's icon scheme has evolved over the years:

In the early days, if a page needed an icon, it was basically cut out of the PSD, and to reduce requests all the icons were stitched together into a sprite, with the position determining which icon was shown. This scheme was basically inflexible: size and color were fixed.

Later, we developed the Iconfont icon management platform based on webfonts. The advantage of a webfont is obvious: size and color can be set freely, and referencing it in HTML is very convenient. However, the biggest drawback of webfonts is that they can only be a single color, so we added an SVG mode to support multi-color icons.

Now, through the RMX-CLI command-line tool, we can automatically create the associated project on the Iconfont platform when initializing a project, add icons on the Iconfont platform during development, and synchronize the font configuration into the project with a single command. The CLI also provides a command to detect redundant icon fonts in the project, which mainly addresses the problem of invalid Iconfont glyphs accumulating as a project is maintained over a long time.
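Detecting redundant icons can be as simple as scanning the source tree for each icon class name. The sketch below assumes a .icon-xxx naming convention and an iconfont.css location; both are illustrative guesses rather than the real command's logic.

```js
// Rough sketch: report icon class names that no longer appear in the source tree.
const fs = require('fs');
const path = require('path');

// recursively read all source files under src/
function readAll(dir, out = []) {
  for (const name of fs.readdirSync(dir)) {
    const p = path.join(dir, name);
    if (fs.statSync(p).isDirectory()) readAll(p, out);
    else if (/\.(js|html|css)$/.test(name)) out.push(fs.readFileSync(p, 'utf8'));
  }
  return out;
}

const source = readAll(path.join(process.cwd(), 'src')).join('\n');
const iconCss = fs.readFileSync('src/assets/iconfont.css', 'utf8'); // assumed location
const icons = [...iconCss.matchAll(/\.icon-([\w-]+):before/g)].map((m) => m[1]);

const unused = icons.filter((name) => !source.includes(`icon-${name}`));
console.log('possibly unused icons:', unused);
```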

Chart system Chartpark

Take a look at Chartpark, our team's chart system, and why we developed our own chart library:

One reason is that many of our products' chart requirements are highly customized, and only our own chart system can respond to them quickly. In addition, many charts on our back-office management pages have rich interactive effects, and a chart library we develop ourselves makes it easier to build those interactions.

At the bottom is the Canvax library, which encapsulates Canvas Context2D and WebGL rendering. The mid-tier Chartx library is built on Canvax and encapsulates common business charts such as bar charts and pie charts. At the top is the Chartpark chart management platform, which centrally manages the chart configurations of each project.

This is our Chartpark chart management platform. After years of accumulation, there are now more than 100 official charts in the library, which basically cover the chart types commonly used in business.

Our original goal for this platform was to have a place where charts can be configured intuitively, so we can quickly meet interaction and visual designers' needs for chart customization. So we built the platform to centralize the configuration of all charts in a project for development and debugging, and only the chart's ID needs to be configured in the project. The platform provides online editing and previewing of charts; once a chart is configured online, the RMX-CLI command-line tool can synchronize the chart library configuration into the project with one click, without manual work. It is also possible to download the configured chart library directly and import it into the project.



Data tracking platform Dataplus

Some background on why this platform was developed. First, our product, operations, and interaction colleagues had a strong need to verify how the features they shipped were performing. Second, our group has a very mature tracking platform for traditional pages, SPM, but our work is mainly single-page applications, and SPM did not support UV and PV statistics for SPAs. So we developed our own Dataplus data station on top of SPM, mainly to solve the cases SPM could not cover for our projects, such as the automatic tracking and SPA PV/UV statistics just mentioned, and to produce valuable data reports on the platform. Based mainly on business data, we also stratify user data to help product managers better understand how users at different levels use each business product.

Take a look at the basic architecture of Dataplus:

The underlying tracking is supported by the group's SPM system; the data station then pulls the tracking data stored by SPM onto the platform for customized processing, and can provide site-wide overviews, page analysis, single-point analysis, and other functions for a business site. We also developed a companion Chrome extension: while visiting the business site you can directly view the PV, UV, or tracking data of a feature point on the page, and you can select a specific feature point on the page and add it directly to the data station platform.

Then there is the integration with the RMX-CLI command-line tool: during project initialization, the tool can directly create the related tracking information on the data station and write the configuration into the project. We have also built automatic tracking and configuration synchronization into the project's build-and-release command. So basically, for a project we develop, site-wide automatic tracking is achieved with zero configuration and zero manual operation.
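As a generic illustration of how automatic PV reporting for an SPA can work, the snippet below hooks history navigation and reports a page view on every route change. The report URL and payload are placeholders; this is not the SPM or Dataplus SDK.

```js
// Generic illustration of automatic SPA page-view reporting on route changes.
function reportPV() {
  const payload = { page: location.pathname + location.hash, t: Date.now() };
  navigator.sendBeacon('/log/pv', JSON.stringify(payload)); // placeholder endpoint
}

// hook pushState/replaceState so in-app navigation is counted as a PV
['pushState', 'replaceState'].forEach((method) => {
  const original = history[method];
  history[method] = function (...args) {
    const result = original.apply(this, args);
    reportPV();
    return result;
  };
});

window.addEventListener('popstate', reportPV);   // back/forward buttons
window.addEventListener('hashchange', reportPV); // hash-based routing
reportPV();                                      // initial page load
```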

Content management system ALP

Our team also developed a generic content management system, ALP, which is a TMS-like page publishing system.

The background for ALP is mainly to respond to our department's increasingly complex and fast-changing business scenarios and to let products launch quickly. For example, our front end can quickly develop and publish a daily activity page based on ALP, and give operations a promotion page that is as simple to edit as writing a Word document and can be published directly to production to complete a promotional campaign.

For our back-office management business, a commonly used feature is the JSONP interface management module. Things like a floating promo layer in a project, or purely static display copy and images, are content that changes frequently, and we can configure them on the ALP platform in the form of JSONP interfaces. Giving operations a visual way to change this content at any time saves the front end a lot of unnecessary repetitive work.
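On the page side, JSONP-based content configuration works roughly like this; the ALP URL and callback name below are placeholders.

```js
// Generic JSONP loader for operator-editable content (URL and callback name are placeholders).
function loadJsonp(url, callbackName) {
  return new Promise((resolve, reject) => {
    window[callbackName] = (data) => {
      resolve(data);
      delete window[callbackName];
      script.remove();
    };
    const script = document.createElement('script');
    script.src = `${url}?callback=${callbackName}`;
    script.onerror = reject;
    document.head.appendChild(script);
  });
}

// e.g. a promotional floating layer whose copy and images operations can change at any time
loadJsonp('https://alp.example.com/content/12345.js', 'alpCallback').then((content) => {
  console.log(content.title, content.bannerImage);
});
```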

Fether Serverless system

Our team is also experimenting with a self-developed Serverless system: Fether.

For example, our team has many self-developed service platforms, such as the data station platform, the Chartpark chart platform, and the RAP interface platform. All of these used to require us to set up, operate, and maintain server resources ourselves, which front-end developers are not particularly good at, and which takes a lot of effort. So colleagues on the team built our own Serverless platform on top of the group's Alibaba Cloud computing capabilities. By now, the platforms developed by our team have gradually been migrated onto Fether. The most obvious advantage Serverless brings to the front end is freedom from operations: there is no need to care about server load, only about your own business logic. On top of the platform we also provide unified monitoring and statistics capabilities.

Build and release

Our major projects are all based on the Magix framework, so builds are handled by magix-combine, which is an offline processing package aimed at performance: it turns templates into functions ahead of time to reduce on-the-fly compilation online, provides scoped style isolation, supports TypeScript and ES6, does basic code error detection, and more.

Then comes the release stage. We integrated DEF, the group's cloud build platform, into the CLI. The advantages of cloud builds are: first, the built output directory is no longer needed in the project repository; second, the build environment is consistent across different developers; and third, all branch management and release records can be traced and viewed on the platform. We also built one-click commands for the daily and online environments into the CLI; they contain the logic for merging trunk code, committing code, SPM tracking, and the DEF cloud build, basically eliminating all the repetitive manual work.
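The one-click command essentially chains these steps together. In the sketch below the git commands are real, while the tracking sync and the DEF cloud-build trigger are placeholders for internal platform calls.

```js
// Rough sketch of a one-click release command.
const { execSync } = require('child_process');

function run(cmd) {
  console.log('> ' + cmd);
  execSync(cmd, { stdio: 'inherit' });
}

function getCurrentBranch() {
  return execSync('git rev-parse --abbrev-ref HEAD').toString().trim();
}

// placeholders for steps that talk to internal platforms
function syncTracking() { console.log('sync SPM/Dataplus tracking config (placeholder)'); }
function triggerCloudBuild(env, branch) { console.log(`ask DEF to cloud-build ${branch} for ${env} (placeholder)`); }

function release(env) {
  const branch = getCurrentBranch();   // the branch being released
  run('git fetch origin');
  run('git merge origin/master');      // merge trunk code into the branch first
  syncTracking();                      // placeholder: update burying-point config
  run(`git push origin ${branch}`);    // push; the cloud platform builds from the branch
  triggerCloudBuild(env, branch);      // placeholder: trigger the cloud build and publish
}

release(process.argv[2] || 'daily');   // e.g. `node release.js daily`
```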

Conclusion

Our team does not have a dedicated architecture group, so we basically discover problems, pinpoint them, and solve them within our own business, and in that process the various pieces of front-end infrastructure are deposited naturally; team members find the front-end areas that suit them and dig deep. Then, as the infrastructure in each area gradually matures, the team inevitably takes the road of front-end engineering. In fact, engineering is about connecting the excellent infrastructure scattered across individual points into a complete, systematic whole that suits the team. Finally, well-developed front-end engineering will definitely feed back into our business and greatly improve our development efficiency and experience.

Front-end development today, although much more mature than in earlier eras, still has many imperfections when compared horizontally with other ecosystems, such as the back-end Java world. So this is far from the end; in the long run, there are still many front-end areas worth exploring.

And now it's time for our ad. Yes, we're hiring. We are the front end of Alimama group 2; our main business is merchant-facing back-office management sites. We don't have an independent architecture group; basically everyone is responsible for infrastructure development in some area. The technology stack is quite broad at the moment, and there is a lot of room to go deeper. In addition, our team is very stable and has not changed much for many years, making it suitable for long-term growth, and the atmosphere is relatively relaxed. We are part of Alibaba proper, so benefits follow the group and are relatively good. Last but not least, Alimama is not well known, but it is Alibaba's most profitable department, so you know what that means. Interested students can scan my DingTalk QR code to contact me.

The third session, themed "front-end infrastructure", goes live online on 2020.3.28. Scan the code and follow the "Front-End Early Chat" public account to register: