GitHub: github.com/midwayjs/mi… The project is open source to give back to front-end and Node.js development. Please try it out on GitHub, and a Star would be much appreciated ~ 🙇♂️ thank you ~

In the previous article, many readers had questions about the 50% figure. As a follow-up, this article gives an answer and a summary.

Since last year, Alibaba's front-end teams and several other teams in the group have been working together on a "secret" mission: adopting Serverless, a new generation of R&D architecture, in the hope of significantly reducing the infrastructure and operations burden on developers.

Why Midway Serverless?

Midway's traditional full-stack web framework addresses problems similar to those of EggJS and NestJS. These frameworks are widely used by front-end teams to build business systems ranging from mid/back-office consoles to mobile backends, and Alibaba Group is no exception: there are many Node.js applications, and these systems share one trait: the CPU usage of most of their servers is very low, which is undoubtedly a huge waste of resources.

This routine waste of resources, combined with the geometric growth in the number of applications, is a constant headache for the people responsible for application governance. Last year's push to apply the Serverless architecture to real business within the group showed the front end a way forward. Because of this, evolving the group's Midway system became imperative.

Serverless and FaaS

FaaS is the form of the Serverless architecture that Midway set out to address first. Before V1.0 we had already invested heavily in FaaS, but the Serverless architecture is vast and FaaS is only a small part of it. The event-driven model evolved out of microservices, which are small functional blocks focused on a single responsibility; today's even more "fragmented" code paradigm uses a smaller programming unit than microservices and gives business code a flexibility they cannot match.

According to Forbes, typical servers in commercial and enterprise data centers deliver only 5 to 15 percent of their maximum processing capacity on average, a huge waste of resources. With the advent of the Serverless architecture, letting the service provider supply computing power that closely tracks our real-time needs allows computing resources to be used far more efficiently.

Today Alibaba uses FaaS as the landing container for business logic, hoping to further shrink container size and cut cost. Machine cost within the group is currently calculated per CPU core. Taking a 4C8G (4-core, 8 GB) machine as an example, a mid/back-office application needs at least 2 such machines, whereas with FaaS the footprint can drop to 1 core or even 0.5 core, a considerable reduction.
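To make that concrete, here is a quick back-of-the-envelope calculation using the figures above (a sketch only; real billing models differ by platform):

```typescript
// Back-of-the-envelope: reserved CPU before and after, using the figures
// quoted above (two 4C8G machines vs. a single 0.5-core function).
const traditionalCores = 2 * 4;                     // two 4-core machines
const faasCores = 0.5;                              // one 0.5-core function
const reduction = 1 - faasCores / traditionalCores; // ≈ 0.94
console.log(`~${Math.round(reduction * 100)}% fewer reserved cores`);
```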

Why the front end?

Under the trend of a large middle platform and a small front office, the front end is the team closest to users and full of vitality. Front-end engineers have long hoped for a chance to escape the predicament of being treated as mere "resources", and they have a broader, clearer ambition to expand the functions and boundaries of their work. This keeps extending the scope of the front end, from the client side to the server side and on to intelligence, all of which reflects that expansion of responsibilities.

For front-end developers, Node.js provides the ability to expand that territory. Since front-end and back-end development were split apart, Node.js has become the standard path for front-end engineers to grow from the client side toward full stack. In practice most front-end teams are indeed moving in that direction, but there is constant tension between serving the business and pursuing their own technical autonomy. The two are not easy to balance, and over time this takes a toll on how far the business can scale.


The emergence of Serverless gives the front end exactly this opportunity: to strip away most of the ops workload and focus more on the business itself. Meanwhile, with less code overall, a lighter-weight development model, and stronger deployment platforms, the cost of scaling the business keeps falling.

Serverless has been called the front end's 3.0 era, and for good reason. Node.js is widely recognized for being lightweight and fast. In the Serverless era, fast container scheduling and fast code startup are crucial metrics, and Node.js has a clear advantage here. At the same time, the front end's habit of rapid trial and error, its closeness to the business, and the backing of Alibaba's front-end committee have let Alibaba's front end shine in the Serverless era.

Why 50%?

In our view, this number is about more than Serverless itself. It breaks down into two aspects: the cost of scaling and the speed of delivery.

Reduce the cost of scaling

The first is server cost.

On the container side, the simple calculation above already tells the story. Moving from traditional containers to functions, container resources evolve from fixed specifications to much finer-grained ones, which better matches the demands of each scenario. After a year of tracking, the machine cost of mid/back-office applications can be cut by more than 70%, and real mobile-facing businesses have reached roughly 30%.

The second is the cost of governance.

The bigger the company, the heavier the historical baggage: even this year, Alibaba Group still has code running on Node.js v6 and v4. Keeping Node.js versions, frameworks, and libraries up to date costs at least a few months of work every year, if not longer.

Today the function runtime is written by the front end itself, and the Node.js version, framework, and even middleware that need governing can be embedded into it. This requires the ability to customize the entire runtime and make it general-purpose.
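As an illustration only (this is not Midway's actual runtime API), a customizable runtime layer might look roughly like the sketch below: the governed pieces are composed around the user's handler, so upgrading them is a runtime release rather than a business-code change.

```typescript
// Illustrative sketch of a customizable function runtime (hypothetical API).
type Handler = (event: unknown) => Promise<unknown>;
type Middleware = (event: unknown, next: () => Promise<unknown>) => Promise<unknown>;

// Governed concerns (logging, tracing, framework shims) are baked into the
// runtime; business code only ever sees the final composed handler.
function createRuntime(governed: Middleware[], handler: Handler): Handler {
  return async (event) => {
    const invoke = governed.reduceRight<() => Promise<unknown>>(
      (next, mw) => () => mw(event, next),
      () => handler(event),
    );
    return invoke();
  };
}
```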

The group runs several different function services, each with its own infrastructure and gateway services. Today the Taobao-line front end can deploy a single codebase to these different platforms, thanks to the multi-platform adaptation layer underneath Midway Serverless. The same anti-corruption layer also smooths out platform differences in the community.

For each platform, Midway Serverless provides a different runtime starter that smooths over platform differences and normalizes the platform's entry and exit parameters, event structures, and gateway return formats, so that users are as unaware as possible of differences in the underlying containers and protocols.
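A minimal sketch of what such a starter could look like, assuming a hypothetical HTTP-gateway event shape (the field names here are illustrative, not the exact contract of any platform):

```typescript
// Hypothetical gateway event and the normalized request/response that the
// business handler actually sees.
interface GatewayEvent { httpMethod: string; path: string; headers?: Record<string, string>; body?: string; }
interface NormalizedRequest { method: string; path: string; headers: Record<string, string>; body?: string; }
interface NormalizedResponse { statusCode: number; headers: Record<string, string>; body: string; }

type UserHandler = (req: NormalizedRequest) => Promise<NormalizedResponse>;

// The starter owns both directions: normalize the incoming event, then shape
// the return value the way this particular gateway expects it.
export function httpGatewayStarter(handler: UserHandler) {
  return async (event: GatewayEvent) => {
    const req: NormalizedRequest = {
      method: event.httpMethod,
      path: event.path,
      headers: event.headers ?? {},
      body: event.body,
    };
    const res = await handler(req);
    return {
      isBase64Encoded: false,
      statusCode: res.statusCode,
      headers: res.headers,
      body: res.body,
    };
  };
}
```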


With this solution, the group deploys one codebase onto different function services to serve different protocols. The same applies in the community: our open-source solution works across multiple platforms, such as Alibaba Cloud and Tencent Cloud today, and AWS Lambda and Azure in the future.

With this anti-corruption layer and runtime customization in place, runtime upgrades become simple: version-alignment work that used to take a traditional application half a year can now be finished in about a week.

For example, when a vulnerability was found in the connection-protocol library at the bottom of one platform, we did a few things after receiving the security report. First, we pulled data from the platform to identify the range of affected functions, reported it to the security team, emailed all affected businesses, and advised them not to publish proactively within a certain window so that a unified automatic update could be applied by default. Second, we carried out a rolling update during the low-traffic period and told the businesses to watch and test in time. Through this process, the whole security update was handled in a very short time, which would have been almost impossible with traditional applications.


The third is the cost of production safety, which is in heavy demand within the group but probably matters less to small and medium-sized companies, so I will not elaborate here. Through the control and governance of these three areas, the group's cost of scaling its business drops rapidly under the Serverless architecture.

Delivery speed

Besides the cost of scaling, the other piece is business delivery. Both the mobile scenarios and the mid/back-office scenarios that the front end faces demand rapid delivery, and today the front end is still the bottleneck in R&D. Once Serverless is adopted, the original complex process can no longer keep up with these demands.

As we shared last year, the front end built its own development workflow and platform for testing, gray release, and rollback in these new scenarios. Across the whole process, there are fewer steps than before, and each one is more focused.


On the other hand, overall R&D efficiency has also improved greatly.

Front-end development efficiency benefits from bringing front-end and back-end work together, and from the speed that integrated development and delivery brings.

Traditionally, front-end work lives in one repository and the Node.js side in another, and the release processes are separate as well. For the Serverless scenario, Midway Serverless designed an integrated development and release solution that lets the front end develop the business in a single repository and release it through a single process. Engineers who maintain more than one business will feel the difference most.

Beyond integrated development, debugging, and deployment, from a code perspective the original coding habits are preserved and there is no need to learn a new set of programming APIs. In addition to its TypeScript and decorator-based coding style, Midway Serverless also offers migration solutions for traditional Egg applications, which have been battle-tested in applications across multiple BUs with good results.
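For a feel of the decorator-based style, here is a minimal HTTP-triggered function in the spirit of Midway Serverless's TypeScript approach (treat the exact decorator and package names as approximate; check the current docs for the real API):

```typescript
// A minimal function in Midway's decorator style (names approximate).
import { Provide, Inject, Func } from '@midwayjs/decorator';
import { Context } from '@midwayjs/faas';

@Provide()
export class HelloService {
  @Inject()
  ctx: Context;

  @Func('index.handler')
  async handler() {
    // Business code stays plain TypeScript; the platform wiring lives
    // in decorators and configuration.
    return `hello, ${this.ctx.query.name ?? 'world'}`;
  }
}
```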

After a year of collecting platform statistics and interviewing business developers, we found that the new R&D model improves the overall delivery efficiency of the business to a meaningful degree, and the improvement is across the board.

For example, a traditional business requirement needs back-end engineers to step in and do joint debugging, whereas under the new R&D model the code gets written faster. Although the workload per person increases, the overall delivery time, the number of people involved, and the cost of joint debugging all drop significantly.


Besides business-sensitive delivery data, we also measured the overall amount of code developed, the frequency of code delivery, and the iteration and release of requirements. After a year of tracking and calculation across businesses, we concluded that the overall improvement in front-end productivity is about 48%. The core formula involves a lot of internal data, so unfortunately we cannot share it.

The disadvantages of Serverless

Every coin has two sides. The advantages of Serverless are of course great, but it is new after all, and when it lands inside an enterprise it inevitably runs into problems.

First, the infrastructure is still lacking. For traditional applications, capabilities such as the various clients, log delivery, and full-link tracing are very mature, but for functions these new pieces still need time to settle. Combined with the effects of elastic containers, the whole link is still new and needs time to prove its stability and reliability.

Second, business engineers' mental model is still stuck at the level of traditional applications. They do not yet deeply understand how functions run or how events trigger behavior, partly because the framework hides so much of it.

All of this needs to be polished over time. I believe that with continued practice, the whole system will keep getting better.


In closing

As you can see, the way the 50% figure is calculated makes it a fairly rough, subjective number, but Serverless has genuinely shown its appeal and value.

Finally, here's to the release of Midway Serverless V1.0. Through the Midway Serverless system, we are gradually opening up Alibaba's Serverless capabilities. We hope the whole front-end community will find new ideas in it, take on bigger business responsibilities, and step into a new era.