Introduction: SAE’s five new features and four best practices break the boundaries of Serverless and make All on Serverless possible.

In microservice scenarios, is building on open source yourself really the fastest, cheapest, and most stable path? Is complexity really Kubernetes’ Achilles heel? Must enterprises cross the single-plank bridge of K8s to containerize their applications? Is Serverless only suited to non-core scenarios with simple logic, such as mini programs, ETL, and scheduled backups, and still a long way from Java microservices?

At the 2021 Cloud Computing Conference, Ding Yu (Shutong), Alibaba researcher and general manager of the Alibaba Cloud Intelligence cloud-native application platform, announced the new positioning and five new features of Serverless App Engine (SAE), and answered the questions above.

From special-purpose to general-purpose, SAE is a natural fit for large-scale adoption in enterprise core business

Unlike FaaS-style Serverless, **SAE is application-centric: it provides an application-oriented UI and API and does not change the application programming model or deployment method.** It keeps the development and deployment experience customers are used to on traditional servers and still supports local development, debugging, and monitoring. This greatly lowers the barrier to adopting Serverless and allows enterprises to migrate online applications smoothly with zero modification.

Because of this, SAE takes Serverless from special-purpose to general-purpose and breaks through its adoption boundary: Serverless is no longer reserved for front-end full-stack developers and mini programs. Back-end microservices, SaaS services, and IoT applications can also be built on Serverless, which makes it naturally suited to large-scale adoption in enterprise core business.

From complex to simple, SAE is a natural fit for zero-barrier enterprise containerization

SAE provides a full set of out-of-the-box microservice governance capabilities proven in Double 11. Customers no longer need to worry about framework selection, data isolation, distributed transactions, circuit-breaker design, throttling and degradation, and so on, nor about limited community maintenance or secondary custom development; Spring Cloud and Dubbo applications migrate seamlessly with zero code changes. On top of open source, SAE also adds advanced capabilities such as lossless instance online/offline, service authentication, and full-link grayscale release.
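
As a rough illustration of what “zero transformation” means here, below is a minimal sketch of an ordinary Spring Cloud service, assuming Spring Boot and a Spring Cloud discovery client (for example Nacos) are on the classpath; the package, class, and endpoint names are made up. Nothing in the code refers to SAE: this is the kind of unmodified service SAE claims to host as-is.

```java
// Minimal sketch of an ordinary Spring Cloud service; names are illustrative.
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@EnableDiscoveryClient  // registers the instance with whatever registry is configured
@RestController
public class OrderServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }

    // An ordinary business endpoint; no SAE-specific annotation or SDK is required.
    @GetMapping("/orders/ping")
    public String ping() {
        return "ok";
    }
}
```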

SAE also shields users from the technical details of K8s, so enterprise applications can be containerized with zero barriers and embrace K8s without even noticing it. SAE builds images automatically and, besides images, also accepts WAR/JAR packages and PHP ZIP packages, lowering the barrier of producing Docker images for customers. It removes the complex adaptation of K8s network and storage plug-ins: each application instance is assigned an IP address reachable within the VPC, and data is persisted to the storage system. It shields the operation, maintenance, and upgrade of K8s itself, so customers no longer worry about stability risks from K8s version upgrades. It also hides the work of integrating monitoring components and elasticity controllers, providing end-to-end observability through a graphical console and flexible elasticity policy configuration. Users keep their original packaging and deployment workflow while directly enjoying the technical dividends of K8s.

Five new features highlight new advantages of Serverless and extend the boundaries of Serverless

  • **Elasticity 2.0:** The industry’s first hybrid elasticity policy, which combines scheduled (timed) policies with metric-based policies. On top of the open-source K8s metrics, elasticity can be triggered by business metrics such as TCP connection count and SLB QPS/RT, and advanced settings such as the scale-out step size and cooldown time can be configured (an illustrative sketch follows this list).
  • **Java cold start 40% faster:** Based on the enhanced AppCDS startup acceleration in Alibaba Dragonwell 11, the first application start generates and saves a cache, and subsequent starts launch the application directly from that cache. Compared with standard OpenJDK, cold start is 40% faster (the standard AppCDS workflow is sketched after this list).
  • **Extreme deployment efficiency of 15 s:** Based on a full-link upgrade of the underlying stack, Secure Sandbox 2.0, image acceleration, and more, SAE provides an end-to-end deployment experience of 15 seconds.
  • **One-stop PHP application hosting:** PHP ZIP packages can be deployed to SAE directly, with a choice of PHP runtime environments and built-in application monitoring, for a one-stop PHP hosting experience.
  • **A richer developer tool chain:** In addition to Cloud Toolkit, the CLI, VS Code, and other developer tools, SAE now supports Terraform and Serverless Devs; with resource orchestration, an SAE application and the cloud resources it depends on can be deployed with one click, making environment setup easier.
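
To make the “hybrid” in Elasticity 2.0 concrete, the following is an illustrative sketch only, not SAE’s actual API or rule format (SAE configures these policies through its console and OpenAPI). It combines a scheduled baseline with a metric-driven target and bounds scale-out by a step size; all numbers are hypothetical.

```java
// Illustrative sketch of a hybrid (timed + metric) scaling decision; not SAE code.
// A cooldown between evaluations is omitted for brevity.
import java.time.LocalTime;

public class HybridScalingSketch {

    static final int SCHEDULED_MIN_REPLICAS = 10;      // timed policy: keep >= 10 instances 09:00-21:00
    static final double TARGET_QPS_PER_REPLICA = 200;  // metric policy: target per-instance QPS
    static final int MAX_SCALE_OUT_STEP = 5;           // add at most 5 instances per evaluation
    static final int MAX_REPLICAS = 50;                // hard upper bound

    static int desiredReplicas(LocalTime now, double totalQps, int currentReplicas) {
        // Metric-driven target: enough replicas to keep per-instance QPS near the target.
        int metricTarget = (int) Math.ceil(totalQps / TARGET_QPS_PER_REPLICA);

        // Timed baseline applies only inside the scheduled window.
        boolean inWindow = !now.isBefore(LocalTime.of(9, 0)) && now.isBefore(LocalTime.of(21, 0));
        int scheduledFloor = inWindow ? SCHEDULED_MIN_REPLICAS : 1;

        // Hybrid policy: take the larger of the two targets, then respect step size and cap.
        int target = Math.max(metricTarget, scheduledFloor);
        target = Math.min(target, currentReplicas + MAX_SCALE_OUT_STEP);
        return Math.min(target, MAX_REPLICAS);
    }

    public static void main(String[] args) {
        // 3000 QPS at 10:30 with 10 running instances -> 15 desired instances.
        System.out.println(desiredReplicas(LocalTime.of(10, 30), 3000, 10));
    }
}
```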
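
For context on the cold-start claim, the sketch below walks through the standard OpenJDK 11 AppCDS workflow that Dragonwell’s enhanced AppCDS builds on. SAE automates these steps behind the scenes; the file and class names are illustrative, and this does not show Dragonwell’s own internals.

```java
// Standard OpenJDK 11 AppCDS workflow, shown as commented JVM invocations against the
// trivial application class below (file and class names are illustrative):
//
//   Step 1 - trial run, record which classes the application loads:
//     java -XX:DumpLoadedClassList=app.lst -cp app.jar AppCdsDemo
//   Step 2 - dump the recorded classes into a shared archive:
//     java -Xshare:dump -XX:SharedClassListFile=app.lst -XX:SharedArchiveFile=app.jsa -cp app.jar
//   Step 3 - later cold starts map the archive instead of re-loading and re-verifying classes:
//     java -Xshare:on -XX:SharedArchiveFile=app.jsa -cp app.jar AppCdsDemo

public class AppCdsDemo {
    public static void main(String[] args) {
        // Comparing startup time before and after step 3 shows the effect of the archive.
        System.out.println("JVM started at " + java.lang.management.ManagementFactory
                .getRuntimeMXBean().getStartTime());
    }
}
```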

Four best practices, exemplars of All on Serverless

Low-barrier microservice architecture transformation

Faster, cheaper, and more stable than self-built open-source microservices. As business grows rapidly, many enterprises struggle with the transformation from a monolith to a microservice architecture, or find that self-built microservices cannot meet their requirements for stability and diversification. SAE’s full set of out-of-the-box microservice capabilities reduces learning and R&D costs, and its Double 11-proven stability lets these enterprises quickly complete the microservice transformation and support the rapid launch of new business. This is also SAE’s most widely used scenario, and arguably the best practice of Serverless for microservices.

One-click start and stop of dev/test environments

Medium and large enterprises run multiple environments, and the development, testing, and staging (pre-release) environments are rarely used around the clock. Keeping application instances running long-term causes significant idle waste; in some enterprises CPU utilization is close to zero, so the need to cut costs is obvious. SAE’s one-click start/stop capability lets these companies release resources on demand, saving about two-thirds of the machine cost in the dev/test environments alone. Next, SAE will also use K8s orchestration to model dependencies between applications and resources, so that a full environment can be initialized or cloned with one click.

Full-link grayscale release

More grayscale capability than the open-source K8s Ingress. Drawing on customers’ scenarios at the PaaS layer, SAE not only provides the layer-7 traffic grayscale of K8s Ingress, but also full-link grayscale from front-end traffic through multiple cascaded microservices, down to the interface and method level.

Compared with the previous approach, deployment and operations are also more convenient. In the past, customers had to deploy their applications in two namespaces and maintain two complete environments, one for production release and one for grayscale release, which meant high hardware cost and cumbersome deployment and maintenance. On SAE, customers deploy only one environment and configure a few grayscale rules to route the designated traffic to the designated instances; the gray tag cascades through each layer of the call chain, which both controls the blast radius and saves hardware cost. The sketch below illustrates the idea.
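
The following is an illustrative sketch of the idea behind full-link grayscale, not SAE’s rule format or SDK (SAE’s rules are configured in the console): a gray tag carried in a request header selects grayscale instances at one hop and is forwarded downstream so the same tag follows the whole chain. The header name, tag value, and metadata key are hypothetical.

```java
// Illustrative sketch of header-tag-based gray routing at one hop of the call chain.
import java.util.List;
import java.util.Map;

public class GrayRoutingSketch {

    static class ServiceInstance {
        final String address;
        final Map<String, String> metadata;  // e.g. {"version": "gray"} on grayscale instances
        ServiceInstance(String address, Map<String, String> metadata) {
            this.address = address;
            this.metadata = metadata;
        }
    }

    static final String GRAY_HEADER = "x-gray-tag";  // hypothetical tag header

    // Rule: requests tagged "beta" on this interface/method go to gray instances;
    // all other traffic stays on the baseline instances.
    static ServiceInstance choose(Map<String, String> headers, List<ServiceInstance> candidates) {
        boolean wantGray = "beta".equals(headers.get(GRAY_HEADER));
        return candidates.stream()
                .filter(i -> wantGray == "gray".equals(i.metadata.get("version")))
                .findFirst()
                .orElse(candidates.get(0));  // fall back if no instance matches the group
    }

    public static void main(String[] args) {
        List<ServiceInstance> pool = List.of(
                new ServiceInstance("10.0.0.1:8080", Map.of("version", "base")),
                new ServiceInstance("10.0.0.2:8080", Map.of("version", "gray")));
        // A tagged request is routed to the grayscale instance; the caller then forwards
        // the same x-gray-tag header to its own downstream calls, keeping the full link gray.
        System.out.println(choose(Map.of(GRAY_HEADER, "beta"), pool).address);  // 10.0.0.2:8080
    }
}
```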

Using SAE as an elastic resource pool to optimize resource utilization

Most customers run entirely on SAE, while a few adopt a hybrid deployment: the steady-state portion of a business stays on ECS, and SAE serves as the elastic resource pool.

Customers only need to attach the ECS instances and SAE instances of the same application to the same SLB backend and set the weight ratio; microservice applications must also register with the same registry. In addition, the customer’s own release system is reused so that SAE and ECS instances stay on the same version with each release, and the customer’s self-built monitoring system is reused by pushing SAE monitoring data to it through the OpenAPI and normalizing it against the ECS monitoring data. When a traffic peak arrives, the elastic portion of instances is scaled out onto SAE, greatly improving scale-out efficiency and reducing cost.
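
As a minimal sketch of the “same registry” requirement, the example below uses the open-source Nacos Java client to register an ECS-hosted instance and an SAE-hosted instance of the same service into one registry, so consumers see a single weighted pool. The registry address, service name, IPs, weights, and metadata are hypothetical, and in practice SAE registers its instances automatically.

```java
// Minimal sketch: one shared Nacos registry spanning ECS and SAE instances.
import com.alibaba.nacos.api.exception.NacosException;
import com.alibaba.nacos.api.naming.NamingFactory;
import com.alibaba.nacos.api.naming.NamingService;
import com.alibaba.nacos.api.naming.pojo.Instance;

import java.util.Map;

public class MixedPoolRegistration {

    public static void main(String[] args) throws NacosException {
        NamingService naming = NamingFactory.createNamingService("nacos.example.internal:8848");

        // Steady-state instance running on ECS.
        Instance ecs = new Instance();
        ecs.setIp("10.0.1.11");
        ecs.setPort(8080);
        ecs.setWeight(70);                       // carries the baseline share of traffic
        ecs.setMetadata(Map.of("pool", "ecs"));
        naming.registerInstance("order-service", ecs);

        // Elastic instance scaled out on SAE (registered automatically by SAE in practice).
        Instance sae = new Instance();
        sae.setIp("10.0.2.21");
        sae.setPort(8080);
        sae.setWeight(30);                       // absorbs the burst traffic
        sae.setMetadata(Map.of("pool", "sae"));
        naming.registerInstance("order-service", sae);

        // A consumer resolving "order-service" now sees one pool spanning both platforms.
        System.out.println(naming.selectInstances("order-service", true));
    }
}
```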

This hybrid approach also works well as an intermediate step when transitioning from ECS to SAE, further improving the stability of the migration process.

SAE’s five new features and four best practices break the boundary of Serverless and make All on Serverless possible. Applications are containerized with zero barriers, container, Serverless, and PaaS are combined into one, and advanced technology, optimized resource utilization, and an unchanged development and operations experience come together.

This article is original content of Alibaba Cloud and may not be reproduced without permission.