It’s been a while since Kubernetes 1.14 was officially released, so you’ve probably already seen plenty of commentary on it from various sources.

Still, beyond the contents of any single code release, it is fascinating to watch how the Kubernetes project, which is about to celebrate its fifth birthday, continues to evolve. As a mature open source ecosystem, Kubernetes ships an official release every three months, and each release is a solid footprint in the rapid development of the technology community.

If the “democratization of open source infrastructure” through “ever-increasing pluggability and extensibility” was the core theme of Kubernetes in 2017-2018, what direction will the technology community take in 2019?

In this article, we’ll explore the technical substance behind this question, starting from the Kubernetes 1.14 release.

The Windows ecosystem becomes a first-class citizen of the Kubernetes project

Kubernetes support for the Windows ecosystem has been on the agenda almost since the project began. However, for an infrastructure project originally built on a pure Linux technology stack, support for Windows nodes and Windows containers only made substantial progress once the project’s plug-in and extensibility mechanisms gradually matured after release 1.6. This is easy to understand: the Windows architecture differs fundamentally from the mainstream container stack, so the Kubernetes project had to provide a higher level of abstraction and extensibility to support two very different stacks and connect them to existing ecosystem components such as CNI and CSI. The complexity and sheer volume of this work is also the main reason why production-grade Windows node support slipped from 1.13 to 1.14.

In the 1.14 release, most of Kubernetes’ core capabilities, including Pods, Services, application orchestration, and CNI networking, are supported on Windows nodes. Many advanced features are also implemented on Windows, including custom monitoring metrics, horizontal scaling, preemption, and priority scheduling. The features that remain unsupported are essentially those whose semantics are not yet available on Windows, such as Host Network and other resource and permission primitives that are specific to the Linux kernel. As you can see, Windows node and Windows container support in this Kubernetes release is a huge improvement over its predecessors; the level of completeness is very high and truly lives up to the “GA” label attached to the release. Public cloud providers at home and abroad, such as Aliyun Container Service (ACK), have also recently launched Windows container support, providing unified management of mixed Linux/Windows application deployments and further proving the production readiness of this release.
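To make this concrete, here is a minimal sketch of scheduling a workload onto a Windows node. The Deployment name and IIS image are illustrative, but the kubernetes.io/os node label (stabilized in 1.14) is the standard way to steer Pods to Windows nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-sample            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iis-sample
  template:
    metadata:
      labels:
        app: iis-sample
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # schedule onto Windows nodes only
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis
        ports:
        - containerPort: 80
```

Aside from the node selector and the Windows container image, the manifest is identical to what you would write for a Linux workload, which is exactly the point of first-class support.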

It is not hard to see that public cloud providers (such as the Microsoft cloud team behind the Windows GA support), as one of the main driving forces in the CNCF community, have been playing a huge role in the whole cloud native ecosystem, gradually bringing real enterprise needs like Windows support into a fast-growing infrastructure project that was originally centered entirely on the Linux stack. In the future, this kind of input from public cloud providers will continue to play a critical role in the Kubernetes project, as public clouds become an important channel through which more enterprise users benefit from the cloud native ecosystem. This will remain the biggest difference between the Kubernetes project and other open source infrastructure projects.

Kubernetes’ native application management capabilities are emerging

For a long time, application management in Kubernetes was handled by third-party projects such as Helm or by PaaS layers built on top of it. Starting with 1.14, however, the Kubernetes project itself has begun to gain native application management capabilities, the most important of which is Kustomize.

Unlike Helm, which provides application description file templates and then performs customization through templating, Kustomize lets users start from a base application description file (a YAML file) and then use overlay files to generate the description files ultimately needed to deploy the application.

Meanwhile, other users can build on any base YAML, or on the YAML generated by any layer, without affecting one another. This lets every user manage a huge number of application description files through a Git-style fork/modify/rebase workflow. This patch-based idea is quite similar to Docker image layers: it avoids the intrusiveness of “string substitution” in application description files and does not require users to learn an additional DSL (such as Lua).
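As a minimal sketch of this base/overlay workflow (the directory layout and names are hypothetical), a production overlay patches only the fields it cares about on top of an untouched base:

```yaml
# base/kustomization.yaml
resources:
- deployment.yaml

# overlays/production/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- replica-patch.yaml

# overlays/production/replica-patch.yaml -- declares only the fields to change
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # must match the name in the base
spec:
  replicas: 5
```

The base stays untouched, so any number of overlays (staging, production, a colleague’s fork) can layer their own patches on the same base without conflicting.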

More importantly, the PATCH idea described above is a perfect match for the declarative API that the Kubernetes project emphasizes, so the whole user experience is completely consistent with the Kubernetes API itself, with no sense of fragmentation (it is worth thinking about why PATCH is the essence of a declarative API).

In the 1.14 release, Kustomize has become a built-in kubectl command, making it possible for users to manage, modify, and deploy massive numbers of applications in the cloud directly through Kubernetes’ declarative API. Moreover, kubectl’s plug-in mechanism was also greatly improved in 1.14, giving kubectl, combined with various client-side plug-ins, the potential to become a full application management tool. With this, the Kubernetes project’s definition of applications and application management has become clear. We can briefly describe it with the following schematic diagram:

[Figure: Kubernetes’ native application management model, centered on application description files]
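As a quick illustration of the built-in support, the overlay from the earlier sketch can be rendered and applied directly with kubectl, and the improved plug-in mechanism discovers client-side extensions automatically:

```console
# Render the final manifests from a kustomize overlay without applying them
kubectl kustomize overlays/production

# Build and apply the overlay in one step (new in 1.14)
kubectl apply -k overlays/production

# The plug-in mechanism: any executable named kubectl-* on the PATH becomes
# a subcommand, e.g. /usr/local/bin/kubectl-hello runs as `kubectl hello`
kubectl plugin list
```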
Application description files (YAML files) are at the heart of Kubernetes’ native application management system. An application description file is actually a combination of several Kubernetes API objects, and it defines the resource orchestration and service orchestration required to deploy the application. Once such a description file is submitted to Kubernetes, the system uses the controller pattern to ensure that the actual state of the cluster matches what the description file defines.

These description files are produced either by upper-layer frameworks or by users directly. More importantly, all operations on an application should go through the declarative API, as Create, Patch, and Delete operations on these files, which trigger Kubernetes’ controller pattern to carry out the predefined orchestration actions.
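A minimal sketch of this declarative flow (app.yaml and my-app are hypothetical names):

```console
# Create: submit the desired state described in the file
kubectl apply -f app.yaml

# Patch: declare only the fields that should change; controllers reconcile the rest
kubectl patch deployment my-app -p '{"spec":{"replicas":3}}'

# Delete: remove the declared objects
kubectl delete -f app.yaml
```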

In this model, Helm and Kustomize actually define two different production paths and user experiences for application description files, and they represent two different degrees of coupling with and abstraction over the Kubernetes API: one stands apart on its own, while the other is folded into Kubernetes’ design philosophy. How far the Kubernetes community will take this application management system after the 1.14 release remains to be seen.

Performance optimization in large-scale scenarios is increasingly on the agenda

As many people familiar with the Kubernetes project probably know, the community has historically not given very high priority to performance tuning in large-scale scenarios. This is easy to understand: in the early stages of an open source infrastructure project, growing the ecosystem and improving functionality is usually more important than supporting larger clusters.

However, as the Kubernetes core stabilizes, the community is bound to pay more attention to the problems the project exposes in large-scale scenarios. This, too, is easy to understand: small and medium-sized users are of course the foundation of the project’s ecological success, but enabling more companies like Walmart, Starbucks, and technology unicorns at home and abroad to benefit from cloud native technology through Kubernetes, and then become large-scale users on the public cloud, must also be a key consideration for the Kubernetes community.

Of course, as a naturally “integrative” infrastructure project, Kubernetes’ main direction for performance improvement is bound to focus on the API layer and the client-side usage scenarios that matter most to upper-layer users. This is closely tied to the architecture of the project: the design of the declarative API revolves around a configuration management mechanism with etcd at its core, making Kubernetes naturally a distributed system with a heavy API layer and light scheduling. It also means that when the volume of configuration information (i.e., API objects) to be managed is large, this layer is the place most likely to expose performance problems.

So in Kubernetes v1.14, the community first made a number of optimizations from the end user’s perspective, such as parallelizing kubectl’s traversal of API objects. This seemingly small change brings a significant performance improvement for kubectl users in large-scale scenarios.

The most important work, of course, is the performance optimization of the APIServer itself. For example, Kubernetes’ aggregated API mechanism allows developers to write a custom service and then plug it into the Kubernetes API so that it can be used as if it were a native API. In this case, however, the APIServer has to merge the user-defined API specs with the native API specs, which is a very CPU-intensive performance pain point. In v1.14, the community specifically optimized the efficiency of this operation, improving the APIServer’s spec-merging performance more than tenfold.
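For context, an aggregated API is plugged in by registering an APIService object like the hypothetical one below; every such registration adds an API spec that the APIServer must merge with the native one, which is the operation 1.14 made dramatically faster:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.example.com   # hypothetical group/version
spec:
  group: custom.example.com
  version: v1beta1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:                           # the extension APIServer backing this API
    name: custom-api
    namespace: custom-system
  insecureSkipTLSVerify: true        # sketch only; use caBundle in production
```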

In addition, another important direction for Kubernetes performance improvement is optimizing the connection path between etcd and the APIServer. As the configuration center of the Kubernetes project, etcd is also its only external data dependency. The data volume and interval of each etcd commit operation, and the request/response cycle of each connection, can all affect Kubernetes’ performance in large-scale scenarios. Alibaba’s technical team has been continuously tuning and improving etcd’s performance, and this work has landed in the latest etcd releases. These improvements are not part of Kubernetes 1.14 itself, but they are worth keeping an eye on.

Scalability and project stability continue to improve

In addition to the areas above, which are gradually becoming core focuses of this release, several directions that the Kubernetes project has always emphasized, such as extensibility and project stability, remain the main melody of its continued evolution. That is why Kubernetes 1.14 contains many important changes like “Pod Ready++” that further refactor mature system features into extensible interfaces. With Pod Ready++ officially released, Kubernetes users can easily customize the whole path of an application from creation to final availability by writing an external controller, rather than being forced to follow the Kubernetes project’s own definition of readiness. This capability is another important embodiment of the democratization of open source infrastructure projects.
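For illustration, Pod Ready++ surfaces as readinessGates in the Pod spec; the condition type below is hypothetical, and an external controller would patch the corresponding condition into the Pod’s status to signal availability:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-gate
spec:
  readinessGates:
  - conditionType: "example.com/load-balancer-attached"  # set by an external controller
  containers:
  - name: app
    image: nginx
```

The Pod is reported Ready only when both its containers are ready and every listed condition has been set to True, so the external controller gets the final say on availability.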

For details on the scalability and project stability enhancements, I recommend following the Kubernetes 1.14 technical readings for further understanding, and ultimately tailoring your upgrade plan to how important these features are to your own cluster configuration.

Conclusion

The release of Kubernetes 1.14 is an important link in the evolution of this mature and stable open source infrastructure project. We are seeing the Kubernetes community continue to push into areas that received little attention in the past, and in some of them it may even change the direction of the entire cloud native community. This kind of technological innovation, punctuating steady growth, is the key to the community’s continued vitality.

Looking at the current cloud computing ecosystem, more and more large enterprise users abroad, such as Snapchat and Twitter, have begun to move their entire technology stacks directly onto Kubernetes-based public cloud services, which confirms the essence of the keyword “cloud native”: in the coming cloud era, the complete life cycle of software development, testing, release, and operations will be carried out on the cloud. So-called “cloud native” is, in fact, a series of technical means that chart out for developers a technology blueprint in which software naturally grows on the cloud and is delivered on the cloud, thereby maximizing the value of the cloud.

