Backed by a wealthy family and a big-name factory, the Kubernetes project has risen rapidly since it was open-sourced in June 2014, propelled by the joint efforts of many vendors and open source enthusiasts, and has grown into the de facto standard in the field of container management. With its advanced design concepts, low barrier to participation, and strong support from major vendors and developers at home and abroad, its success speaks for itself.



The Kubernetes craze



Both at home and abroad, the Kubernetes wave is being eagerly embraced: the major cloud vendors, along with companies such as Ant Financial, JD.com, Meituan, and Didi, have all made Kubernetes the focus of their infrastructure. Still, “there are ten thousand Hamlets in the eyes of ten thousand people”: although Kubernetes is the de facto standard in container management, enterprises with different backgrounds actually land it in quite different ways. These can be broadly divided into three categories:

  • The first group runs completely Above Kubernetes, deploying and using it in its native form. Most of these users are start-ups without much legacy technology stack, and they are concentrated mainly in public cloud Kubernetes solutions and services;
  • The second group builds a container management platform On Kubernetes: it reuses some Kubernetes concepts but does not hand application management over to Kubernetes, keeping the old service governance model instead. These enterprises have been around longer and carry a heavier technical burden; they cannot switch to cloud native service governance overnight, nor can they abandon years of technical accumulation. Such users are concentrated mainly in medium and large private cloud Kubernetes scenarios;
  • The third group works In Kubernetes, following its design concepts and using custom application workloads to meet the needs of localized application management, folding those localized workloads and their management into the native Kubernetes architecture. This is the current trend in application management: you enjoy the dividends of cloud native and community Kubernetes while integrating years of technical accumulation and evolution, which makes it an ideal way to embrace cloud native.

The basic “axe”: Above Kubernetes

If you were choosing a container management platform today, I believe no one would pass over Kubernetes. Especially for users without any technical baggage, choosing Kubernetes is undoubtedly the wisest move.

Above Kubernetes means taking the native, standard, community version of Kubernetes, deploying it, and building your applications entirely on top of it. The cluster is accessed through the standard Kubernetes API, and your Kubernetes evolves entirely with the community: you stay in sync with upstream and maintain it together with the community.

This approach is ideal because you never have to worry about drifting away from the mainstream of the community and the industry, and it also reduces management and operations costs.





As shown above, you can install the standard, mainstream cloud native components to build a Kubernetes stack that embraces the community’s complete architecture and meets your needs.
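To make the Above Kubernetes way of working concrete, here is a minimal sketch that talks to the cluster purely through the standard Kubernetes API using client-go and creates an ordinary, community-defined Deployment. The kubeconfig path, namespace, replica count, and image are assumptions chosen for illustration, not recommendations.

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Load the standard kubeconfig; the path here is an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A plain Deployment, exactly as the community API defines it.
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-nginx"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo-nginx"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "demo-nginx"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.25"}},
				},
			},
		},
	}

	created, err := clientset.AppsV1().Deployments("default").
		Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created deployment:", created.Name)
}
```

Because nothing non-standard sits between you and the API, the community tooling (kubectl, Helm, the dashboard) works here unchanged.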

The advanced “axe”: On Kubernetes

Being able to use a native Kubernetes cluster is great, but not every scenario allows it. As we all know, the concepts and design of Kubernetes are very advanced; Google’s ideas about software development and application deployment may be excellent, but most enterprises in the industry still carry older technical concepts and more complex scenarios. For enterprise users with years of technical accumulation, abandoning the current way of managing and deploying applications and switching wholesale to the native Kubernetes way is a bit overwhelming. Yet users in that position certainly cannot just watch others eat meat while they chew on a biscuit.

The On Kubernetes form of adoption is in fact a compromise and an intermediate stage. On the one hand, it is hard to abandon existing infrastructure all at once, such as service governance, monitoring, and network topology; on the other hand, Kubernetes has to be localized to fit the current application management model, for example by abandoning kube-proxy in favour of a flat intranet, or by packaging monitoring and proxy components into a “rich container”.
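As a rough illustration of that kind of localization, the sketch below constructs a Pod that sits directly on the node network (hostNetwork, so a flat intranet can be used instead of kube-proxy and ClusterIP services) and runs a hypothetical “rich” image that bundles the business process with its monitoring and proxy agents. The image name and the overall shape are assumptions; real rich-container setups differ from company to company.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "legacy-app"},
		Spec: corev1.PodSpec{
			// Flat intranet: expose the Pod directly on the node network instead of
			// routing traffic through kube-proxy and ClusterIP services.
			HostNetwork: true,
			Containers: []corev1.Container{{
				Name: "rich-app",
				// Hypothetical image that packages the business process together with
				// the monitoring and proxy agents it used to share a VM with.
				Image: "registry.example.com/legacy-app-rich:1.0",
			}},
		},
	}

	// Print the manifest this produces; in practice it would be submitted
	// to the cluster like any other Pod.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```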

This way of landing lets a company capture this wave of technology dividends with relatively few changes, while also exploring its own road to cloud native: the internal technology stack can keep evolving in the cloud native direction without falling too far behind the trend, and Kubernetes can be customized for the company’s own scenarios, sometimes going further than community Kubernetes or even solving problems the community has not solved.

Although this lets you lean on the design concepts and management capabilities of Kubernetes, the localized transformation is not fully compatible with the community version, so upgrades become more troublesome: every upgrade means re-applying your patches, and you may end up maintaining multiple versions of Kubernetes at the same time, which undoubtedly creates a lot of trouble for development and operations. This is therefore not a road an ordinary small company can walk; it requires a certain level of R&D and technical capability. Typical examples are Alibaba’s Sigma, Meituan-Dianping’s HULK 2.0, and JD.com’s JDOS 2.0.



HULK 2.0

There is no standard formula for this advanced way of playing, only case-by-case solutions. For example, Meituan-Dianping built its HULK 2.0 system on Kubernetes on top of its existing facilities: it made localized changes to storage, networking, workload lifecycle management, and application monitoring, while still remaining fully compatible with the Kubernetes API. You can customize Kubernetes around your own infrastructure, such as storage, monitoring, distributed tracing, service publishing, and network integration, or even around your business scenarios and specific needs. NetEase Cloud, for example, has done deep customization on top of Kubernetes.

The ultimate “axe”: In Kubernetes

The term “cloud native” has spread widely through the technical community; even people who do not quite understand what cloud native is know that they should be evolving in that direction. Doing so, however, requires users to change how VMs are deployed and managed and how services are governed. It has to be said that the trend is All in Kubernetes: CRDs have matured from their introduction in Kubernetes 1.7 to going GA in the 1.16 release last week, which means we now have the ability to extend Kubernetes in production.
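As a taste of what extending Kubernetes looks like, here is a minimal sketch that builds a CustomResourceDefinition with the apiextensions v1 Go types that went GA in 1.16 and prints it. The group, kind, and schema are invented purely for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A made-up GameServer resource; the group "play.example.com" is hypothetical.
	crd := apiextensionsv1.CustomResourceDefinition{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apiextensions.k8s.io/v1", Kind: "CustomResourceDefinition"},
		ObjectMeta: metav1.ObjectMeta{Name: "gameservers.play.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "play.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "gameservers", Singular: "gameserver",
				Kind: "GameServer", ListKind: "GameServerList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				// v1 CRDs require a structural schema; keep it minimal here.
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"replicas": {Type: "integer"},
								},
							},
						},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```

Once a CRD like this is registered, the new kind behaves like any built-in resource: it is served by the API server, watchable, and ready for a controller to act on.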

If you dig deeper into Kubernetes, you will find that Kubernetes is itself a platform. Beyond the many features it already provides, such as simplifying application workflows and speeding up development, it lets users organize and manage resources in their own way using labels, and describe resources with custom annotations, for example to provide status-check hints for management tools. More importantly, the Kubernetes controllers are built on the same API that developers and users consume, so users can write their own controllers and schedulers and extend the system’s functionality through various plug-in mechanisms.
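For example, a few lines of client-go are enough to query resources by your own labels and read tool-oriented annotations. The label keys, annotation key, and kubeconfig path below are hypothetical.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Organize resources your own way with labels, then query them by selector.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "team=payments,tier=frontend", // hypothetical labels
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Annotations carry free-form, tool-oriented metadata, e.g. a status-check hint.
		fmt.Printf("%s  health-endpoint=%s\n", p.Name, p.Annotations["example.com/health-endpoint"])
	}
}
```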

In other words, by extending the API and the workload types, we can run any form and type of application workload and management in Kubernetes. Even a complicated stack or a complicated workflow is not a problem: you can inject whatever external dependencies and logic you need into the resource and application lifecycle.
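The usual vehicle for injecting that logic is a custom controller. Below is a minimal, illustrative reconciler built with controller-runtime (signatures as in recent releases); it only watches core Pods and logs them, whereas a real In Kubernetes workload controller would reconcile its own CRD type, but the hook point for external dependencies and lifecycle logic is the same.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// PodAuditReconciler is a toy reconciler: it only logs the Pods it is asked to
// reconcile. A real workload controller would watch its own custom resource and
// create or update child resources in this method.
type PodAuditReconciler struct {
	client.Client
}

func (r *PodAuditReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	var pod corev1.Pod
	if err := r.Get(ctx, req.NamespacedName, &pod); err != nil {
		// The object may already be gone; nothing to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Hook point: inject whatever external dependency or lifecycle logic you need.
	logger.Info("reconciling pod", "phase", pod.Status.Phase)
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Pod{}).
		Complete(&PodAuditReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```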

This way of landing uses the extension mechanisms provided by Kubernetes to fold completely localized, complicated logic into Kubernetes’ own design and management concepts: you are not merely using Kubernetes, you are melting your own platform into it, until every user ends up with their own unique Kubernetes, you in me and me in you. At the same time it stays fully compatible with native Kubernetes, so you can gracefully upgrade and merge community patches. A typical example is Alibaba’s open source OpenKruise project: Github.com/openkruise/…

The core reason users adopt Kubernetes is to manage workloads, and in fact a big reason for choosing On Kubernetes is that the user’s current workload management model does not map well onto the workload types Kubernetes already provides. CRDs and Operators solve this problem nicely by letting users define their own workloads. One such example is the OpenKruise project, a set of controllers that extend and complement the Kubernetes core controllers for workload management. For example, it provides three types of workload controllers:

  • Advanced StatefulSet: an enhanced version of the default StatefulSet, with additional features such as in-place update, pause, and maxUnavailable.
  • BroadcastJob: a job that runs Pods to completion on all nodes in the cluster.
  • SidecarSet: a controller that injects sidecar containers into Pod specifications based on selectors and can also upgrade those sidecar containers.

Ideally, any workload can go All in Kubernetes, even the management of Kubernetes itself (kube-on-kube), as well as the management of stateful services, such as a MySQL cluster Operator. You can find some classic examples on OperatorHub.
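Consuming such CRD-based workloads then looks just like consuming the built-in ones. The sketch below submits a BroadcastJob-style object through the dynamic client; the group/version/resource and the spec layout follow the OpenKruise v1alpha1 API as I understand it, so treat them as assumptions and check the project documentation before relying on them.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Assumed GVR for the OpenKruise BroadcastJob custom resource.
	gvr := schema.GroupVersionResource{Group: "apps.kruise.io", Version: "v1alpha1", Resource: "broadcastjobs"}
	job := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "apps.kruise.io/v1alpha1",
		"kind":       "BroadcastJob",
		"metadata":   map[string]interface{}{"name": "node-check"},
		"spec": map[string]interface{}{
			// One Pod per node, run to completion; the command is a placeholder.
			"template": map[string]interface{}{
				"spec": map[string]interface{}{
					"restartPolicy": "Never",
					"containers": []interface{}{
						map[string]interface{}{
							"name":    "check",
							"image":   "busybox:1.36",
							"command": []interface{}{"sh", "-c", "echo node ok"},
						},
					},
				},
			},
		},
	}}

	created, err := dyn.Resource(gvr).Namespace("default").Create(context.TODO(), job, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created:", created.GetName())
}
```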





Conclusion

Although the different landing methods differ from one another, each is in fact the best choice for its own background, and all of them can remain fully compatible with the Kubernetes API. Taken apart from a concrete problem, no single way can be called the best.

  • Above Kubernetes: you are a start-up that simply wants to use Kubernetes for ordinary container management or service deployment; you carry no legacy burden and have no manpower to maintain Kubernetes yourself;
  • On Kubernetes: you are a medium-sized or even large company with substantial technical accumulation and facilities, and you have the capability and manpower to transform and develop Kubernetes, or native Kubernetes cannot meet your needs;
  • In Kubernetes: you are not satisfied with merely using Kubernetes, or native Kubernetes does not meet your needs, and you can move here from Above Kubernetes. Of course, if you change your mind, want to overhaul your current infrastructure and application management, move closer to the cloud native path, or upgrade your old machine-based deployment and delivery model, you can also move from On Kubernetes to All in Kubernetes.
How do you land your Kubernetes?

Author: Wang Guoliang


This article is original content of the Yunqi Community (Alibaba Cloud) and may not be reproduced without permission.