On July 21, 2015, Google announced the founding of the Cloud Native Computing Foundation (CNCF) and officially released Kubernetes 1.0. After more than two years of rapid development, Kubernetes has grown from a small sapling into a towering tree; some even say it has already won the container war. Not long ago, GitHub announced that it had moved entirely to Kubernetes clusters, and JD.com said that more than 60% of its online business runs on Kubernetes. Flagship cases like these are undoubtedly a shot in the arm for the Kubernetes community.

Looking back over the past two years: what interesting things have happened in the container ecosystem? How has Kubernetes grown and changed? What about Docker, Mesos, OpenStack, and other open source projects? On the occasion of Kubernetes' second birthday, InfoQ invited Zhang Xin, CEO of Caicloud Technology, to recount the ups and downs behind the development of this open source project from Google.

In 2014, the chill came early to the American autumn. Inside Google, however, a fiery "earthquake" was raging, one the company called the "Urs-quake". Urs Hölzle, the Swiss Senior Vice President in charge of Google Technical Infrastructure, was behind it all. The top layer of Google's organizational structure consists of many Product Areas (PAs). The Technical Infrastructure PA, led by Urs, manages the infrastructure of Google's hundreds of data centers and more than a million machines, providing "computing" support for all the other business PAs in the company (advertising, mobile, media, and so on). The cluster management ecosystem is the core of this infrastructure: through the "troika" of full container management, distributed storage, and software-defined networking, it provides functionality, performance, and stability unmatched by the outside world for upper-level production services (such as Google Search) and batch workloads (such as big data and deep learning).

"Even good wine fears a deep alley." This engineering masterpiece, built over ten years by hundreds of elite Google engineers, nevertheless served only as business support and a cost center for the company. When applying for promotion, infrastructure engineers could only cite "which businesses they had supported" and "how much money they had saved the company", while their colleagues in the advertising PA could point to "how many billions they had earned".

The "Urs-quake" was Urs's response to the cloud computing boom of 2014 and the success of AWS: bring the behind-the-scenes heroes to the fore, turn Google's first-class in-house infrastructure into an openly sold product, and transform Technical Infrastructure from a cost center into a profit center. Although AWS held more than 80% market share in 2014 among enterprises that had already moved to the cloud, such enterprises were still a small fraction of the whole, and Urs and Google were aiming for the untapped part of the cake.

But "the ideal is plump; reality is skinny", and the ambition ran into two major problems on the ground:

  • How to catch up with AWS's first-mover advantage and product maturity?

  • How to resolve the contradiction between PaaS and IaaS products: Google's public cloud PaaS product, App Engine, offered strong hosting (automatic deployment, operations, and maintenance) but poor flexibility (supporting only a fixed set of languages and middleware), while the IaaS product, Compute Engine, offered high flexibility (configurability close to bare metal) but little management (users install, maintain, and configure applications themselves).

To solve these problems, a closed-door Google Cloud summit was held in March 2015 at Woodloch, a resort covered in snow at the time. After discussion, consensus emerged that the key was to launch an open source container orchestration and management system, "Project 7": use the open source community to quickly win followers, build an ecosystem, shape mindshare, and disrupt the market. At the same time, this open source system would expose low-level configuration for containers, networking, and storage while also providing rich management functionality, thereby breaking the aforementioned contradiction between flexibility and hosting in PaaS and IaaS products. Project 7 was soon renamed Kubernetes.

Kubernetes stands on the shoulders of Borg, Google's time-tested container management system, and it also learned from the failure of Omega, a project intended to replace Borg. Omega was motivated by real problems: the severely monolithic Borg control nodes (BorgMasters), insufficient interaction with other components of the cluster management ecosystem, and extremely complex usage and configuration. It failed, however, because engineering never resolved the tension between Omega's scheduling precision and its throughput, and because the migration path from Borg to Omega was too abrupt. Kubernetes avoided these pitfalls.

On June 6, 2014, Joe Beda, one of the founders of Kubernetes, merged the project's first commit on GitHub. Led by Brendan Burns and Craig McLuckie, the other two early founders, the Kubernetes team expanded rapidly, drawing in Brian Grant, who had previously led the Omega project inside Google, and Tim Hockin, Borg's technical lead. One year later, on July 21, 2015, Kubernetes 1.0 was released, declared ready for production. Since then, Kubernetes has moved forward at a rapid pace with a new release every quarter, reaching version 1.7 by its second birthday.

With the release of Kubernetes 1.0, the Cloud Native Computing Foundation (CNCF) was also announced, giving Kubernetes a strong legal, operational, and project-maintenance umbrella and allowing the project itself to focus on technical innovation. Unlike the Apache Foundation, CNCF does not impose its own organizational structure and management model on member projects. Instead, it lets member projects such as Kubernetes govern themselves more autonomously, and helps along the following dimensions:

  • Ecosystem binding: plug-in projects closely surrounding Kubernetes (such as Linkerd, Fluentd, and Prometheus) are placed under the same CNCF umbrella to form an organic whole.

  • Legal protection: ensuring the proper use of the Kubernetes trademark, logo, license, patents, copyright, and so on.

  • Marketing: promoting Kubernetes through online and offline meetups, K8sPort, KubeCon, blogs, Twitter, news media, and more.

  • Training and certification: developing specifications, procedures, and courses to popularize, and profit from, Kubernetes and related technologies.

  • Vendor coordination: mediating the relationships and competition among different vendors.

Docker brought container technology into the spotlight, and nearly every cloud computing vendor woke up to the new opportunity. The Apache Mesos project, launched in 2009, enjoyed a second life after Docker appeared by integrating with it and positioning itself as a container-based Data Center Operating System (DCOS), which to some extent overlaps with the container cluster management platform Kubernetes. Docker, too, eventually realized that the container itself is only a thin runtime layer at the bottom of the stack, hard to monetize at scale and insufficient on its own to drive enterprise adoption in complex production environments. It therefore began building its own cluster management tool, Docker Swarm, and later launched the ambitious SwarmKit project, integrating swarm mode into the Docker engine in version 1.12 and stirring controversy in the community.

The leaders of Kubernetes understood the importance of "co-opetition" within the ecosystem from the very beginning, flying the banner of "neutrality" all along to win the community's recognition and goodwill. The project has steadily worked to support different container runtimes and engines underneath, gradually loosening its dependence on Docker. In its infancy, Kubernetes' leaders opted for a community-pleasing "good boy" approach, tapping into more resources to fuel rapid growth. Back in May 2015, when Kubernetes announced support for the new container runtimes appc and rkt alongside Docker, product manager Craig McLuckie blogged that the move was not intended to replace Docker and praised Docker's contribution to the ecosystem. Even earlier, on April 22, 2015, the official Kubernetes blog covered Mesosphere's integration of Kubernetes into DCOS.

As Kubernetes grew, and as it became clear in 2016 that it was winning the container management war, its attitude toward the other "sibling projects" hardened. In 2016, for example, Kubernetes evangelist Kelsey Hightower and Docker executives traded barbs on Twitter, and at KubeCon in Seattle that year, several keynotes explicitly presented Kubernetes' abstraction interfaces for the container runtime, network plugins, and more.

The growth of an open source project needs, beyond its leaders and technical community, the support of many vendors and a convincing user base. At the time of the 1.0 release in July 2015, Kubernetes' partners, vendors, and users clustered around early contributors such as Red Hat, Mirantis, Rackspace, CoreOS, and a handful of little-known startups. With each milestone release, Kubernetes' functionality, stability, and practicality kept improving, attracting many Fortune 500 companies to join as partners, vendors, and end users:

  • Q2 and Q3 of 2016: OpenStack vendors led by Mirantis actively pushed integration with Kubernetes, lest they stand in opposition to it and become the "older generation" of technology that Kubernetes overthrows.

  • November 2016: CNCF and the Linux Foundation jointly launched a Kubernetes training and certification service, further promoting the commercialization and popularization of Kubernetes.

  • November 2016: at KubeCon in Seattle, dozens of Kubernetes end users, including Pearson and Box, demonstrated successful applications of Kubernetes in their production environments.

  • December 2016: with the release of Kubernetes 1.5, Windows Server and Windows containers gained official Kubernetes support, bringing the Microsoft ecosystem into the fold.

  • February 2017: the official Kubernetes blog reported that China's JD.com had replaced a large number of OpenStack services and components with Kubernetes, building fully containerized private and public clouds. It was the first time a Chinese Kubernetes user case appeared on the international stage.

  • June 2017: at LinuxCon in Beijing, Chinese companies reported successful Kubernetes deployments in China's finance, power, and Internet industries, marking the internationalization of the Kubernetes end-user base.

  • By its second birthday, Kubernetes' users included leading companies in finance (Morgan Stanley, Goldman Sachs), Internet services (eBay, Box, GitHub), media (Pearson, The New York Times), communications (Samsung, Huawei), and other industries.

With this brief review of history behind us, let's look at what Kubernetes is today. If one word could describe the status of the Kubernetes project, it would be "active"; and if I had to put a time limit on that activity, I would say ten years. An objective set of official figures gives a sense of just how active the project is.

  • In the two years from July 2015 to July 2017, the main Kubernetes code repository (github.com/kubernetes/kubernetes) grew from over 10,000 commits at version 1.0 to nearly 50,000 commits today, almost a fivefold increase.

  • Kubernetes has evolved from a single large code base (github.com/kubernetes/kubernetes) into an ecosystem of many repositories: in addition to the main code base, there are about 40 plug-in repositories and more than 20 incubation projects.

  • As of today, the Kubernetes ecosystem has 2,505 developers from 789 participating companies.

  • Interestingly, contribution follows a long-tail pattern: the top 10 contributors have written over 26% of the code.

  • Kubernetes’ main repository has earned nearly 25,000 GitHub Stars, far ahead of its early competitors (Swarm and Mesos each have fewer than 5,000 Stars).

  • CNCF has hosted more than 200 offline meetups worldwide, including 10 in China alone.

The most representative and intuitive data is: Kubernetes’ GitHub activity has exceeded 99.99% of projects!

The community is now growing faster than even the Kubernetes founders expected. According to figures presented at KubeCon in Seattle in November 2016, Google, the creator and leader of Kubernetes, contributed roughly 40% of the code, with more than half coming from companies and community developers outside Google. With over 2,000 contributors pushing the project forward at high speed, maintaining the consistency, stability, and neutrality of the open source project is an enormous challenge. At the Kubernetes Leadership Summit hosted at Samsung in San Jose on June 2, 2017, co-founder Brendan Burns, asked whether the project needed more community contributors, replied: "Maybe not." The leaders of Kubernetes and CNCF are no ordinary stewards, though, and have proposed solutions along two dimensions: governance and technology.

First, on the governance dimension, CNCF and the Kubernetes community formally proposed a governance structure (shown in Figure 1) to operate the open source project in a better-distributed way. It includes:

  • Steering Committee: the highest decision-making body, charged with setting the project's direction, defining its culture and rules, and managing the teams. At the same time, the Steering Committee constantly reminds itself to delegate as much as possible, handing specific tasks down to the appropriate teams below. It currently has 13 seats, filled by nomination and election.

  • Special Interest Groups (SIGs): own and are responsible for specific sub-repositories and functional modules.

  • Working Groups (WGs): formed on an ad hoc basis to carry out short-term tasks or to discuss early-stage features.

  • Committees: responsible for sensitive topics (e.g. security, codes of conduct), discussed and advanced by closed-door groups (not open to the community).

This governance structure lets the whole Kubernetes community be led quickly through organized layers without fragmenting; it is at once autonomous and centralized. Beyond governance, the Kubernetes project architecture is also being adjusted and evolved toward a more modular, decoupled, layered design that lets thousands of contributors collaborate efficiently in a distributed fashion without having to fork their own versions.

Figure 1. Kubernetes Project Management Organizational Structure [1]

If the early Kubernetes stepped quickly into the spotlight thanks to Google's halo, its broad adoption by users across industries today rests on real merits: functionality, stability, scalability, and security. To this day, the community's greatest efforts go into the internal work of making every new release a leap forward. Taking new features as an example, the following data show the community's effort and acceleration [2]:

  • Kubernetes 1.6, released in March 2017, included 29 new features in total (8 Alpha, 12 Beta, and 9 Stable). The 1.6 release focused on the following areas:

    • Storage (10 new features)

    • Scheduling (5 new features)

    • Cluster lifecycle management (4 new features)

    • Authentication and Authorization (RBAC)

  • Kubernetes 1.7, released in June 2017, shipped 43 new features, including 31 Alpha, 6 Beta, and 3 Stable, and the areas of focus shifted considerably:

    • The Apps submodule makes it easier to deploy and manage complex applications.

    • The Federation submodule lets Kubernetes scale beyond a single cluster while providing cross-region active-active deployment and load balancing.

    • The Node submodule further integrates the Container Runtime Interface (CRI), accelerating support for a universal range of container runtimes and decoupling Kubernetes from Docker.

    • The Auth submodule improves security through more complete certificate management, encryption, and auditing.
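To make the RBAC work mentioned above concrete, here is a minimal sketch of a Role and RoleBinding granting read-only access to Pods. The namespace, role, and user names are illustrative; `rbac.authorization.k8s.io/v1beta1` is the RBAC API group current in the 1.6/1.7 era:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1   # RBAC API group circa 1.6/1.7
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]                # "" denotes the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: alice                    # illustrative user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The Role defines what may be done to which resources; the RoleBinding attaches those permissions to a subject, keeping authorization declarative like everything else in Kubernetes.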

Beyond these rapid functional improvements, Kubernetes developers have also worked hard on scalability, stability, and reliability. Take cluster size as an example: two years ago, Kubernetes 1.0 supported only 100 nodes in a single cluster, while version 1.6, released in March 2017, supports 5,000 nodes per cluster with lower API latency. Cluster federation can break free of the single-cluster limit altogether, multiplying cluster scale by tens or even a hundred times.

As Kubernetes grows, its community keeps evolving. In China especially, vendors who once watched from rival camps now flock to participate. Among contributors, Google still leads the community with 40-plus percent of contributions, while individual developers' combined contributions have jumped past 25 percent; early players like Red Hat and CoreOS continue to invest. Since 2016, Chinese contributors have also begun to appear on the international stage. Taking code contribution as an example, as shown in Figure 2, ZTE, Huawei, Zhejiang University, and Caicloud are the four Chinese companies or organizations with the highest contributions, all within the global top 25. In addition, talks from Huawei and Caicloud have repeatedly been selected for KubeCon, the international Kubernetes conference.

Over the past two years, Kubernetes' vendor landscape has also evolved. In the US, Red Hat and CoreOS, as Kubernetes project participants, launched their own commercial distributions. Early Kubernetes startups such as Kismatic and Deis were acquired within the past year (Apprenda bought Kismatic; Microsoft bought Deis), a sign that the big players endorse the Kubernetes direction. Meanwhile, the three founders of Kubernetes are no longer at Google: Joe Beda and Craig McLuckie founded Heptio, a Kubernetes product company that aims to make Kubernetes easier for enterprises to use.

Among the giants, aside from Google's understandably strong support for Kubernetes and its commercial Google Container Engine (GKE), Microsoft has hardly been idle: it poached Kubernetes founder and former Google engineer Brendan Burns and acquired Deis. AWS undoubtedly takes containers seriously, but it built its own ECS before Kubernetes appeared. AWS now supports Kubernetes, yet because of ECS it has not pushed hard, and the developer experience suffers. It will be interesting to see, in a few years, whether Google's roundabout strategy of wielding open-source Kubernetes as a weapon to keep AWS at bay actually works.

In China, Kubernetes' popularity has not flagged at all. Startups such as Caicloud, Speed Cloud, and Light Yuan Technology adopted Kubernetes as their underlying container management platform from day one. As the Kubernetes wave rose, giants such as Tencent, Huawei, and JD.com also joined the Kubernetes camp, and with the prevailing trend, some startups formerly in the Mesos and Swarm camps have rushed into the Kubernetes tide as well. There is also a class of companies, such as Rancher, that support Kubernetes alongside their own scheduling frameworks, Swarm, and Mesos.

Figure 2. Major contributing companies in the Kubernetes community [3]

Worldwide, besides the major public clouds racing to support Kubernetes, Kubernetes has been widely used in enterprise private clouds across the Internet, finance, communications, energy, e-commerce, and traditional industries, adapting to bare-metal, OpenStack, VMware, and other underlying environments. In China alone, Kubernetes already counts top-500 corporate users such as JD.com, State Grid, Jinjiang Group, SAIC Group, and a large banking organization.

Kubernetes has had particular success in private cloud scenarios. According to a survey report released by 451 Research and CoreOS in May 2017 [4], 80% of surveyed American enterprises believe Kubernetes is sufficient to replace PaaS, and 75% are already using Kubernetes to manage their container cloud platforms, far ahead of other container management tools. In the Chinese market, the Kubernetes Chinese community K8SMeetup organized the first survey of Chinese container developers and enterprise users in June 2017. The nearly 100 users and enterprises interviewed yielded interesting observations about Kubernetes adoption in China:

  • As Figure 3 shows, Kubernetes was the most popular container management system among the surveyed enterprises, with a usage rate of nearly 70%.

  • As Figure 4 shows, beyond simple Web applications, more and more stateful and data-centric applications are running on the Kubernetes platform in the surveyed enterprises.

Figure 3. Usage distribution of container management tools among surveyed enterprise users in China

Figure 4. Distribution of application types running on Kubernetes platform among surveyed enterprise users in China

Beyond the data, we try to get to the bottom of the phenomenon: Why is Kubernetes so popular? What do users like, and dislike, most about Kubernetes during adoption? What big pitfalls did Kubernetes hit along the way? Figures 5 and 6 summarize insights from the surveyed enterprises in the K8SMeetup container survey.

Figure 5. What Chinese respondents like most about Kubernetes

Figure 6. What Chinese enterprise users like least about Kubernetes

Regarding, in particular, the shortcomings and pitfalls of deploying Kubernetes in the enterprise, the author summarizes the following from his own experience:

  • The default upstream Kubernetes configuration assumes a public cloud environment or a complete IaaS-layer API; support for private-cloud bare-metal environments is relatively lacking and requires additional development.

  • Kubernetes has a steep learning curve; its command-line tools and JSON/YAML configuration files are not cheap to learn.

  • Traditional industries carry heavy technical debt and many monolithic or stateful applications. Adapting them to Kubernetes' modern microservices philosophy requires extra secondary development or a deep understanding of Kubernetes' advanced configuration options.

  • Deploying a Kubernetes cluster is tedious, and mature upstream open source solutions for online upgrades in particular are lacking.

  • The upstream open source Kubernetes project still has functional blind spots, such as multi-dimensional, multi-metric elastic scaling, fine-grained isolation of disk and network resources, and reasonable resource allocation.
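As an illustration of the JSON/YAML configuration that the learning-curve point above refers to, a minimal declarative Deployment manifest might look like the following (the names and image are illustrative; `apps/v1beta1` was the Deployment API version in the 1.6/1.7 era):

```yaml
apiVersion: apps/v1beta1      # Deployment API group/version circa Kubernetes 1.6/1.7
kind: Deployment
metadata:
  name: web                   # illustrative name
spec:
  replicas: 3                 # desired state; controllers converge the cluster toward it
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13     # illustrative image
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f web.yaml`, the user declares the desired state rather than scripting the individual steps to reach it; the learning cost lies in knowing which of the many fields to set and how.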

The good news, however, is that with Kubernetes' rapid iteration and strong support from the open source community, breakthroughs on these fronts are expected in 2017.

Kubernetes embodies the Cloud Native Computing philosophy. Through container technology and abstract IaaS interfaces, it shields the details and differences of the underlying infrastructure, enabling deployment across multiple environments and flexible migration among them. On the one hand, this enables cross-region, multi-environment, highly available, active-active disaster recovery; on the other, users need not be locked into one cloud vendor or underlying environment.

As shown in Figure 7, among the surveyed container enterprises in China, bare-metal environments, OpenStack, virtualization, and a variety of public cloud providers form a four-way split of Kubernetes operating environments. Kubernetes gives users exactly this flexibility: their business systems can be rapidly deployed and run across different underlying environments, data centers, and even public clouds, achieving high availability and maximizing business value.

I believe that for a long time to come, given the scale of existing infrastructure investments, hybrid deployment across multiple infrastructures will remain the mainstream way to run Kubernetes, and that binding Kubernetes to a specific IaaS runs against the cloud native philosophy and the original intent of Kubernetes. Further, as Kubernetes and its users mature, I believe that in new, greenfield environments, managing the data center directly with Kubernetes as Google does (running on bare metal, with matching storage and network solutions integrated) will become the long-term trend and the most reasonable choice.

Figure 7. The underlying environment distribution of Kubernetes run by Chinese enterprises

Kubernetes is now a huge success, but how do its leaders rate that success? In a June 2017 summary of Kubernetes' state, technical lead Tim Hockin put it this way: "All the simple problems have been solved. What is left? That's the 90 percent of difficult cases!" Among the problems he listed:

  • Code health: test coverage and test stability need to improve, and many code modules need refactoring.

  • Developer experience: it is hard for new contributors to get started, documentation needs improvement, and many of the existing hacky scripts are difficult to understand.

  • Stability and predictability need to be treated as seriously as new feature development.

  • Governance of the community and the open source project should remain consistent and controllable while staying open.

Community developers have also given feedback on problems in the project's current development, for example:

  • The decision process for how new features are selected and mapped to future releases lacks clarity.

  • The barriers to discussing, designing, and developing new features are extremely high; communication costs and delays in reviewing design documents hinder productivity.

  • Product-management support within SIGs, and its understanding of planning for new features, is underwhelming.

In addition, the technical community is concentrating most of its effort on extending Kubernetes' functionality so it can serve more scenarios. While Kubernetes 1.7 may have solved "10% of the world's problems", the other 90% remain, and their distribution can be seen in Figure 8: nodes, APIs, the container runtime, networking, storage, cross-region cluster federation, and complex application management are the areas with the most demand and the greatest challenges.

Figure 8. Number of new function requirements distributed according to SIG domain [5]

On top of that, getting thousands of developers around the world to collaborate effectively is a huge challenge. Although the Kubernetes project is organized into dozens of independent SIGs, the main code repository is not well decoupled: code from different SIGs still lives in the same GitHub repository. A more modular project structure is needed, one that lets the different SIGs, domains, and submodules be more independent and either reduce their mutual dependencies or make them explicit. To this end, a new Kubernetes architecture has been proposed, as shown in Figure 9.

Finally, while Kubernetes has been the trendsetting technology in the tech world, and in cloud computing especially, since 2016, it will take time and effort to truly dominate. As Figure 10 shows, according to Datadog, more than 50% of container users use no container management tool at all (Kubernetes included), and according to another report, from RightScale, across the combined public and private cloud markets the most used management platform is AWS's ECS. Kubernetes still has a long way to go before it rules container management.

Figure 9: The new modular Kubernetes architecture [6]

Figure 10. Percentage of container users using container management tools simultaneously

Having looked at the past and the present, let's step beyond technical and project details and imagine what Kubernetes will look like at four years old. There is plenty of room for imagination and many features to speculate about; here the author only compares against Google's internal cluster management system and imagines two major directions for Kubernetes.

These days everyone talks about the Data Center Operating System (DCOS) or cluster management, and it is easy to equate "cluster management" with Kubernetes (or Mesos, or Swarm). Take Google as an example, though: its internal cluster management is a complete ecosystem, of which Borg (often called the internal counterpart of Kubernetes) is one member. Strictly speaking, Borg or Kubernetes is container management, application management, and service management; a complete cluster management system, or DCOS, also involves node management, hardware management, SLA management, network management, security management, operations management, and many other functional components.

At two years old, Kubernetes has achieved a great deal in container and service management. To truly reach a purely containerized data center like Google's (without the layering of IaaS and PaaS), a complete cluster management system must be built around Kubernetes. The author believes that, given the current positioning of CNCF and Kubernetes and the growing depth of enterprise adoption, by its fourth birthday Kubernetes will have grown into a well-rounded container cluster management ecosystem.

Kubernetes automates many functions through its declarative design pattern and control loops, such as Pod lifecycle management and scaling the number of replicas. In today's age of artificial intelligence, however, there is still a gap between "automated" and "intelligent": Kubernetes' current degree of automation has not eliminated the threshold for learning and using it. For example, although the declarative design pattern is a great advance over the imperative one, users must still express an application's configuration up front according to rules and syntax, and choosing among Kubernetes' many configuration options still takes a great deal of manual experience and trial and error.
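The declarative pattern and control loop described above can be sketched in a few lines of illustrative Python (this is a toy model, not actual Kubernetes controller code): a controller repeatedly compares the declared desired state with the observed state and issues actions to close the gap.

```python
# Toy sketch of a Kubernetes-style control loop (illustrative only):
# reconcile observed state toward a declaratively specified desired state.

def reconcile(desired: dict, observed: dict) -> list:
    """One pass of the loop: return the actions needed to converge."""
    diff = desired["replicas"] - observed["replicas"]
    if diff > 0:
        return ["create_pod"] * diff      # scale up
    if diff < 0:
        return ["delete_pod"] * (-diff)   # scale down
    return []                             # already converged

def apply_actions(observed: dict, actions: list) -> dict:
    """Pretend to act on the cluster by mutating observed state."""
    for action in actions:
        observed["replicas"] += 1 if action == "create_pod" else -1
    return observed

desired = {"replicas": 3}    # what the user declares (e.g. in YAML)
observed = {"replicas": 1}   # what the cluster currently runs

# Real controllers loop forever on watch events; here we loop to a fixpoint.
while True:
    actions = reconcile(desired, observed)
    if not actions:
        break
    observed = apply_actions(observed, actions)

print(observed["replicas"])  # prints 3
```

The user states only the "what" (three replicas); the loop works out the "how", which is exactly the automation the declarative pattern buys, while the choice of the declared values themselves still rests on human experience.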

In addition, Borg, as used inside Google, truly "schedules all workloads": stateless microservice applications, big data, and deep learning services are all managed uniformly on the Borg platform, and the big data and deep learning services in turn exploit Borg's agile distributed computing to greatly improve their own performance. By the time Kubernetes turns four, we expect it to keep growing smarter and to do smarter things with its computing power.

  1. https://docs.google.com/presentation/d/1pc-nayPpUZQlS10VPKqc-fb0Y3FXeSLeGAJ81g8NsCg/edit#slide=id.g22bd1761f4_0_57

  2. https://docs.google.com/spreadsheets/d/1NVZmn5u_Xkezqe9ZA6ZZivj1KEJCL0DsMMahbcjIfws/edit#gid=38455014

  3. http://stackalytics.com/?metric=commits&project_type=kubernetes-group&release=all

  4. http://www.bitpipe.com/detail/RES/1497557025_989.html

  5. https://docs.google.com/presentation/d/1XXHk-oy-8eeqGMGRHQwOolOV4hFHlLu-KDpoXuU-C74/edit#slide=id.g22c01c4c8e_0_0

  6. https://docs.google.com/presentation/d/1oPZ4rznkBe86O4rPwD2CWgqgMuaSXguIBHIE7Y0TKVc/edit#slide=id.g21b1f16809_5_51


About the author

Zhang Xin is the founder and CEO of Caicloud Technology. He was previously a senior software engineer at Google in the US, where he received spot awards from Google vice presidents and directors six times. As a technical lead, he worked on the research and development of Google's containerized cluster management systems, which automatically manage more than 95% of the servers in Google's data centers. In 2015, he took part in the product design and development of Google's public cloud, where products such as graphical one-click deployment, the application store, and the application deployment manager were widely used after launch.

Xin Zhang received his PhD in computer science from Carnegie Mellon University (CMU), during which he published dozens of academic papers, cited thousands of times, at top international conferences in distributed systems and network security. His research has been covered by international media such as The Economist, the BBC, and Switzerland's RTS. He has been named a "Distinguished Expert" of Hangzhou, a "2017 Young Person of Growth of All Things", an "Excellent Overseas Student" by the Ministry of Education, an "Outstanding Graduate of Tsinghua University" and "Outstanding Graduate of the Beijing Area", and was selected for the first "Huangpu" cohort of Microsoft Accelerator Shanghai.



