This is an overview of Kubernetes.

Kubernetes is an open-source platform for automating the deployment, scaling, and management of application containers.

With Kubernetes, you can quickly and efficiently respond to user demand:

  • Deploy applications quickly and predictably
  • Scale applications on the fly
  • Seamlessly roll out new features
  • Optimize hardware use by consuming only the resources you need

Our goal is to foster an ecosystem of components and tools that relieve the burden of running applications in public and private clouds.

The advantages of Kubernetes

  • Portable: public cloud, private cloud, hybrid cloud, multi-cloud
  • Extensible: modular, pluggable, hookable, composable
  • Self-healing: automatic placement, automatic restart, automatic replication, automatic scaling
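As a sketch of what self-healing and replication look like in practice (every name and image below is invented for illustration, not taken from this article), a Deployment asks Kubernetes to keep a fixed number of copies running; if a container crashes or a node dies, the control plane restarts or reschedules it:

```yaml
# Hypothetical Deployment: Kubernetes continuously reconciles
# the observed state toward the 3 replicas requested here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app                  # illustrative name
spec:
  replicas: 3                      # auto-replication: keep 3 copies alive
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example.com/hello:1.0   # placeholder image
# Containers that exit are restarted automatically
# (pods default to restartPolicy: Always).
```

Scaling is then a one-line change to `replicas`, after which Kubernetes converges to the new count.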

Google launched the Kubernetes project in 2014. Kubernetes builds on Google’s 15 years of experience running production workloads at scale, combined with the best ideas and practices from the community.

Why choose containers?

Looking for reasons why you should use containers?

The traditional way to deploy applications is to install them on a host using the operating-system package manager. The disadvantage is that this entangles the applications’ executables, configuration, libraries, and life cycles with each other and with the host operating system. One can build immutable virtual-machine images to achieve predictable rollouts and rollbacks, but VM images are heavyweight and not portable.

The new way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. Containers are isolated from each other and from the host: they have their own file systems, they cannot see each other’s processes, and their computational resource usage can be bounded. They are easier to build than VM images, and because they are decoupled from the underlying infrastructure and from the host file system, they are portable across clouds and OS distributions.

Because containers are small and fast, one application can be packed into each container image. This one-to-one application-to-image relationship unlocks the full benefits of containers. With containers, immutable container images can be created at build/release time rather than at deployment time, so each application no longer needs to be composed with the rest of the application stack or married to the production infrastructure environment. Generating container images at build/release time enables a consistent environment to be carried from development into production. Similarly, containers are far more transparent than virtual machines, which facilitates monitoring and management, especially when the containers’ process life cycles are managed by the infrastructure rather than hidden by a process supervisor inside the container. Finally, with a single application per container, managing the containers becomes tantamount to managing deployment of the application.

Summary of container advantages:

  • Agile application creation and deployment: Container images are easier and more efficient to create than VM images.
  • Continuous development, integration, and deployment: Provides reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
  • Separation of development and operations concerns: Because container images are created at build/release time, applications are decoupled from the infrastructure.
  • Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.
  • Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises, Google Container Engine, and many other platforms.
  • Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
  • Loosely coupled, distributed, elastic microservices: Applications are broken into smaller, independent pieces that can be deployed and managed dynamically, rather than a monolithic stack running on one big single-purpose machine.
  • Resource isolation: Makes application performance more predictable.
  • Resource utilization: High efficiency and density.
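The resource isolation and utilization points can be made concrete with a container-level resource spec: requests guide the scheduler (packing nodes densely for utilization) while limits cap what a container may consume (isolating neighbors from each other). A minimal sketch, with all names invented:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo              # illustrative name
spec:
  containers:
  - name: app
    image: example.com/app:1.0     # placeholder image
    resources:
      requests:                    # used by the scheduler for placement
        cpu: "250m"                # a quarter of one CPU core
        memory: "128Mi"
      limits:                      # hard caps enforced on the node
        cpu: "500m"
        memory: "256Mi"
```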

Why do I need Kubernetes and what does it do?

At a minimum, Kubernetes can schedule and run application containers on clusters of physical or virtual machines. Beyond that, Kubernetes allows developers to cut the cord to physical and virtual machines, moving from a host-centric infrastructure to a container-centric one, which provides the full advantages and benefits inherent to containers. Kubernetes provides the infrastructure for a truly container-centric development environment.

Kubernetes satisfies a number of common needs of applications running in production, such as:

  • Co-locating helper processes, facilitating composite applications and preserving the one-application-per-image model
  • Mounting storage systems
  • Distributing secrets
  • Checking application health
  • Replicating application instances
  • Horizontal pod autoscaling
  • Naming and discovery
  • Load balancing
  • Rolling updates
  • Resource monitoring
  • Accessing and ingesting logs
  • Debugging applications
  • Providing authentication and authorization

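Several of the needs listed above (replication, health checking, rolling updates, naming and discovery, load balancing) appear directly as fields in ordinary manifests. A hedged sketch, with all names and images invented for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # illustrative name
spec:
  replicas: 2                      # application replication
  strategy:
    type: RollingUpdate            # rolling updates
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:2.0     # placeholder image
        ports:
        - containerPort: 8080
        livenessProbe:             # health checking: restart on failure
          httpGet:
            path: /healthz
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web                        # naming and discovery: DNS name "web"
spec:
  selector:
    app: web                       # load-balances across matching pods
  ports:
  - port: 80
    targetPort: 8080
```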
This combines the simplicity of platform as a service (PaaS) with the flexibility of infrastructure as a service (IaaS), and facilitates portability across infrastructure and platform providers.

What kind of platform is Kubernetes?

While Kubernetes provides a great deal of functionality, there are always new scenarios that would benefit from new features. Application-specific workflows can be streamlined to accelerate development. Ad-hoc orchestration that is acceptable initially often requires robust automation at scale. That is why Kubernetes was also designed as a platform for building an ecosystem of components and tools that make it easier to deploy, scale, and manage applications.

Labels let users organize their resources however they please. Annotations let users decorate resources with custom information to facilitate their workflows and provide an easy way for management tools to checkpoint state.
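For example (all keys and values below are invented for illustration), labels are queryable key/value pairs that selectors match against, while annotations hold free-form metadata that tools can read but Kubernetes itself does not interpret:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labeled-pod                       # illustrative name
  labels:                                 # organize and select resources
    app: billing
    tier: backend
    environment: staging
  annotations:                            # non-identifying metadata for tools
    example.com/build-commit: "abc123"    # hypothetical key set by CI tooling
    example.com/on-call: "team-payments"
spec:
  containers:
  - name: app
    image: example.com/billing:1.4        # placeholder image
```

A selector query such as `kubectl get pods -l tier=backend,environment=staging` would then retrieve this pod along with any others carrying the same labels.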

In addition, the Kubernetes control plane is built on the same APIs that are available to developers and users. Users can write their own controllers, such as schedulers, with their own APIs that can be targeted by a general-purpose command-line tool.
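One common way users extend that shared API surface is a CustomResourceDefinition: once registered, the new kind behaves like a built-in resource for the command-line tool and for any custom controller that watches it. A sketch with an invented API group and kind:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: cronbackups.example.com      # must be <plural>.<group>
spec:
  group: example.com                 # hypothetical API group
  scope: Namespaced
  names:
    plural: cronbackups
    singular: cronbackup
    kind: CronBackup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string         # e.g. a cron expression
```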

This design allows a large number of other systems to be built on top of Kubernetes.

What is Kubernetes not?

Kubernetes is not a traditional, all-inclusive platform-as-a-service (PaaS) system. It preserves user choice where it matters.

Kubernetes:

  • Does not limit the types of applications supported. It does not dictate application frameworks (e.g., Wildfly), restrict the set of supported language runtimes (e.g., Java, Python, Ruby), cater to only 12-factor applications, or distinguish applications from services. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.
  • Does not provide middleware (e.g., message buses), data-processing frameworks (e.g., Spark), databases (e.g., MySQL), or cluster storage systems (e.g., Ceph) as built-in services. All of the above can, however, run on Kubernetes.
  • Does not have a click-to-deploy service marketplace.
  • Does not deploy source code and does not build applications. Continuous integration (CI) workflows are an area where different users and projects have their own requirements and preferences, so Kubernetes supports layering CI workflows on top of it without dictating how that layering should work.
  • Lets users choose their own logging, monitoring, and alerting systems. (Kubernetes provides some integrations as proof of concept.)
  • Does not provide or mandate a comprehensive application configuration language/system (e.g., jsonnet).
  • Does not provide or adopt any comprehensive machine configuration, maintenance, management, or self-healing system.

On the other hand, a number of PaaS systems run on Kubernetes, such as OpenShift, Deis, and Eldarion. You can also roll your own custom PaaS, integrate a CI system of your choice, or simply deploy container images on Kubernetes.

Because Kubernetes operates at the application level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring. However, Kubernetes is not monolithic; these default solutions are optional and pluggable.

Moreover, Kubernetes is more than just an orchestration system; in fact, it eliminates the need for orchestration. The technical definition of orchestration is the execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes is composed of a set of independent, composable control processes that continuously drive the current state toward the desired state. It does not matter how you get from A to C, and centralized control is not required. The result is a system that is easier to use and more powerful, robust, resilient, and extensible.

What does the word Kubernetes mean? K8s?

The name Kubernetes comes from the Greek word for helmsman or pilot, and is the root of governor and cybernetic. K8s is an abbreviation formed by replacing the eight letters “ubernete” with “8”.

Via: kubernetes.io/docs/concep…

By kubernetes.io

This article is originally compiled by LCTT and released in Linux China