The origin

Kubernetes was born in 2014 and originated from Borg, Google's internal service orchestration system. It draws on Google's 15 years of experience running production workloads and incorporates the best ideas and practices of the community.

The name

The name Kubernetes comes from ancient Greek and means "helmsman" or "pilot", which is why its logo is a ship's wheel. Google is also said to have chosen the name for another reason: Docker likens itself to a whale swimming across the sea carrying containers, so Google needed Kubernetes, the helmsman of this age of navigation, to steer and guide the whale along the course set by its owner.

The core

Thanks to Docker, creating and destroying service instances is fast and easy. Building on this, Kubernetes provides cluster-scale management and orchestration, so that releasing, restarting, and scaling applications can all be automated.

Kubernetes – core concepts

Cluster design

Kubernetes can manage large clusters, allowing each node in the cluster to connect to each other and control the entire cluster as if it were a single computer.

A cluster has two roles: Master and Node (also known as Worker).

The master is the brain of the cluster and is responsible for managing the whole cluster: scheduling, updating, scaling, and so on.

A Node does the specific work. A Node is typically a virtual or physical machine with the Docker runtime and the kubelet service (a Kubernetes component) installed in advance. When it receives a task from the Master, the Node carries it out, i.e. it runs the specified application in a Docker container.



Deployment – the application manager

Once we have a Kubernetes cluster, we can run our application on it. The prerequisite is that the application can run in Docker, that is, we need to prepare its Docker image in advance.

Once we have the image, we typically describe the application in a Kubernetes Deployment configuration file: the application's name, the image to use, how many instances to run, how much memory and CPU each instance needs, and so on.

With the configuration file in hand, you can manage the application through Kubernetes' command-line client, kubectl. kubectl communicates with the Kubernetes Master via its REST API to manage the application.

For example, suppose the Deployment configuration file we just wrote is called app.yaml.

Running `kubectl create -f app.yaml` creates the application, and Kubernetes then keeps it in the running state: when an instance fails, or the Node it runs on suddenly goes down, Kubernetes automatically notices and schedules a new instance onto another Node, ensuring the application always matches the desired state.



Pod – Kubernetes' minimum scheduling unit

After the Deployment from the previous step is created, a Kubernetes Node does more than simply run a Docker container. For reasons of ease of use, flexibility, and stability, Kubernetes introduces something called a Pod as its minimum scheduling unit. So our application actually runs as a Pod on each Node, and a Pod can only run on a Node, as the diagram below shows:



So what is a Pod? A Pod is a group of containers (or, of course, just one). A container is itself a small box, and a Pod wraps another layer of box around those containers. What do the containers inside that box share?

They can share storage through volumes.

They share the same network space, which in plain English means the same IP address, network interface, and network settings.

Containers in the same Pod can "know" each other, e.g. each other's images and the ports each one exposes.
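As a sketch of these properties, here is a hypothetical Pod with two containers sharing a volume (the names, images, and paths are illustrative assumptions, not from the original article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar    # hypothetical name
spec:
  volumes:
  - name: shared-logs       # a volume both containers can mount
    emptyDir: {}
  containers:
  - name: web
    image: tomcat:9         # illustrative image tag
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: shared-logs
      mountPath: /usr/local/tomcat/logs
  - name: log-collector
    image: busybox          # illustrative sidecar container
    command: ["sh", "-c", "tail -F /logs/catalina.out"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```

Because both containers share the Pod's network space, the sidecar could also reach the web container simply at localhost:8080.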

As for the benefits of this design, you will come to appreciate them as you study Kubernetes more deeply.



Service – service discovery: finding each Pod

The Deployment above is created and the Pods are up and running. How do we access our application?

The most obvious way is Pod IP + port, but what if there are many instances? Fine: collect all the Pod IPs, configure them in a load balancer, and round-robin across them. However, as mentioned above, a Pod may die, and even the Node it lives on may go down; Kubernetes will then automatically create a new Pod for us. Pods are also rebuilt each time the service is updated, and each Pod has its own IP. So Pod IPs are unstable and change frequently.

To cope with this change, we turn to another concept: the Service, designed specifically to solve this problem. No matter how many Pods a Deployment has, and no matter whether they are updated, destroyed, or rebuilt, the Service can always discover and maintain their IP list. The Service also provides several kinds of external entry points:

ClusterIP: a virtual IP address for the Service that is unique within the cluster. Through this IP we can evenly access the back-end Pods without caring about the individual Pods.

NodePort: The Service starts a port on each Node in the cluster. We can access the Pod through this port on any Node.

LoadBalancer: on top of NodePort, creates an external load balancer in a public-cloud environment and forwards requests to NodeIP:NodePort.

ExternalName: forwards the service to the specified domain name (set by spec.externalName) by returning a DNS CNAME record.



Well, it looks like the service-access problem has been solved. But have you ever wondered how a Service knows which Pods it is responsible for? How does it track those Pods as they change?

The most obvious approach would be to use the Deployment's name, with one Service per Deployment. That would certainly work. However, Kubernetes uses a more flexible and general design: Labels. By attaching labels to Pods, a Service can be responsible for the Pods of a single Deployment or of several Deployments. Deployment and Service are thus decoupled through Labels.
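A sketch of how this looks in configuration (the label key and value here are assumptions for illustration): the Deployment stamps a label onto every Pod it creates, and the Service selects Pods by that label rather than by Deployment name.

```yaml
# Fragment of a Deployment's Pod template:
template:
  metadata:
    labels:
      app: my-app          # hypothetical label attached to each Pod
---
# A Service selecting those Pods:
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc         # hypothetical name
spec:
  selector:
    app: my-app            # matches every Pod carrying this label,
                           # regardless of which Deployment created it
  ports:
  - port: 80
    targetPort: 8080
```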



RollingUpdate – rolling upgrades

A rolling upgrade is the most typical service-upgrade scheme in Kubernetes. The main idea is to gradually increase the number of instances of the new version of the application while reducing the number of instances of the old version, until the new version reaches the desired count and the old version reaches zero, at which point the rolling upgrade is complete. The service remains available throughout the upgrade, and you can roll back to the old version at any time.
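In a Deployment, this behavior can be tuned through the rolling-update strategy; a minimal sketch (the replica count and limits are illustrative assumptions):

```yaml
# Fragment of a Deployment spec:
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra Pod above the desired count
      maxUnavailable: 1    # at most 1 Pod may be unavailable during the upgrade
```

A rollback can then be triggered with `kubectl rollout undo deployment/<name>`.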



Kubernetes – Getting started practice

Deployment practice

First, write the Deployment configuration file (the tomcat image is used here):

[ app.yaml ]
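The original file is not included here; a minimal sketch of what such an app.yaml might contain (the name, replica count, and resource values are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-demo        # hypothetical name
spec:
  replicas: 2              # number of instances to run
  selector:
    matchLabels:
      app: tomcat-demo
  template:
    metadata:
      labels:
        app: tomcat-demo   # label the Service will select on
    spec:
      containers:
      - name: tomcat
        image: tomcat:9    # illustrative image tag
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
```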

Create the application through the kubectl command: `kubectl create -f app.yaml`. You can then check its Pods with `kubectl get pods`.

Service practice

The Deployment created above does not by itself give us a stable way to access the application, so we will create a Service as the entry point.

Start by writing the Service configuration:

[ service.yaml ]
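Again, the original file is not shown; a minimal sketch of a matching service.yaml, assuming the Deployment's Pods carry a hypothetical label `app: tomcat-demo`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc         # hypothetical name
spec:
  selector:
    app: tomcat-demo       # must match the Pod labels in the Deployment
  ports:
  - port: 80               # Service (ClusterIP) port
    targetPort: 8080       # container port inside the Pod
```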


Create the Service: `kubectl create -f service.yaml`.

Access the service

The back-end application can then be accessed from any node in the cluster through the ClusterIP, with requests load-balanced evenly across the Pods.