This is the 21st day of my participation in the August Text Challenge

Introduction to Kubernetes

Kubernetes is Google’s container orchestration and scheduling engine, based on its internal Borg system. As one of the most important projects of the Cloud Native Computing Foundation (CNCF), its goal is not merely to be an orchestration system but to provide a specification: Kubernetes lets you describe the architecture of a cluster and define the desired final state of your services, and it then automatically achieves and maintains that state.

To put it plainly, Kubernetes lets users declare applications in configuration files written in YAML or JSON, either generated by tools/code or submitted directly to the Kubernetes API. The configuration file describes the state the user wants the application to stay in. Whatever happens to individual hosts in the Kubernetes cluster, it does not affect the declared state of the application. You can also change the application’s state by changing the configuration file or calling the Kubernetes API.

This means that developers don’t need to care about the number of nodes, where containers run, or how they communicate. They also don’t need to manage hardware optimizations or worry about node failures (nodes will fail, following Murphy’s law): when new nodes are added to the cluster, Kubernetes schedules work onto them, and when a node dies, Kubernetes moves its containers to other running nodes, making maximum use of the available resources.

Summary: Kubernetes is a container control platform that abstracts all the underlying infrastructure (the infrastructure used by containers to run).

Kubernetes — Bringing container applications into large-scale industrial production.

Pod

Kubernetes has many technical concepts and corresponding API objects. The most important and basic object is the Pod. A Pod is the smallest deployable unit in a Kubernetes cluster; it runs a deployed application and can contain multiple containers.

A Pod is designed so that the containers inside it share a network address and file system, allowing services to be combined simply and efficiently through interprocess communication and file sharing. Pod support for multiple containers is the most fundamental design concept of Kubernetes. For example, if you run an operating system distribution software repository, one Nginx container distributes the software while another container synchronizes it from the source repository. The images of the two containers are unlikely to be developed by the same team, yet together they provide one microservice. In this case, different teams develop and build their own container images, which are combined into a microservice at deployment time to serve external requests. In most cases, however, we run only one container in a Pod, as is the case with the examples in this article.

Another feature of the Pod abstraction is that it does not depend on Docker: if we want to use another container runtime, such as rkt, we can.

Docker is the most commonly used container runtime in Kubernetes, but Pod also supports other container runtimes.

In summary, the main features of Pod include:

  • Each Pod can have a unique IP address within the Kubernetes cluster;
  • Pods can have multiple containers. These containers share the same port space, so they can communicate through localhost (naturally, they cannot bind the same port); containers in other Pods are reached via the Pod IP;
  • Containers within a Pod share the same volume, IP, port space, and IPC namespace.
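As a sketch of these properties, a hypothetical multi-container Pod might look like the following (the Pod name and images are illustrative assumptions, not from this article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: repo-server            # hypothetical name
spec:
  containers:
    - name: nginx              # serves the software repository
      image: nginx
      ports:
        - containerPort: 80
    - name: sync               # syncs packages from the upstream repository
      image: example/repo-sync # placeholder image
  # Both containers share the Pod's IP, port space, volumes, and IPC namespace,
  # so nginx can reach the sync container via localhost.
```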

Service

A Service resource in Kubernetes acts as an entry point to a set of Pods that provide the same service; it is responsible for service discovery and for load balancing between those Pods.

In a Kubernetes cluster, different Pods provide different services, so how does a Service know which Pods it should route to?

This problem is solved with labels, in two steps:

  • Label all the Pods that the Service should handle.
  • Define a label selector in the Service, which matches all Pods carrying the corresponding labels.
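As a sketch, assuming Pods carrying a hypothetical label app: demo, the two steps pair up like this:

```yaml
# Step 1: the Pod (or Pod template) carries the label
metadata:
  labels:
    app: demo
---
# Step 2: the Service selects Pods by that label
apiVersion: v1
kind: Service
metadata:
  name: demo-svc       # hypothetical name
spec:
  selector:
    app: demo          # matches every Pod labeled app: demo
  ports:
    - port: 80
```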

Deployment

In practice, we rarely use Pods directly in Kubernetes; instead we use another resource, the Deployment.

A Deployment represents an update operation by the user on the Kubernetes cluster: creating a new service, updating an existing one, or rolling a service forward or back. Deployment helps with the one constant in every application’s life: change. Only dead applications stay unchanged; otherwise new requirements keep emerging, and more code is developed, packaged, and deployed, with the potential for error at every step. A Deployment automates moving an application from one version to another while keeping the service uninterrupted, and lets us quickly roll back to the previous version if something unexpected happens.
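For example, assuming a Deployment named demo-web-deployment (the name used in the manifest below), the rollout and rollback workflow can be driven with standard kubectl commands; these require a running cluster:

```shell
# Watch an in-progress rollout until it completes
kubectl rollout status deployment/demo-web-deployment

# Inspect the revision history of the Deployment
kubectl rollout history deployment/demo-web-deployment

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/demo-web-deployment
```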

apiVersion: apps/v1
kind: Deployment                  # Define the Kubernetes resource type as Deployment
metadata:
  name: demo-web-deployment       # Define the name of the resource
  labels:
    app: demo-web-deployment
spec:                             # Define the desired state of the resource
  replicas: 2                     # Define how many Pods we want to run; in this case 2
  selector:
    matchLabels:                  # Define which Pods the Deployment matches
      app: demo-web
  minReadySeconds: 5              # Optional; minimum seconds before a Pod is considered available. Default is 0
  strategy:                       # Specify the strategy to use when deploying an updated version
    type: RollingUpdate           # Strategy type; RollingUpdate keeps the service uninterrupted during deployment
    rollingUpdate:
      maxUnavailable: 1           # Maximum number of Pods allowed to be unavailable during the update (relative to replicas)
      maxSurge: 1                 # Maximum number of extra Pods allowed to be created during the update (relative to replicas)
  template:                       # The Pod template; similar to a standalone Pod definition
    metadata:
      labels:                     # Pods created from this template carry this label, matching matchLabels above
        app: demo-web
    spec:
      containers:
        - name: web
          image: rainingnight/aspnetcore-web
          imagePullPolicy: Always # Default is IfNotPresent; Always re-pulls the image on every deployment (otherwise a locally cached image of the specified tag is reused)
          ports:
            - containerPort: 80

Kubernetes installation

Disabling the Firewall

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld

Disable selinux

[root@localhost ~]# setenforce 0
# setenforce 0 only lasts until reboot; set SELINUX=disabled in /etc/selinux/config to make it permanent

Disabling swap Partitions

swapoff -a
# Comment out the swap line in /etc/fstab so swap stays disabled after reboot
vim /etc/fstab

Install docker-ce

# Install the tools required by Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure the Aliyun docker source
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install the specified version of docker-ce
yum install -y docker-ce-18.09.9-3.el7
# Start docker
systemctl enable docker && systemctl start docker

# Set docker image acceleration
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"]
}
EOF

Install K8S

# Configure the yum source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm and kubectl
yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0

Set kubelet to start automatically

systemctl enable kubelet && systemctl start kubelet

Set automatic completion

echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile

Initialize a node

kubeadm init --kubernetes-version=1.15.0 --image-repository registry.aliyuncs.com/google_containers

If errors occur, be sure to clear the configuration and re-initialize it

kubeadm reset -f

After initialization, you’ll notice that many more Docker images have been pulled (check with docker images)

Run the prompted command

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Viewing Cluster Information

kubectl cluster-info

Be sure to run the commands prompted after initialization; otherwise kubectl will not find its configuration and will report an error

Install the dashboard

Refer to the official documentation to quickly install Dashboard v2.0.0 by executing the following commands:

kubectl apply -f "https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml"

The above file may not download normally in China for well-known reasons. A simple workaround is to open recommended.yaml on GitHub, copy its contents to a local file, and apply that instead:

kubectl apply -f kubernetes-dashboard.yaml

When pasting into Vim, auto-indentation can insert extra # characters and garble the content.

To avoid this, run :set paste in Vim before pasting.

kubectl proxy

Visit http://<IP address>:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
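Note that Dashboard v2.0 requires a bearer token to sign in. A minimal sketch for creating a login account (the name admin-user is an assumption, not part of the official manifest):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user                 # hypothetical account name
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin              # grants full cluster access; fine for a lab, too broad for production
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```

After applying it, the token can be read from the account’s secret with kubectl -n kubernetes-dashboard describe secret, and pasted into the Dashboard login page.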

A Kubernetes load balancing example

Write the nginx.yaml file

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80


Use files for deployment

kubectl apply -f nginx.yaml

Once the number of replicas is specified, if a Pod is deleted and the desired count is no longer met, Kubernetes automatically creates a new Pod

To observe this directly, modify the web content served by each Pod so their responses are distinguishable

View the IP address exposed by the service

Requests to the port exposed by this Service are load balanced across the Pods by default
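A sketch of how to verify this (the Service name my-nginx-svc comes from the manifest above; the Service IP is environment-dependent, so <service-ip> below is a placeholder):

```shell
# List the Service and note its cluster/external IP
kubectl get svc my-nginx-svc

# Confirm the Service endpoints cover all three Pods
kubectl get endpoints my-nginx-svc

# Repeated requests are distributed across the Pods
for i in 1 2 3 4 5 6; do curl -s http://<service-ip>/; done
```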

References

[Kubernetes Primary 1] : Deploy your first ASP.NET Core application to a K8S cluster

Recommended reading

Linux Shell programming basics!

Linux Sudo and Sudoers

Samba server deployed on Linux!

Linux Zabbix 5.0 installation details!

Docker docker-compose details!

Docker Dockerfile file details!