Three, Kubernetes basics

Everything below is meant to build a basic understanding first; we will cover each topic in detail later

0. Basic knowledge

The figure above shows a K8S cluster with one master (master node) and six workers (worker nodes)

Docker is the container runtime on each worker node

Kubelet controls the starting and stopping of all containers to keep its node running normally, and handles the node's interaction with the master

1. Deploy an application

kubectl create --help
# create a deployment named my-nginx that runs the nginx image
kubectl create deployment my-nginx --image=nginx
# create a deployment with a command
kubectl create deployment my-nginx --image=nginx -- date
# create a deployment named my-nginx that runs the nginx image with 3 replicas
kubectl create deployment my-nginx --image=nginx --replicas=3
# create a deployment named my-nginx that runs the nginx image and expose port 80
kubectl create deployment my-nginx --image=nginx --port=80
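A quick check that the Deployment and its Pods exist (kubectl create deployment labels the Pods app=my-nginx automatically):

# list the Deployment and the Pods it created
kubectl get deployments
kubectl get pods -l app=my-nginx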

(2) Deployment

  • In K8S, publishing a Deployment creates an instance (Docker container) of an application (Docker image); the instance is held in a concept called a Pod, the smallest manageable unit in K8S.
  • Once a Deployment is published to the K8S cluster, it instructs K8S how to create and update instances of the application, and the master node schedules those instances onto specific nodes in the cluster.
  • The Kubernetes Deployment Controller continuously monitors the application instances after they are created. If the worker node running an instance is shut down or deleted, the controller creates a replacement instance on another worker node in the cluster with the best available resources. This provides a self-healing mechanism for machine failure and maintenance (a quick demo follows this list).
  • In the pre-container-orchestration era, applications were commonly started by installation scripts, but there was no way to recover them from machine failure. Kubernetes Deployments take a completely different approach to application management: they create application instances and keep the desired number of instances running across the cluster nodes.
  • The Deployment lives on the master node. When it is published, the master selects suitable worker nodes to create the Containers (the cubes in the figure), and each Container is wrapped in a Pod (the blue circle).
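A minimal way to observe the self-healing described above, assuming the my-nginx Deployment from the previous section is running (the Pod name below is a placeholder):

# note the current Pod names
kubectl get pods -l app=my-nginx
# simulate an instance dying
kubectl delete pod my-nginx-xxxxxx
# a replacement Pod appears, with a new name and a new IP
kubectl get pods -l app=my-nginx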

2. Application exploration

  • Understand Kubernetes Pods (Container Set)
  • Understand Kubernetes Nodes
  • Some troubleshooting

After a Deployment is created, K8S creates a Pod (container group) to hold the application instance (container).

1. Learn about Pod

A Pod (container group) is an abstraction in K8S that holds one or more Containers (the cubes in the figure) and the resources those containers share. These resources include:

  • Shared storage, called Volumes (the purple cylinder in the figure)
  • Networking: each Pod (container group) has a unique IP address within the cluster, shared by all containers in the Pod (container group)
  • Basic information about each container, such as its image version and exposed ports

A Pod (container group) is the most basic unit on a K8S cluster. When we create a Deployment on K8S, Pods containing containers are created on the cluster (rather than containers being created directly). Each Pod is bound to the worker node (Node) it is scheduled on and stays there until it is terminated or deleted. If a Node fails, identical Pods are run on other available Nodes in the cluster (containers created from the same image, with the same configuration, but with different IP addresses and different Pod names).

TIP

Important:

  • A Pod is a set of containers (one or more application containers) together with shared storage (Volumes), an IP address, and information about how to run the containers.
  • If multiple containers are tightly coupled and need to share resources such as disks, they should be deployed in the same Pod (container group); a sketch of such a Pod follows this list.
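A minimal sketch of such a Pod: two containers sharing an emptyDir volume (all names here are illustrative, not from the original text):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-data        # a volume shared by both containers
    emptyDir: {}
  containers:
  - name: nginx              # serves whatever is in the shared volume
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-writer     # writes content into the shared volume
    image: busybox
    command: ["/bin/sh", "-c", "echo hello from the sidecar > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data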

2. Learn about Node

Pods (container groups) always run on Nodes. A Node is a machine in a Kubernetes cluster, either a virtual or a physical machine. Each Node is managed by the master. A Node can hold multiple Pods, and the Kubernetes master automatically schedules Pods to the best Node based on the resources available on each one.

Each Kubernetes Node runs at least:

  • Kubelet, the process responsible for communication between the master node and the worker node; it manages the Pods (container groups) and the containers running inside them.
  • Kube-proxy, responsible for traffic forwarding.
  • A container runtime environment (such as Docker), responsible for downloading images and creating and running containers (the command after this list shows each Node's runtime).
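A quick way to confirm which runtime each Node uses:

# the CONTAINER-RUNTIME column shows each Node's runtime
kubectl get nodes -o wide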

3. Troubleshooting

  • Kubectl get – Displays a list of resources

    • # kubectl get <resource type>
      # get the resource list of Deployment type
      kubectl get deployments
      # get the resource list of Pod type
      kubectl get pods
      # get the resource list of Node type
      kubectl get nodes

    • # get deployments in all namespaces
      kubectl get deployments -A
      kubectl get deployments --all-namespaces
      # get deployments in the kube-system namespace
      kubectl get deployments -n kube-system

    • ##### not all objects live in a namespace
      # resources that are in a namespace
      kubectl api-resources --namespaced=true
      # resources that are not in a namespace
      kubectl api-resources --namespaced=false
  • Kubectl describe – Displays detailed information about the resource

    • # kubectl describe <resource type> <resource name>
      # view details of a Pod
      kubectl describe pod nginx-xxxxxx
      # view details of a Deployment
      kubectl describe deployment my-nginx
  • Kubectl logs – View the logs printed by a container in a Pod (similar to the command docker logs)

    • # kubectl logs <pod name>
      # note: only what the container prints to stdout is captured;
      # if the application does not write to stdout, the output appears empty
      kubectl logs -f nginx-pod-xxxxxxx
  • Kubectl exec – Execute commands inside a Pod's container (similar to the command docker exec)

    • # enter the container's shell
      kubectl exec -it nginx-pod-xxxxxx -- /bin/bash
      ## since version 1.21.0, omitting "--" before the command is deprecated

4. kubectl run

You can also run a single Pod directly

## kubectl run --help
kubectl run nginx --image=nginx

3. Making the application visible outside the cluster

1. Objectives

  • Learn about Service in Kubernetes
  • How Labels and Label Selectors associate objects with a Service
  • Expose the application with Service outside the Kubernetes cluster

2. Kubernetes Service Overview

  • Kubernetes Pods are ephemeral.

  • Pods have a life cycle. When a worker Node dies, the Pods running on that Node die with it.

  • A ReplicaSet automatically drives the cluster back to its target state by creating new Pods, keeping the application running.

  • Kubernetes’ Service is an abstraction layer that defines a logical set of pods and supports external traffic exposure, load balancing, and Service discovery for these pods.

    • Services enable loose coupling between dependent Pods. Like other Kubernetes objects, a Service is defined in YAML (preferred) or JSON. The set of Pods targeted by a Service is usually determined by a LabelSelector (see below for why you might want a Service whose spec does not include a selector).

    • Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. A Service allows your application to receive traffic. A Service can be exposed in different ways by specifying a type in the ServiceSpec (see the fragment after this list):

      • ClusterIP (default) – exposes the Service on an internal IP of the cluster. The Service is then reachable only from within the cluster.
      • NodePort – uses NAT to expose the Service on the same port of each selected Node in the cluster. Use <NodeIP>:<NodePort> to access the Service from outside the cluster. A superset of ClusterIP.
      • LoadBalancer – creates an external load balancer in the current cloud (if supported) and assigns a fixed external IP to the Service. A superset of NodePort.
      • ExternalName – exposes the Service by returning a CNAME record for an arbitrary name (specified by spec.externalName). No proxy is used. Requires kube-dns v1.7 or later.
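A fragment showing where the type is set in a ServiceSpec (a sketch, reusing the my-nginx name from earlier):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: NodePort        # ClusterIP (default) / NodePort / LoadBalancer / ExternalName
  selector:
    app: my-nginx       # the Pods this Service routes to
  ports:
  - port: 80            # the Service's port inside the cluster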

3. Service and Label

A Service routes traffic across a set of Pods. Services are the abstraction that allows Pods to die and replicate in Kubernetes without affecting the application. Discovery and routing between dependent Pods (such as the front-end and back-end components of an application) are handled by the Kubernetes Service.

A Service matches a set of Pods using Labels and Selectors, the grouping primitives that allow logical operations on objects in Kubernetes. A Label is a key/value pair attached to an object and can be used in a variety of ways (the commands after this list show labels and selectors in action):

  • Designate objects for development, test, and production
  • Embed version tags
  • Classify objects using tags
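A few label and selector commands, sketched against the my-nginx Pods from earlier (the Pod name and label values are illustrative):

# attach a label to a Pod
kubectl label pod my-nginx-xxxxxx env=dev
# select objects by label
kubectl get pods -l env=dev
kubectl get pods -l 'env in (dev,test)'
# show all labels
kubectl get pods --show-labels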

4. kubectl expose

kubectl expose deployment tomcat6 --port=8912 --target-port=8080 --type=NodePort
## --port: the Service's port, 8912
## --target-port: the Pod container's port, 8080
## the node port is assigned randomly (from the 30000-32767 range by default)

# find the assigned node port, then access the application
kubectl get svc
curl <NodeIP>:<NodePort>
## repeated requests also demonstrate the load balancing across Pods

4. Scaling the application

Objectives

  • Scale an application with kubectl
  • Scale a Deployment

We created a Deployment and provided access to its Pod through a Service. The Deployment we published created only one Pod to run our application. As traffic increases, we need to scale the application to meet the system's performance requirements.

# scale the deployment to 3 replicas
kubectl scale --replicas=3 deployment tomcat6
# watch the Pods being created
watch kubectl get pods -o wide
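Scaling down works the same way. As a side note (not from the original text), kubectl can also adjust the replica count automatically once a metrics server is installed:

# scale back down
kubectl scale --replicas=1 deployment tomcat6
# autoscale between 1 and 5 replicas at 80% CPU (requires a metrics server)
kubectl autoscale deployment tomcat6 --min=1 --max=5 --cpu-percent=80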

5. Perform the rolling upgrade

Objectives

  • Perform rolling updates using Kubectl

Rolling updates progressively replace Pod instances with new instances, allowing a Deployment to be updated with zero downtime.

As with application scaling, if a Deployment is exposed, the Service load-balances traffic only to available Pods during the update. An available Pod is an instance that is ready to serve application users.

Rolling updates allow the following operations:

  • Promoting applications from one environment to another (via container image updates)
  • Roll back to a previous version
  • Continuous integration and continuous delivery of applications without downtime

# application upgrade: tomcat:alpine -> tomcat:jre8-alpine
kubectl set image deployment.apps/tomcat6 tomcat=tomcat:jre8-alpine

# view the rollout history
kubectl rollout history deployment.apps/tomcat6
kubectl rollout history deploy tomcat6

### roll back to a specified revision
kubectl rollout undo deployment.apps/tomcat6 --to-revision=1
kubectl rollout undo deploy tomcat6 --to-revision=1
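A useful addition (not in the original text): you can watch the progress of the current rollout:

# blocks until the rollout completes or fails
kubectl rollout status deployment.apps/tomcat6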

6. Doing the above with configuration files

1. Deploy an application

apiVersion: apps/v1   # API version; check available versions with kubectl api-versions
kind: Deployment      # the type of resource being created, here a Deployment
metadata:             # metadata describing this Deployment
  name: nginx-deployment  # the Deployment's name
  labels:             # labels can be used to locate one or more resources; each is a key and a value
    app: nginx        # the key is app and the value is nginx
spec:                 # the desired state of this Deployment in K8S
  replicas: 1         # create 1 application instance with this Deployment
  selector:           # label selector, used together with the labels above
    matchLabels:
      app: nginx
  template:           # the template for the Pods this Deployment creates
    metadata:         # Pod metadata
      labels:
        app: nginx
    spec:             # Pod spec
      containers:     # the containers to create, analogous to docker containers
      - name: nginx   # the container's name
        image: nginx:1.7.9  # the image to run
        ports:
        - containerPort: 80  # the container is accessible on port 80 by default

kubectl apply -f xxx.yaml

2. Expose the application

apiVersion: v1
kind: Service
metadata:
  name: nginx-service  # the Service's name
  labels:              # the Service's own labels
    app: nginx
spec:                  # defines the Service: how it selects Pods and how it is accessed
  selector:            # label selector
    app: nginx         # select Pods carrying the label app: nginx
  ports:
  - name: nginx-port   # the port's name
    port: 80           # other container groups in the cluster access the Service through port 80
    nodePort: 32600    # the Service is reachable through port 32600 of any node
    targetPort: 80     # traffic is forwarded to port 80 of the Pod's container
  type: NodePort       # Service type: ClusterIP / NodePort / LoadBalancer
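Applying and testing it, assuming the manifest is saved as service.yaml (the node IP is a placeholder):

kubectl apply -f service.yaml
# confirm the NodePort
kubectl get svc nginx-service
# reach the Service from outside the cluster
curl <NodeIP>:32600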

3. Scaling up and down

Just change the replicas property in deployment.yaml

When done, run kubectl apply -f xxx.yaml

4. Rolling upgrades

Modify the image attribute (the image name and tag) in deployment.yaml

When done, run kubectl apply -f xxx.yaml

All of the above can also be done by editing the live objects directly with kubectl edit deploy/service; the changes take effect automatically once saved.
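For example, with the objects created above:

# opens the live object in $EDITOR; changes are applied on save
kubectl edit deployment nginx-deployment
kubectl edit service nginx-service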

Four, other

1. Check which Docker versions a Kubernetes release supports

github.com/kubernetes/… Check the CHANGELOG there and search for the supported Docker versions.

2. Deprecating Dockershim

kubernetes.io/…/blog/202…

  • Using containerd: kubernetes.io/…/docs/set…
  • Configuring Docker: kubernetes.io/…/docs/set…

3. Deploy the dashboard

github.com/kubernetes/…

Change the dashboard Service to type: NodePort so it can be reached from outside the cluster:

type: NodePort
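The token command below greps for an admin-user secret; it assumes an admin-user ServiceAccount bound to the cluster-admin role, which the dashboard project's documentation creates roughly like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard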

# access test: a token is required on every login
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

4. Master initialization logs

[root@i-iqrlgkwc ~]# kubeadm init \
> --apiserver-advertise-address=10.170.11.8 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
> --kubernetes-version v1.21.0 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.170.11.8]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-01 localhost] and IPs [10.170.11.8 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-01 localhost] and IPs [10.170.11.8 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 66.504822 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: os234q.tqr5fxmvapgu0b71
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
​
Your Kubernetes control-plane has initialized successfully!
​
To start using your cluster, you need to run the following as a regular user:
​
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
​
Alternatively, if you are the root user, you can run:
​
  export KUBECONFIG=/etc/kubernetes/admin.conf
​
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
​
Then you can join any number of worker nodes by running the following on each as root:
​
kubeadm join 10.170.11.8:6443 --token os234q.tqr5fxmvapgu0b71 \
    --discovery-token-ca-cert-hash sha256:68251032e1f77a7356e784bdeb8e1f7f728cb0fb31c258dc7b44befc9f516f85 
​