Purpose
As the number of deployed components and the size of data centers grow, configuring, managing, and keeping systems up and running becomes complicated and difficult. It gets increasingly hard to decide where to deploy components while still achieving high resource utilization and keeping hardware costs down. Automated scheduling, configuration, supervision, and failure handling are exactly the pain points Kubernetes solves.
Benefits
It provides a consistent environment for applications, helps surface problems earlier, and makes continuous delivery easier.

Components

The control plane controls the cluster. It consists of components that can be deployed on multiple nodes for high availability:

Kubernetes API server The bridge through which the control plane components communicate
scheduler Schedules applications, i.e. assigns a worker node to each deployable component
controller manager Performs cluster-level functions such as replicating components, keeping track of worker nodes, handling node failures, etc.
etcd A distributed data store that persistently stores the cluster configuration
The worker nodes are the machines that run the containerized applications:
Container runtime Docker or another container runtime, which runs the containers
kubelet Talks to the API server and manages the containers on its own node
kube-proxy Load-balances network traffic between application components
Container life cycle example
docker build -t kubia . Build an image named kubia from the Dockerfile in the current directory
docker images | grep kubia List local images and filter for kubia
docker run --name kubia-container -p 8080:8080 -d kubia Run the image as a detached container, mapping port 8080
curl localhost:8080 Access the application
docker inspect kubia-container Show detailed information about the container
docker exec -ti kubia-container bash Open an interactive shell inside the running container
docker stop kubia-container Stop the container
docker rm kubia-container Remove the container
docker tag kubia mingch94/kubia Tag the image with an additional name
docker push mingch94/kubia Push the image to the registry
Environment preparation
yum install bash-completion Install bash-completion
source <(kubectl completion bash) Enable kubectl auto-completion in the current shell
alias k=kubectl Set an alias for kubectl
source <(kubectl completion bash | sed s/kubectl/k/g) Make auto-completion also work for the alias k
alias kcd='k config set-context $(k config current-context) --namespace' Alias for switching the namespace of the current context
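With the kcd alias defined, switching the namespace of the current context becomes a one-liner, for example (using the custom-namespace created further below):
kcd custom-namespace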
Cluster operations
kubectl cluster-info View cluster status
kubectl get nodes List cluster nodes
kubectl describe node docker-desktop View detailed information about a cluster node
k run kubia --image=mingch94/kubia --port=8080 Deploy the application
k get po List pods
k describe po kubia View pod details
k apply -f k8s.io/examples/co… Create the rc
k get rc View rc resources
k expose po kubia --type=LoadBalancer --name kubia-http Create a service to expose the pod
k get svc View service objects
curl localhost:8080 Access the service
k scale rc kubia --replicas=3 Scale the pods horizontally to 3 replicas
k get pods -o wide Show additional columns
k get po kubia -o yaml View the YAML definition of an existing pod
k explain po Explain the pod definition
k explain pod.spec View the spec attributes
k create -f kubia-manual.yaml Create a pod from a YAML file
kubectl logs kubia-manual -c kubia View the log of the kubia container in the pod
k port-forward kubia-manual 8888:8080 Forward local port 8888 to port 8080 of the pod
k get po --show-labels List pods together with all their labels
k get po -L creation_method,env Show the specified labels in their own columns
k label po kubia-manual creation_method=manual Add a label
k label po kubia-manual-v2 env=debug --overwrite Modify an existing label
k get po -l creation_method=manual List pods with a specific label value
k get po -l env List pods that have the env label
k get po -l '!env' List pods that do not have the env label (other operators: in, notin)
k label node docker-desktop gpu=true Add the specified label to a node
k annotate pod kubia-manual mycompany.com/someannotat… bar" Add an annotation
k get ns List all namespaces in the cluster
k get po -n kube-system List pods in the specified namespace
k create ns custom-namespace Create a namespace
k config get-contexts View the current context and the namespace it uses
k delete pod kubia-gpu Delete a pod by name
k delete po -l creation_method=manual Delete pods by label
k delete ns custom-namespace Delete the namespace and all pods in it
k delete po --all Delete all pods
k delete all --all Delete all resources in the current namespace
k logs kubia-liveness --previous Get the log of the previous (crashed) container
k edit rc kubia Edit the rc
k delete rc kubia --cascade=false Delete only the rc and leave its pods running
k get rs List ReplicaSet (rs) resources
k get ds List DaemonSet (ds) resources
k get jobs List Job resources
k exec kubia-p5mz5 -- curl -s http://10.96.72.201 Execute a command inside a pod; -- marks the end of kubectl's own options and can be omitted if the executed command takes no dashed arguments
k exec -it kubia-72v4k -- bash Enter the pod interactively to run commands
k get endpoints View Endpoints resources
k get nodes -o json Show node information as JSON
k get po --all-namespaces Show pods in all namespaces
k get ingresses List Ingress resources
k run dnsutils --image=tutum/dnsutils Run a pod containing DNS lookup tools
k proxy Start a local proxy to the API server; requesting the root path lists the currently supported APIs
k create -f kubia-deployment-v1.yaml --record Create a Deployment resource; --record stores the command in the revision history
k patch deployment kubia -p '{"spec": {"minReadySeconds": 10}}' Require a newly created pod to stay ready for at least 10 seconds before it counts as available
k set image deployment kubia nodejs=luksa/kubia:v2 Change the image of the container named nodejs to luksa/kubia:v2
k rollout status deployment kubia Watch the progress of the rollout
k rollout undo deployment kubia Roll back to the previous revision
k rollout undo deployment kubia --to-revision=1 Roll back to a specific revision
k rollout pause deployment kubia Pause the rolling update
k rollout resume deployment kubia Resume a paused rolling update
k apply -f kubia-deployment-v3-with-readinesscheck.yaml Modify an object using a full YAML definition

When a pod contains multiple containers, they always run on the same node; a pod never spans multiple worker nodes.

Because you should not run multiple unrelated processes in a single container, a higher-level construct is needed to bind containers together and manage them as a unit: the pod. You can think of a pod as a separate host in the real world.

Containers in the same pod share the same IP address and port space, so they can reach the other containers in the pod through localhost.

The main parts of a pod definition
apiVersion, kind The API version and the type of resource the YAML describes
metadata Includes the name, namespace, labels, and other information about the pod (name, namespace, labels)
spec Contains the actual description of the pod's contents, such as containers and volumes (containers, volumes)
status Contains the current information about the running pod, such as its condition, container descriptions and statuses, and internal IP (conditions, containerStatuses, phase, hostIP)
# Kubernetes API version v1
apiVersion: v1
# resource type: Pod
kind: Pod
metadata:
  # the name of the pod
  name: kubia-manual
spec:
  containers:
  # the container image
  - image: luksa/kubia
    # container name
    name: kubia
    ports:
    # the container listens on port 8080
    - containerPort: 8080
      protocol: TCP

Namespaces by themselves do not isolate running objects; whether there is network isolation between namespaces depends on the networking solution deployed with Kubernetes.

If a deployed application should be kept running and healthy automatically, without any manual intervention, do not create pods directly; instead create a resource such as an rc or a Deployment to manage the pods.

A directly created pod is only restarted on failure by the kubelet on its node; if the entire node fails, the pod is not rescheduled onto a new node.

Container probe mechanisms
HTTP GET probe Performs an HTTP GET request against the container's IP address; a response status code of 2xx or 3xx means the probe succeeded
TCP socket probe Tries to open a TCP connection to the specified port of the container; the probe succeeds if the connection is established, otherwise the container is restarted
Exec probe Executes an arbitrary command inside the container and checks the command's exit status code; an exit code of 0 means the probe succeeded
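A liveness probe is declared per container in the pod spec. Below is a minimal sketch of an HTTP GET liveness probe for the kubia image used above; the path and initialDelaySeconds values are only illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    livenessProbe:
      # probe the container with an HTTP GET on port 8080
      httpGet:
        path: /
        port: 8080
      # wait 15 seconds before the first probe
      initialDelaySeconds: 15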

The three elements of a ReplicationController
label selector Determines which pods fall within the rc's scope
replica count Specifies the desired number of pods to run; affects existing pods as well
pod template Used when creating new pod replicas
The rc version
apiVersion: v1
# rc type
kind: ReplicationController
metadata:
  # the name of the rc
  name: kubia
spec:
  # number of copies
  replicas: 3
  # label selector
  selector:
    app: kubia
  # template for creating new pods
  template:
    metadata:
      labels:
        # labels in the template must match the label selector, otherwise the rc would keep creating new pods endlessly
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        ports:
        - containerPort: 8080
Rs version
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchExpressions:
      - key: app
        operator: In
        values:
         - kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
Ds version
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: main
        image: luksa/ssd-monitor
Job version (batch)
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: luksa/batch-job
The service version
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  # Session affinity. Traffic from the same clientIP will be sent to the same POD
  sessionAffinity: ClientIP
  ports:
  # the port this service is available on
  - port: 80
    # the container port the service forwards connections to
    targetPort: 8080
  selector:
    # pods with the app=kubia label belong to this service
    app: kubia
Testing a service from inside the cluster
Create a pod Have the pod send requests to the service's cluster IP and log the response, then check the pod's log
SSH into a cluster node Run curl on the node
kubectl exec Execute the curl command inside an existing pod
Exposing a service to external clients
NodePort Each cluster node opens a port and redirects traffic received on that port to the underlying service
LoadBalancer An extension of the NodePort type; the service becomes accessible through a dedicated load balancer
Ingress Exposes multiple services through a single IP address; operates at layer 7 of the network stack (HTTP)
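As an illustration, a minimal NodePort variant of the kubia service above; the name kubia-nodeport and the value 30123 are only examples:
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    # the port opened on every cluster node; must fall within the cluster's configured NodePort range
    nodePort: 30123
  selector:
    app: kubia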



Several common volume types

For more details see kubernetes.io/docs/concep…

emptyDir A simple empty directory used for storing transient data
hostPath Mounts a specific file or directory from the worker node's file system into the pod; the data is persistent, but the pod can break once it is rescheduled onto another node
gitRepo A volume initialized by checking out the contents of a Git repository
nfs An NFS share mounted into the pod
gcePersistentDisk A Google Compute Engine persistent disk volume
awsElasticBlockStore An AWS Elastic Block Store volume
azureDisk A Microsoft Azure disk volume
configMap, secret, downwardAPI Special volume types that expose certain Kubernetes resources and cluster information to the pod
persistentVolumeClaim A way to use pre-provisioned or dynamically provisioned persistent storage
emptyDir volume usage
apiVersion: v1
kind: Pod
metadata:
  name: fortune
spec:
  containers:
  - image: luksa/fortune
    name: html-generator
    volumeMounts:
    # the volume named html is mounted at /var/htdocs in the container, read-write
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    name: web-server
    volumeMounts:
    # the volume named html is mounted read-only at /usr/share/nginx/html
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    ports:
    - containerPort: 80
      protocol: TCP
  volumes:
  # a single emptyDir volume named html, mounted into both containers above
  - name: html
    emptyDir: {}
gitRepo volume usage
apiVersion: v1
kind: Pod
metadata:
  name: gitrepo-volume-pod
spec:
  containers:
  - image: nginx:alpine
    name: web-server
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    ports:
    - containerPort: 80
      protocol: TCP
  volumes:
  - name: html
    gitRepo:
      repository: https://github.com/chaimcode/kubia-website-example.git
      revision: master
      directory: .
API server usage
k proxy
curl http://localhost:8001

{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps/v1",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2beta1",
    "/apis/autoscaling/v2beta2",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v1beta1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/compose.docker.com",
    "/apis/compose.docker.com/v1alpha3",
    "/apis/compose.docker.com/v1beta1",
    "/apis/compose.docker.com/v1beta2",
    "/apis/coordination.k8s.io",
    "/apis/coordination.k8s.io/v1",
    "/apis/coordination.k8s.io/v1beta1",
    "/apis/events.k8s.io",
    "/apis/events.k8s.io/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/networking.k8s.io/v1beta1",
    "/apis/node.k8s.io",
    "/apis/node.k8s.io/v1beta1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/scheduling.k8s.io",
    "/apis/scheduling.k8s.io/v1",
    "/apis/scheduling.k8s.io/v1beta1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/healthz",
    "/healthz/autoregister-completion",
    "/healthz/etcd",
    "/healthz/log",
    "/healthz/ping",
    "/healthz/poststarthook/apiservice-openapi-controller",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/crd-informer-synced",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/healthz/poststarthook/start-kube-apiserver-admission-initializer",
    "/livez",
    "/livez/autoregister-completion",
    "/livez/etcd",
    "/livez/log",
    "/livez/ping",
    "/livez/poststarthook/apiservice-openapi-controller",
    "/livez/poststarthook/apiservice-registration-controller",
    "/livez/poststarthook/apiservice-status-available-controller",
    "/livez/poststarthook/bootstrap-controller",
    "/livez/poststarthook/ca-registration",
    "/livez/poststarthook/crd-informer-synced",
    "/livez/poststarthook/generic-apiserver-start-informers",
    "/livez/poststarthook/kube-apiserver-autoregistration",
    "/livez/poststarthook/rbac/bootstrap-roles",
    "/livez/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/livez/poststarthook/start-apiextensions-controllers",
    "/livez/poststarthook/start-apiextensions-informers",
    "/livez/poststarthook/start-kube-aggregator-informers",
    "/livez/poststarthook/start-kube-apiserver-admission-initializer",
    "/logs",
    "/metrics",
    "/openapi/v2",
    "/readyz",
    "/readyz/autoregister-completion",
    "/readyz/etcd",
    "/readyz/log",
    "/readyz/ping",
    "/readyz/poststarthook/apiservice-openapi-controller",
    "/readyz/poststarthook/apiservice-registration-controller",
    "/readyz/poststarthook/apiservice-status-available-controller",
    "/readyz/poststarthook/bootstrap-controller",
    "/readyz/poststarthook/ca-registration",
    "/readyz/poststarthook/crd-informer-synced",
    "/readyz/poststarthook/generic-apiserver-start-informers",
    "/readyz/poststarthook/kube-apiserver-autoregistration",
    "/readyz/poststarthook/rbac/bootstrap-roles",
    "/readyz/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/readyz/poststarthook/start-apiextensions-controllers",
    "/readyz/poststarthook/start-apiextensions-informers",
    "/readyz/poststarthook/start-kube-aggregator-informers",
    "/readyz/poststarthook/start-kube-apiserver-admission-initializer",
    "/readyz/shutdown",
    "/version"
  ]
}
Deployment version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchExpressions:
      - key: app
        operator: In
        values:
         - kubia
  template:
    metadata:
      name: kubia
      labels:
        app: kubia
    spec:
      containers:
      - image: luksa/kubia:v1
        name: nodejs
Deployment properties to be aware of
minReadySeconds The minimum time a newly created pod must stay ready before it is treated as available, even if its readiness probe already succeeds
revisionHistoryLimit Limits how many historical revisions are kept
maxSurge How many pod instances are allowed above the desired replica count during a rolling update; defaults to 25%, so with 4 desired replicas there will never be more than 5 pod instances during the rollout
maxUnavailable The maximum number of pods allowed to be unavailable during a rolling update
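In the manifest, minReadySeconds and revisionHistoryLimit sit directly under spec, while maxSurge and maxUnavailable sit under spec.strategy.rollingUpdate. A minimal sketch with illustrative values:
spec:
  minReadySeconds: 10
  revisionHistoryLimit: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # at most one pod above the desired replica count during the rollout
      maxSurge: 1
      # no pod may become unavailable during the rollout
      maxUnavailable: 0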