K8s Quick start

  • Yaml files and installation packages required for cluster setup
  • Gitee address

Introduction

  • Kubernetes (hereinafter referred to as k8s): an open-source system for automating the deployment, scaling, and management of containerized applications
  • Chinese website
  • The Chinese community
  • The official documentation

What is Kubernetes

  • Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services that facilitates declarative configuration and automation. Kubernetes has a large, rapidly growing ecosystem; Kubernetes services, support, and tools are widely available
  • Containers have become popular for a number of advantages. Listed below are some of the benefits of containers:
    • Agile application creation and deployment: Improved ease and efficiency of container image creation compared to using VM images
    • Continuous development, integration, and deployment: Supports reliable and frequent container image builds and deployments with quick and easy rollback due to image immutability
    • Focus on separation of development and operations: Create an application container image at build/release time rather than at deployment time to separate the application from the infrastructure
    • Observability: surfaces not only operating-system-level information and metrics, but also application health and other signals
    • Consistency across development, test, and production environments: runs the same on a laptop as in the cloud
    • Portability across clouds and operating system distributions: runs on Ubuntu, RHEL, CoreOS, on-premises, Google Kubernetes Engine, and anywhere else
    • Application-centric management: raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources
    • Loosely coupled, distributed, elastic, liberated microservices: applications are broken down into smaller, independent parts that can be deployed and managed dynamically – rather than run as a whole on a single, large machine
    • Resource isolation: Predictable application performance
    • Resource utilization: high efficiency and high density

Why use Kubernetes

Containers are a great way to package and run applications. In a production environment, you need to manage the containers in which your applications are running and make sure they don’t go down. For example, if one container fails, you need to start another container. Would it be easier if the system handled this behavior?

This is how Kubernetes solves these problems! Kubernetes provides you with a framework that can run distributed systems flexibly. Kubernetes will meet your scaling requirements, failover, deployment patterns, etc. For example, Kubernetes can easily manage the system’s Canary deployment

Kubernetes offers you:

  • Service discovery and load balancing

    Kubernetes can expose containers using DNS names or their own IP addresses, and if there is a lot of traffic coming into the container, Kubernetes can load balance and distribute network traffic to make the deployment stable

  • Storage orchestration

    Kubernetes allows you to automatically mount storage systems of your choice, such as local storage, public cloud providers, etc

  • Automatic deployment and rollback

    You can use Kubernetes to describe the desired state of your deployed containers, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers

  • Automatic bin packing

    Kubernetes allows you to specify the CPU and memory (RAM) required for each container. When containers specify resource requests, Kubernetes can make better decisions about managing container resources

  • Self-healing

    Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to user-defined health checks, and does not advertise them to clients until they are ready to serve

  • Secret and configuration management

    Kubernetes allows you to store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration
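As a sketch (the names and values here are illustrative, not from the original), a Secret holding a password could be declared and consumed as an environment variable like this:

```yaml
# Hypothetical example: a Secret and a Pod consuming it
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: s3cr3t          # stored base64-encoded by the API server
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_PASSWORD     # injected at container start, not baked into the image
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```

Updating the Secret does not require rebuilding the container image, which is the point made above.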

What is Kubernetes not

Kubernetes is not a traditional, all-encompassing PaaS (platform as a service) system. Because Kubernetes operates at the container level rather than the hardware level, it provides some features commonly found in PaaS products, such as deployment, scaling, load balancing, logging, and monitoring. However, Kubernetes is not monolithic; these default solutions are optional and pluggable. Kubernetes provides the building blocks for a developer platform, but preserves user choice and flexibility where it matters

Kubernetes:

  • There are no restrictions on the types of applications supported. Kubernetes is designed to support an extremely wide variety of workloads, including stateless, stateful, and data-processing workloads. If the application can run in a container, it should run just fine on Kubernetes.

  • Does not deploy source code and does not build applications. Continuous integration, delivery, and deployment (CI/CD) workflows depend on the organization's culture and preferences as well as technical requirements.

  • Does not provide application-level services as built-in services, such as middleware (e.g., message buses), data-processing frameworks (e.g., Spark), databases (e.g., MySQL), caches, or cluster storage systems (e.g., Ceph). Such components can run on Kubernetes and/or can be accessed by applications running on Kubernetes through portable mechanisms (for example, open service brokers).

  • Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proofs of concept, plus mechanisms for collecting and exporting metrics.

  • Does not provide or require a configuration language/system (such as Jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.

  • Does not provide or adopt any comprehensive machine configuration, maintenance, management, or self-healing system.

  • In addition, Kubernetes is more than just an orchestration system; in fact, it eliminates the need for orchestration. The technical definition of orchestration is the execution of a defined workflow: first A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state toward the desired state. It doesn't matter how you get from A to C, and no centralized control is required; this makes the system easier to use and more powerful, robust, resilient, and extensible

Kubernetes working example

  • Automatic deployment

  • Automatic recovery

  • Horizontal scaling

Architectural principles & Core concepts

Holistic master-slave approach

Master Node Architecture

  • Architecture diagram

    • kube-apiserver
      • The API endpoint of k8s and the only entry point for operations on cluster resources
      • Provides authentication, authorization, access control, API registration, and discovery mechanisms
    • etcd
      • etcd is a consistent and highly available key-value store used as the backing store for all Kubernetes cluster data
      • A Kubernetes cluster's etcd database usually needs a backup plan
    • kube-scheduler
      • A master-node component that watches newly created Pods with no assigned node and selects a node for them to run on
      • All cluster operations in k8s are scheduled through the master node
    • kube-controller-manager
      • The component that runs controllers on the master node
      • These controllers include:
        • Node Controller: Notifies and responds to Node failures
        • Replication Controller: maintains the correct number of Pods for every replication controller object in the system
        • Endpoints Controller: Populates Endpoints objects (adding services and pods)
        • Service Account & Token Controllers: Create default accounts and API access tokens for the new namespace

Node Architecture

  • Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment

  • Architecture diagram

    • kubelet
      • An agent that runs on each node in the cluster; it ensures that containers are running in Pods
      • Responsible for maintaining the container lifecycle, as well as managing the Volume (CSI) and network (CNI)
    • kube-proxy
      • Responsible for providing intra-cluster Service discovery and load balancing for services
    • Container Runtime
      • The container runtime environment is the software responsible for running the container
      • Kubernetes supports several container runtimes: Docker, containerd, CRI-O, rktlet, and any implementation of the Kubernetes CRI (Container Runtime Interface)
    • fluentd
      • Is a daemon that helps provide cluster-level logging

Complete concept

  • Overall Architecture

    • Container: a container, such as one started by Docker

    • Pod:

      • k8s uses Pods to organize a group of containers
      • All containers in a Pod share the same network
      • The Pod is the smallest deployment unit in k8s
    • Volume

      • Declares a directory of files accessible to the containers in a Pod
      • Can be mounted at specified paths in one or more of the Pod's containers
      • Supports multiple back-end storage abstractions (local storage, distributed storage, cloud storage…)
    • Controllers: Higher level objects that deploy and manage pods

      • ReplicaSet: ensures the expected number of Pod replicas
      • Deployment: stateless application deployment
      • StatefulSet: stateful application deployment
      • DaemonSet: ensures that all nodes run a copy of a specified Pod
      • Job: one-off task
      • CronJob: scheduled task
    • Deployment:

      • Defines the number of replicas, the version, and so on for a set of Pods

      • Maintains the Pod count through a controller (automatically recovering failed Pods)

      • Controls versions (rolling upgrades, rollbacks, etc.) through a controller with a specified policy

    • Service

      • Defines an access policy for a set of Pods

      • Load-balances Pods and provides a stable access address for one or more Pods

      • Supports several modes (ClusterIP, NodePort, LoadBalancer)

    • Label: a key/value tag used to query and filter object resources

    • Namespace: a namespace, providing logical isolation

      • Logical isolation within a cluster (authentication, resources)
      • Each resource belongs to a namespace
      • All resource names of a namespace must be unique
      • Resource names of different namespaces can be the same
  • API:

    We use the Kubernetes API to operate the entire cluster

    You can use kubectl, a UI, or curl to send HTTP + JSON/YAML requests to the API server and thereby control the k8s cluster. All resource objects in k8s can be defined or described in YAML or JSON files
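For instance, a minimal Pod object expressed in YAML (a sketch; the names and image are illustrative) looks like this:

```yaml
# A minimal Pod definition, the kind of object the API accepts
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.17
    ports:
    - containerPort: 80
```

Submitting this file via `kubectl apply -f` is just a convenience wrapper around the same HTTP request to the API server.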

The process, step by step

  • 1. kubectl submits a request to create an RC (Replication Controller); the request is written to etcd via the API Server
  • 2. The Controller Manager learns of this RC event through the API Server's interface for watching resource changes
  • 3. After analysis, it finds that no corresponding Pod instance exists in the current cluster
  • 4. It creates a Pod object from the Pod template defined in the RC and writes it to etcd via the API Server
  • 5. The Scheduler detects this event, immediately runs its scheduling process to select a host Node for the new Pod, and writes the result to etcd via the API Server
  • 6. The kubelet process running on the target Node detects the new Pod through the API Server and, following its definition, starts the Pod and manages the rest of its lifecycle
  • 7. We then submit, via kubectl, a request to create a new Service mapped to this Pod
  • 8. The Controller Manager queries the associated Pod instances by Label, generates the Service's Endpoints information, and writes it to etcd via the API Server
  • 9. Finally, the kube-proxy process running on every Node watches the Service object and its corresponding Endpoints through the API Server and builds a software load balancer to forward Service traffic to the backend Pods


K8s cluster installation

Kubeadm

  • Kubeadm is a tool provided by the official community for rapidly deploying Kubernetes clusters; it can stand up a Kubernetes cluster with two commands

    Kubeadm installation documentation

    • 1. Create a Master node

      $ kubeadm init
    • 2. Add a Node to the current cluster

      $ kubeadm join <master node IP and port>

Prerequisites

  • One or more machines running CentOS 7.x, x86_64
  • Hardware: 2 GB or more RAM, 2 or more CPUs, 30 GB or more disk
  • Normal network connectivity between all machines in the cluster
  • Internet access, needed to pull images
  • Swap partitions disabled

Deployment steps

  • 1. Install Docker and Kubeadm on all nodes

  • 2. Deploy Kubernetes Master

  • 3. Deploy the container network plug-in

  • 4. Deploy the Kubernetes Node components and add the node to the Kubernetes cluster

  • 5. Deploy the Dashboard web page to view Kubernetes resources visually

Environment preparation

Preparation

  • Clone two VMs in VMware so that three VMs are ready

    # configure static IPs in /etc/sysconfig/network-scripts/ifcfg-ens33
    # 192.168.83.133, 192.168.83.134, 192.168.83.135

Set up the Linux network environment (for all three nodes)

  • Disable the firewall

    systemctl stop firewalld
    systemctl disable firewalld
  • Disable SELinux

    sed -i 's/enforcing/disabled/' /etc/selinux/config
    setenforce 0
  • Disable swap

    # temporarily
    swapoff -a
    # permanently
    sed -ri 's/.*swap.*/#&/' /etc/fstab
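To see what that sed expression does before touching the real /etc/fstab, here is a safe demonstration on a throwaway copy (the path /tmp/fstab.demo is illustrative):

```shell
# Build a sample fstab with a root and a swap entry
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Comment out any line mentioning swap (same expression as above);
# '&' in the replacement stands for the whole matched line
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo

cat /tmp/fstab.demo
```

Only the swap line ends up prefixed with '#'; the root filesystem entry is untouched.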
  • Add the mapping between host names and IP addresses

    hostnamectl set-hostname k8s-node1
    vim /etc/hosts
    # add on every node:
    192.168.83.133 k8s-node1
    192.168.83.134 k8s-node2
    192.168.83.135 k8s-node3

    Ensure that the node name can be pinged from each VM

  • Pass bridged IPv4 traffic to the iptables chains

    cat > /etc/sysctl.d/k8s.conf << EOF   
    net.bridge.bridge-nf-call-ip6tables=1
    net.bridge.bridge-nf-call-iptables=1
    EOF
    
    sysctl --system

Install on all nodes

  • Docker, kubeadm, kubelet, kubectl

    Kubernetes' default CRI (container runtime) is Docker, so install Docker first

  • Add the Aliyun YUM repository

    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
  • Install a specific version

    yum list | grep kube
    # install Kubernetes version 1.17.3
    yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
    # after the installation completes, run:
    systemctl enable kubelet
    systemctl start kubelet

Deploy the k8s master

Initialize the master node

  • Copy the k8s resource files to k8s-node1

  • Initialize the master node

    kubeadm init \
      --apiserver-advertise-address=192.168.83.133 \
      --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
      --kubernetes-version v1.17.3 \
      --service-cidr=10.96.0.0/16 \
      --pod-network-cidr=10.244.0.0/16

    The execution result

    Continue by executing the follow-up commands printed on the console

    # when startup completes, copy this command for later use (*); the token is valid for 2 hours
    kubeadm join 192.168.83.133:6443 --token f9s477.9qh5bg4gd7xy9f67 \
        --discovery-token-ca-cert-hash sha256:8d6007a15b9dfa0940d2ca8fbf2929a108c391541ae59f9c75d66352bdd0aba6
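Among the console's follow-up commands is the standard kubeconfig setup that kubeadm prints after a successful init; a sketch of those steps (the admin.conf path is the kubeadm default, and the file-existence check is added here so the snippet is harmless on a non-master machine):

```shell
# Enable kubectl for the current user, as printed by `kubeadm init`
mkdir -p "$HOME/.kube"
# the admin kubeconfig exists only on a real master node
if [ -f /etc/kubernetes/admin.conf ]; then
  sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
  sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
fi
```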

Install the Pod network plug-in

  • Use kube-flannel.yml in the K8S folder

    kubectl apply -f kube-flannel.yml

    kubectl get pods --all-namespaces

Join the K8S cluster

  • Checking master Status

    kubectl get nodes

  • Node2 and Node3 are added to the cluster

    # the command output when the master finished starting
    kubeadm join 192.168.83.133:6443 --token f9s477.9qh5bg4gd7xy9f67 \
        --discovery-token-ca-cert-hash sha256:8d6007a15b9dfa0940d2ca8fbf2929a108c391541ae59f9c75d66352bdd0aba6

  • If nodes show NotReady, the network may still be coming up; run the following command to monitor Pod progress

    kubectl get pod -n kube-system -o wide

    Wait 3 to 10 minutes until everything is Running, then check again

Getting started: Kubernetes cluster operations

Basic Operating experience

Deploying tomcat

kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
  • View node and Pod information

Simulate tomcat container downtime (manually stop containers)
  • Observe whether the Tomcat container is pulled up again

Simulating the Node3 node outage (shutting down the virtual machine)
  • The detection process may take about 5 minutes. Wait patiently

A simple disaster-recovery test

Exposing access

  • Port 80 of the Pod maps to port 8080 of the container; the Service will proxy port 80 of the Pods

    kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort
  • Access to the service

    kubectl get svc

Dynamic scaling test

  • Obtaining Deployment Information

    kubectl get deployment

  • Scale up

    kubectl scale --replicas=3 deployment tomcat6

    With multiple replicas expanded, you can access tomcat6 on the exposed port of any node

  • Scale down (change the number of replicas)

    kubectl scale --replicas=1 deployment tomcat6

Delete

  • Get all resources

    kubectl get all

  • Remove the deployment

    kubectl delete deployment.apps/tomcat6
  • Remove the service

    kubectl delete service/tomcat6

K8s details

kubectl

kubectl documentation
  • kubectl is a command-line interface for running commands against Kubernetes clusters

    Kubectl command line interface

The resource type
  • The resource type

Formatted output
  • The default output format for all kubectl commands is human-readable plain text. To output details to a terminal window in a specific format, add the -o or --output argument to a supported kubectl command

    Formatted output

Common operations
  • Address of common operation documents
    • kubectl apply – apply or update resources from a file or standard input
    • kubectl get – list one or more resources
    • kubectl describe – display the detailed state of one or more resources, including uninitialized resources by default
    • kubectl delete – delete resources from a file, stdin, or by label selector, name, or resource selector
    • kubectl exec – execute a command in a container of a Pod
    • kubectl logs – print the logs of a container in a Pod
Command reference
  • Kubectl command reference

YAML syntax

YAML template
  • The illustration
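As a sketch, the common top-level structure shared by most Kubernetes objects (the name is illustrative):

```yaml
apiVersion: apps/v1    # which API group/version the object belongs to
kind: Deployment       # what type of object this is
metadata:              # identifying data: name, namespace, labels
  name: example
spec:                  # the desired state; its schema depends on kind
  replicas: 1
```

Every manifest in this article, from the tomcat6 Deployment to the Ingress rule, follows this apiVersion/kind/metadata/spec layout.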

  • View the YAML for the tomcat6 create command above

    kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml

  • Generate a YAML file

    kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6.yaml

    Edit the file to change replicas from 1 to 3: vim tomcat6.yaml

    Run the YAML file

    kubectl apply -f tomcat6.yaml

  • Port exposure can also be done with a YAML file instead of a verbose command

    kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort --dry-run -o yaml

Getting started with operations

Pod
  • A Pod is the basic execution unit of a Kubernetes application, that is, it is the smallest and simplest unit created or deployed in the Kubernetes object model. Pod represents a process running on a cluster.

    A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, unique network IP, and options for controlling how the container should behave. A Pod represents a deployment unit: a single instance of an application in Kubernetes, which may consist of a single container or a small number of tightly coupled containers that share resources.

    Docker is the most common container runtime in Kubernetes Pod, but Pod can support other container runtimes as well.

    A Pod in a Kubernetes cluster can be used for two main purposes:

    • Pods that run a single container. The “one container per Pod” model is the most common Kubernetes use case; in this case the Pod can be thought of as a wrapper around a single container, and Kubernetes manages the Pod rather than managing the container directly.

    • Pods that run multiple containers working together. A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers may form a single cohesive unit of service – one container serving files to the public from a shared volume, while a separate “sidecar” container refreshes or updates those files. The Pod wraps these containers and storage resources together into a single manageable entity. The Kubernetes blog has some additional Pod use case information. Please refer to:

    • Distributed Systems toolkit: a pattern for container composition

    • Container design pattern

    Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (for example, to run multiple instances), you should use multiple Pods, one Pod per application instance. In Kubernetes, this is typically referred to as replication. Replicated Pods are usually created and managed as a group by an abstraction called a controller
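A sketch of the second pattern described above (all names here are illustrative): a web server and a sidecar sharing a volume inside one Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-data
    emptyDir: {}            # scratch volume shared by both containers
  containers:
  - name: web               # serves files from the shared volume
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-refresher # sidecar that periodically rewrites the files
    image: busybox
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 60; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Both containers share the Pod's network and the `shared-data` volume, which is what makes them a single cohesive unit of service.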

The controller
  • The controller

  • A Deployment controller provides declarative updates for Pods and ReplicaSets

Deployment&Service
  • The relationship between

  • In simple terms

    • A Deployment is deployment information saved on the master node
    • A Service exposes Pods and load-balances across them
  • The purpose of a Service: a unified application access entry point; managing a group of Pods so none are lost (service discovery); defining an access policy for a set of Pods

    For now the Service is exposed with NodePort, so the port on every node can reach the Pods; if a node goes down, there will be a problem

Labels and selectors
  • Labels and selectors (analogous to the relationship between id/class attributes and selectors in front-end development)

  • Relationship diagram

Ingress
  • Based on the nginx

  • Start the previous Tomcat6 container using YAML and expose the access port

    tomcat6-deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: tomcat6
      name: tomcat6
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: tomcat6
      template:
        metadata:
          labels:
            app: tomcat6
        spec:
          containers:
          - image: tomcat:6.0.53-jre8
            name: tomcat
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: tomcat6
      name: tomcat6
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: tomcat6
      type: NodePort
    • Run the YAML file

      kubectl apply -f tomcat6-deployment.yaml

    • Tomcat can then be accessed on port 30658 of any of the three node addresses

  • Services discover Pods for association, giving domain-name-based access; the Ingress controller load-balances Pods and supports TCP/UDP layer-4 and HTTP layer-7 load balancing

    • Step 1: Run the file (which is ready in the K8S folder) to create the Ingress Controller

      kubectl apply -f  ingress-controller.yaml

    • Step 2: Create an Ingress rule

      # ingress-tomcat6.yaml
      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: web
      spec:
        rules:
        - host: tomcat6.touch.air.mall.com
          http:
            paths:
            - backend:
                serviceName: tomcat6
                servicePort: 80
      kubectl apply -f ingress-tomcat6.yaml
      kubectl get all

    • Configure local domain-name resolution in C:\Windows\System32\drivers\etc\hosts
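For example, appending a line like the following (using this setup's master IP; any node's IP should also work here) lets the browser resolve the Ingress host rule defined above:

```
192.168.83.133 tomcat6.touch.air.mall.com
```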