Kubernetes evolution

Kubernetes, often abbreviated K8s (there are eight letters between the "K" and the "s", hence K8s), is an open source distributed-architecture solution based on container technology from Google, and a container orchestration tool.

Kubernetes grew out of Google's Borg system, which started around 2003 as an internal cluster management system used to run hundreds of thousands of jobs from thousands of different applications across many clusters, each with up to tens of thousands of machines.

In 2013, Google built another internal system, Omega. Omega can be seen as an extension of Borg and improved on it in large-scale cluster management and performance. Both Borg and Omega, however, were closed-source systems used only inside Google. In 2014, Google released Kubernetes as an open source project: the first commit landed on GitHub on June 7 of that year, and on July 10 Microsoft, IBM, Red Hat, and Docker joined the Kubernetes community.

Kubernetes officially released v1.0 in July 2015; v1.15 shipped on June 19, 2019, with v1.16 under active development. That is sixteen minor versions in just four years, and the release pace keeps increasing.

Kubernetes is not the only player in container orchestration: Docker Swarm and Mesos compete in the same space. Early on the three were in a three-way race, but Kubernetes has since pulled clearly ahead and grown into an ecosystem of its own, and the Cloud Native movement built around Kubernetes is also growing rapidly.

Deploy the application on Kubernetes

When a user accesses an application on the Kubernetes platform, the request first reaches an Ingress resource. The Ingress routes it to the corresponding Service according to the configured rules, and because a Service is mapped to Pods through a label selector, the request naturally lands on a suitable Pod and finally reaches its destination: the container in which the application is running. The flow is Ingress → Service → Pod → container, as the diagram below shows.

"Everything is a resource" on the Kubernetes platform, so deploying an externally accessible application takes just three resources, created in three steps: Deployment (Pod) → Service → Ingress.

Step 1: Deployment(POD) creation

The Pod is the most basic and important concept in Kubernetes: it is the smallest schedulable unit on the platform. The application runs inside a container, the Pod manages the container, and the Pod is scheduled onto a node, which can be a physical machine or a virtual machine. Containers, Pods, and nodes are like peas, pea pods, and pea plants, as shown in the figure below.

A Pod can hold multiple containers, and containers in the same Pod share the Pod's resources, such as volumes and the network, just as several peas in one pod share a pedicle to absorb nutrients. Pods run on Kubernetes nodes, and one node can run multiple Pods, just as a pea plant bears multiple pea pods. A Kubernetes cluster typically consists of several nodes, just as a field contains several pea plants.
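To make the sharing concrete, here is a minimal sketch of a two-container Pod whose containers exchange data through a shared emptyDir volume. The names and images are illustrative, not from this article's application:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo          # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}             # scratch volume that lives as long as the Pod
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 5 && cat /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Both containers also share the Pod's network namespace, so they could equally reach each other over localhost.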

The preparatory work

On the Kubernetes platform, applications run in containers, and containers are created from images. So before deploying the Pod, we first package an application that outputs "Hello DevOps" into a container image. The code for the application is as follows:

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func getHostname() string {
    hostname, _ := os.Hostname()
    return hostname
}

func getEnvVar() string {
    return os.Getenv("VERSION")
}

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello DevOps, pod name is %s, version is %s", getHostname(), getEnvVar())
}

func main() {
    http.HandleFunc("/", handler)
    // Listen on 8888 to match the port exposed in the Dockerfile and the Deployment below.
    log.Fatal(http.ListenAndServe(":8888", nil))
}
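As a quick local sanity check (a sketch, not part of the deployment), the response body the handler writes can be factored into a helper and printed without starting a server. The `greeting` function is a name introduced here for illustration only:

```go
package main

import (
	"fmt"
	"os"
)

// greeting builds the same response body that the HTTP handler writes,
// so the formatting can be checked without starting a server.
func greeting() string {
	hostname, _ := os.Hostname()
	version := os.Getenv("VERSION") // empty when VERSION is not set
	return fmt.Sprintf("Hello DevOps, pod name is %s, version is %s", hostname, version)
}

func main() {
	fmt.Println(greeting())
}
```

Once deployed, the hostname in the output becomes the Pod name, because each container's hostname defaults to the name of the Pod it runs in.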

Next, write a Dockerfile

FROM golang:1.12.7-alpine3.9
MAINTAINER [email protected]
WORKDIR /usr/src/app
COPY main.go /usr/src/app
EXPOSE 8888
CMD ["go", "run", "main.go"]

Then run docker build -t dllhb/go_demo:v1.0 . && docker push dllhb/go_demo:v1.0 to build the image and push it to Docker Hub. You can use this image to create a Pod.

Create Deployment (Pod)

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-devops
  labels:
    app: hello-devops
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-devops
    spec:
      containers:
      - name: hello-devops
        image: dllhb/go_demo:v1.0
        resources:
          limits:
            cpu: "200m"
            memory: "200Mi"
          requests:
            cpu: "100m"
            memory: "50Mi"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8888

Then use kubectl -n devops create -f devops.yaml to create the Deployment under the devops namespace and check the status of the Pod.

$ kubectl -n devops get pods
NAME                            READY     STATUS    RESTARTS   AGE
hello-devops-78dd67d9cc-klxlm   1/1       Running   0          17s

The application runs in the container, and the Pod manages it. Every Pod is assigned an IP after it is created, and other Pods can communicate with it through that IP. However, a Pod's IP changes when the Pod is restarted, which would break communication. For this, Kubernetes abstracts a resource called a Service, which has a unique name that, once created, does not change until it is deleted. In general terms, a Service is an abstraction over the service provided by a set of Pods carrying the same label: Service and Pods form a mapping through the label selector, and the ever-changing Pod IPs are hidden behind the stable Service name.

Step 2: Service creation

apiVersion: v1
kind: Service
metadata:
  name: hello-devops-svc
  labels:
    app: hello-devops
spec:
  ports:
  - port: 8888
    name: http
  selector:
    app: hello-devops

Then execute kubectl -n devops create -f service.yaml to create the Service under the devops namespace and check it:

$ kubectl -n devops get svc
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
hello-devops-svc   ClusterIP   172.21.246.13   <none>        8888/TCP   71s

The purpose of deploying an application is to provide a service to users, which means the service must be reachable from outside the cluster. This brings us to service exposure. Kubernetes offers three common forms: NodeIP+Port (NodePort), LoadBalancer, and Ingress.

To expose a service via NodeIP+Port, the corresponding port must be opened on the Kubernetes nodes, which carries certain security risks for production applications. In addition, if a node in a cloud vendor's Kubernetes cluster is restarted due to an upgrade or another reason, its NodeIP may well change. NodeIP+Port was therefore an early and widely used exposure method in Kubernetes, but it is less suitable today.
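For reference, a NodePort variant of the Service in this article might look like the following sketch; the name and the nodePort value 30888 are assumptions (NodePorts default to the 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-devops-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: hello-devops
  ports:
  - port: 8888          # Service port inside the cluster
    targetPort: 8888    # container port
    nodePort: 30888     # opened on every node; reachable as http://<NodeIP>:30888
```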

LoadBalancer is generally a service exposure method provided by cloud vendors; building one ourselves would be time-consuming and laborious, so we choose Ingress to expose the service. Ingress works by combining an Ingress policy definition with a specific Ingress Controller implementation to achieve service exposure.

For the purposes of this article, it is enough to define and create the Ingress resource directly to complete the service exposure.

Step 3: Create Ingress

If TLS is enabled for the Ingress, a corresponding Secret resource needs to be created. Kubernetes Secrets are mainly used to store sensitive information such as passwords and tokens. Start by using the openssl command to generate the tls.crt and tls.key files:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout 
/tmp/tls.key -out /tmp/tls.crt -subj "/CN=devops.test.com"

Then use these two files to create the secret

$ kubectl -n devops create secret tls test-tls --key=/tmp/tls.key --cert=/tmp/tls.crt

Write the following to an ingress.yaml file

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-devops-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - devops.test.com
    secretName: test-tls
  rules:
  - host: devops.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-devops-svc
          servicePort: 8888

Then use the command kubectl -n devops create -f ingress.yaml to create the Ingress resource under the devops namespace and view it:

$ kubectl -n devops get ing
NAME                   HOSTS             ADDRESS        PORTS     AGE
hello-devops-ingress   devops.test.com   10.208.70.22   80, 443   1000s
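If devops.test.com is not resolvable through DNS (as is typical for a test domain), an /etc/hosts entry pointing the hostname at the Ingress ADDRESS shown above lets curl resolve it:

```
10.208.70.22  devops.test.com
```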

Access the application with curl -k -s https://devops.test.com (-k skips certificate verification, since the certificate is self-signed) and you'll see the following:

$ curl -k -s https://devops.test.com
Hello DevOps, pod name is hello-devops-78dd67d9cc-klxlm

At this point, deployment of the Hello DevOps application is complete: we created Deployment, Service, and Ingress resources. A simple demo can be deployed on Kubernetes in these few steps, but as the number of microservices grows, the resource manifests multiply as well, covering not only the Deployment, Service, and Ingress above but also Secrets, ConfigMaps, PVCs, and more.

Source: Devsecops Sig


Author: Little horse brother


Disclaimer: This article was republished on the IDCF public account (devopshub) with the author's authorization, to share quality content with the technical community on the SegmentFault platform. If the original author has other considerations, please contact the editor to remove it. Thanks.
