PS: Since the company's servers were being migrated to Kubernetes and most of the setup had not yet been done, I recorded this by myself first.

Kubernetes deployment documentation

Kubernetes specific service deployment

  1. Introduction to Kubernetes

Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services that facilitates declarative configuration and automation. Kubernetes has a large and rapidly growing ecosystem. Kubernetes’ services, support and tools are widely available.

Service deployment on Kubernetes centers on four core concepts:

- ConfigMap
- Name/Namespace
- Pod/Pod Controller
- Service/Ingress

1.1 ConfigMap

In production and similar environments, configuration files often need to be modified. Traditional modification methods not only disrupt the normal operation of services, but also involve cumbersome modification and release procedures. To solve this problem, Kubernetes provides the ConfigMap feature, which separates configuration from program. This approach enables program reuse and improves configuration flexibility: users can inject configuration into containers through environment variables or mounted files.

A ConfigMap stores data as key: value pairs, where each key maps to a configuration value.

A value can also hold the contents of an entire file, for example server.xml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: data-platform-3   # ConfigMap name
  namespace: develop      # namespace (project) to import into
  labels:
    name: data-platform-3 # label (bound to by selectors later)
data:
  hbase-site.xml: |
    <?xml version="1.0" ...
    ...
  server.xml: |
    <?xml version="1.0" ...
    ...
```

1.1.2 Execution Mode

Apply this YAML file on the Kubernetes master node, using `-f` to specify the file name (a URL may also be given):

```shell
kubectl apply -f data-platform-3-configmap.yaml
```

Alternatively, import this YAML file from a graphical Kubernetes management UI, for example Rancher.

1.2 Name/Namespace

1.2.1 Name

Internally, Kubernetes uses "resources" to define each logical concept (function), so each resource should have its own "name". A resource's configuration information includes its API version, kind, metadata, spec, status, and so on. The "name" is usually defined in the "metadata" section of the resource.
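As a minimal sketch of where the name lives (the resource name and image here are illustrative, not from any real deployment):

```yaml
# Minimal resource definition: the resource's "name" sits under metadata.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod          # the resource's name
  namespace: develop    # the namespace it belongs to
spec:
  containers:
    - name: app
      image: nginx:1.21
```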

1.2.2 NameSpace

As projects, teams, and clusters grow, there needs to be a way to isolate resources within Kubernetes; this is what namespaces provide.

A namespace can be understood as a virtual group inside Kubernetes.

Resources in different namespaces may share the same name, but resource names within a single namespace must be unique.

When querying a specific resource in Kubernetes, you need to specify its namespace.
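A namespace is itself a resource. A minimal manifest might look like this (the name `develop` matches the namespace used elsewhere in this document):

```yaml
# Creating a namespace; "develop" is the project namespace used in the
# ConfigMap example above.
apiVersion: v1
kind: Namespace
metadata:
  name: develop
```

Queries must then name the namespace, e.g. `kubectl get pods -n develop`; omitting `-n` queries the `default` namespace.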

1.3 Pod/Pod controller

1.3.1 Pod

Pod is the smallest logical unit in Kubernetes

A Pod can run multiple containers, which share the UTS, NET, and IPC namespaces with each other

You can think of a Pod as a pea Pod, and each container within the same Pod is a pea

Running multiple containers in one Pod is also known as the sidecar pattern
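A hedged sketch of the sidecar pattern (all names and images here are hypothetical): the two containers share the Pod's network namespace and a common volume, so the sidecar can read what the main container writes.

```yaml
# Illustrative sidecar Pod: the main app and a log-collector share an
# emptyDir volume; they also share localhost networking.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app                 # main container
      image: nginx:1.21
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-collector       # sidecar container
      image: busybox:1.35
      command: ["sh", "-c", "touch /var/log/app/access.log && tail -f /var/log/app/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}
```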

1.3.2 Pod controller

A Pod controller is a template for managing Pods; it is used to ensure that Pods in Kubernetes always run as expected (replica count, lifecycle, health-check status, and so on). Two common controller types:

Deployment: scheduled internally by Kubernetes; it maintains the declared number of replica sets, and adjusting the replica count achieves high availability. Combined with a unified ClusterIP, load balancing across replicas happens automatically.

DaemonSet: scheduled internally by Kubernetes; it deploys one Pod on every node of the cluster.
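As a sketch of the DaemonSet type (a full Deployment manifest appears in section 2.1), a hypothetical per-node log agent; the name and image are illustrative:

```yaml
# Illustrative DaemonSet: Kubernetes schedules exactly one of these Pods
# onto each node in the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
  namespace: develop
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      containers:
        - name: agent
          image: fluent/fluentd:v1.14
          resources:
            limits:
              memory: 200Mi
```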

1.4 the Service/Ingress

1.4.1 Service

In the Kubernetes world, although each Pod is assigned its own IP, that IP disappears when the Pod is destroyed. The Service is the core concept used to solve this problem: a Service can be thought of as the external access interface for a group of Pods that provide the same service.
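A minimal sketch of that idea (the names and ports here are illustrative): the Service selects backend Pods by label and gives them one stable name and port.

```yaml
# Illustrative Service: a stable name/virtual IP in front of all Pods
# whose labels match the selector.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app        # any Pod labeled app=my-app becomes a backend
  ports:
    - port: 80         # port exposed by the Service
      targetPort: 8080 # port the container listens on
```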

1.4.2 Ingress

Ingress works at layer 7 of the OSI network reference model (the application layer) in a Kubernetes cluster. A Service can only provide L4 traffic scheduling in the form of IP+PORT; an Ingress can schedule traffic for different domains and different URL access paths to different Services.
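A hedged sketch of host- and path-based routing. The Service names and ports match the manifests in section 2 of this document, but the hostnames are hypothetical placeholders:

```yaml
# Illustrative Ingress: routes by host (and optionally URL path) to
# different Services inside the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: platform-ingress
  namespace: develop
spec:
  rules:
    - host: data.example.com          # hypothetical external domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: data-platform-3
                port:
                  number: 8080
    - host: ems.example.com           # hypothetical external domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ems-saas-web
                port:
                  number: 9939
```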

2. Deploying a Service on Kubernetes

Service deployment consists of four steps:

1. ConfigMap
2. Deployment
3. Service
4. Ingress

Taking the data platform as the example, the four steps are performed as follows:

1. Import the configuration files of data-platform-3 into the configuration management center (the ConfigMap), so that a version change no longer requires rebuilding the configuration.
2. Deploy data-platform-3: pull the image, mount the configuration files into the container, and complete the remaining settings. This step is the Deployment.
3. Once the service is deployed, how does another service such as ems-saas-web call and access the data platform? That is what the Service does: it binds the port the application listens on inside the container to the Service, exposing the application's port. Other services can then access it simply by Service name plus port, used just like IP:PORT.
4. The last step, the Ingress, exposes the service outside the Kubernetes cluster so that it can be accessed externally. If the service does not need external access, no Ingress is required.

2.1 Tomcat (Data Platform)

Take data-platform-3 as an example.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: data-platform-3
  namespace: develop
  labels:
    name: data-platform-3
data:
  hbase-site.xml: |
    <?xml version="1.0" encoding="utf-8"?>
    ...
  config.properties: |
    env=prod
    ...
---
apiVersion: v1
kind: Service
metadata:
  name: data-platform-3
  labels:
    app: data-platform-3
spec:
  selector:
    app: data-platform-3
  ports:
    - name: server-port
      port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-platform-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: data-platform-3
  template:
    metadata:
      labels:
        app: data-platform-3
    spec:
      hostAliases:
        - ip: "192.168.100.90"
          hostnames:
            - "node1"
            - "namenode"
            - "secondarynamenode"
            - "zookeeper1"
        - ip: "192.168.100.91"
          hostnames:
            - "node2"
            - "datanode1"
            - "zookeeper2"
        - ip: "192.168.100.92"
          hostnames:
            - "node3"
            - "datanode2"
            - "zookeeper3"
      containers:
        - name: data-platform-3
          image: persagy-registry:8080/jixian/data-platform-3:1.1.2
          ports:
            - containerPort: 8080
              name: server-port
          volumeMounts:
            - name: hbasesite
              mountPath: /usr/local/tomcat/webapps/ROOT/WEB-INF/classes/hbase-site.xml
              subPath: path/to/hbase-site.xml
            - name: config
              mountPath: /usr/local/tomcat/webapps/ROOT/WEB-INF/classes/config.properties
              subPath: path/to/config.properties
            - name: "nfs-pvc"
              mountPath: "/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/property"
              subPath: "data-platform-3/property"
      volumes:
        - name: hbasesite
          configMap:
            name: data-platform-3
            defaultMode: 0777
            items:
              - key: hbase-site.xml
                path: path/to/hbase-site.xml
        - name: config
          configMap:
            name: data-platform-3
            defaultMode: 0777
            items:
              - key: config.properties
                path: path/to/config.properties
        - name: "nfs-pvc"
          persistentVolumeClaim:
            claimName: "nfs-develop-persagy"
```

2.2 SpringCloud (ems – saas – web)

All SpringCloud applications share the same bootstrap.yml configuration file, so they all use one common ConfigMap; import this configuration before deploying any SpringCloud application.

The SpringCloud configuration files are as follows:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: springcloud
  namespace: develop
  labels:
    name: springcloud
data:
  bootstrap.yml: |
    spring:
      cloud:
        config:
          discovery:
            # enable the config center
            enabled: true
            # the same service id as the one registered on the Eureka Server
            service-id: ...
          # which git branch to request; the default branch is master
          profile: dev
    eureka:
      client:
        # whether to fetch registration info from the Eureka Server; default is true
        register-with-eureka: true
        service-url:
          #defaultZone: http://localhost:8761/eureka/,http://localhost:8762/eureka/
          defaultZone: http://pbsage:123456@poems-eureka:9931/eureka/
```

The SpringCloud program configuration list is as follows:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: ems-saas-web
  labels:
    app: ems-saas-web
spec:
  selector:
    app: ems-saas-web
  ports:
    - name: server-port
      port: 9939
      targetPort: 9939
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ems-saas-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ems-saas-web
  template:
    metadata:
      labels:
        app: ems-saas-web
    spec:
      hostAliases:
        - ip: "192.168.100.90"
          hostnames:
            - "node1"
            - "namenode"
            - "secondarynamenode"
            - "zookeeper1"
        - ip: "192.168.100.91"
          hostnames:
            - "node2"
            - "datanode1"
            - "zookeeper2"
        - ip: "192.168.100.92"
          hostnames:
            - "node3"
            - "datanode2"
            - "zookeeper3"
      containers:
        - name: ems-saas-web
          image: persagy-registry:8080/jixian/ems-saas-web:2.0.1
          ports:
            - containerPort: 9939
              name: server-port
          volumeMounts:
            - name: springcloud
              mountPath: /data/SpringCloud/bootstrap.yml
              subPath: path/to/application.yml
      volumes:
        - name: springcloud
          configMap:
            name: springcloud
            defaultMode: 0777
            items:
              - key: bootstrap.yml
                path: path/to/application.yml
```

2.4 Front-end Static Files

Because front-end static files are served by Nginx, every front-end image is built on top of the nginx image, so the front end uses one unified nginx.conf configuration file.

2.4.1 The nginx.conf Configuration File

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-web-conf
  namespace: persagy-standard
  labels:
    name: nginx-web-conf
data:
  nginx.conf: |
    #user tony;
    worker_processes 4;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;
    worker_rlimit_nofile 100001;
    # Load dynamic modules. See /usr/share/nginx/README.dynamic.
    include /usr/share/nginx/modules/*.conf;
    events {
        worker_connections 1024;
    }
    http {
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for" "$request_time"';
        access_log /var/log/nginx/access.log main;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        gzip on;
        gzip_min_length 1k;
        gzip_buffers 4 16k;
        gzip_http_version 1.1;
        gzip_comp_level 2;
        gzip_types text/plain application/x-javascript application/css text/css application/xml text/javascript application/x-httpd-php;
        gzip_vary on;
        server {
            listen 80 default_server;
            root /usr/persagy/saas-web;
            location /app {
                try_files $uri $uri/ /app/index.html;
            }
            # FMS
            location /fms {
                try_files $uri $uri/ /fms/index.html;
            }
        }
    }
```

2.4.2 Front-end Static Resource Deployment List (FMS)

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: fms
  labels:
    app: fms
spec:
  selector:
    app: fms
  ports:
    - name: server-port
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fms
  template:
    metadata:
      labels:
        app: fms
    spec:
      containers:
        - name: fms
          image: persagy-registry:8080/jixian/fms:v1.1.0
          ports:
            - containerPort: 80
              name: server-port
          volumeMounts:
            - name: config
              mountPath: /etc/nginx/nginx.conf
              subPath: path/to/nginx.conf.js
      volumes:
        - name: config
          configMap:
            name: nginx-web-conf
            defaultMode: 0777
            items:
              - key: nginx.conf
                path: path/to/nginx.conf.js
```