This article shows, from a practical standpoint, how to combine the commonly used GitLab and Jenkins to automatically deploy projects to Kubernetes (K8s). The explanation centers on the production architecture currently used by our company, as shown in the figure:

The tools and techniques covered in this article include:

  • GitLab: a widely used source code management system;
  • Jenkins (Jenkins Pipeline): a commonly used automated build and deployment tool; Pipeline organizes the build and deployment steps as a pipeline;
  • Docker (Dockerfile): the container engine; every application ultimately runs in a Docker container, and the Dockerfile is the Docker image definition file;
  • Kubernetes: Google's open-source container orchestration system.

Environment background:

  • GitLab is already used for source code management, with a branch per environment, e.g. dev (development), test (testing), pre (pre-release), and master (production);
  • A Jenkins service has been set up;
  • A Docker registry is available for storing Docker images (it can be self-built with Docker Registry or Harbor, or a cloud service can be used; this article uses Alibaba Cloud Container Registry);
  • The K8s cluster has been deployed.

Expected effect:

  • Applications are deployed per environment so that development, test, pre-release, and production are isolated from one another. The development, test, and pre-release environments run in the same K8s cluster under different namespaces; the production environment runs on Alibaba Cloud using the ACK container service.
  • The configuration should be as generic as possible: setting up automated deployment for a new project should only require changing a few properties in a few configuration files.
  • The development, test, and pre-release environments can be configured to trigger build and deployment automatically on code push, with the exact setup depending on the actual situation; the production environment is deployed with a separate ACK cluster and a separate Jenkins instance.
  • The overall interaction flow chart is as follows:

Project configuration

The first step is to add the necessary configuration files to the root path of the project, as shown in the figure below.

These include:

  • the Dockerfile, used to build the Docker image;
  • docker_build.sh, used to build and tag the Docker image and push it to the image repository;
  • the project YAML file (project.yaml), the main manifest for deploying the project to the K8s cluster.

Dockerfile

Add a Dockerfile to the project root directory to define how the Docker image is built. For example, for a Java project:

# Base image
FROM xxxxxxxxxxxxxxxxxxxxxxxxxx.cr.aliyuncs.com/billion_basic/alpine-java:latest

# Copy the built artifact from the current directory into the image
COPY target/JAR_NAME /application/

# Declare the working directory; otherwise dependency packages (if any) will not be found
WORKDIR /application

# Declare a volume for logs
VOLUME /application/logs

# Startup command
# Set the time zone
ENTRYPOINT ["java", "-Duser.timezone=Asia/Shanghai", "-Djava.security.egd=file:/dev/./urandom"]
CMD ["-jar", "-Dspring.profiles.active=SPRING_ENV", "-Xms512m", "-Xmx1024m", "/application/JAR_NAME"]

docker_build.sh

Create a deploy folder in the project's root directory; it holds the configuration files for each environment. The docker_build.sh script packages the project as an image, re-tags it, and pushes it to the image repository.

#!/bin/bash

# Module name
PROJECT_NAME=$1

# Workspace directory
WORKSPACE="/home/jenkins/workspace"

# Module directory
PROJECT_PATH=$WORKSPACE/pro_$PROJECT_NAME

# Jar package directory
JAR_PATH=$PROJECT_PATH/target

# Jar package name
JAR_NAME=$PROJECT_NAME.jar

# Dockerfile path
DOCKERFILE_PATH="$PROJECT_PATH/dockerfile"

# Substitute the placeholders in the Dockerfile
# sed -i "s/VAR_CONTAINER_PORT1/$PROJECT_PORT/g" $PROJECT_PATH/dockerfile
sed -i "s/JAR_NAME/$JAR_NAME/g" $PROJECT_PATH/dockerfile
sed -i "s/SPRING_ENV/k8s/g" $PROJECT_PATH/dockerfile

cd $PROJECT_PATH

# Log in to the Aliyun registry
docker login -u xxxxxx -p XXXXXXXXXXXXXXXXXXXXXXXXXX xxxxxxxxxxxxxxxxxxxxxxxxxx.cr.aliyuncs.com

# Build the module image
docker build -t $PROJECT_NAME  . 
docker tag $PROJECT_NAME xxxxxxxxxxxxxxxxxxxxxxxxxx.cr.aliyuncs.com/billion_pro/pro_$PROJECT_NAME:$BUILD_NUMBER

# Push to the Aliyun registry
docker push xxxxxxxxxxxxxxxxxxxxxxxxxx.cr.aliyuncs.com/billion_pro/pro_$PROJECT_NAME:$BUILD_NUMBER
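For reference, here is a hedged sketch of how this script ends up being invoked: Jenkins injects BUILD_NUMBER, and the pro_ prefix matches the per-environment naming used in the Jenkinsfile shown later (the values below are illustrative).

```sh
# Example invocation from the Jenkins workspace (values are illustrative)
export BUILD_NUMBER=42
./deploy/pro_docker_build.sh billionbottle-wx pro
```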

project.yaml

project.yaml defines everything needed to deploy the project to the K8s cluster: the project name, PV, PVC, namespace, replica count, image address, service port, health checks (liveness/readiness probes), resource requests, volume mounts, and the Service:

# ------------------- PersistentVolume (PV) ------------------- #
apiVersion: v1
kind: PersistentVolume
metadata:
# Project name
  name: pv-billionbottle-wx
  namespace: billion-pro
  labels:  
    alicloud-pvname: pv-billionbottle-wx
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: pv-billionbottle-wx
    volumeAttributes:
      server: "xxxxxxxxxxxxx.nas.aliyuncs.com"
      path: "/k8s/java"
  mountOptions:
  - nolock,tcp,noresvport
  - vers=3

---
# ------------------- PersistentVolumeClaim (PVC) ------------------- #
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-billionbottle-wx
  namespace: billion-pro
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-billionbottle-wx      

---      
# ------------------- Deployment ------------------- #
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: billionbottle-wx
  name: billionbottle-wx
# define the namespace
  namespace: billion-pro
spec:
# Define the replica count
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: billionbottle-wx
  template:
    metadata:
      labels:
        k8s-app: billionbottle-wx
    spec:
      serviceAccountName: default
      imagePullSecrets:
        - name: registrykey-k8s
      containers:
      - name: billionbottle-wx
# Define the image address
        image: $IMAGE_NAME 
        imagePullPolicy: IfNotPresent
# Liveness probe (health check)
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 60
          periodSeconds: 60
          successThreshold: 1
          tcpSocket:
            port: 8020
          timeoutSeconds: 1
        ports:
# Define the service port
          - containerPort: 8020
            protocol: TCP
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 60
          periodSeconds: 60
          successThreshold: 1
          tcpSocket:
            port: 8020
          timeoutSeconds: 1
# Define the resource requests and limits
        resources:
          requests:
            memory: "1024Mi"
            cpu: "300m"
          limits:
            memory: "1024Mi"
            cpu: "300m"
# define file mount
        volumeMounts:
          - name: pv-billionbottle-key
            mountPath: "/home/billionbottle/key"         
          - name: pvc-billionbottle-wx
            mountPath: "/billionbottle/logs"
      volumes:
        - name: pv-billionbottle-key
          persistentVolumeClaim:
            claimName: pvc-billionbottle-key  
        - name: pvc-billionbottle-wx
          persistentVolumeClaim:
            claimName: pvc-billionbottle-wx

---
# ------------------- Service ------------------- #
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: billionbottle-wx
  name: billionbottle-wx
  namespace: billion-pro
spec:
  ports:
    - port: 8020
      targetPort: 8020
  type: ClusterIP
  selector:
    k8s-app: billionbottle-wx

$IMAGE_NAME is passed in as a variable and is replaced with the actual image address at deploy time. ENV configuration can also be added here, or read directly from a ConfigMap. The Service type is changed from the default NodePort to ClusterIP so that projects communicate only within the cluster. When deploying a different project, you only need to change the environment variables, project name, and a few other configuration items in docker_build.sh and project.yaml; the Dockerfile in the root directory can be reused across environments.
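As a minimal sketch of the ConfigMap approach mentioned above (the ConfigMap name and key below are hypothetical, not part of the original project):

```sh
# Create a ConfigMap holding environment-specific settings (name and key are examples)
kubectl -n billion-pro create configmap billionbottle-wx-env \
  --from-literal=SPRING_PROFILES_ACTIVE=k8s

# The container in project.yaml could then load it with:
#   envFrom:
#     - configMapRef:
#         name: billionbottle-wx-env
```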

During deployment the K8s cluster needs to pull images from the Docker image repository, so we first create the image-repository access credential (imagePullSecrets) in K8s.

# Log in to the Docker registry; this generates /root/.docker/config.json
docker login --username=your-username registry.cn-xxxxx.aliyuncs.com
# Create namespace billion-pro (namespaces are created per environment branch of the project)
kubectl create namespace billion-pro
# Create the image pull secret in namespace billion-pro
kubectl create secret generic registrykey-k8s --from-file=.dockerconfigjson=/root/.docker/config.json --type=kubernetes.io/dockerconfigjson -n billion-pro
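The secret can then be verified; it is the one the Deployment in project.yaml references through imagePullSecrets:

```sh
# Confirm the pull secret exists in the target namespace
kubectl -n billion-pro get secret registrykey-k8s

# The Deployment references it via:
#   imagePullSecrets:
#     - name: registrykey-k8s
```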

Jenkinsfile (Pipeline)

A Jenkinsfile is the Jenkins Pipeline configuration file and follows the Groovy scripting syntax. For the Java project's build and deployment, the Pipeline script is as follows:
def env = "pro"
def registry = "xxxxxxxxxxxxxxx.cn-shenzhen.cr.aliyuncs.com"
def git_address = "http://xxxxxxxxx/billionbottle/billionbottle-wx.git"
def git_auth = "1eb0be9b-ffbd-457c-bcbf-4183d9d9fc35"
def project_name = "billionbottle-wx"
def k8sauth = "8dd4e736-c8a4-45cf-bec0-b30631d36783"
def image_name = "${registry}/billion_pro/pro_${project_name}:${BUILD_NUMBER}"

pipeline{
    environment{
        BRANCH = sh(returnStdout: true, script: 'echo $branch').trim()
    }
    agent{
        node{
            label 'master'
        }
    }
    stages{
        stage('Git'){
            steps{
                git branch: "${BRANCH}", credentialsId: "${git_auth}", url: "${git_address}"
            }
        }
        stage('maven build'){
            steps{
                sh "mvn clean package -U -DskipTests"
            }
        }
        stage('docker build'){
            steps{
                sh "chmod 755 ./deploy/${env}_docker_build.sh && ./deploy/${env}_docker_build.sh ${project_name} ${env}"
            }
        }
        stage('K8s deploy'){
            steps{
                sh "pwd && sed -i 's#\$IMAGE_NAME#${image_name}#' deploy/${env}_${project_name}.yaml"
                kubernetesDeploy configs: "deploy/${env}_${project_name}.yaml", kubeconfigId: "${k8sauth}"
            }
        }
    }
}

The Jenkinsfile's Pipeline script defines the entire automated build and deployment process:

  • Code Analyze: a static code analysis tool such as SonarQube can be used here; it is skipped in this article.
  • Maven Build: run Maven to build and package the project. Alternatively, start a Maven container and mount the host's local Maven repository into it, so dependencies do not have to be re-downloaded on every build (see the sketch after this list).
  • Docker Build: build the Docker image and push it to the image repository. Images for different environments are distinguished by a tag prefix: dev_ for development, test_ for testing, pre_ for pre-release, and pro_ for production.
  • K8s Deploy: use the Jenkins plugin to deploy the project, or roll out an update to an existing one. Different environments use different parameter configurations, and the K8s cluster access credentials can be configured directly via the kube config.
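Below is a minimal sketch of the containerized Maven build mentioned above; the Maven image tag and mount paths are illustrative assumptions, not taken from the original setup.

```sh
# Build inside a Maven container, mounting the host's local repository (~/.m2)
# so dependencies are cached between builds
docker run --rm \
  -v "$PWD":/usr/src/app \
  -v "$HOME/.m2":/root/.m2 \
  -w /usr/src/app \
  maven:3-jdk-8 \
  mvn clean package -U -DskipTests
```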

Jenkins configuration

Jenkins Task Configuration

Create a Pipeline task in Jenkins, as shown below:

Configure the build trigger to set the target branch to master, as shown in the following figure:

Configure the Pipeline: select “Pipeline Script”, then set the Pipeline script, the project's Git address, the credentials for pulling the source code, and so on, as shown in the figure:

The key credential referenced in the figure above needs to be configured in Jenkins in advance, as shown below:

Save the configuration to complete the Jenkins setup for the project's production environment. The other environments are configured in the same way; just make sure each one points to its corresponding branch.

Kubernetes cluster features

K8s is a container-based cluster orchestration engine that provides cluster scaling, rolling upgrades and rollbacks, elastic scaling, self-healing, service discovery, and other features. Based on the actual situation of our current production environment, this section focuses on a few commonly used features; for anything else, please consult the official Kubernetes documentation.

Kubernetes architecture diagram

From a macro point of view, the overall architecture of Kubernetes includes Master, Node and Etcd.

The Master is the control node of the entire Kubernetes cluster. It contains the API Server, Scheduler, Controller, and so on, all of which interact with etcd to store data.

  • API Server: provides a unified entry point for resource operations, shielding direct interaction with etcd, and covers security, registration, discovery, and so on;
  • Scheduler: responsible for scheduling Pods onto Nodes according to the scheduling rules;
  • Controller: the resource control center, ensuring resources stay in their desired state.

A Node is a worker node that provides computing power for the entire cluster. It is where containers actually run, and it hosts the container runtime, Kubelet, and Kube-proxy.

  • Kubelet: manages the container lifecycle and, together with cAdvisor, handles monitoring, health checks, and periodic reporting of node status;
  • Kube-proxy: provides service discovery and load balancing for Services within the cluster, watching service/endpoints changes to refresh the load-balancing rules.
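To get a feel for these node components, node status (as reported by each node's kubelet) can be inspected with kubectl; a rough sketch, with the node name as a placeholder:

```sh
# List nodes and their status as reported by kubelet
kubectl get nodes -o wide

# Inspect a single node's conditions, capacity, and allocated resources
kubectl describe node <node-name>
```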

Container orchestration

Kubernetes has many controller resources related to scheduling, such as Deployment for stateless applications, StatefulSet for stateful applications, DaemonSet for daemon processes, and Job/CronJob for offline tasks.

Let's take Deployment, used in our current production environment, as an example. Deployment, ReplicaSet, and Pod form a layered control relationship: in simple terms, the ReplicaSet controls the number of Pods, and the Deployment controls the ReplicaSet. This design also provides the basis for the two most basic orchestration actions: horizontal scaling to control quantity, and update/rollback to control version.
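This ownership chain can be seen directly with kubectl, using the example Deployment from this article:

```sh
# The Deployment owns a ReplicaSet, which in turn owns the Pods
kubectl -n billion-pro get deployment,replicaset,pod -l k8s-app=billionbottle-wx
```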

Horizontal scaling

Horizontal scaling is easy to understand: simply change the number of Pod replicas controlled by the ReplicaSet, for example from 2 to 3, and the scale-out is complete (the reverse works the same way for scaling in).
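For example, with the Deployment from this article (either edit replicas: in project.yaml and re-apply it, or scale imperatively):

```sh
# Scale the example Deployment from 2 to 3 replicas
kubectl -n billion-pro scale deployment billionbottle-wx --replicas=3
```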

Rolling Update Deployment

Rolling update is the default deployment strategy in K8s. It replaces the Pods of the previous application version with Pods of the new version one at a time, without any cluster downtime, gradually swapping old application instances for new ones, as shown in the figure:

In practice, we can configure the RollingUpdateStrategy to control how the rolling update proceeds; two options let us fine-tune the update process (see the sketch after this list):

  • maxSurge: the number of Pods that can be created above the desired replica count during an update; it can be an absolute number or a percentage of the replica count (default 25%).
  • maxUnavailable: the number of Pods that may be unavailable during the update; it can be an absolute number or a percentage of the replica count (default 25%).
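A hedged sketch of tuning these two options and driving a rolling update for the example Deployment; the values shown are illustrative, not the project's actual settings.

```sh
# Tune the rolling update strategy (values are illustrative)
kubectl -n billion-pro patch deployment billionbottle-wx --patch \
  '{"spec":{"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxSurge":"25%","maxUnavailable":0}}}}'

# Watch the rollout; roll back to the previous revision if it misbehaves
kubectl -n billion-pro rollout status deployment/billionbottle-wx
kubectl -n billion-pro rollout undo deployment/billionbottle-wx
```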

Microservices

To understand microservices in Kubernetes, we need to understand an important resource object: the Service.

In a microservice architecture, a Pod corresponds to an instance and a Service corresponds to a microservice. During service invocation, the Service solves two problems:

  • A Pod's IP is not fixed, so calling it directly by IP is impractical;
  • Service invocations need to be load balanced across the different Pods.

The Service uses a label selector to pick the matching Pods and builds an Endpoints object, i.e. a Pod load-balancing list. In practice, we usually attach a label such as app=xxx to every Pod instance of the same microservice and create a Service whose label selector is app=xxx for that microservice.
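For the Service defined earlier in project.yaml, the Pod list it load balances over can be inspected like this:

```sh
# Endpoints lists the Pod IPs selected by the Service's label selector
kubectl -n billion-pro get endpoints billionbottle-wx

# The same Pods, selected directly by the label
kubectl -n billion-pro get pods -l k8s-app=billionbottle-wx -o wide
```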

Network in Kubernetes

K8s networking first of all rests on three basic rules:

  • A Node and a Pod can communicate with each other;
  • Pods on the same Node can communicate with each other;
  • Pods on different Nodes can communicate with each other.

Simply put, Pods on the same node communicate with each other through the cni0/docker0 bridge, and the node reaches its Pods through the same bridge. Pod communication across nodes can be implemented in many ways, including Flannel's popular VXLAN/host-gw modes: Flannel obtains the network information of the other nodes from etcd and maintains a routing table on each node, so that Pods on different nodes can communicate across hosts.
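A quick way to observe this in the cluster used in this article (the namespace is assumed from the earlier examples):

```sh
# Show each Pod's IP and the node it runs on; Pod IPs are routable across nodes
kubectl -n billion-pro get pods -o wide
```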

Summary

So far we have covered the overall architecture we use in production, the basic concepts of its components, how they run, and how microservices run on Kubernetes. Other parts, such as the configuration center, monitoring, and alerting, have not been covered in detail yet; we will update that content as soon as possible.

For more, please follow our WeChat official account "100 Bottle Technology"; occasional perks are given away there!