Rancher 2.1

Rancher is a container management platform for fast, easy deployment and management of containers in production. It manages Kubernetes clusters, ships with built-in CI/CD, lets you quickly build or import clusters, and provides centralized identity management.

1. Set up the Rancher Server

  • Install Rancher 2.0

Basic environment configuration and installation documents

Follow the documentation above for configuration and installation; the daemon.json required by Docker can be created directly with the JSON below.

Requirements (general requirements are listed here; refer to the basic environment configuration above for the specific steps, especially the Docker installation):

- CPU: 4 cores
- Memory: more than 8 GB (16 GB is more comfortable; 4 GB also works, but a deployed application will barely get 3 pods)
- CentOS/RedHat Linux 7.5+ (64-bit)
- Docker 17.03.2
- Set the hostname and hosts file, and turn off the firewall and SELinux; the files involved are /etc/hostname, /etc/hosts and /etc/docker/daemon.json (see the sketch below)
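A minimal host-preparation sketch, assuming CentOS 7 with firewalld and SELinux enabled (the hostname and IPs are examples matching the cluster table further down):

    # set the hostname (example)
    hostnamectl set-hostname master1

    # add all cluster hosts (example IPs)
    cat >> /etc/hosts <<'EOF'
    192.168.242.80 rancher
    192.168.242.81 master1
    192.168.242.82 master2
    192.168.242.83 node1
    192.168.242.84 node2
    EOF

    # stop the firewall and disable SELinux
    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config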

Create /etc/docker/daemon.json:

{
    "max-concurrent-downloads": 3,
    "max-concurrent-uploads": 5,
    "registry-mirrors": ["https://7bezldxe.mirror.aliyuncs.com/", "https://IP:PORT/"],
    "storage-driver": "overlay2",
    "storage-opts": ["overlay2.override_kernel_check=true"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "3"
    }
}
Start the Rancher server; ports 80/443 are remapped to 8888/8443 here to avoid conflicts with ports already used on the machine:

sudo docker run -d --restart=unless-stopped -p 8888:80 -p 8443:443 rancher/rancher
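To verify that the server came up, check the container and tail its logs (the container ID is whatever docker ps reports), then open https://<server-ip>:8443 in a browser to set the admin password:

    docker ps --filter ancestor=rancher/rancher
    docker logs -f <container-id>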
  • Create the first cluster
IP role
192.168.242.80 rancher server
192.168.242.81 master1
192.168.242.82 master2
192.168.242.83 node1
192.168.242.84 node2

My setup is a 1-server / 2-master / 2-node cluster. Choose the roles for each host, paste the generated command onto each host and run it, and fill in the host's external IP in the advanced options. Select etcd and Control Plane for the masters and Worker for the nodes; running multiple masters for high availability works without problems.
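For reference, the command Rancher generates for each host looks roughly like the sketch below (server URL, token, checksum and agent version are placeholders; copy the real command from the Rancher UI and keep only the role flags that apply to the host):

    sudo docker run -d --privileged --restart=unless-stopped --net=host \
      -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
      rancher/rancher-agent:v2.1.0 \
      --server https://192.168.242.80:8443 \
      --token <registration-token> --ca-checksum <ca-checksum> \
      --etcd --controlplane --worker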

  • rke && import && vsphere

2. Installing the K8s Dashboard in Rancher 2.0

One of the things that makes Rancher 2.0 different from its predecessor is that there is no built-in dashboard, so we have to install the Kubernetes Dashboard manually.

kubectl needs to be installed before manually installing the Dashboard.

  • Install kubectl

    kubectl can be installed by following the official documentation. If the official download sources are blocked for you, you can use the binary provided by Rancher as follows:

    wget -O kubectl https://www.cnrancher.com/download/kubectl/kubectl_amd64-linux
    
    chmod +x ./kubectl
    
    sudo mv ./kubectl /usr/local/bin/kubectl
    
    kubectl cluster-info
    
    kubectl get all
    
    kubectl get nodes
    
    kubectl get nodes --all-namespaces
    
    # Note: the kubectl commands above will fail if the cluster's kubeconfig has not been copied to ~/.kube/config (see the sketch below)
    
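    A minimal sketch of setting up that kubeconfig, assuming you copy the cluster's Kubeconfig contents from the Rancher UI into a local file (the filename here is just an example):

    mkdir -p ~/.kube
    cp kubeconfig.yaml ~/.kube/config
    kubectl cluster-info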

  • Manually installing the Dashboard. I’ve tried two methods so far, and I’ll describe each one below.

  • Method 1:

Install it manually using the tutorial on GitHub:

Deploy Kubernetes – Dashboard on Rancher 2.0 Cluster Exposed using NodePort

Step 1 is installing and verifying kubectl; Step 2 deploys the Dashboard. If you cannot reach the official registry, the Dashboard image cannot be pulled; in that case you can change the image source in the YAML file. Here is a mirror someone else uploaded:

siriuszg/kubernetes-dashboard-amd64:v1.10.0

Or you can just use this one

kubectl apply -f https://raw.githubusercontent.com/usernamecantbeXXX/kubernetes-dashboard/master/kubernetes-dashboard.yaml

In Step 4, note that the name in the dashboard YAML is admin-user, and the describe secret command used below to generate the token must reference that same name.
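For reference, a rough sketch of creating that account from the command line and printing its token (following the admin-user naming above; the tutorial achieves the same thing with YAML):

    # create the service account and bind it to cluster-admin
    kubectl create serviceaccount admin-user -n kube-system
    kubectl create clusterrolebinding admin-user \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:admin-user

    # print the login token
    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')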

  • Method 2:

Use the Rancher 2.1 app store (catalog) to deploy the Dashboard. As of now (2018/11/13) the Dashboard chart available in the store is still 0.6.8 or 0.8, so I changed the image source to v1.10.0.

Deploying from the app store is easy. After that you just need to generate the token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard | awk '{print $1}')

Ps:

By default the Dashboard login token expires after a short idle time. You can make it permanent by adding a token-ttl arg to the Dashboard container:

- --auto-generate-certificates
- --token-ttl=0

The generated token looks like this:

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1wbHNxdiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImY4ZjhiODBmLWUzMzMtMTFlOC1iZjgwLTAwNTA1NmExZWEyMyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.a6UIUisGF9bsQ9Od3DVP0CyeZBoSQ_sO64LrEc9GYfBpcRCRpoXDDqOGGgJb54mu0hNkykCKUdY1dqJHDIjebrsKUKfno-yFR9JXhUItPQrUT6zhHeEzWGjaywe0dGoPdBNcU6C98FHSgWMo1PmTGxXX2odm1fwpSvLLqADmfc8MQEbPbB58B1Z6e0SyNXx6i6hIT6bSqtWznqmzsRWJHnOxHkwaCTNRwm1G1QkrEcC0l2sChWsnkEDvTR2gCRRa5pU0vqBwBRxq6z2h5shRZt0pgiQ_pV1hWcif1nNCnN4iZr2eEkSOpPec5WMwCJ62otBNHBsSRn9JcsRel2rb-A
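One way to apply this (a sketch, assuming the chart deployed the Dashboard into kube-system under its usual name) is to edit the deployment and add the two args under the container's args list:

    kubectl -n kube-system edit deployment kubernetes-dashboard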

3. Pipeline authorization Settings

3.1 Related Configurations

Configuring Pipelines

The GitLab version must be later than 9, and the authorizing account must be a Maintainer of the project (Master in GitLab 8). Create the Application in GitLab and copy the callback URL shown in Rancher.

After configuration, enable it and the repository list is displayed.

Configuring the image registry

3.2 Deploying pipeline Configuration

Select a project and click Edit Configuration

Step 1 (clone) is written automatically by Rancher. Steps 2, 3 and 4 are configured as needed. My pipeline is: 2-build (Maven build), 3-publish (build the image and push it to the private registry), 4-deploy (call the Rancher API to update the pod image and complete the automatic deployment).

3.2.1 Current Directory for Running the Script

When configuring a Rancher pipeline we execute some Linux commands, so the first thing to be clear about is which directory those commands run in.

Rancher’s pipeline runs as pods and is built on Jenkins.

In the Default project's workloads you can find an x-xxxx-pipeline namespace containing a Jenkins pod. This pod appears when pipelines are enabled and then stays around permanently.

When we run a pipeline, a jenkins-slave-xxxxx pod is created. Expand the pod node to view its logs, or open the pod's console to execute commands, for example:

cd  ./workspace/pipeline_p-cdk7j-13/

As you can see, the first pipeline step clones the code into this directory, and this is also the current directory in which the scripts we configure in the pipeline are run.

Since the jenkins-slave-xxx pod is created dynamically for each pipeline run and thrown away afterwards, the jar produced by compilation (or the front-end dist directory of static files) must be moved from this pod into the directory of the image that is about to be built.

3.2.2 Build configuration

Knowing the current directory, you can start writing the configuration with confidence.

The first step is build: the step type is "Run script", the command is mvn clean package, and the image is a Maven 3.6 image I packaged myself. If your company runs its own Maven repository, you can bake the settings.xml into the image and push it to the Harbor registry. For a personal demo project you can simply switch to the public maven:latest image. Alternatively, commit settings.xml to the code root directory and copy it into Maven's config directory during the build, which is useful if you don't want to package a custom Maven image but still need your company's private repository:

mkdir -p /root/.m2 && mv settings.xml /root/.m2/

Note that the public maven:latest image uses OpenJDK as its JDK environment. Some older projects hit strange problems when compiled with OpenJDK and won't get through the mvn build, which is why I packaged a Maven image based on the Oracle JDK myself.

In addition, it is better not to add -U after mvn package, because it forces Maven to check whether each dependency has a newer version, which is very slow.
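Putting the build step together, the script is roughly the following (a sketch, assuming settings.xml sits in the repository root as described above):

    # copy the repo's Maven settings into place, then build
    mkdir -p /root/.m2 && cp settings.xml /root/.m2/settings.xml
    mvn clean package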

3.2.3 Publish configuration

The working directory is still the one left after git clone. Specify the relative path of the Dockerfile and the name of the image to build. ${CICD_EXECUTION_SEQUENCE} is a variable provided by Rancher (the pipeline build number) and is used here to distinguish image versions.
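For example, the image name configured in the publish step ends up looking something like this (the registry host and project here are placeholders for your own Harbor setup):

    harbor.example.com/rancher/my-app:${CICD_EXECUTION_SEQUENCE}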

Dockerfile configuration:

For a typical back-end web project, the mvn-built package is copied into the Tomcat directory; the base image here is CentOS 7 + Tomcat 8 + Oracle JDK 8.

The Spring Boot project has Tomcat built in, so the jar is copied directly into the image and run as-is.
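A minimal Dockerfile sketch for the Spring Boot case (the base image, jar name and port are placeholders, not the exact ones used here):

    FROM harbor.example.com/base/oracle-jdk:8
    # target/app.jar is the artifact produced by the 2-build step in the clone directory
    COPY target/app.jar /app/app.jar
    WORKDIR /app
    EXPOSE 8080
    CMD ["java", "-jar", "app.jar"]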

For the front-end Vue project, the base image is the public Node.js image node:current-slim on Docker Hub: copy the code in first, then npm install, and finally npm start (npm run dev). I simply run the dev environment directly.

Ps: if you build the static files instead, you can package them into an nginx image.

Also, the host check needs to be disabled for the Vue project's dev environment:

/build/webpack.dev.conf.js

watchOptions: {
  poll: config.dev.poll,
},
// 1. Do not check the host
disableHostCheck: true

/config/index.js

proxyTable: {
  '/updance': {
    // 2. target points at the back-end pod's node IP + port
    target: 'http://192.168.242.83:32583',
    changeOrigin: true,
    pathRewrite: { '^/updance': '/updance' }
  }
},
// 3. The host is set to 0.0.0.0
host: '0.0.0.0'
3.2.4 Deploy configuration

The last step is to call Rancher’s API to update the pod's image:

curl -k -u token-zcgsp:****************************************** \
  -X PUT \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"containers":[{"image":"harbor.bluemoon.com.cn/rancher/buying-center-parent:'${CICD_EXECUTION_SEQUENCE}'","name":"snc-backed"}]}' \
  'https://192.168.242.80/v3/project/c-zrq7x:p-kql7m/workloads/deployment:default:snc-backed'

See Rancher’s API documentation for more options on the workload.

3.3 Run the pipeline

After the configuration is complete, it is saved as a .rancher-pipelines.yml file, which can be downloaded locally or committed to the corresponding code branch.

The pod (workload) must already exist before the final API call can succeed.

During execution you can watch the run log in real time: each step turns green on success and red on failure. Finally, check the pod and you can see it being updated automatically.

The above is the complete process. That's all for now, I'm off to play Odyssey.