This is the sixth day of my participation in the Gwen Challenge.

Drone for Kubernetes Continuous Integration

To follow this article, you should understand containerization, continuous integration, and Kubernetes.

Introduction

Drone is a container-native continuous integration system that aims to be a self-service replacement for aging Jenkins installations. It is relatively simple to use, performs well, and is very flexible: each step runs in its own container and shares the working directory, though it has somewhat fewer plugins and less documentation. This article integrates Drone CI with Kubernetes to implement continuous integration, walking through the whole flow from the Git repository to the image registry and the triggered build, including writing the corresponding Kubernetes YAML and building the images. For a non-Kubernetes setup, see the previous article on Drone CI practice in a Docker environment.

For a quick setup, none of the services below have persistent volumes mounted or a nodeSelector configured.

Prerequisites

A Kubernetes cluster. Mine is the highly available three-node version, with one node shut down to save memory:

```
[root@node0 kubectls]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                         ERROR
etcd-0               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}
controller-manager   Healthy     ok
scheduler            Healthy     ok
etcd-1               Unhealthy   Get "https://172.16.3.131:2379/health": dial tcp 172.16.3.131:2379: connect: no route to host
[root@node0 kubectls]# kubectl get nodes
NAME    STATUS     ROLES    AGE    VERSION
node0   Ready      master   207d   v1.19.4
node1   NotReady   master   207d   v1.19.4
node2   Ready      <none>   207d   v1.19.4
```

The usual trick: have kubectl generate a template with a dry run, then modify it:

```bash
kubectl create deployment gitea --image gitea/gitea -o yaml --dry-run=client > gitea.yml
```

1. Gitea setup

Gitea is chosen as the Git repository because it is relatively lightweight. Write the YAML first: since Gitea is a stateful service and needs persistent files, it is usually deployed as a StatefulSet. Gitea itself needs no other configuration, so the manifest looks like this:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitea
spec:
  serviceName: gitea-headless
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
      - image: gitea/gitea
        name: gitea
        ports:
         - name: http
           containerPort: 3000
        securityContext:
            privileged: true
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-svc
  labels:
    app: gitea
spec:
  ports:
  - name: http
    targetPort: 3000
    port: 3000
  selector:
    app: gitea
  type: NodePort
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-headless
  labels:
    app: gitea
spec:
  ports:
  - name: http
    targetPort: 3000
    port: 3000
  selector:
    app: gitea
status:
  loadBalancer: {}
```
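Apply the manifest and wait for the Pod to come up (assuming the file generated by the dry run above, gitea.yml):

```bash
kubectl apply -f gitea.yml
kubectl rollout status statefulset/gitea   # wait until the replica is ready
kubectl get pod -l app=gitea
```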

After deployment, check the Service and access Gitea to initialize it:

```
kubectl get svc | grep 'gitea'
gitea-headless   ClusterIP   10.110.232.83   <none>   3000/TCP         83m
gitea-svc        NodePort    10.111.69.19    <none>   3000:31466/TCP   83m
```

Visit node:31466 and start configuring Gitea:

```
Base URL:  <host of the node where Gitea resides>:<nodeport>
Host:      <host of the node where Gitea resides>
```

Create an OAuth2 application in Gitea, with the redirect URL set to <Drone node>:<nodeport>/login.
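If you prefer to script this step, Gitea also exposes an OAuth2 application API; a sketch with curl, assuming a hypothetical admin user `gitea_admin`/`password` and the NodePorts used in this article (the returned client_id and client_secret go into the Drone ConfigMap below):

```bash
# create an OAuth2 application via the Gitea API
curl -s -u gitea_admin:password \
  -X POST "http://172.16.3.130:31466/api/v1/user/applications/oauth2" \
  -H "Content-Type: application/json" \
  -d '{"name": "drone", "redirect_uris": ["http://172.16.3.130:30270/login"]}'
```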

2. Drone Server setup

Create the Drone Server service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: drone
  name: drone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone
  strategy: {}
  template:
    metadata:
      labels:
        app: drone
    spec:
      containers:
      - image: drone/drone
        name: drone
        ports:
        - containerPort: 80
          name: http
        env:
        - name: DRONE_GITEA_SERVER
          valueFrom:
            configMapKeyRef:
              name: drone-cm
              key: DRONE_GITEA_SERVER
        - name: DRONE_GITEA_CLIENT_ID
          valueFrom:
            configMapKeyRef:
              name: drone-cm
              key: DRONE_GITEA_CLIENT_ID
        - name: DRONE_GITEA_CLIENT_SECRET
          valueFrom:
            configMapKeyRef:
              name: drone-cm
              key: DRONE_GITEA_CLIENT_SECRET
        - name: DRONE_RPC_SECRET
          valueFrom:
            configMapKeyRef:
              name: drone-cm
              key: DRONE_RPC_SECRET
        - name: DRONE_USER_CREATE
          valueFrom:
            configMapKeyRef:
              name: drone-cm
              key: DRONE_USER_CREATE
        - name: DRONE_SERVER_HOST
          valueFrom:
            configMapKeyRef:
              name: drone-cm
              key: DRONE_SERVER_HOST
        - name: DRONE_SERVER_PROTO
          valueFrom:
            configMapKeyRef:
              name: drone-cm
              key: DRONE_SERVER_PROTO
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: sock
        resources: {}
      volumes:
      - name: sock
        hostPath:
          path: /var/run/docker.sock
status: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: drone-cm
  namespace: default
data:
  DRONE_GITEA_SERVER: http://172.16.3.130:31466   # usually used together with a nodeSelector
  DRONE_GITEA_CLIENT_ID: ee2484a3-8953-4f1b-bf2f-28e9a95663be
  DRONE_GITEA_CLIENT_SECRET: abFGMz0Q9kSX46LQLdq0bgvqQpWFbZ3VLvr7mrXMBs5M
  DRONE_RPC_SECRET: dd6fed184d56520b5c72ff652f941eb2   # generated with: openssl rand -hex 16
  DRONE_USER_CREATE: username:root,admin:true          # example value; format username:<user>,admin:true
  DRONE_SERVER_HOST: 172.16.3.130:30270                # Drone svc port
  DRONE_SERVER_PROTO: http
---
apiVersion: v1
kind: Service
metadata:
  name: drone-svc
  labels:
    app: drone
spec:
  ports:
  - name: http
    targetPort: 80
    nodePort: 30270   # Drone svc port
    port: 80
  selector:
    app: drone
  type: NodePort
status:
  loadBalancer: {}
```
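The RPC secret shared between the server and the runners is just a random hex string, as the comment notes; generate one and apply the manifests (the file name drone.yml is an assumption):

```bash
openssl rand -hex 16          # paste the output into DRONE_RPC_SECRET
kubectl apply -f drone.yml
kubectl get pod -l app=drone
```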

Visit Drone and authorize it to access Gitea via OAuth. I have already authorized it here.

In Drone, open the repository settings and enable Trusted mode (required so the pipeline can mount host paths).
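The same setting can also be toggled from the command line; a sketch, assuming the drone CLI is installed, DRONE_SERVER and DRONE_TOKEN are exported, and the repository is named root/springboot-test (hypothetical):

```bash
# trusted mode lets the pipeline mount host paths such as /var/run/docker.sock
drone repo update --trusted root/springboot-test
```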

3. Drone Runner setup

The Drone Runner is responsible for running the actual pipelines, so one copy needs to run on each node; we deploy it as a DaemonSet.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: drone-run
  name: drone-run
spec:
  selector:
    matchLabels:
      app: drone-run
  template:
    metadata:
      labels:
        app: drone-run
    spec:
      containers:
      - image: drone/drone-runner-docker
        name: drone-runner
        ports:
        - containerPort: 3000
          name: http
        env:
        - name: DRONE_RPC_PROTO
          valueFrom:
            configMapKeyRef:
              name: drone-run-cm
              key: DRONE_RPC_PROTO
        - name: DRONE_RPC_HOST
          valueFrom:
            configMapKeyRef:
              name: drone-run-cm
              key: DRONE_RPC_HOST
        - name: DRONE_RUNNER_CAPACITY
          valueFrom:
            configMapKeyRef:
              name: drone-run-cm
              key: DRONE_RUNNER_CAPACITY
        - name: DRONE_RPC_SECRET
          valueFrom:
            configMapKeyRef:
              name: drone-run-cm
              key: DRONE_RPC_SECRET
        - name: DRONE_RUNNER_NAME
          valueFrom:
            configMapKeyRef:
              name: drone-run-cm
              key: DRONE_RUNNER_NAME
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: sock
      volumes:
      - name: sock
        hostPath:
          path: /var/run/docker.sock
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: drone-run-cm
  namespace: default
data:
  DRONE_RPC_PROTO: http
  DRONE_RPC_HOST: 172.16.3.130:30270   # Drone node:nodePort
  DRONE_RUNNER_CAPACITY: "2"
  DRONE_RPC_SECRET: dd6fed184d56520b5c72ff652f941eb2   # generated above
  DRONE_RUNNER_NAME: drone-runner
---
apiVersion: v1
kind: Service
metadata:
  name: drone-run-svc
  labels:
    app: drone-run
spec:
  ports:
  - name: http
    targetPort: 3000
    port: 3000
  selector:
    app: drone-run
  type: NodePort
status:
  loadBalancer: {}
```
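Since this is a DaemonSet, one runner Pod should land on every schedulable node (the NotReady node is skipped). Apply and verify (the file name drone-runner.yml is an assumption):

```bash
kubectl apply -f drone-runner.yml
kubectl get daemonset drone-run
kubectl get pod -l app=drone-run -o wide   # expect one Pod per Ready node
```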

Check that everything is running:

```
kubectl get pod | grep 'drone'
drone-68cf888fb-9888w   1/1   Running   0   54m
drone-run-8h7ft         1/1   Running   0   43m
drone-run-z5v95         1/1   Running   0   43m
```

Check Runner’s log to see if there are any errors.

```
[root@node0 drone]# kubectl logs -f drone-run-8h7ft
time="2021-06-20T05:33:43Z" level=info msg="starting the server" addr=":3000"
time="2021-06-20T05:33:43Z" level=info msg="successfully pinged the remote server"
time="2021-06-20T05:33:43Z" level=info msg="polling the remote server" arch=amd64 capacity=2 endpoint="http://172.16.3.130:30270" kind=pipeline os=linux type=docker
```

4. Nexus setup

Nexus, rather than Harbor, is used as the image registry here, because it also serves as a Maven repository. Since it is stateful and needs persistence, it is deployed as a StatefulSet (it is a bit memory hungry).

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nexus
spec:
  serviceName: nexus-headless
  replicas: 1
  selector:
    matchLabels:
      app: nexus
  template:
    metadata:
      labels:
        app: nexus
    spec:
      containers:
      - image: sonatype/nexus3
        name: nexus
        ports:
         - name: http
           containerPort: 8081
         - name: http2
           containerPort: 5000
        securityContext:
            privileged: true
---
apiVersion: v1
kind: Service
metadata:
  name: nexus-svc
  labels:
    app: nexus
spec:
  ports:
  - name: http
    targetPort: 8081
    port: 8081
  - name: http2
    targetPort: 5000
    port: 5000
  selector:
    app: nexus
  type: NodePort
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nexus-headless
  labels:
    app: nexus
spec:
  ports:
  - name: http
    targetPort: 8081
    port: 8081
  - name: http2
    targetPort: 5000
    port: 5000
  selector:
    app: nexus
status:
  loadBalancer: {}
```
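After Nexus starts (the first boot can take a few minutes), log in with the generated admin password; for the sonatype/nexus3 image it is written to a file inside the container:

```bash
# valid until the password is changed through the UI
kubectl exec nexus-0 -- cat /nexus-data/admin.password
```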

After the deployment is complete, check the Service, access Nexus, and create a Docker (hosted) image repository.

Configure daemon.json on all nodes to add the private registry address:

```
vi /etc/docker/daemon.json
  "insecure-registries": ["172.16.3.130:31834"]

systemctl daemon-reload
service docker restart
```
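For reference, a minimal complete /etc/docker/daemon.json, assuming the node has no other Docker daemon settings (otherwise merge the key into the existing file instead of overwriting it):

```bash
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["172.16.3.130:31834"]
}
EOF
systemctl daemon-reload && service docker restart
```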

5. Build a kubectl image (I didn't find a ready-made one, so I threw one together)

The last step of the pipeline needs kubectl to apply the deployment YAML, so we create an image that can run kubectl.

Write the Dockerfile, copying the kubeconfig file in (the exact paths were garbled in the original; this assumes the kubeconfig is a file named `config` that ends up at /home/config, matching the kubectl command used later):

```dockerfile
FROM alpine
# copy the kubeconfig in; /home/config matches the path used by kubectl below
COPY config /home/config
WORKDIR /home/
CMD tail -f /dev/null
```

The host's /usr/bin/kubectl is mounted into the container at startup, and then:

```bash
kubectl --kubeconfig ./config apply -f ./deploy.yml
```
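Build the image and push it somewhere the runner nodes can pull from; a sketch, assuming the Dockerfile and the kubeconfig file sit in the current directory and the image is published as yujian1996/kubectls (the name used in the pipeline below):

```bash
docker build -t yujian1996/kubectls .
docker push yujian1996/kubectls
```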

6. Modify .drone.yml (the pipeline configuration file, similar to a Jenkins Pipeline)

Modify the .drone.yml file in the project root. To get a fast build, the sonar part can be commented out.

```yaml
kind: pipeline
name: run
type: docker

steps:
- name: package & unit test
  image: maven:3.6.2-jdk-8
  pull: if-not-exists
  commands:
  - mvn clean
  - mvn org.jacoco:jacoco-maven-plugin:prepare-agent install -Dmaven.test.failure.ignore=true
  - mvn package
  volumes:
  - name: cache
    path: /root/.m2
  when:
    branch: master
    event: [ push ]

#- name: sonar scan
#  image: aosapps/drone-sonar-plugin
#  settings:
#    sonar_host:
#      from_secret: sonar_host
#    sonar_token:
#      from_secret: sonar_token
#  when:
#    branch: master
#    event: [ push ]

- name: build image
  image: plugins/docker
  pull: if-not-exists
  settings:
    purge: false
    repo: 172.16.3.130:31465/spirngboot/test
    username: admin
    password: admin
    registry: 172.16.3.130:31465
    insecure: true
    tags: 1
  volumes:
  - name: docker
    path: /var/run/docker.sock
  when:
    branch: master
    event: [ push ]

# an alternative sonar step configured via environment variables:
#  pull: if-not-exists
#  environment:
#    accessKey: edd02de6d6402150514802d82505ba4b0b59314e186fc98f736255ab3156c029
#    projectKeys: root:test
#    sonarUrl: http://192.168.31.79:9000
#  when:
#    status:
#    - success
#    - failure

- name: running container
  image: yujian1996/kubectls
  pull: if-not-exists
  volumes:
  - name: kubectl
    path: /usr/bin/kubectl
  commands:
  - ls
  - cat ./deploy.yml
  - kubectl --kubeconfig /home/config apply -f ./deploy.yml
  when:
    branch: master
    event: [ push ]

volumes:
- name: cache
  host:
    path: /root/.m2
- name: docker
  host:
    path: /var/run/docker.sock
- name: kubectl
  host:
    path: /usr/bin/kubectl

trigger:
  branch:
  - master
  event:
  - promote
  - push
```

Then add the Kubernetes YAML for the application's Deployment and Service (deploy.yml) to the project root directory:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: srpingboot
  namespace: default
  labels:
    app: srpingboot
spec:
  type: NodePort
  ports:
  - port: 8080
  selector:
    app: srpingboot
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: srpingboot
  labels:
    app: srpingboot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: srpingboot
  template:
    metadata:
      labels:
        app: srpingboot
    spec:
      containers:
      - name: srpingboot
        image: 172.16.3.130:31465/spirngboot/test:1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
```
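Before committing, the manifest can be validated locally without touching the cluster:

```bash
# client-side dry run: checks that the YAML parses and the fields are typed correctly
kubectl apply --dry-run=client -f deploy.yml
```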

7. Test the pipeline

Push the code to trigger the pipeline, then check the build status in Drone: the whole release took a little over 40 seconds, faster than Jenkins. For convenience I pinned the image tag to 1; normally the tag would be the commit id, after which sed replaces it in the configuration file and kubectl apply starts the deployment.
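That commit-id tagging is not shown in the final pipeline above; a sketch of what the step could look like, using Drone's built-in DRONE_COMMIT_SHA variable (the sed pattern assumes the image line in the deploy.yml shown earlier):

```bash
# tag the image with the short commit id and roll it into deploy.yml before applying
TAG=${DRONE_COMMIT_SHA:0:8}
sed -i "s|spirngboot/test:.*|spirngboot/test:${TAG}|" deploy.yml
kubectl --kubeconfig /home/config apply -f deploy.yml
```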

Then check the Service and Pod in Kubernetes; srpingboot-686b7bff78-9shhm is already running:

```
kubectl get pod
NAME                          READY   STATUS        RESTARTS   AGE
drone-68cf888fb-24ghx         1/1     Running       0          51m
drone-run-8h7ft               1/1     Running       1          156m
drone-run-z5v95               1/1     Running       2          156m
gitea-0                       1/1     Running       3          3h33m
mysql-6fc5954fc5-dw9k9        0/1     Terminating   226        111d
nexus-0                       1/1     Running       9          3h2m
srpingboot-686b7bff78-9shhm   1/1     Running       0          5m18s
traefik-hn6n8                 1/1     Terminating   7          206d

kubectl get svc
srpingboot   NodePort   10.110.55.83   <none>   8080:30093/TCP
```
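The Service is exposed on NodePort 30093, so the app can be smoke-tested from outside the cluster; for example, assuming the node IP used above and that the health endpoint is among the five actuator endpoints exposed in the startup log below:

```bash
curl http://172.16.3.130:30093/actuator/health
# expected: {"status":"UP"}
```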

Finally, look at the log: the application started successfully with no exceptions. A Drone-based CI/CD pipeline is now up and running.

```
[root@node0 kubectls]# kubectl logs -f srpingboot-686b7bff78-9shhm

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.4.5)

2021-06-20 08:04:11.667  INFO 6 --- [main] c.e.s.SpringBootTestDemoApplication      : Starting SpringBootTestDemoApplication v0.0.1-SNAPSHOT using Java 1.8.0_292 on srpingboot-686b7bff78-9shhm with PID 6 (/app.jar started by root in /)
2021-06-20 08:04:11.672  INFO 6 --- [main] c.e.s.SpringBootTestDemoApplication      : No active profile set, falling back to default profiles: default
2021-06-20 08:04:13.523  WARN 6 --- [main] io.undertow.websockets.jsr               : UT026010: Buffer pool was not set on WebSocketDeploymentInfo, the default pool will be used
2021-06-20 08:04:13.547  INFO 6 --- [main] io.undertow.servlet                      : Initializing Spring embedded WebApplicationContext
2021-06-20 08:04:13.547  INFO 6 --- [main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1788 ms
2021-06-20 08:04:15.013  INFO 6 --- [main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 5 endpoint(s) beneath base path '/actuator'
2021-06-20 08:04:15.xxx  INFO 6 --- [main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2021-06-20 08:04:15.481  INFO 6 --- [main] io.undertow                              : starting server: Undertow - 2.2.7.Final
2021-06-20 08:04:15.497  INFO 6 --- [main] org.xnio                                 : XNIO version 3.8.0.Final
2021-06-20 08:04:15.514  INFO 6 --- [main] org.xnio.nio                             : XNIO NIO Implementation Version 3.8.0.Final
2021-06-20 08:04:15.657  INFO 6 --- [main] org.jboss.threads                        : JBoss Threads version 3.1.0.Final
2021-06-20 08:04:15.720  INFO 6 --- [main] o.s.b.w.e.undertow.UndertowWebServer     : Undertow started on port(s) 8080 (http)
2021-06-20 08:04:16.080  INFO 6 --- [main] c.e.s.SpringBootTestDemoApplication      : Started SpringBootTestDemoApplication in 4.915 seconds (JVM running for 5.544)
```