Moment For Technology

Kubernetes and Jenkins: Building a Jenkins continuous integration platform on Kubernetes

Posted on Dec. 2, 2022, 5:48 p.m. by Andrew Sutton

Kubernetes+Docker+Jenkins Continuous integration architecture diagram

  1. Build a Kubernetes (K8S) cluster
  2. Jenkins calls the K8S API
  3. A Jenkins Slave Pod is generated dynamically
  4. The Slave Pod pulls the Git code, compiles it, and packages the image
  5. The image is pushed to Harbor
  6. When the Slave's work is done, the Pod destroys itself
  7. Deploy to the test or production Kubernetes platform

Kubernetes+Docker+Jenkins continuous integration solution benefits

  • Service high availability

    • When Jenkins Master fails, Kubernetes will automatically create a new Jenkins Master container and allocate the Volume to the new container to ensure that data is not lost and the cluster service is highly available
  • Dynamic scaling, rational use of resources

    • Each time a Job is run, a Jenkins Slave is automatically created. After the Job is complete, the Slave is automatically deregistered and the container is deleted, and resources are automatically released
    • In addition, Kubernetes dynamically schedules Slaves onto idle nodes according to each node's resource usage, reducing the case where one node sits at high utilization while Jobs still queue on it
  • Scalability

    • When the resources of the Kubernetes cluster are severely insufficient and Jobs queue up, you can easily add a Kubernetes Node to the cluster to scale out

Installing Kubernetes with kubeadm

For more K8S detail, refer to the official Kubernetes documentation.

Kubernetes architecture

  • API Server: exposes the Kubernetes API; every resource request is made through the interface kube-apiserver provides.
  • Etcd: the default storage system Kubernetes provides for all cluster data. Have a backup plan for the Etcd data before using it.
  • Controller-Manager: the cluster's management control center. It manages Nodes, Pod replicas, Endpoints, Namespaces, ServiceAccounts, and ResourceQuotas in the cluster. When a Node goes down unexpectedly, the Controller-Manager discovers this in time and runs an automated repair process, keeping the cluster in its expected working state.
  • Scheduler: watches newly created Pods that have no Node assigned and selects a Node for each Pod.
  • Kubelet: maintains the container lifecycle and manages Volumes and networking.
  • Kube-Proxy: a core component deployed on every Node; it implements communication and load balancing for Kubernetes Services.

Installation Environment Preparation

Host            IP               Installed software
k8s-master      192.168.188.116  kube-apiserver, kube-controller-manager, kube-scheduler, Docker, Etcd, Calico, NFS
k8s-node1       192.168.188.117  kubelet, kube-proxy, Docker
k8s-node2       192.168.188.118  kubelet, kube-proxy, Docker
Harbor server   192.168.188.119  Harbor
Gitlab server   192.168.188.120  Gitlab

The following steps must be completed on all three K8S servers.

```shell
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux (temporary)
setenforce 0
# Disable SELinux (permanent)
sed -i 's/enforcing/disabled/' /etc/selinux/config

# Disable swap (temporary)
swapoff -a
# Disable swap (permanent; '&' keeps the matched line, commented out)
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Set the host name as planned, on each machine respectively
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2

# Add the IP addresses to /etc/hosts
cat >> /etc/hosts <<EOF
192.168.188.116 master
192.168.188.117 node1
192.168.188.118 node2
EOF

# Configure system parameters: pass bridged traffic through iptables and enable forwarding
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
sysctl -p /etc/sysctl.d/k8s.conf

# Preconditions for kube-proxy to use IPVS
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```
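The permanent swap-off sed is easy to get wrong: the `&` in the replacement keeps the matched line. A quick sketch on a scratch file (the fstab content below is a made-up sample, not a real fstab):

```shell
# Demonstrate the swap-commenting sed on a scratch file (hypothetical sample content)
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# '&' inserts the whole matched line after the '#', so the entry is commented out, not erased
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
grep swap /tmp/fstab.demo   # -> #/dev/mapper/centos-swap swap swap defaults 0 0
```

Without the `&`, the swap entry would be replaced by a bare `#` and the original line would be lost.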

Install Docker, Kubelet, Kubeadm, and Kubectl

All nodes install Docker, kubeadm, and kubelet. ==Kubernetes' default CRI (container runtime) is Docker==, so install Docker first.

Install Docker

```shell
# Install yum-utils, then add the Docker repo
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Update the yum package index
yum makecache fast

# Install Docker CE (community edition)
yum -y install docker-ce

# View the Docker version
docker version

# Start Docker and enable it at boot
systemctl enable docker --now

# Configure a Docker registry mirror; this is my own Aliyun accelerator,
# everyone's is different -- get yours from the Aliyun console
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://m0rsqupc.mirror.aliyuncs.com"]
}
EOF

# Verify
[root@master ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://m0rsqupc.mirror.aliyuncs.com"]
}

# Then restart Docker
systemctl restart docker
```

Install kubelet, Kubeadm, and Kubectl

  • Kubeadm: command used to initialize the cluster
  • Kubelet: Used to start pods, containers, etc., on each node in the cluster
  • Kubectl: Command line tool used to communicate with the cluster
Add the Kubernetes software source:

```shell
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install a pinned version
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet --now
```

Deploy the Kubernetes Master (Master node)

```shell
kubeadm init \
  --kubernetes-version=v1.18.0 \
  --apiserver-advertise-address=192.168.188.116 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
```

The `docker images` command lets you view the images that were pulled.

The init output confirms that the Kubernetes images are installed, and it also prints the following command, which the ==slave nodes== will use to join:

```shell
kubeadm join 192.168.188.116:6443 --token IC49lm.zuwab84r0Zfs6bbr \
    --discovery-token-ca-cert-hash sha256:270285cba2080b1e291a3a2b3b21730616b59c95c55ca6f950fecf7b68869b97
```
Use the kubectl tool:

```shell
mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes
```

The master node is running, but it is not yet Ready; the network add-on comes next.

Install Calico. Calico is the network component that lets the master and its worker nodes communicate over the Pod network.

```shell
mkdir k8s
cd k8s
wget https://docs.projectcalico.org/v3.10/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

# Change the pod CIDR to match the one passed to kubeadm init
sed -i 's/192.168.0.0/10.244.0.0/g' calico.yaml

# Install the Calico components
kubectl apply -f calico.yaml

# Check the pods; they must all be READY and Running
kubectl get pod --all-namespaces
kubectl get pod --all-namespaces -o wide
```
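What that sed rewrite does to calico.yaml, replayed on a one-line scratch file (the snippet is a stand-in for the real pod-CIDR entry, not copied from it):

```shell
# Hypothetical stand-in for calico.yaml's pod-CIDR value
echo 'value: "192.168.0.0/16"' > /tmp/calico-snippet.yaml
# Same substitution as above: align Calico's pool with --pod-network-cidr=10.244.0.0/16
sed -i 's/192.168.0.0/10.244.0.0/g' /tmp/calico-snippet.yaml
cat /tmp/calico-snippet.yaml   # -> value: "10.244.0.0/16"
```

If the two CIDRs disagree, Pods get addresses Calico does not route, so this one-line edit matters.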

Join the Kubernetes Nodes (Slave nodes)

You need to go to the node1 and node2 servers to add the new nodes to the cluster.

Execute the kubeadm join command from the kubeadm init output.

This command comes from the master's init output. Everyone's ==token and hash== are different; copy the ones generated for you.

```shell
kubeadm join 192.168.188.116:6443 --token IC49lm.zuwab84r0Zfs6bbr \
    --discovery-token-ca-cert-hash sha256:270285cba2080b1e291a3a2b3b21730616b59c95c55ca6f950fecf7b68869b97

# Check that kubelet is running
systemctl status kubelet

# Back on the master, list the nodes
[root@master k8s]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   17m     v1.18.0
node1    Ready    <none>   2m27s   v1.18.0
node2    Ready    <none>   2m25s   v1.18.0
```

If every status is Ready, the cluster environment was set up successfully.

Kubectl common commands

```shell
kubectl get nodes                              # status of all master and worker nodes
kubectl get ns                                 # all namespace resources
kubectl get pods -n {$nameSpace}               # pods in a namespace
kubectl describe pod <pod> -n {$nameSpace}     # details of a pod
kubectl logs --tail=1000 <pod> | less          # view logs
kubectl create -f xxx.yml                      # create cluster resources from a config file
kubectl delete -f xxx.yml                      # delete the resources defined in a config file
kubectl delete pod <pod> -n {$nameSpace}       # delete a pod
kubectl get service -n {$nameSpace}            # view a pod's service
```

Jenkins Continuous integration platform based on Kubernetes

  • Create a Jenkins master node based on K8S
  • Create Jenkins' slave node on K8S to help us build the project

Install and configure NFS

Network File System (NFS) lets different machines and operating systems share files with each other over the network. We use NFS to share ==the configuration files Jenkins runs with, Maven repository dependencies==, and so on.

We installed the NFS server on the K8S primary node

```shell
# Install the NFS service (needed on all K8S nodes)
yum install -y nfs-utils

# Create the shared directory (master node only)
mkdir -p /opt/nfs/jenkins

# Write the NFS share configuration
vim /etc/exports
# Contents:
#   /opt/nfs/jenkins *(rw,no_root_squash)
# '*' opens the directory to all IPs, rw is read/write,
# and no_root_squash keeps root's permissions on the share

# Start NFS and enable it at boot
systemctl enable nfs --now

# View the NFS shared directory
[root@master ~]# showmount -e 192.168.188.116
Export list for 192.168.188.116:
/opt/nfs/jenkins *
```

Install jenkins-Master in Kubernetes

Create an NFS client provisioner

nfs-client-provisioner is a simple external provisioner for NFS in Kubernetes. It does not provide NFS itself; it requires an existing NFS server to supply the storage.

You need to write YAML

  • StorageClass.yaml
  • Persistent storage StorageClass
  • Use `kubectl explain StorageClass` to view the kind and apiVersion
  • Define the name as `managed-nfs-storage`
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env
parameters:
  archiveOnDelete: "true"
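To sanity-check the StorageClass once everything below is running, a claim like the following (a hypothetical test PVC, not part of this article's setup) should get a volume provisioned automatically:

```yaml
# Hypothetical test PVC; seeing it reach Bound proves dynamic provisioning works
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

Apply it with `kubectl create -f test-claim.yaml` and check `kubectl get pvc`; this only works after the provisioner deployment is running.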
  • deployment.yaml
  • Deploy an NFS client provisioner
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.188.116
            - name: NFS_PATH
              value: /opt/nfs/jenkins/
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.188.116
            path: /opt/nfs/jenkins/
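nfs-client-provisioner carves a subdirectory out of NFS_PATH for every volume it provisions, named `${namespace}-${pvcName}-${pvName}`. A sketch of that convention, using the Jenkins volume names that appear later in this article:

```shell
# Directory name the provisioner builds for a provisioned volume:
# ${namespace}-${pvcName}-${pvName}
ns=kube-ops
pvc=jenkins-home-jenkins-0
pv=pvc-0d59af09-0339-46eb-a6a0-550c40365efb
echo "${ns}-${pvc}-${pv}"
# -> kube-ops-jenkins-home-jenkins-0-pvc-0d59af09-0339-46eb-a6a0-550c40365efb
```

This is why the long directory name shows up under /opt/nfs/jenkins when we go looking for the Jenkins admin password later.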

Finally, there is a permission file rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Build the pod

```shell
[root@master nfs-client]# kubectl create -f .
storageclass.storage.k8s.io/managed-nfs-storage created
serviceaccount/nfs-client-provisioner created
deployment.apps/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
Error from server (AlreadyExists): error when creating "rbac.yaml": serviceaccounts "nfs-client-provisioner" already exists
```

The AlreadyExists error is harmless here: the ServiceAccount is declared in both deployment.yaml and rbac.yaml, so the second create is rejected. Check the pod:

```shell
[root@master nfs-client]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-795b4df87d-zchrq   1/1     Running   0          2m4s
```

Install Jenkins - Master

We still need to write Jenkins' Yaml file ourselves

  • rbac.yaml
  • For information about permissions, add some permissions to the Jenkins master node under k8S
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
  namespace: kube-ops
rules:
  - apiGroups: ["extensions", "apps"]
    resources: ["deployments"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: kube-ops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: kube-ops
    
---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkinsClusterRole
  namespace: kube-ops
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
 
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkinsClusterRuleBinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkinsClusterRole
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: kube-ops
  • ServiceAccount.yaml
  • The ServiceAccount information for Jenkins' master node
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: kube-ops
  • StatefulSet.yaml
  • A stateful application; it references the NFS storage defined earlier
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
  namespace: kube-ops
spec:
  serviceName: jenkins
  selector:
    matchLabels:
      app: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        app: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts-alpine
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 8080
            name: web
            protocol: TCP
          - containerPort: 50000
            name: agent
            protocol: TCP
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
      securityContext:
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: jenkins-home
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
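The JAVA_OPTS line ties the JVM heap to the container's memory limit: the Downward API exposes `limits.memory` divided by `1Mi` as LIMITS_MEMORY, and Kubernetes expands the `$(LIMITS_MEMORY)` reference inside the env value. A sketch of the arithmetic for the 1Gi limit above:

```shell
# limits.memory = 1Gi, divisor = 1Mi  ->  LIMITS_MEMORY = 1024
LIMITS_MEMORY=$((1 * 1024))
echo "-Xmx${LIMITS_MEMORY}m"   # -> -Xmx1024m: heap capped at the container limit
```

This keeps the JVM from allocating past the cgroup limit, which would otherwise get the Pod OOM-killed.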
  • Service.yaml
  • Expose some ports
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: kube-ops
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins
  type: NodePort
  ports:
  - name: web
    port: 8080
    targetPort: web
  - name: agent
    port: 50000
    targetPort: agent
```shell
cd jenkins-master/
[root@master jenkins-master]# ls
rbac.yaml  ServiceAccount.yaml  Service.yaml  StatefulSet.yaml

# Create a new namespace
[root@master jenkins-master]# kubectl create namespace kube-ops
namespace/kube-ops created

# Create the Jenkins master node pod
[root@master jenkins-master]# kubectl create -f .
service/jenkins created
serviceaccount/jenkins created
statefulset.apps/jenkins created
role.rbac.authorization.k8s.io/jenkins created
rolebinding.rbac.authorization.k8s.io/jenkins created
clusterrole.rbac.authorization.k8s.io/jenkinsClusterRole created
rolebinding.rbac.authorization.k8s.io/jenkinsClusterRuleBinding created

# View the pod; remember to add our own namespace
[root@master jenkins-master]# kubectl get pod --namespace kube-ops
NAME        READY   STATUS    RESTARTS   AGE
jenkins-0   1/1     Running   0          7m38s

# View the service information
[root@master jenkins-master]# kubectl get service --namespace kube-ops
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
jenkins   NodePort   10.101.126.122   <none>        8080:30942/TCP,50000:31796/TCP   8m2s

# -o wide for more information
[root@master jenkins-master]# kubectl get service --namespace kube-ops -o wide
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE     SELECTOR
jenkins   NodePort   10.101.126.122   <none>        8080:30942/TCP,50000:31796/TCP   8m20s   app=jenkins

# More detailed pod information
[root@master jenkins-master]# kubectl get pod --namespace kube-ops -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
jenkins-0   1/1     Running   0          8m48s   10.244.166.129   node1   <none>           <none>
```

The details show our Jenkins is deployed on node1, so we can access it in a browser via node1's IP plus the NodePort.

We need to unlock Jenkins. The initial password lives in Jenkins' home directory, which is on the NFS share.

```shell
# The NFS directory is /opt/nfs
[root@master jenkins-master]# cd /opt/nfs/
[root@master nfs]# ls
jenkins
[root@master nfs]# cd jenkins/
[root@master jenkins]# ll
total 4
drwxrwxrwx 13 root root 4096 May 13 16:26 kube-ops-jenkins-home-jenkins-0-pvc-0d59af09-0339-46eb-a6a0-550c40365efb
[root@master jenkins]# cd kube-ops-jenkins-home-jenkins-0-pvc-0d59af09-0339-46eb-a6a0-550c40365efb/
[root@master kube-ops-jenkins-home-jenkins-0-pvc-0d59af09-0339-46eb-a6a0-550c40365efb]# ls
config.xml                     jenkins.telemetry.Correlator.xml  nodes                     secrets      war
copy_reference_file.log        jobs                              plugins                   updates
hudson.model.UpdateCenter.xml  logs                              secret.key                userContent
identity.key.enc               nodeMonitors.xml                  secret.key.not-so-secret  users

# This is the administrator password
[root@master kube-ops-jenkins-home-jenkins-0-pvc-0d59af09-0339-46eb-a6a0-550c40365efb]# cat secrets/initialAdminPassword
5496a0ee3afd449fb65c709d6d721c5d
```

Copy it into the browser to unlock Jenkins.

Skip the plug-in installation for now; we will install plug-ins later, after changing the download source. Choose "No" when prompted, create a user, and Jenkins is now running successfully in K8S.

Jenkins Plugin Management

Go to Jenkins - Manage Jenkins - Manage Plugins. The point of this step is that Jenkins downloads its official plugin list locally; we then modify that file to replace the addresses with a domestic plugin mirror.

```shell
[root@master kube-ops-jenkins-home-jenkins-0-pvc-0d59af09-0339-46eb-a6a0-550c40365efb]# pwd
/opt/nfs/jenkins/kube-ops-jenkins-home-jenkins-0-pvc-0d59af09-0339-46eb-a6a0-550c40365efb
[root@master kube-ops-jenkins-home-jenkins-0-pvc-0d59af09-0339-46eb-a6a0-550c40365efb]# cd updates/
[root@master updates]# ls
default.json  hudson.tasks.Maven.MavenInstaller

# default.json holds the plugin download addresses; point them at a domestic mirror
sed -i 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' default.json
sed -i 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g' default.json
```
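What the first substitution does to default.json, replayed on a scratch file (the JSON line below is a made-up sample in the update-center shape, not copied from the real file):

```shell
# Hypothetical sample line in the shape of Jenkins' update-center default.json
echo '{"url": "http://updates.jenkins-ci.org/download/plugins/git/4.0.0/git.hpi"}' > /tmp/default.json
sed -i 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' /tmp/default.json
cat /tmp/default.json
# -> {"url": "https://mirrors.tuna.tsinghua.edu.cn/jenkins/plugins/git/4.0.0/git.hpi"}
```

Every plugin URL keeps its path suffix; only the host prefix is swapped for the Tsinghua mirror.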

Finally, open the Advanced tab of Manage Plugins and change the Update Site to a domestic download address:

Mirrors.tuna.tsinghua.edu.cn/jenkins/upd...

Visit the Jenkins IP followed by /restart in the browser to restart Jenkins, then install the basic plug-ins:

  • Chinese
  • Git
  • Pipeline
  • Extended Choice Parameter

Waiting for the installation

Jenkins integrated with Kubernetes

Install the Kubernetes plug-in.

Go to the bottom of global configuration when the installation is complete

  • Kubernetes address: use the in-cluster service discovery name kubernetes.default.svc.cluster.local
  • If "Connection test successful" appears, Jenkins can communicate with Kubernetes properly
  • Jenkins URL: jenkins.kube-ops.svc.cluster.local:8080

Click the connection test.

Build a Jenkins-Slave custom mirror

When the Jenkins-Master builds a Job, Kubernetes creates a Jenkins-Slave Pod to perform the build. For the Jenkins-Slave image we chose the officially recommended jenkins/jnlp-slave:latest, but that image has no Maven environment, so we need to ==customize== a new image for our use.

So we write a Dockerfile for the custom image:

vim Dockerfile

FROM jenkins/jnlp-slave:latest

MAINTAINER xiaotian

# Switch to the root account
USER root

# Install Maven
COPY apache-maven-3.6.2-bin.tar.gz .
RUN tar -zxf apache-maven-3.6.2-bin.tar.gz && \
    mv apache-maven-3.6.2 /usr/local && \
    rm -f apache-maven-3.6.2-bin.tar.gz && \
    ln -s /usr/local/apache-maven-3.6.2/bin/mvn /usr/bin/mvn && \
    ln -s /usr/local/apache-maven-3.6.2 /usr/local/apache-maven && \
    mkdir -p /usr/local/apache-maven/repo

COPY settings.xml /usr/local/apache-maven/conf/settings.xml

USER jenkins

You also need a settings.xml that points Maven at the Aliyun mirror:


      

! -- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You are under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --

! -- | This is the configuration file for Maven. It can be specified at two levels: | | 1. User Level. This settings.xml file provides configuration for a single user, | and is normally provided in ${user.home}/.m2/settings.xml. | |NOTE: This location can be overridden with the CLI option:
 |
 |                 -s /path/to/user/settings.xml
 |
 |  2. Global Level. This settings.xml file provides configuration for all Maven
 |                 users on a machine (assuming they're all using the same Maven
 |                 installation). It's normally provided in
 |                 ${maven.conf}/settings.xml.
 |
 |                 NOTE: This location can be overridden with the CLI option:
 |
 |                 -gs /path/to/global/settings.xml
 |
 | The sections in this sample file are intended to give you a running start at
 | getting the most out of your Maven installation. Where appropriate, the default
 | values (values used when the setting is not specified) are provided.
 |
 |--
settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"
  ! -- localRepository | The path to the local repository maven will use to store artifacts. | | Default: ${user.home}/.m2/repository localRepository/path/to/local/repo/localRepository --
  localRepository/usr/local/apache-maven/repo/localRepository

  ! -- interactiveMode | This will determine whether maven prompts you when it needs input. If set to false, | maven will use a sensible default value, perhaps based on some other setting, for | the parameter in question. | | Default: true interactiveModetrue/interactiveMode --

  ! -- offline | Determines whether maven should attempt to connect to the network when executing a build. | This will have an effect on artifact downloads, artifact deployment, and others. | | Default: false offlinefalse/offline --

  ! -- pluginGroups | This is a list of additional group identifiers that will be searched when resolving plugins by their prefix, i.e. | when invoking a command line like "mvn prefix:goal". Maven will automatically add the group identifiers | "org.apache.maven.plugins" and "org.codehaus.mojo" if these are not already contained in the list. |--
  pluginGroups
    ! -- pluginGroup | Specifies a further group identifier to use for plugin lookup. pluginGroupcom.your.plugins/pluginGroup --
  /pluginGroups

  ! -- proxies | This is a list of proxies which can be used on this machine to connect to the network. | Unless otherwise specified (by system property or command-line switch), the first proxy | specification in this list marked as active will be used. |--
  proxies
    ! -- proxy | Specification for one proxy, to be used in connecting to the network. | proxy idoptional/id activetrue/active protocolhttp/protocol usernameproxyuser/username passwordproxypass/password hostproxy.host.net/host port80/port nonProxyHostslocal.net|some.host.com/nonProxyHosts /proxy --
  /proxies

  ! -- servers | This is a list of authentication profiles, keyed by the server-id used within the system. | Authentication profiles can be used whenever maven must make a connection to a remote server. |--
  servers
    ! -- server | Specifies the authentication information to use when connecting to a particular server, identified by | a unique name within the system (referred to by the 'id' attribute below). | |NOTE: You should either specify username/password OR privateKey/passphrase, since these pairings are
     |       used together.
     |
    server
      iddeploymentRepo/id
      usernamerepouser/username
      passwordrepopwd/password
    /server
    --

    ! -- Another sample, using keys to authenticate. server idsiteServer/id privateKey/path/to/private/key/privateKey passphraseoptional; leave empty if not used./passphrase /server --
  /servers

  ! -- mirrors | This is a list of mirrors to be used in downloading artifacts from remote repositories. | | It works like this: a POM may declare a repository to use in resolving certain artifacts. | However, this repository may have problems with heavy traffic at times, so people have mirrored | it to several places. | | That repository definition will have a unique id, so we can create a mirror reference for that | repository, to be used as an alternate download site. The mirror site will be the preferred | server for that repository. |--
  mirrors
    ! -- mirror | Specifies a repository mirror site to use instead of a given repository. The repository that | this mirror serves has an ID that matches the mirrorOf element of this mirror. IDs are used | for inheritance and direct lookup purposes, and must be unique across the set of mirrors. | mirror idmirrorId/id mirrorOfrepositoryId/mirrorOf nameHuman  Readable Name for this Mirror./name urlhttp://my.repository.com/repo/path/url /mirror --
    mirror     
      idcentral/id     
      mirrorOfcentral/mirrorOf     
      namealiyun maven/name
      urlhttps://maven.aliyun.com/repository/public/url     
    /mirror
  /mirrors

  <!-- profiles
   | This is a list of profiles which can be activated in a variety of ways, and which can modify
   | the build process. Profiles provided in the settings.xml are intended to provide local machine-
   | specific paths and repository locations which allow the build to work in the local environment.
   |
   | For example, if you have an integration testing plugin - like cactus - that needs to know where
   | your Tomcat instance is installed, you can provide a variable here such that the variable is
   | dereferenced during the build process to configure the cactus plugin.
   |
   | As noted above, profiles can be activated in a variety of ways. One way - the activeProfiles
   | section of this document (settings.xml) - will be discussed later. Another way essentially
   | relies on the detection of a system property, either matching a particular value for the property,
   | or merely testing its existence. Profiles can also be activated by JDK version prefix, where a
   | value of '1.4' might activate a profile when the build is executed on a JDK version of '1.4.2_07'.
   | Finally, the list of active profiles can be specified directly from the command line.
   |
   | NOTE: For profiles defined in the settings.xml, you are restricted to specifying only artifact
   |       repositories, plugin repositories, and free-form properties to be used as configuration
   |       variables for plugins in the POM.
   |
   |-->
  <profiles>
    <!-- profile
     | Specifies a set of introductions to the build process, to be activated using one or more of the
     | mechanisms described above. For inheritance purposes, and to activate profiles via <activatedProfiles/>
     | or the command line, profiles have to have an ID that is unique.
     |
     | An encouraged best practice for profile identification is to use a consistent naming convention
     | for profiles, such as 'env-dev', 'env-test', 'env-production', 'user-jdcasey', 'user-brett', etc.
     | This will make it more intuitive to understand what the set of introduced profiles is attempting
     | to accomplish, particularly when you only have a list of profile id's for debug.
     |
     | This profile example uses the JDK version to trigger activation, and provides a JDK-specific repo.
    <profile>
      <id>jdk-1.4</id>

      <activation>
        <jdk>1.4</jdk>
      </activation>

      <repositories>
        <repository>
          <id>jdk14</id>
          <name>Repository for JDK 1.4 builds</name>
          <url>http://www.myhost.com/maven/jdk14</url>
          <layout>default</layout>
          <snapshotPolicy>always</snapshotPolicy>
        </repository>
      </repositories>
    </profile>
    -->

    <!--
     | Here is another profile, activated by the system property 'target-env' with a value of 'dev',
     | which provides a specific path to the Tomcat instance. To use this, your plugin configuration
     | might hypothetically look like:
     |
     | ...
     | <plugin>
     |   <groupId>org.myco.myplugins</groupId>
     |   <artifactId>myplugin</artifactId>
     |
     |   <configuration>
     |     <tomcatLocation>${tomcatPath}</tomcatLocation>
     |   </configuration>
     | </plugin>
     | ...
     |
     | NOTE: If you just wanted to inject this configuration whenever someone set 'target-env' to
     |       anything, you could just leave off the <value/> inside the activation-property.
     |
    <profile>
      <id>env-dev</id>

      <activation>
        <property>
          <name>target-env</name>
          <value>dev</value>
        </property>
      </activation>

      <properties>
        <tomcatPath>/path/to/tomcat/instance</tomcatPath>
      </properties>
    </profile>
    -->
  </profiles>

  <!-- activeProfiles
   | List of profiles that are active for all builds.
   |
  <activeProfiles>
    <activeProfile>alwaysActiveProfile</activeProfile>
    <activeProfile>anotherAlwaysActiveProfile</activeProfile>
  </activeProfiles>
  -->
</settings>

The last thing you need is a Maven installation package; here I used apache-maven-3.6.2-bin.tar.gz.

[[email protected] jenkins-slave]# ls
apache-maven-3.6.2-bin.tar.gz  Dockerfile  settings.xml

Build the image:

docker build -t jenkins-slave-maven:latest .

View the image:

docker images

Then upload the image to Harbor's public library (every K8s node needs to pull this image, so the public library is the most convenient place for it).

Tag the image:

docker tag jenkins-slave-maven:latest 192.168.188.119:85/library/jenkins-slave-maven:latest

Make the Docker daemon trust the registry:

vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://m0rsqupc.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.188.119:85"]
}

Log in and push:

docker login -u admin -p Harbor12345 192.168.188.119:85
docker push 192.168.188.119:85/library/jenkins-slave-maven:latest

Image uploaded successfully

Test whether a jenkins-slave pod can be created

Create a Jenkins pipeline project

Add the gitLab credentials that the pipeline script needs to use later

Write a pipeline that pulls code from GitLab

def git_address = "http://47.108.13.86:82/maomao_group/tensquare_back.git" 
def git_auth = "bb5a1b2e-8dfa-4557-a79b-66d0f1f05f5c" 

// Create a Pod template labeled jenkins-slave
podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "192.168.188.119:85/library/jenkins-slave-maven:latest"
    )
]) {
  // Build a jenkins-slave pod from the template above
  node("jenkins-slave") {
      // Step 1
      stage('Pull code') {
          checkout([$class: 'GitSCM', branches: [[name: 'master']],
              userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
      }
  }
}
  • podTemplate defines the K8s pod template
  • label is the template's label, referenced later by node()
  • cloud is the cloud name configured in Jenkins' global configuration
  • containerTemplate specifies a container to run inside the pod

The build succeeds

An error encountered along the way: every K8s node must trust the Harbor registry address

vi /etc/docker/daemon.json

{
  "registry-mirrors": ["https://zydiol88.mirror.aliyuncs.com"],
  "insecure-registries": ["harbor-ip:85"]
}
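A malformed daemon.json (a missing bracket, a stray character) stops the Docker daemon from starting at all, so it is worth validating the file before restarting Docker. A minimal check, assuming python3 is available on the host; the file below just mirrors the example above:

```shell
# Write the example config to a scratch path and validate it as JSON.
# The registry addresses are the illustrative values from above.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://zydiol88.mirror.aliyuncs.com"],
  "insecure-registries": ["harbor-ip:85"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```

If the JSON is broken, json.tool prints the parse error and the OK line never appears; fix the file before running systemctl restart docker.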

Jenkins+Kubernetes+Docker complete continuous integration of microservices

Pull the code and build the image

Create an NFS shared directory

Point every jenkins-slave's Maven repository at a shared directory on NFS

mkdir -p /opt/nfs/maven
vi /etc/exports

/opt/nfs/jenkins        *(rw,no_root_squash)
/opt/nfs/maven          *(rw,no_root_squash)

systemctl restart nfs

showmount -e 192.168.188.116
Export list for 192.168.188.116:
/opt/nfs/maven   *
/opt/nfs/jenkins *

Create the project and write the build Pipeline

Add a multi-select option parameter to the pipeline, then add a string parameter.

save

Next, add the Harbor credentials, then write the pipeline script.

def git_address = "http://192.168.188.120:82/maomao_group/tensquare_back.git"
def git_auth = "bb5a1b2e-8dfa-4557-a79b-66d0f1f05f5c"
// Tag for the built image
def tag = "latest"
// Harbor private registry address
def harbor_url = "192.168.188.119:85"
// Harbor project name
def harbor_project_name = "tensquare"
// Harbor credentials
def harbor_auth = "4fe90544-8b7b-4f81-b282-d20a2eb6e437"

podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "192.168.188.119:85/library/jenkins-slave-maven:latest"
    ),
    containerTemplate(
        name: 'docker',
        image: "docker:stable",
        ttyEnabled: true,
        command: 'cat'
    )
  ],
  volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    nfsVolume(mountPath: '/usr/local/apache-maven/repo', serverAddress: '192.168.188.116', serverPath: '/opt/nfs/maven'),
  ],
)
{
  node("jenkins-slave") {
      // Step 1
      stage('Pull code') {
          checkout([$class: 'GitSCM', branches: [[name: '${branch}']],
              userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
      }
      // Step 2
      stage('Compile public project code') {
          // Compile and install the public project
          sh "mvn -f tensquare_common clean install"
      }
      // Step 3
      stage('Build image, deploy project') {
          // Turn the selected project info into an array
          def selectedProjects = "${project_name}".split(',')

          for (int i = 0; i < selectedProjects.size(); i++) {
              // Take each project's name and port
              def currentProject = selectedProjects[i]
              // Project name
              def currentProjectName = currentProject.split('@')[0]
              // Project startup port
              def currentProjectPort = currentProject.split('@')[1]

              // Define the image name
              def imageName = "${currentProjectName}:${tag}"

              // Compile and build the local image
              sh "mvn -f ${currentProjectName} clean package dockerfile:build"

              container('docker') {
                  // Tag the image
                  sh "docker tag ${imageName} ${harbor_url}/${harbor_project_name}/${imageName}"

                  // Log in to Harbor and upload the image
                  withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                      // Log in
                      sh "docker login -u ${username} -p ${password} ${harbor_url}"

                      // Upload the image
                      sh "docker push ${harbor_url}/${harbor_project_name}/${imageName}"
                  }
                  // Delete the local images
                  sh "docker rmi -f ${imageName}"
                  sh "docker rmi -f ${harbor_url}/${harbor_project_name}/${imageName}"
              }
          }
      }
  }
}
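The project_name build parameter arrives as comma-separated "name@port" pairs, which the Groovy loop above splits apart. The same parsing can be sketched in shell; the project names and ports here are just sample values:

```shell
# Sample value in the same "name@port,name@port" shape the pipeline receives.
project_name="tensquare_eureka_server@10086,tensquare_zuul@10020"

# Split on ',' into individual projects, then on '@' into name and port,
# mirroring currentProject.split('@')[0] / [1] in the Groovy script.
for p in $(echo "$project_name" | tr ',' ' '); do
  name="${p%@*}"   # part before '@': project name
  port="${p#*@}"   # part after '@': startup port
  echo "$name -> $port"
done
```

This is why each entry in the multi-select parameter must keep the name@port shape: a missing '@' would leave the port lookup empty and break the image/deploy steps downstream.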

Note: During the build process, you will find that the repository directory cannot be created because the NFS shared directory permissions are insufficient and need to be changed

This only needs to be run on the K8s master node:

chmod -R 777 /opt/nfs/maven

If this error occurs

This is a Docker socket permission problem; run the following on every K8s server:

chmod 777 /var/run/docker.sock

You also need to manually install the parent project into the NFS-backed Maven shared repository that the slaves depend on.

Notes

Writing this script really is painful; it can go wrong in so many ways.

When a build fails, the only option is to work through the log step by step to troubleshoot.

Since this step is so error-prone, I'm documenting the build log here in case I run into problems again later.

Started by user maomao
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Created Pod: kubernetes kube-ops/jenkins-slave-qv8g5-p9qlr
Agent jenkins-slave-qv8g5-p9qlr is provisioned from template jenkins-slave-qv8g5
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "http://jenkins.kube-ops.svc.cluster.local:8080/job/tensquare__back/15/"
    runUrl: "job/tensquare__back/15/"
  labels:
    jenkins: "slave"
    jenkins/label-digest: "5059d2cd0054f9fe75d61f97723d98ab1a42d71a"
    jenkins/label: "jenkins-slave"
  name: "jenkins-slave-qv8g5-p9qlr"
spec:
  containers:
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_AGENT_NAME"
      value: "jenkins-slave-qv8g5-p9qlr"
    - name: "JENKINS_NAME"
      value: "jenkins-slave-qv8g5-p9qlr"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins.kube-ops.svc.cluster.local:8080/"
    image: "192.168.188.119:85/library/jenkins-slave-maven:latest"
    imagePullPolicy: "IfNotPresent"
    name: "jnlp"
    resources:
      limits: {}
      requests: {}
    tty: false
    volumeMounts:
    - mountPath: "/usr/local/apache-maven/repo"
      name: "volume-1"
      readOnly: false
    - mountPath: "/var/run/docker.sock"
      name: "volume-0"
      readOnly: false
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - command:
    - "cat"
    image: "docker:stable"
    imagePullPolicy: "IfNotPresent"
    name: "docker"
    resources:
      limits: {}
      requests: {}
    tty: true
    volumeMounts:
    - mountPath: "/usr/local/apache-maven/repo"
      name: "volume-1"
      readOnly: false
    - mountPath: "/var/run/docker.sock"
      name: "volume-0"
      readOnly: false
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  nodeSelector:
    kubernetes.io/os: "linux"
  restartPolicy: "Never"
  volumes:
  - hostPath:
      path: "/var/run/docker.sock"
    name: "volume-0"
  - name: "volume-1"
    nfs:
      path: "/opt/nfs/maven"
      readOnly: false
      server: "192.168.188.116"
  - emptyDir:
      medium: ""
    name: "workspace-volume"

Running on jenkins-slave-qv8g5-p9qlr in/ home/Jenkins/agent/workspace/tensquare__back [Pipeline] {/ Pipeline stage [Pipeline] {(pull code) [Pipeline] checkout The recommended git tool is: NONE using credential bb5a1b2e-8dfa-4557-a79b-66d0f1f05f5c Cloning the remote Git repository Cloning repository http://192.168.188.120:82/maomao_group/tensquare_back.git  git init/home/Jenkins/agent/workspace/tensquare__back# timeout=10
Fetching upstream changes from http://192.168.188.120:82/maomao_group/tensquare_back.git
  git --version # timeout=10
  git --version # 'git version 2.20.1'
using GIT_ASKPASS to setcredentials gitlab-http-auth  git fetch --tags --force --progress -- http://192.168.188.120:82/maomao_group/tensquare_back.git + refs/heads / * : refs/remotes/origin / *# timeout=10Avoid second fetch Checking out Revision dba0faf11591dc9aa572e43bb0b5134b3ebf195e (origin/master)  git config Remote. Origin. The url http://192.168.188.120:82/maomao_group/tensquare_back.git# timeout=10
  git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
  git rev-parse origin/master^{commit} # timeout=10
  git config core.sparsecheckout # timeout=10
  git checkout -f dba0faf11591dc9aa572e43bb0b5134b3ebf195e # timeout=10
Commit message: "Cow"
  git rev-list --no-walk dba0faf11591dc9aa572e43bb0b5134b3ebf195e # timeout=10[Pipeline]} [Pipeline] // stage [Pipeline] stage [Pipeline] {(public engineering code compile) [Pipeline] sh + mvn-f tensquare_common clean install [INFO] Scanningforprojects... [INFO] [INFO] ------------------- com.tensquare:tensquare_common ------------------- [INFO] Building tensquare_common 1.0 the SNAPSHOT [INFO] -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- (jar) -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- [INFO] - [INFO] Maven-clean-plugin :3.0.0:clean (default-clean) @tensquare_common -- [INFO] Deleting maven-clean-plugin:3.0.0:clean (default-clean) @tensquare_common -- [INFO] Deleting /home/jenkins/agent/workspace/tensquare__back/tensquare_common/target [INFO] [INFO] --- Maven-resources-plugin :3.0.1:resources (default-resources) @tensquare_common -- [INFO] Using maven-resources-plugin:3.0.1:resources (default-resources) @tensquare_common -- [INFO] Using'UTF-8'encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /home/jenkins/agent/workspace/tensquare__back/tensquare_common/src/main/resources [INFO] skip non existing resourceDirectory /home/jenkins/agent/workspace/tensquare__back/tensquare_common/src/main/resources [INFO] [INFO] --- Maven-compiler-plugin :3.7.0:compile (default-compile) @tensquare_common -- [INFO] Changes detected - recompiling the module! [INFO] Compiling 5sourcefiles to /home/jenkins/agent/workspace/tensquare__back/tensquare_common/target/classes [INFO] [INFO] --- Maven - resources - the plugin: 3.0.1: testResources (default - testResources) @ tensquare_common - [INFO] Using'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/agent/workspace/tensquare__back/tensquare_common/src/test/resources [INFO] [INFO] -- maven-compiler-plugin:3.7.0:testCompile (default-testcompile) @tensquare_common -- [INFO] -- maven-compiler-plugin:3.7.0:testCompile (default-testcompile) No sources to compile [INFO] [INFO] -- maven-surefire-plugin:2.21.0:test(default-test) @tensquare_common -- [INFO] No tests to run. [INFO] [INFO] -- Maven-jar-plugin :3.0.2:jar (default-jar)  @ tensquare_common --- [INFO] Building jar: / home/Jenkins/agent/workspace/tensquare__back tensquare_common/target/tensquare_common - 1.0 - the SNAPSHOT. Jar [INFO] [INFO] -- Maven-install-plugin :2.5.2:install (default-install) @tensquare_common -- [INFO] Installing / home/Jenkins/agent/workspace/tensquare__back tensquare_common/target/tensquare_common - 1.0 - the SNAPSHOT. Jar to/usr /local/ apache maven repo/com/tensquare tensquare_common / 1.0 - the SNAPSHOT/tensquare_common - 1.0 - the SNAPSHOT. Jar [INFO] Installing /home/jenkins/agent/workspace/tensquare__back/tensquare_common/pom.xml to /usr/local/ apache maven repo/com/tensquare tensquare_common / 1.0 - the SNAPSHOT/tensquare_common - 1.0 - the SNAPSHOT. Pom/INFO ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- [INFO] Total time: 4.162 s [INFO] Finished at: 2021-05-13T15:27:00Z [INFO] ------------------------------------------------------------------------ [Pipeline] } [Pipeline] // stage [Pipeline] stage [Pipeline] { Deployment project) [Pipeline] sh + MVN -f tensquare_Eureka_server Clean package dockerfile:build [INFO] Scanningforprojects... 
[INFO] [INFO] --------------- com.tensquare:tensquare_eureka_server ---------------- [INFO] Building 1.0 the SNAPSHOT tensquare_eureka_server [INFO] -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- (jar) -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- [INFO] [INFO] -- maven-clean-plugin:3.0.0:clean (default-clean) @tensquare_eureka_server -- [INFO] Deleting /home/jenkins/agent/workspace/tensquare__back/tensquare_eureka_server/target [INFO] [INFO] --- Maven-resources-plugin :3.0.1:resources (default-resources) @tensquare_eureka_server -- [INFO] Using maven-resources-plugin:3.0.1:resources (default-resources) @tensquare_eureka_server -- [INFO] Using'UTF-8'encoding to copy filtered resources. [INFO] Copying 1 resource [INFO] Copying 0 resource [INFO] [INFO] --- Maven-compiler-plugin :3.7.0:compile (default-compile) @tensquare_eureka_server -- [INFO] Changes detected - recompiling the module! [INFO] Compiling 1sourcefile to /home/jenkins/agent/workspace/tensquare__back/tensquare_eureka_server/target/classes [INFO] [INFO] --- Maven - resources - the plugin: 3.0.1: testResources (default - testResources) @ tensquare_eureka_server - [INFO] Using'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/agent/workspace/tensquare__back/tensquare_eureka_server/src/test/resources [INFO] [INFO] -- maven-compiler-plugin:3.7.0:testCompile (default-testcompile) @tensquare_eureka_server -- [INFO] No sources to compile [INFO] [INFO] -- maven-surefire-plugin:2.21.0:test(default-test) @tensquare_eureka_server -- [INFO] No tests to run. [INFO] [INFO] -- Maven-jar-plugin :3.0.2:jar (default-jar) @ tensquare_eureka_server --- [INFO] Building jar: / home/Jenkins/agent/workspace/tensquare__back tensquare_eureka_server/target/tensquare_eureka_server - 1.0 - the SNAPSHOT. The jar [INFO] [INFO] - spring - the boot - maven plugin: 2.0.1. RELEASE: repackage (default) @ tensquare_eureka_server - [INFO] [INFO] -- dockerfile-maven-plugin:1.3.6:build (default-cli) @tensquare_eureka_server -- [INFO] Building Docker context /home/jenkins/agent/workspace/tensquare__back/tensquare_eureka_server [INFO] [INFO] Image will be built as tensquare_eureka_server:latest [INFO] [INFO] Step 1/5 : FROM openjdk:8-jdk-alpine [INFO] [INFO] Pulling from library/openjdk [INFO] Digest: sha256:94792824df2df33402f201713f932b58cb9de94a0cd524164a0f2283343547b3 [INFO] Status: Image is up to datefor openjdk:8-jdk-alpine
[INFO]  --- a3562aa0b991
[INFO] Step 2/5 : ARG JAR_FILE
[INFO] 
[INFO]  --- Using cache
[INFO]  --- a2a3b3df4f15
[INFO] Step 3/5 : COPY ${JAR_FILE} app.jar
[INFO] 
[INFO]  --- 656a595f07ab
[INFO] Step 4/5 : EXPOSE 10086
[INFO] 
[INFO]  --- Running in 868cfdbdd284
[INFO] Removing intermediate container 868cfdbdd284
[INFO]  --- 4fa7a8297ad1
[INFO] Step 5/5 : ENTRYPOINT ["java"."-jar"."/app.jar"]
[INFO] 
[INFO]  --- Running inbee52b92749b [INFO] Removing intermediate container bee52b92749b [INFO] --- d0ec04357240 [INFO] Successfully built d0ec04357240 [INFO] Successfully tagged tensquare_eureka_server:latest [INFO] [INFO] Detected build of image with id d0ec04357240 [INFO] Building jar: / home/Jenkins/agent/workspace/tensquare__back tensquare_eureka_server/target/tensquare_eureka_server - 1.0 - the SNAPSHOT - docker -info.jar [INFO] Successfully built tensquare_eureka_server:latest [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 27.837s [INFO] Finished at: 2021-05-13T15:27:29Z [INFO] ------------------------------------------------------------------------ [Pipeline] container [Pipeline] { [Pipeline] sh + docker tag tensquare_eureka_server:latest 192.168.188.119:85 / tensquare tensquare_eureka_server: latest [Pipeline] withCredentials Masking supported the pattern matches  of$username or $password
[Pipeline] {
[Pipeline] sh
Warning: A secret was passed to "sh" using Groovy String interpolation, which is insecure.
		 Affected argument(s) used the following variable(s): [password, username]
		 See https://jenkins.io/redirect/groovy-string-interpolation for+ docker login -u **** -p **** 192.168.188.119:85 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencryptedin /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-storeLogin Succeeded [Pipeline] sh + docker push 192.168.188.119:85 / tensquare/tensquare_eureka_server: latest The push refers To the repository 192.168.188.119:85 / tensquare/tensquare_eureka_server c8a7e30e2667: Preparing ceaf9e1ebef5: Preparing 9b9b7f3d56a0: Preparing f1b5933fe4b5: Preparing ceaf9e1ebef5: Layer already exists 9b9b7f3d56a0: Layer already exists f1b5933fe4b5: Layer already exists c8a7e30e2667: Pushed latest: digest: sha256:320a88f1b5efa46bc74643c156652ca724211fff07578c8733ac7208f2ffa8a9 size: 1159 [Pipeline] } [Pipeline] // withCredentials [Pipeline] sh + docker rmi -f tensquare_eureka_server:latest Untagged: Tensquare_eureka_server: latest [Pipeline] sh + docker rmi -f 192.168.188.119:85 / tensquare/tensquare_eureka_server: the latest Untagged: 192.168.188.119:85 / tensquare/tensquare_eureka_server: latest Untagged: 192.168.188.119:85 / tensquare tensquare_eureka_server @ sha256:320 a88f1b5efa46bc74643c156652ca724211fff07578c8733ac7208f2ff a8a9 Deleted: sha256:d0ec043572402382bd3610e7fe9c63642c838edb43f7a16d822c374d231f9e9a Deleted: sha256:4fa7a8297ad170d46a9b5774b6f6adacd6276e9acae9a4cbbb918286a7617c67 Deleted: sha256:656a595f07ab8eb40c24f787cdff50cbf0082e450251cc007a6bb055e67824e2 Deleted: sha256:caa525b07e2d84ac4c0ef38c69f193cd72d094c285d2971b5017359b9c9cb1d5 [Pipeline] } [Pipeline] // container [Pipeline] } [Pipeline] // stage [Pipeline] } [Pipeline] // node [Pipeline] } [Pipeline] // podTemplate [Pipeline] End of Pipeline Finished: SUCCESSCopy the code

Microservices are deployed to K8S

  • The built microservice images need to be handed over to the K8s environment to run, which requires the Kubernetes Continuous Deploy plugin
  • That plugin covers the continuous deployment side

Modified pipeline script

  • deploy.yml describes how each microservice is deployed in K8s
  • kubeconfigId references a kubeconfig credential that must be added in Jenkins

Add the K8s credential. The kubeconfig file lives on the K8s master node:

cd /root/.kube
cat config

Saving the credential produces an ID, which is then added to the script.

def git_address = "http://192.168.188.120:82/maomao_group/tensquare_back.git"
def git_auth = "bb5a1b2e-8dfa-4557-a79b-66d0f1f05f5c"
// Tag for the built image
def tag = "latest"
// Harbor private registry address
def harbor_url = "192.168.188.119:85"
// Harbor project name
def harbor_project_name = "tensquare"
// Harbor credentials
def harbor_auth = "2ec3c8b6-f9b6-4ef6-b4bc-fdba74f99420"
// K8s credentials
def k8s_auth = "9565b450-3899-4892-8aed-d15b6f26f8fd"

podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "192.168.188.119:85/library/jenkins-slave-maven:latest"
    ),
    containerTemplate(
        name: 'docker',
        image: "docker:stable",
        ttyEnabled: true,
        command: 'cat'
    )
  ],
  volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    nfsVolume(mountPath: '/usr/local/apache-maven/repo', serverAddress: '192.168.188.116', serverPath: '/opt/nfs/maven'),
  ],
)
{
  node("jenkins-slave") {
      // Step 1
      stage('Pull code') {
          checkout([$class: 'GitSCM', branches: [[name: '${branch}']],
              userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
      }
      // Step 2
      stage('Compile public project code') {
          // Compile and install the public project
          sh "mvn -f tensquare_common clean install"
      }
      // Step 3
      stage('Build image, deploy project') {
          // Turn the selected project info into an array
          def selectedProjects = "${project_name}".split(',')

          for (int i = 0; i < selectedProjects.size(); i++) {
              // Take each project's name and port
              def currentProject = selectedProjects[i]
              // Project name
              def currentProjectName = currentProject.split('@')[0]
              // Project startup port
              def currentProjectPort = currentProject.split('@')[1]

              // Define the image name
              def imageName = "${currentProjectName}:${tag}"

              // Compile and build the local image
              sh "mvn -f ${currentProjectName} clean package dockerfile:build"

              container('docker') {
                  // Tag the image
                  sh "docker tag ${imageName} ${harbor_url}/${harbor_project_name}/${imageName}"

                  // Log in to Harbor and upload the image
                  withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                      // Log in
                      sh "docker login -u ${username} -p ${password} ${harbor_url}"

                      // Upload the image
                      sh "docker push ${harbor_url}/${harbor_project_name}/${imageName}"
                  }
                  // Delete the local images
                  sh "docker rmi -f ${imageName}"
                  sh "docker rmi -f ${harbor_url}/${harbor_project_name}/${imageName}"
              }

              def deploy_image_name = "${harbor_url}/${harbor_project_name}/${imageName}"

              // Deploy to K8s
              sh """
                 sed -i 's#\$IMAGE_NAME#${deploy_image_name}#' ${currentProjectName}/deploy.yml
                 sed -i 's#\$SECRET_NAME#${secret_name}#' ${currentProjectName}/deploy.yml
              """

              kubernetesDeploy configs: "${currentProjectName}/deploy.yml", kubeconfigId: "${k8s_auth}"
          }
      }
  }
}
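The sed step above is what wires the pipeline variables into deploy.yml: the file ships with $IMAGE_NAME and $SECRET_NAME placeholders, and the build rewrites them just before kubernetesDeploy runs. A standalone sketch of that substitution, using sample image and secret values and GNU sed:

```shell
# A fragment of deploy.yml with the two placeholders the pipeline replaces.
cat > /tmp/deploy.yml <<'EOF'
      imagePullSecrets:
        - name: $SECRET_NAME
      containers:
        - name: eureka
          image: $IMAGE_NAME
EOF

deploy_image_name="192.168.188.119:85/tensquare/tensquare_eureka_server:latest"
secret_name="registry-auth-secret"

# Same sed commands as in the pipeline; '#' is used as the delimiter so the
# slashes and colon in the image reference need no escaping.
sed -i "s#\$IMAGE_NAME#${deploy_image_name}#" /tmp/deploy.yml
sed -i "s#\$SECRET_NAME#${secret_name}#" /tmp/deploy.yml
cat /tmp/deploy.yml
```

Since the substitution is a plain text replace, the placeholders in deploy.yml must appear exactly as $IMAGE_NAME and $SECRET_NAME, or the manifest reaches K8s with the literal placeholder and the pods fail to pull.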

The Eureka cluster is deployed to Kubernetes

Create a deploy.yml file under the Eureka project:

---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  type: NodePort
  ports:
    - port: 10086
      name: eureka
      targetPort: 10086
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      imagePullSecrets:
        - name: $SECRET_NAME
      containers:
        - name: eureka
          image: $IMAGE_NAME
          ports:
            - containerPort: 10086
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: EUREKA_SERVER
              value: "http://eureka-0.eureka:10086/eureka/,http://eureka-1.eureka:10086/eureka/"
            - name: EUREKA_INSTANCE_HOSTNAME
              value: ${MY_POD_NAME}.eureka
  podManagementPolicy: "Parallel"
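A note on the EUREKA_SERVER value in the manifest: the StatefulSet's serviceName gives each replica a stable name of the form pod.service (eureka-0.eureka, eureka-1.eureka), so the peer list can be derived mechanically from the replica count. A small sketch of that derivation, with the service name, port, and replica count taken from the deploy.yml above:

```shell
# Derive the comma-separated Eureka peer list from the replica count.
service="eureka"; port=10086; replicas=2

peers=""
for i in $(seq 0 $((replicas - 1))); do
  peers="${peers:+$peers,}http://${service}-${i}.${service}:${port}/eureka/"
done
echo "$peers"
```

This reproduces exactly the EUREKA_SERVER value hard-coded in the manifest; if replicas changes, that env value has to change with it.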

Next, change eureka's configuration file application.yml:

server:
  port: ${PORT:10086}
spring:
  application:
    name: eureka

eureka:
  server:
    # Renewal time, that is, the interval between scanning invalid services (default: 60*1000ms)
    eviction-interval-timer-in-ms: 5000
    enable-self-preservation: false
    use-read-only-response-cache: false
  client:
    # How often the eureka client pulls service registry information (default 30s)
    registry-fetch-interval-seconds: 5
    serviceUrl:
      defaultZone: ${EUREKA_SERVER:http://127.0.0.1:${server.port}/eureka/}
  instance:
    # Heartbeat interval: how long before the next heartbeat is sent (default: 30s)
    lease-renewal-interval-in-seconds: 5
    # After receiving a heartbeat, how long to wait for the next one before the lease expires; must be longer than the heartbeat interval (default: 90s)
    lease-expiration-duration-in-seconds: 10
    instance-id: ${EUREKA_INSTANCE_HOSTNAME:${spring.application.name}}:${server.port}@${random.long(100000,999999)}
    hostname: ${EUREKA_INSTANCE_HOSTNAME:${spring.application.name}}

Commit the code to the repository and try to build the Eureka server. The build fails with an error saying the secret_name property cannot be found. Modify the script to define a secret_name variable, then create the registry secret.

First log in to Harbor:

docker login -u maomao -p Xiaotian123 192.168.188.119:85

Then generate the secret with one long command:

kubectl create secret docker-registry registry-auth-secret --docker-server=192.168.188.119:85 --docker-username=maomao --docker-password=Xiaotian123 --docker-email=[email protected]

Check the certificate

kubectl get secret

Then build the Eureka server again. After a successful build, you can view the information on the master node:

kubectl get pods
# Check the port
kubectl get svc

The service can now be reached through any K8s node's IP on the exposed NodePort.

Error screen

After some digging I found the problem was the kubeconfig credential: because I had copy-pasted it, one value was wrapped onto two lines instead of one. Deleting the newline fixes it.
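The fix boils down to stripping the stray line break from the copied value. Assuming the wrapped token looks like the (fake) one below, tr -d '\n' restores it to a single line:

```shell
# A fake certificate-data token that picked up a line break during copy-paste.
wrapped='LS0tZmFrZS1jZXJ0LWRhdGEt
LS0t'

# Remove every newline so the value is one line again, as kubeconfig expects.
fixed=$(printf '%s' "$wrapped" | tr -d '\n')
echo "$fixed"
```

The same treatment works for any base64-ish field (certificate-authority-data, client-key-data) that gets wrapped by a terminal or editor during copying.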

It went wrong again, so in the end I copied the kubeconfig file itself straight to my computer so the format could not be mangled.

Document the build process

Started by user maomao
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Created Pod: kubernetes kube-ops/jenkins-slave-dlxsx-7pxm1
Agent jenkins-slave-dlxsx-7pxm1 is provisioned from template jenkins-slave-dlxsx
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "http://jenkins.kube-ops.svc.cluster.local:8080/job/tensquare__back/20/"
    runUrl: "job/tensquare__back/20/"
  labels:
    jenkins: "slave"
    jenkins/label-digest: "5059d2cd0054f9fe75d61f97723d98ab1a42d71a"
    jenkins/label: "jenkins-slave"
  name: "jenkins-slave-dlxsx-7pxm1"
spec:
  containers:
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_AGENT_NAME"
      value: "jenkins-slave-dlxsx-7pxm1"
    - name: "JENKINS_NAME"
      value: "jenkins-slave-dlxsx-7pxm1"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins.kube-ops.svc.cluster.local:8080/"
    image: "192.168.188.119:85/library/jenkins-slave-maven:latest"
    imagePullPolicy: "IfNotPresent"
    name: "jnlp"
    resources:
      limits: {}
      requests: {}
    tty: false
    volumeMounts:
    - mountPath: "/usr/local/apache-maven/repo"
      name: "volume-1"
      readOnly: false
    - mountPath: "/var/run/docker.sock"
      name: "volume-0"
      readOnly: false
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - command:
    - "cat"
    image: "docker:stable"
    imagePullPolicy: "IfNotPresent"
    name: "docker"
    resources:
      limits: {}
      requests: {}
    tty: true
    volumeMounts:
    - mountPath: "/usr/local/apache-maven/repo"
      name: "volume-1"
      readOnly: false
    - mountPath: "/var/run/docker.sock"
      name: "volume-0"
      readOnly: false
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  nodeSelector:
    kubernetes.io/os: "linux"
  restartPolicy: "Never"
  volumes:
  - hostPath:
      path: "/var/run/docker.sock"
    name: "volume-0"
  - name: "volume-1"
    nfs:
      path: "/opt/nfs/maven"
      readOnly: false
      server: "192.168.188.116"
  - emptyDir:
      medium: ""
    name: "workspace-volume"

Running on jenkins-slave-dlxsx-7pxm1 in/ home/Jenkins/agent/workspace/tensquare__back [Pipeline] {/ Pipeline stage [Pipeline] {(pull code) [Pipeline] checkout The recommended git tool is: NONE using credential bb5a1b2e-8dfa-4557-a79b-66d0f1f05f5c Cloning the remote Git repository Cloning repository http://192.168.188.120:82/maomao_group/tensquare_back.git  git init/home/Jenkins/agent/workspace/tensquare__back# timeout=10
Fetching upstream changes from http://192.168.188.120:82/maomao_group/tensquare_back.git
  git --version # timeout=10
  git --version # 'git version 2.20.1'
using GIT_ASKPASS to set credentials gitlab-http-auth
  git fetch --tags --force --progress -- http://192.168.188.120:82/maomao_group/tensquare_back.git +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
Checking out Revision a35a2f630f44b425c52aa483aef1b7dc64539940 (origin/master)
  git config remote.origin.url http://192.168.188.120:82/maomao_group/tensquare_back.git # timeout=10
  git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
  git rev-parse origin/master^{commit} # timeout=10
  git config core.sparsecheckout # timeout=10
  git checkout -f a35a2f630f44b425c52aa483aef1b7dc64539940 # timeout=10
Commit message: "K8S"
  git rev-list --no-walk a35a2f630f44b425c52aa483aef1b7dc64539940 # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (compile common module code)
[Pipeline] sh
+ mvn -f tensquare_common clean install
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------- com.tensquare:tensquare_common -------------------
[INFO] Building tensquare_common 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.0.0:clean (default-clean) @ tensquare_common ---
[INFO] Deleting /home/jenkins/agent/workspace/tensquare__back/tensquare_common/target
[INFO]
[INFO] --- maven-resources-plugin:3.0.1:resources (default-resources) @ tensquare_common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/agent/workspace/tensquare__back/tensquare_common/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.7.0:compile (default-compile) @ tensquare_common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 5 source files to /home/jenkins/agent/workspace/tensquare__back/tensquare_common/target/classes
[INFO]
[INFO] --- maven-resources-plugin:3.0.1:testResources (default-testResources) @ tensquare_common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/agent/workspace/tensquare__back/tensquare_common/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.7.0:testCompile (default-testCompile) @ tensquare_common ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ tensquare_common ---
[INFO] No tests to run.
[INFO]
[INFO] --- maven-jar-plugin:3.0.2:jar (default-jar) @ tensquare_common ---
[INFO] Building jar: /home/jenkins/agent/workspace/tensquare__back/tensquare_common/target/tensquare_common-1.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ tensquare_common ---
[INFO] Installing /home/jenkins/agent/workspace/tensquare__back/tensquare_common/target/tensquare_common-1.0-SNAPSHOT.jar to /usr/local/apache-maven/repo/com/tensquare/tensquare_common/1.0-SNAPSHOT/tensquare_common-1.0-SNAPSHOT.jar
[INFO] Installing /home/jenkins/agent/workspace/tensquare__back/tensquare_common/pom.xml to /usr/local/apache-maven/repo/com/tensquare/tensquare_common/1.0-SNAPSHOT/tensquare_common-1.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.069 s
[INFO] Finished at: 2021-05-13T16:34:40Z
[INFO] ------------------------------------------------------------------------
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (deploy project)
[Pipeline] sh
+ mvn -f tensquare_eureka_server clean package dockerfile:build
[INFO] Scanning for projects...
[INFO]
[INFO] --------------- com.tensquare:tensquare_eureka_server ----------------
[INFO] Building tensquare_eureka_server 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.0.0:clean (default-clean) @ tensquare_eureka_server ---
[INFO] Deleting /home/jenkins/agent/workspace/tensquare__back/tensquare_eureka_server/target
[INFO]
[INFO] --- maven-resources-plugin:3.0.1:resources (default-resources) @ tensquare_eureka_server ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 0 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.7.0:compile (default-compile) @ tensquare_eureka_server ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /home/jenkins/agent/workspace/tensquare__back/tensquare_eureka_server/target/classes
[INFO]
[INFO] --- maven-resources-plugin:3.0.1:testResources (default-testResources) @ tensquare_eureka_server ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/agent/workspace/tensquare__back/tensquare_eureka_server/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.7.0:testCompile (default-testCompile) @ tensquare_eureka_server ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ tensquare_eureka_server ---
[INFO] No tests to run.
[INFO]
[INFO] --- maven-jar-plugin:3.0.2:jar (default-jar) @ tensquare_eureka_server ---
[INFO] Building jar: /home/jenkins/agent/workspace/tensquare__back/tensquare_eureka_server/target/tensquare_eureka_server-1.0-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:2.0.1.RELEASE:repackage (default) @ tensquare_eureka_server ---
[INFO]
[INFO] --- dockerfile-maven-plugin:1.3.6:build (default-cli) @ tensquare_eureka_server ---
[INFO] Building Docker context /home/jenkins/agent/workspace/tensquare__back/tensquare_eureka_server
[INFO]
[INFO] Image will be built as tensquare_eureka_server:latest
[INFO]
[INFO] Step 1/5 : FROM openjdk:8-jdk-alpine
[INFO]
[INFO] Pulling from library/openjdk
[INFO] Digest: sha256:94792824df2df33402f201713f932b58cb9de94a0cd524164a0f2283343547b3
[INFO] Status: Image is up to date for openjdk:8-jdk-alpine
[INFO]  ---> a3562aa0b991
[INFO] Step 2/5 : ARG JAR_FILE
[INFO]
[INFO]  ---> Using cache
[INFO]  ---> a2a3b3df4f15
[INFO] Step 3/5 : COPY ${JAR_FILE} app.jar
[INFO]
[INFO]  ---> 1b476689026f
[INFO] Step 4/5 : EXPOSE 10086
[INFO]
[INFO]  ---> Running in 68e029b5bdac
[INFO] Removing intermediate container 68e029b5bdac
[INFO]  ---> f76e827eb058
[INFO] Step 5/5 : ENTRYPOINT ["java","-jar","/app.jar"]
[INFO]
[INFO]  ---> Running in 92ad4dcb6596
[INFO] Removing intermediate container 92ad4dcb6596
[INFO]  ---> ce5f4598f452
[INFO] Successfully built ce5f4598f452
[INFO] Successfully tagged tensquare_eureka_server:latest
[INFO]
[INFO] Detected build of image with id ce5f4598f452
[INFO] Building jar: /home/jenkins/agent/workspace/tensquare__back/tensquare_eureka_server/target/tensquare_eureka_server-1.0-SNAPSHOT-docker-info.jar
[INFO] Successfully built tensquare_eureka_server:latest
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 28.580 s
[INFO] Finished at: 2021-05-13T16:35:10Z
[INFO] ------------------------------------------------------------------------
[Pipeline] container
[Pipeline] {
[Pipeline] sh
+ docker tag tensquare_eureka_server:latest 192.168.188.119:85/tensquare/tensquare_eureka_server:latest
[Pipeline] withCredentials
Masking supported pattern matches of $username or $password
[Pipeline] {
[Pipeline] sh
Warning: A secret was passed to "sh" using Groovy String interpolation, which is insecure.
		 Affected argument(s) used the following variable(s): [password, username]
		 See https://jenkins.io/redirect/groovy-string-interpolation for details.
+ docker login -u **** -p **** 192.168.188.119:85
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Pipeline] sh
+ docker push 192.168.188.119:85/tensquare/tensquare_eureka_server:latest
The push refers to repository [192.168.188.119:85/tensquare/tensquare_eureka_server]
6b8a8e1926e4: Preparing
ceaf9e1ebef5: Preparing
9b9b7f3d56a0: Preparing
f1b5933fe4b5: Preparing
f1b5933fe4b5: Layer already exists
9b9b7f3d56a0: Layer already exists
ceaf9e1ebef5: Layer already exists
6b8a8e1926e4: Pushed
latest: digest: sha256:584ab9d6bd684636dfef55c2a2ac3d36d445b287f5a0a89a6694823655f909b1 size: 1159
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] sh
+ docker rmi -f tensquare_eureka_server:latest
Untagged: tensquare_eureka_server:latest
[Pipeline] sh
+ docker rmi -f 192.168.188.119:85/tensquare/tensquare_eureka_server:latest
Untagged: 192.168.188.119:85/tensquare/tensquare_eureka_server:latest
Untagged: 192.168.188.119:85/tensquare/tensquare_eureka_server@sha256:584ab9d6bd684636dfef55c2a2ac3d36d445b287f5a0a89a6694823655f909b1
Deleted: sha256:ce5f4598f452e300f537deacab64ee958f93a7c39ced0ff71f360f9c4d5d7572
Deleted: sha256:f76e827eb0580d99d60f8709608850589404666611fae1068e4e2231d100e6a3
Deleted: sha256:1b476689026f842092261a4d524043b6c4aa754e6b7b68c25501269fa1e17dce
Deleted: sha256:731b8259434bd270843c034ed36011581a2371daf28f682a6e3d0fa8b713545a
[Pipeline] }
[Pipeline] // container
[Pipeline] sh
+ sed -i s#$IMAGE_NAME#192.168.188.119:85/tensquare/tensquare_eureka_server:latest# tensquare_eureka_server/deploy.yml
+ sed -i s#$SECRET_NAME#registry-auth-secret# tensquare_eureka_server/deploy.yml
[Pipeline] kubernetesDeploy
Starting Kubernetes deployment
Loading configuration: /home/jenkins/agent/workspace/tensquare__back/tensquare_eureka_server/deploy.yml
Finished Kubernetes deployment
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
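The `sed` calls near the end of the log substitute the `$IMAGE_NAME` and `$SECRET_NAME` placeholders in `deploy.yml` before `kubernetesDeploy` applies the manifest. A minimal sketch of the same substitution on a scratch file (the scratch file path is illustrative; in the real pipeline the values come from Jenkins environment variables):

```shell
# Scratch copy of a manifest containing the same placeholders the pipeline uses
# (the quoted heredoc keeps $IMAGE_NAME / $SECRET_NAME literal)
cat > /tmp/deploy-demo.yml <<'EOF'
image: $IMAGE_NAME
imagePullSecrets:
  - name: $SECRET_NAME
EOF

# Same substitution style as the pipeline: sed 's#<token>#<value>#'
# ([$] matches a literal dollar sign in the file)
sed -i 's#[$]IMAGE_NAME#192.168.188.119:85/tensquare/tensquare_eureka_server:latest#' /tmp/deploy-demo.yml
sed -i 's#[$]SECRET_NAME#registry-auth-secret#' /tmp/deploy-demo.yml

cat /tmp/deploy-demo.yml
```

Using `#` as the `sed` delimiter avoids having to escape the `/` characters inside the image reference.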

The gateway cluster is deployed on K8S

Change Zuul's Eureka registration configuration to the following addresses

defaultZone: http://eureka-0.eureka:10086/eureka/,http://eureka-1.eureka:10086/eureka/
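The two hostnames in `defaultZone` are the stable per-replica DNS names a StatefulSet gets through its governing Service, in the form `<statefulset-name>-<ordinal>.<service-name>`. As a sketch, the same list can be generated from the replica count (service name, port, and replica count taken from this article's Eureka setup):

```shell
# Build the Eureka defaultZone list from the StatefulSet's stable DNS names:
# each replica is reachable as <statefulset-name>-<ordinal>.<service-name>
name=eureka
port=10086
replicas=2

zone=""
for i in $(seq 0 $((replicas - 1))); do
  zone="${zone:+$zone,}http://${name}-${i}.${name}:${port}/eureka/"
done

echo "$zone"
# -> http://eureka-0.eureka:10086/eureka/,http://eureka-1.eureka:10086/eureka/
```

Because these names never change across pod restarts, both peers can be written into the configuration ahead of time.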

Write a deploy.yml file

---
apiVersion: v1
kind: Service
metadata:
  name: zuul
  labels:
    app: zuul
spec:
  type: NodePort
  ports:
    - port: 10020
      name: zuul
      targetPort: 10020
  selector:
    app: zuul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zuul
spec:
  serviceName: "zuul"
  replicas: 2
  selector:
    matchLabels:
      app: zuul
  template:
    metadata:
      labels:
        app: zuul
    spec:
      imagePullSecrets:
        - name: $SECRET_NAME
      containers:
        - name: zuul
          image: $IMAGE_NAME
          ports:
            - containerPort: 10020
  podManagementPolicy: "Parallel"

Running the build task at this point throws an error: ==the parent project must first be manually uploaded to the NFS-backed Maven shared repository directory==

[[email protected] tensquare]# pwd
/opt/nfs/maven/com/tensquare

[[email protected] tensquare]# mv /root/tensquare_parent/ ./
[[email protected] tensquare]# ls
tensquare_common  tensquare_parent
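Maven maps a dependency's `groupId` to a repository directory by splitting it on dots, which is why `tensquare_parent` has to land under `com/tensquare/` in the NFS share. A quick illustration of that coordinate-to-path mapping (the `1.0-SNAPSHOT` version here is illustrative):

```shell
# Maven coordinate -> repository path: dots in the groupId become directories
coords="com.tensquare:tensquare_parent:1.0-SNAPSHOT"

group=${coords%%:*}      # com.tensquare
rest=${coords#*:}
artifact=${rest%%:*}     # tensquare_parent
version=${rest#*:}       # 1.0-SNAPSHOT

path="$(printf '%s' "$group" | tr '.' '/')/$artifact/$version"
echo "$path"   # com/tensquare/tensquare_parent/1.0-SNAPSHOT
```

The same mapping explains why `tensquare_common` already sits next to it: the earlier `mvn install` wrote into `/usr/local/apache-maven/repo`, which the slave pod mounts from this NFS directory.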

Build again after uploading the parent project dependencies

The cluster now runs two gateway instances
