1. The concept

1. DevOps

DevOps (a combination of Development and Operations) is a collection of processes, methods, and systems used to facilitate communication, collaboration, and integration among development (application/software engineering), technical operations, and quality assurance (QA) departments. It is a culture, movement, or practice that values communication and collaboration between software developers (Dev) and IT operations technicians (Ops). By automating the software delivery and infrastructure change processes, teams can build, test, and release software faster, more frequently, and more reliably. The term arose as the software industry increasingly recognized that development and operations must work closely together in order to deliver software products and services on time.

The above is quoted from Baidu Encyclopedia. In a nutshell, DevOps is the integration of development and operations to achieve automated operations and maintenance.


2. CI/CD

Continuous Integration (CI) / Continuous Delivery (CD)

If DevOps is a philosophy, then CI/CD is its concrete implementation. Below is some of the software used to implement CI/CD:

2.1 Jenkins (Automation)

Through Jenkins we create a pipeline (Pipeline) that pulls code from a Git repository and executes the corresponding shell scripts, automatically completing packaging, testing, building, publishing, deployment, and other operations. Other products do similar things, such as Travis CI and GitHub Actions. One common mistake is to hard-code the pipeline's shell scripts into Jenkins's web page (the Freestyle Project approach); any modification made this way cannot enjoy Git version tracking.

Here is the correct way: create a Pipeline project and configure it so that the whole project's shell scripts are written in a Jenkinsfile kept inside the project itself. A simple Jenkinsfile structure is listed below:

1. Front-end Jenkinsfile

```groovy
pipeline {
    agent any
    stages {
        stage('echo node version') {
            steps { sh 'node -v' }
        }
        stage('NpmInstall') {
            steps { sh 'npm i' }
        }
        stage('NpmBuild') {
            steps { sh 'npm run build' }
        }
        stage('docker build') {
            // note the trailing dot: it is the docker build context
            steps { sh 'docker build -t www.harbor.com/sli-frontend:${BUILD_NUMBER} .' }
        }
        stage('docker push') {
            // docker push takes no -t flag, just the image reference
            steps { sh 'docker push www.harbor.com/sli-frontend:${BUILD_NUMBER}' }
        }
        stage('k8s deploy') {
            steps { sh 'kubectl -n namespace set image deployment/xxx xxx=www.harbor.com/sli-frontend:${BUILD_NUMBER}' }
        }
    }
    post {
        always { echo 'Finish!!' }
    }
}
```

2. Back-end Jenkinsfile

```groovy
pipeline {
    agent any
    stages {
        stage('echo java version') {
            steps { sh 'java -version' }
        }
        stage('maven build') {
            steps { sh 'mvn clean package -Dmaven.test.skip=true' }
        }
        stage('docker build') {
            // note the trailing dot: it is the docker build context
            steps { sh 'docker build -t www.harbor.com/mx-backendc:${BUILD_NUMBER} .' }
        }
        stage('docker push') {
            // docker push takes no -t flag, just the image reference
            steps { sh 'docker push www.harbor.com/mx-backendc:${BUILD_NUMBER}' }
        }
        stage('k8s deploy') {
            steps { sh 'kubectl -n namespace set image deployment/xxx xxx=www.harbor.com/mx-backendc:${BUILD_NUMBER}' }
        }
    }
    post {
        always { echo 'Finish!!' }
    }
}
```
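Both pipelines version their images with Jenkins's built-in `BUILD_NUMBER` environment variable, so every build produces a uniquely tagged image. A tiny sketch of how the tag is composed (the registry path is the article's example Harbor address; `BUILD_NUMBER` is fixed here only for illustration):

```shell
# BUILD_NUMBER is injected by Jenkins on a real run
BUILD_NUMBER=42
IMAGE="www.harbor.com/sli-frontend:${BUILD_NUMBER}"
echo "$IMAGE"   # -> www.harbor.com/sli-frontend:42

# the pipeline stages then reuse the very same tag:
#   docker build -t "$IMAGE" .
#   docker push "$IMAGE"
#   kubectl -n namespace set image deployment/xxx xxx="$IMAGE"
```

Because the build, push, and deploy stages all reference the same tag, the cluster always runs exactly the image that this build produced.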

2.2 Docker (containerization)

The container concept is similar to that of a virtual machine. A virtual machine is closer to a complete operating system, but it consumes more resources; Docker is more lightweight. Through a container runtime such as Docker, we can simulate an independent environment for each container, each of which appears to have its own operating system (in reality containers share the host kernel and isolate only the filesystem and process space). We then put the developed software into the container, so that running a container is equivalent to running an operating system plus a piece of software. This mode of bundling the operating environment with the software has the following benefits:

  1. It ensures that the application runs in a consistent environment, so there are no more "this code works on my machine" issues. — Consistent runtime environment
  2. Startup time can be on the order of seconds or even milliseconds, greatly saving development, testing, and deployment time. — Faster startup
  3. It avoids the shared-server problem, where resources are vulnerable to other users. — Isolation
  4. It copes well with sudden concentrated bursts of server load. — Elastic scaling and rapid expansion
  5. Applications running on one platform can easily be migrated to another without worrying about environment differences. — Easy migration
  6. With Docker, continuous integration, continuous delivery, and deployment can be achieved by customizing application images. — Continuous delivery and deployment

Non-containerized CI/CD

Containerized CI/CD

The following covers some Docker operations in the O&M process: how a Dockerfile builds the packaged project (the front end produces a dist bundle; the back end is generally a war or jar package) into a Docker image. Here is a simple example of a Dockerfile structure:

1. Front-end Dockerfile

```dockerfile
# Dockerfile
FROM nginx:1.14.0-alpine
MAINTAINER liangchaogui <[email protected]>

ADD nginx-default.conf /etc/nginx/conf.d/default.conf

ADD dist /usr/share/nginx/html/sli-frontend
```

An image is like a stack of nesting dolls: it is built up in layers, and each image build is generally based on the image of the previous layer.

  1. FROM specifies the base image of the current build (everything except FROM is optional).

As you can see, the base image used is the Nginx image. Nginx is now the mainstream front-facing server in deployment architectures (the first to take incoming traffic), responsible for domain-name forwarding, load balancing, reverse proxying, and so on. Beyond that, Nginx is a highly configurable static web server — caching, HTTPS, HTTP/2, path forwarding, etc. — which makes it a perfect fit for deployment as a Docker image. Below is an Nginx configuration file:

```nginx
# nginx-default.conf
server {
    listen 80;
    server_name _;

    location ^~ /sli-frontend {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /sli-frontend/index.html;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
```

  2. MAINTAINER specifies the author of the image.

  3. The first ADD copies the project's Nginx configuration file from the Jenkins server into the Nginx container, and the second ADD copies the dist bundle produced by the Jenkins build into the container's HTML directory.
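As a side note, newer Dockerfiles usually prefer COPY over ADD for plain local file copies (COPY has none of ADD's URL-download or tar-extraction surprises) and LABEL over the deprecated MAINTAINER instruction. A minimal equivalent sketch of the file above:

```dockerfile
FROM nginx:1.14.0-alpine
LABEL maintainer="liangchaogui <[email protected]>"

# COPY behaves like ADD for local files, minus ADD's extra magic
COPY nginx-default.conf /etc/nginx/conf.d/default.conf
COPY dist /usr/share/nginx/html/sli-frontend
```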

2. Back-end Dockerfile

There are currently two packaging modes for the back end: one where a Spring project's war package is deployed in a separate Tomcat, and the other, more mainstream, where a SpringBoot project's jar package is run directly via `java -jar` (SpringBoot embeds Tomcat).

  1. War package approach (packaged at container build time)
```dockerfile
FROM maven:3 AS bd
MAINTAINER liangchaogui <[email protected]>
WORKDIR /code
ADD ./ /code
RUN mvn package -Dmaven.test.skip=true

FROM tomcat:7-jre7
MAINTAINER liangchaogui <[email protected]>
COPY --from=bd /code/target/*.war /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]
```

This packages the project inside the Maven image first, then copies the resulting war package into the webapps directory of the Tomcat image.

  2. War package approach (packaged and container built on the Jenkins server)
```dockerfile
FROM tomcat:7-jre7
MAINTAINER liangchaogui <[email protected]>
ADD /code/target/*.war /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]
```

This omits the Maven packaging step inside the image.

  3. Jar package approach
```dockerfile
FROM java:8-jre
MAINTAINER liangchaogui <[email protected]>

ADD ./target/sli-backend.jar /app/
CMD ["java", "-jar", "/app/sli-backend.jar"]

EXPOSE 7000
```
  1. CMD specifies the shell command executed when Docker runs the image as a container
  2. EXPOSE declares the port the current service listens on

PS: In practice, if the Tomcat image is used, some configuration files usually have to be replaced. You can use the ADD instruction to add the project's configuration files into the Tomcat container.
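A hedged sketch of that idea (the `server.xml` here is a hypothetical project-specific config, not a file from the article):

```dockerfile
FROM tomcat:7-jre7
MAINTAINER liangchaogui <[email protected]>

# overwrite the image's default Tomcat config with the project's own copy
ADD server.xml /usr/local/tomcat/conf/server.xml
ADD ./target/*.war /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]
```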

After writing the image build file, we can run

```shell
docker build -t <image>:<tag> .
docker push <image>:<tag>
```

to build our application into an image and push it to an image registry, such as the public Docker Hub or an internally hosted Harbor.


2.3 Docker-Compose (Running Multiple Containers)

Once we have a Docker image, we naturally have to consider how to run the image as a container. Of course, we can do that directly on the Docker host:

```shell
docker run -d -p 80:80 nginx
```

Docker-Compose solves the problem of manually typing shell commands when multiple containers need to be started together in a particular order.

Docker-Compose describes the containers' run commands in a YAML configuration file; in the end, a single

```shell
docker-compose up -d
```

command starts all the containers at once.

Here is an example docker-compose.yml configuration file:

```yaml
version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - /html:/usr/share/nginx/html
      - /conf/nginx.conf:/etc/nginx/nginx.conf
  php:
    image: php:fpm   # the image name was garbled in the source; php:fpm is a guess
    volumes:
      - /html:/var/www/html
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=12345
```

This file starts a MySQL service, a PHP service, and an Nginx service.


2.4 Kubernetes (Container Orchestration)

Kubernetes, also called k8s, is a container orchestration and management tool (not only for Docker, though Docker is the runtime we encounter most). It is far more powerful than Docker-Compose: Docker-Compose can only manage containers on the current host, while k8s is a distributed container management solution, and it provides a nice visual interface (the Dashboard) for orchestrating containers. Below is a brief introduction to the structure and basic concepts of k8s:

K8s structure introduction:

K8s is distributed and consists of Master nodes and Node (worker) nodes. A Master node runs four important processes:

  1. api-server: the Master's external communication interface; the Master sends commands to the Nodes through it. It can be accessed via:

  • kubectl, the client command line
  • an HTTP Restful API
  • the Web UI (the Dashboard mentioned above)

  2. etcd: the metadata database; this is where the whole k8s cluster's state is stored
  3. controller-manager: the "CPU" of the entire cluster
  4. scheduler: responsible for deciding which Node each pod is scheduled to start on

A Node (worker) runs three important processes:

  1. kubelet: the Node's agent, responsible for managing the lifecycle of the Pods on that node
  2. kube-proxy: network-related work, creating cluster-level virtual switching and implementing service name resolution (DNS)
  3. Container Runtime: the container runtime environment process; the most common is Docker

K8s basic concepts:

  1. pod: the minimum scheduling unit in k8s; usually one pod contains one container

Suppose we have a high-concurrency scenario: we create 10 pods and use Nginx load balancing to spread requests across them, and then two of the pod services go down due to force majeure. At this point 8 machines are no longer enough to handle the current concurrency. Before containers, we would typically set up real-time monitoring scripts on the servers that send an SMS or email notification when a service dies, and an operations engineer would log in and restart the 2 pods by hand — but none of that is very smart. What we need is a supervising process that keeps the service at 10 pods at all times. That thing is the Deployment.
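The self-healing behaviour described above can be sketched as a minimal Deployment manifest (the name and image here are illustrative, not from the project):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 10            # k8s keeps exactly 10 pods alive, recreating any that die
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
```

If two pods crash, the Deployment's controller notices the replica count has dropped below 10 and starts replacements automatically — no pager, no manual restart.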

  2. deployment: maintains the desired number of pods

We said above that Nginx implements the load balancing, but the same role already exists inside k8s: the Service.

  3. service: abstracts multiple pods into one service, exposing a unified port externally and load balancing across the pods internally

Those are the three most common k8s concepts. Of course there are far more; other common ones include:

  1. ingress: implements domain-name forwarding and port mapping, so that an external user can reach a specified Service via an IP or domain name; ingress itself is commonly implemented on top of nginx
  2. replicaset: a replica set, representing the number of pod instances a deployment has
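An ingress rule of the kind described above can be sketched as a minimal manifest (using the current networking.k8s.io/v1 API; the host and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: www.example.com        # requests for this domain name...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc        # ...are forwarded to this Service
            port:
              number: 80
```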

To summarize the process:

  1. The client accesses the Nginx server
  2. The Nginx server forwards the request by domain name to our k8s Master node
  3. The Master node's Ingress forwards the request by domain name to our specific Service
  4. The Service routes the request to the corresponding Pod through its own load balancing

Here is an example k8s.yml configuration file:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: sli-ui
  namespace: sli
spec:
  type: NodePort
  selector:
    app: ui
  ports:
  - name: svc-port
    port: 80
    targetPort: 80
    nodePort: 30184

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ui
  namespace: sli
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ui
    spec:
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: sli-pvc
        persistentVolumeClaim:
          claimName: sli-pvc
      imagePullSecrets:
      - name: harborsecret
      containers:
      - name: ui
        image: www.harbor.com/sli/ui:1.0.4-324
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: sli-pvc
          mountPath: /usr/share/nginx/html/video
          subPath: nginx/video
        - name: sli-pvc
          mountPath: /sli
          subPath: sli
```

The core concepts in this configuration file are the Deployment and the Service: the Deployment's replicas field specifies the number of pods, while the Service exposes those pods externally (here through NodePort 30184).
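The glue between the two objects in the file above is the label selector. Stripped to its essentials (a sketch omitting the other fields):

```yaml
kind: Service
spec:
  selector:
    app: ui        # the Service picks up every pod whose labels include app: ui
---
kind: Deployment
spec:
  template:
    metadata:
      labels:
        app: ui    # the pods created by this Deployment carry that label
```

As long as the Service's selector matches the pod template's labels, the Service automatically load-balances across however many replicas the Deployment is maintaining.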

2.5 Supplement

There is a great deal of DevOps-related software. Besides Docker and k8s, there are tools not covered here such as GitLab, JIRA, Sonar, and Confluence. Rebuilding a full in-house DevOps stack is obviously a long way off, and in a company these pieces are often replaced by equivalent cloud services, which work just as well. So the next section introduces how to use DaoCloud, a domestic CI/CD cloud platform.


2. Practice

1. DaoCloud

DaoCloud is an enterprise-grade container cloud platform, so we don't have to install Jenkins and k8s on our own machines (Docker will still have to be installed, because DaoCloud will eventually push the built image to our own server to run as a container), reducing operational pressure.

1.1 Creating a Springboot Project

Write an interface and a test case, and commit them to the Git repository


1.2 Create DaoCloud project

  1. Select Project -> Create a new project, and point it to the Git project repository we just created

  2. Go to the process definition, which is equivalent to visually configuring each Jenkins stage

  3. Modify the test task's script to mvn test; since the test script is executed by a cloud container, change the base image to maven

  4. Modify the build task to use the local Dockerfile, and save it


1.3 Added Dockerfile to the project

```dockerfile
FROM maven:3 AS bd
MAINTAINER liangchaogui <[email protected]>
WORKDIR /code
ADD ./ /code
RUN mvn package -Dmaven.test.skip=true

FROM java:8
MAINTAINER liangchaogui <[email protected]>
COPY --from=bd /code/target/*.jar /app.jar
CMD ["java", "-jar", "/app.jar"]
```

Since two images are involved, the `--from` flag is added to the COPY instruction to copy files from one build stage into the other.


1.4 Building an Image (CI)

Click on the manual trigger in the upper right corner

You can see that DaoCloud starts by cloning the repository code to a cloud machine, then pulls the maven:3 image we configured, downloads the Maven dependencies, and finally runs mvn test.

Once the tests pass, we move into the next phase, the build (a failure aborts the pipeline), and an image is built from each instruction in our Dockerfile.

complete

Click Images when the build is complete and you can see that the image has been pushed to DaoCloud. At this point we could simply run docker pull and docker run on our own server to start the image as a container, but what we want is for DaoCloud to push the image to our server and run it as a container after the pipeline finishes, instead of our server pulling the image manually. How does DaoCloud know which machine to run the image on? We need to connect DaoCloud to our server.


1.5 Connecting DaoCloud to Your Own Server

Choose Cluster Management > Import Host

Run the generated curl HTTP request on your own server

Using a virtual machine as an example, execute the request

The request is successful

DaoCloud successfully connected to its own host

Once this is done, our own Docker server is connected to DaoCloud as a host, and the DaoCloud panel provides a visual interface for viewing the current host's Docker environment

Everything so far builds the image and wires DaoCloud to our own servers, completing the CI (continuous integration) part; next we complete the CD (continuous deployment) part.


1.6 Deploying An Application (CD)

Choose Application > Create Application > Deploy the latest version

Select the name of the application, the image of the application, the version of the image, and the runtime environment to deploy (in this case, to the virtual machine we just connected)

Next, you enter a configuration interface, which is equivalent to visually configuring the docker run command. Here is a brief introduction to docker run (note that the options must come before the image name):

```shell
docker run -d -p 80:80 -v /home/data/:/home/data/ nginx
```
  • -d runs the container in the background (does not block the shell)
  • -p maps ports; to the left of the colon is the server (host) port, to the right the container port
  • -v specifies a file mapping (volume); for example, when running a MySQL container we naturally want the data persisted to our server, not just stored inside the container

We set the container port to 8080 (SpringBoot's default port), let the host port be assigned randomly, and click Deploy Now

If the deployment succeeds, clicking the randomly generated port opens the deployed page. We deployed the application manually above; the next step is to automate this deployment as a pipeline stage as well


1.7 Automatic Deployment

Select release phase

Publish to your own host

Select the application we just created

complete

Now, as soon as code is pushed to the repository, the pipeline runs from start to finish automatically: testing, then building, then deployment. An email notification is also sent to the mailbox when deployment completes


3. Summary

1. Existing problems

1.1 Inconsistent Deployment Modes

Some projects are uploaded via a NodeJS-based direct connection to the server, some have their Jenkins configuration written into the web page, and some are deployed entirely by hand

1.2 High Deployment Cost (Communication cost)

A developer deploying an application has a lot to explain to operations

2. Future direction

Web projects will be deployed to the DaoCloud platform via Jenkins + Jenkinsfile + Dockerfile, or alternatively via Docker + k8s installed on our own servers, to avoid manual deployment by operations staff. The Cocos project cannot be built on Jenkins servers because there is no Linux version of Creator (unless you buy a Windows server, which is obviously too expensive), and with many Creator versions in play it can only be built on developers' own machines.