This article is shared from the Huawei Cloud Community article "Deconstructing the Application of Container Technology in the Huawei Cloud HE2E Project", written by Agile Xiaozhi.

In the Huawei Cloud DevCloud HE2E DevOps practice, the project uses Docker for building and deployment.

Applying container technology is actually quite simple. The process boils down to: build the image – push the image – pull the image – start the container.

Today we will deconstruct the HE2E project from the perspective of container technology.

HE2E Technical Architecture Diagram:

Create a project

If you choose the DevOps sample project when creating a project in Huawei Cloud DevCloud, you get a sample project preconfigured with a code repository and with build and deployment tasks: the HE2E project.

Code repository

The code repository Phoenix-Sample is preset in the HE2E project.

In the root directory you can see the images, kompose, result, vote, and worker folders, as well as the LICENSE, README.md, docker-compose-standalone.yml, and docker-compose.yml files. The images folder only holds a few pictures, the LICENSE and README are independent of the code itself, and the docker-compose.yml file is used for local development and testing, so we can set these aside for now.

The kompose folder with CCE deployment configuration

Let's take a look at the kompose folder. It contains multiple YAML files, one per microservice, holding each application's configuration. These files are read when we deploy to CCE (the path is configured in the deployment task). In the spirit of going from shallow to deep, this article first explains the configuration needed for ECS deployment, so don't be in a hurry.

Function modules and the Dockerfiles that build their images

The result, vote, and worker folders correspond to the three function modules of HE2E:

vote: users vote for a product on the client side by clicking the LIKE button; the voting data is written to the Redis cache.
worker: processes the voting data; data is moved from the Redis cache into the Postgres database.
result: displays the vote count for each product on the management side; data is read from the Postgres database for presentation.

As you can see, each of the three folders contains a Dockerfile. These are the Dockerfiles used to build the images.

Let’s take the Dockerfile under result as an example:

FROM: a custom image is built on top of the base image specified by FROM; here node:5.11.0-slim is the required base image, and all subsequent instructions operate on top of node:5.11.0-slim.

WORKDIR /app: specifies (and creates) the working directory /app.

RUN <command>: executes <command> during the image build.

ADD <file> <directory>: copies <file> into <directory> in the image.

Lines 5-9: run the npm installation and place the related files in the appropriate directories.

ENV PORT 80: defines the environment variable PORT=80.

EXPOSE 80: declares that the container listens on port 80.

CMD <command>: runs <command> when the container is started with docker run.
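Putting these instructions together, here is a minimal sketch of what such a Node-based Dockerfile looks like (the file names and npm steps are illustrative; see result/Dockerfile in the repository for the exact content):

# Base image: all subsequent instructions build on node:5.11.0-slim
FROM node:5.11.0-slim
# Create and switch to the working directory
WORKDIR /app
# Copy the dependency manifest into the image and install dependencies
ADD package.json /app/package.json
RUN npm install
# Copy the application code into the image
ADD . /app
# Environment variable read by the application
ENV PORT 80
# The container listens on port 80
EXPOSE 80
# Command executed when the container starts
CMD ["node", "server.js"]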

In the phoenix-sample-ci build task, the "Make the result image and push it to the SWR repository" step uses the "working directory" and "Dockerfile path" options to determine which Dockerfile is read when the image is built: <working directory>/<Dockerfile path>, i.e. ./result/Dockerfile.

The images of the other two function modules, vote and worker, are built in the same way. It is worth mentioning that the worker folder contains three files: Dockerfile, dockerfile.j, and dockerfile.j2. In the build task, however, we only need to select one file for image building, and that file is dockerfile.j2.

dockerfile.j2 copies the contents of target into code/target, yet there is no target folder in the code. This is because the project under worker is a Java project, and the target folder is generated during the Maven build. Therefore, in the build task phoenix-sample-ci, a Maven build must run before the worker image is built.

With the Dockerfiles above we can already build the container images for the three function modules, and then use the docker login, docker pull, and docker run commands on the deployment host to log in, pull, and start them. However, that approach requires pulling and starting every image individually; it cannot configure all the images at once. We therefore introduce docker-compose, which quickly orchestrates a group of Docker containers: one click (one configuration file) brings up all the function modules this project needs.
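For reference, the manual per-image approach that docker-compose replaces looks roughly like this (the registry path and tag are illustrative examples):

docker login -u <user> -p <password> swr.cn-north-4.myhuaweicloud.com
docker pull swr.cn-north-4.myhuaweicloud.com/devcloud-bhd/vote:20220303.1
docker run -d -p 5000:80 swr.cn-north-4.myhuaweicloud.com/devcloud-bhd/vote:20220303.1
# ...repeat pull/run for result, worker, redis, and postgres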

Configure the docker-compose-standalone.yml file for docker-compose

When we deploy this project to the server, we start it in docker-compose mode.

In the deployment task phoenix-sample-standalone, the project is finally started by executing the shell command:

docker-compose -f docker-compose-standalone.yml up -d

The docker-compose-standalone.yml in this shell command is the docker-compose-standalone.yml file in the root directory of our code repository.

The docker-compose-standalone.yml file is explained below.

version: specifies which version of the Compose file format this YML complies with.

services: contains the service definitions.

This YML contains five services: redis, db, vote, result, and worker. db is the Postgres database.

image: the address of the image.

docker-server/docker-org/redis:alpine, docker-server/docker-org/worker:image-version: here the image addresses are defined with placeholder parameters that will be substituted later.

In the build task phoenix-sample-ci, the shell command in the "Replace the image version of the docker-compose deployment file" step does exactly that: it replaces docker-server, docker-org, and image-version in the docker-compose deployment file with the three parameters defined in the build task, dockerServer, dockerOrg, and BUILDNUMBER.
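A sketch of what such a replacement command might look like (the actual shell step in the sample task may differ in how it references the parameters):

sed -i "s|docker-server|${dockerServer}|g; s|docker-org|${dockerOrg}|g; s|image-version|${BUILDNUMBER}|g" docker-compose-standalone.yml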

After this substitution, the image addresses in our docker-compose-standalone.yml become the final addresses we need, e.g. swr.cn-north-4.myhuaweicloud.com/devcloud-bhd/redis:alpine and swr.cn-north-4.myhuaweicloud.com/devcloud-bhd/worker:20220303.1.

Among the five services, vote, result, and worker are built by this project, while redis and db are third-party applications, so their image versions differ.

ports: the port mapping; it binds an exposed container port to a host port.

In vote, ports: "5000:80" binds port 80 used inside the container to port 5000 on the host, so that we can access the client interface of this project at <host IP>:5000.

networks: the networks the container joins. The simplest form is used here: the networks are declared by name only.

frontend is the front-end network and backend is the back-end network.

environment: adds environment variables. POSTGRES_HOST_AUTH_METHOD: "trust" prevents authentication failures when connecting to Postgres.

volumes: mounts data volumes or host files into the container. In db, db-data:/var/lib/postgresql/data stores the contents of the Postgres data directory in the db-data volume.

deploy: configuration related to deploying and running the service. placement: constraints: [node.role == manager] restricts the service to run on Swarm manager nodes.

depends_on: sets startup dependencies; vote depends on redis, and result depends on db.
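Taken together, a trimmed-down sketch of what docker-compose-standalone.yml looks like (only redis, vote, and db are shown, and the values are illustrative; see the full file in the code repository):

version: "3"
services:
  redis:
    image: docker-server/docker-org/redis:alpine
    networks:
      - backend
  vote:
    image: docker-server/docker-org/vote:image-version   # placeholders replaced by the build task
    ports:
      - "5000:80"              # host port 5000 -> container port 80
    networks:
      - frontend
      - backend
    depends_on:
      - redis
  db:
    image: docker-server/docker-org/postgres:9.4
    environment:
      POSTGRES_HOST_AUTH_METHOD: "trust"   # avoid authentication failures
    volumes:
      - db-data:/var/lib/postgresql/data   # persist the Postgres data directory
    networks:
      - backend
    deploy:
      placement:
        constraints: [node.role == manager]   # only schedule on Swarm manager nodes
networks:
  frontend:
  backend:
volumes:
  db-data: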

At this point, the entire HE2E project code structure has been deconstructed.

Compile and build

In fact, after deconstructing the code, the whole project is very clear. The code includes three function modules: vote, result, and worker. The project also uses two third-party applications, Redis and Postgres. Therefore, our main goal in the build phase is to build images of these services and upload them to the SWR container image repository.

There are five build tasks preset in this project.

Preset build task: description
phoenix-sample-ci: the basic build task.
phoenix-sample-ci-test: the build task for the test environment.
phoenix-sample-ci-worker: the build task for the worker function.
phoenix-sample-ci-result: the build task for the result function.
phoenix-sample-ci-vote: the build task for the vote function.

We will only analyze the phoenix-sample-ci task.

Building the three function modules

Part of the build task has already been analyzed during the code deconstruction, namely how an image is created from a specified Dockerfile, i.e. the docker build operation. In addition, the "Make the XX image and push it to SWR" steps also contain the information needed to push the image: here we set the target region, organization, image name, and image tag, which corresponds to the docker tag and docker push operations.
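In plain Docker commands, such a step corresponds roughly to the following (the organization and tag values are illustrative):

docker build -t result -f ./result/Dockerfile ./result
docker tag result swr.cn-north-4.myhuaweicloud.com/devcloud-bhd/result:20220303.1
docker push swr.cn-north-4.myhuaweicloud.com/devcloud-bhd/result:20220303.1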

When building and pushing the vote, result, and worker images, the parameter BUILDNUMBER is used as the image version number. BUILDNUMBER is a system-predefined parameter that changes with the build date and time.

Before the worker image can be built, a Maven build must run for the project under the worker directory, so that the target folder required by dockerfile.j2 (when building the image) is generated.

Postgres and Redis builds

After the three function module images are built, the next step is to generate the Postgres and Redis images. The approach chosen here is to write out the Dockerfiles for both applications with shell commands.

echo from postgres:9.4 > dockerfile-postgres
echo from redis:alpine > dockerfile-redis

The dockerfile-postgres and dockerfile-redis files are generated in the current working directory.

dockerfile-postgres: FROM postgres:9.4
dockerfile-redis: FROM redis:alpine

In the next steps, we specify the dockerfile-postgres and dockerfile-redis files in the current directory to build the images and upload them.

Replace the deployment configuration files and package them

After these steps, all the images have been uploaded to the SWR repository. The subsequent "Replace the docker-compose deployment file image version" and "Replace the Kubernetes deployment file image version" steps replace docker-server, docker-org, and image-version in docker-compose-standalone.yml in the code root directory and in the XX-deployment.yaml files under kompose with the build task parameters dockerServer, dockerOrg, and BUILDNUMBER. The purpose of these two steps is to turn the configuration files required for ECS deployment (docker-compose / docker-compose-standalone.yml) and CCE deployment (Kubernetes / kompose) into deployable, ready-to-use versions.

After the two sets of files are modified, tar is used to package them, and the packaged artifacts are uploaded to the software release repository by the next two "Upload XX" steps.
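The packaging is roughly equivalent to the following commands (the archive contents are a sketch; the actual step may include additional files):

tar -zcvf docker-stack.tar.gz docker-compose-standalone.yml
tar -zcvf phoenix-sample-k8s.tar.gz kompose/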

Tips

The help documentation for this project mentions configuring base dependency images. That whole section exists because Docker Hub, the source of the base images used in the build task, limits image pulls, so a workaround is used to obtain the images. In short, the operation is to create a pre-build task that replaces the base image versions, so that image pulls do not fail while the phoenix-sample-ci task is building. Accordingly, the Postgres and Redis image-building steps are disabled when configuring and executing the build task.

Deployment

In the build phase we built the three function module images (vote, result, worker) and the two third-party images and uploaded them to SWR (the container image repository). All we need to do next is pull the images from SWR to our deployment host and start them.

Throughout the practice, both ECS deployment and CCE deployment are provided, and three deployment tasks are preset in the project.

Preset deployment task: description
phoenix-sample-standalone: the deployment task for the Elastic Cloud Server (ECS) process.
phoenix-cd-cce: the deployment task for the Cloud Container Engine (CCE) process.
phoenix-sample-test: the deployment task for the test environment.

We will only analyze the phoenix-sample-standalone task.

Transfer software packages to the deployment host

During the build phase, in addition to building images and uploading them to SWR, we also modified, compressed, and uploaded the configuration files to the software release repository. The first thing to do during deployment is to transfer those configuration files from the software release repository to the deployment host.

In the actual deployment task this means: for the [host group] group-bhd, we select the latest build artifact of the [build task] phoenix-sample-ci ([build version] [latest]) and download it to the host's deployment directory.

After this step completes, docker-stack.tar.gz and phoenix-sample-k8s.tar.gz, which were packaged in the build task, appear in the /root/phoenix-sample-standalone-deploy directory on the host. For an ECS deployment we only need to unpack docker-stack.tar.gz, which is the archive containing docker-compose-standalone.yml (recall "Replace the deployment configuration files and package them" from the build task).

Start docker-compose by executing a shell command

Once the archive is decompressed, we can start the project by executing the docker-compose startup command.

In this step, the first three lines output the three parameters docker-username, docker-password, and docker-server, which are used for the docker login operation. Because starting docker-compose-standalone.yml involves pulling images, we must log in to the SWR image repository before pulling them.

After logging in, change to the /root/phoenix-sample-standalone-deploy directory (cd /root/phoenix-sample-standalone-deploy).

Start docker-compose (docker-compose -f docker-compose-standalone.yml up -d).
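Put together, the shell step looks roughly like this (the credential variables and registry endpoint are illustrative; the deployment task supplies them as parameters):

# Log in to the SWR image repository (credentials come from the deployment task parameters)
docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD" swr.cn-north-4.myhuaweicloud.com
# Change to the deployment directory and unpack the configuration archive
cd /root/phoenix-sample-standalone-deploy
tar -zxvf docker-stack.tar.gz
# Start all services in the background
docker-compose -f docker-compose-standalone.yml up -d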

At this point the project has been deployed to the host, and the five containers can be seen with the docker ps -a command.

At the same time, you can visit http://{ip}:5000 and http://{ip}:5001 to access the client side and the management side of the project.

Conclusion

This article has deconstructed the code repository configuration, image building, and docker-compose deployment of the HE2E project from the perspective of container technology. Hopefully it helps more developers understand container technology and Huawei Cloud.

Follow us to be among the first to learn about Huawei Cloud's latest technologies ~