This article is reproduced from Liu Yue's technology blog: v3u.cn/a_id_203

The main advantage of Docker containers is portability. A set of services, with all of its dependencies, can be bundled into a single container that is independent of the host's Linux kernel version, platform distribution, or deployment model. That container can then be transferred to any other host running Docker and executed without compatibility issues. A traditional microservice architecture packages each service into its own container. While containerized microservices can achieve a higher load density within a given amount of infrastructure, the aggregate demand for creating, monitoring, and destroying containers across the whole environment grows sharply, which significantly increases the complexity of container-based environment management.

On that basis, instead of splitting the services into separate containers, we will integrate the Tornado service, the Nginx server, and the supporting monitoring and management program Supervisor into a single container to maximize portability.

For the specific Docker installation process, see my earlier article on building a high-availability Gunicorn + Flask web cluster with Kubernetes (k8s) on Win10/Mac.

The overall system architecture inside the container is shown in the figure below:

First, create the project directory mytornado:

mkdir mytornado

We use Tornado 6.2, the well-known non-blocking asynchronous framework. Create the service entry file main.py:

import tornado.ioloop
import tornado.web
import tornado.httpserver
from tornado.options import define, options

# the listening port is passed on the command line, e.g. --port=8000
define('port', default=8000, help='default port', type=int)


class IndexHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, Tornado")


def make_app():
    return tornado.web.Application([
        (r"/", IndexHandler),
    ])


if __name__ == "__main__":
    tornado.options.parse_command_line()
    app = make_app()
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()

Here the listening port is passed in on the command line, which makes it convenient to start the service as multiple processes.
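As a hedged illustration (not part of the project itself), launching several copies of the same script with different port arguments can be sketched in a few lines of Python; in this article that job is delegated to Supervisor:

```python
import subprocess
import sys

def launch_workers(arg_template, ports):
    """Start one child Python process per port, substituting the
    port number into each argument of the command-line template."""
    procs = []
    for port in ports:
        cmd = [sys.executable] + [arg.format(port=port) for arg in arg_template]
        procs.append(subprocess.Popen(cmd))
    return procs

if __name__ == "__main__":
    # For the real project the template would be ["main.py", "--port={port}"];
    # here a stand-in child simply prints its port so the sketch is
    # self-contained and runnable anywhere.
    workers = launch_workers(["-c", "print('worker on port', {port})"], [8000, 8001])
    for proc in workers:
        proc.wait()
```

Supervisor does exactly this for us below, with the added benefit of restarting any worker that dies.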

After that, create the requirements.txt project dependency file:

tornado==6.2

Next, create the Nginx configuration file tornado.conf:

upstream mytornado {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

server {
    listen 80;
    location / {
        proxy_pass http://mytornado;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Here, Nginx listens on port 80 and reverse-proxies requests to ports 8000 and 8001 on the local machine, using the default load-balancing scheme: round-robin. If you have other requirements, you can switch to one of the other schemes:

1. Round-robin (default): each request is assigned to a different backend server in turn, in chronological order; if a backend server goes down, it is removed automatically.

upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}

2. weight: specifies the polling probability; the weight is proportional to the share of traffic a server receives. Used when backend servers have uneven performance.

upstream backserver {
    server 192.168.0.14 weight=3;
    server 192.168.0.15 weight=7;
}

3. ip_hash: the problem with the schemes above is session stickiness. In a load-balanced system, each request may be directed to a different server in the cluster, so a user who has logged in on one server loses that login state when a later request lands on another server, which is obviously not acceptable. The ip_hash directive solves this: each request is allocated according to a hash of the client IP, so every visitor is consistently routed to the same backend server, which solves the session problem.

upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}

4. fair (third-party plug-in): allocates requests based on the response time of the backend servers, giving priority to those with short response times.

upstream backserver {
    server server1;
    server server2;
    fair;
}

5. url_hash (third-party plug-in): allocates requests based on the hash of the URL, so each URL is always directed to the same backend server; this is most effective when the backend servers are caches.

upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
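To make the difference between these schemes concrete, here is a small, hypothetical Python sketch of the first three selection strategies (round-robin, weight, and ip_hash). Nginx's real implementation differs internally; zlib's crc32 merely stands in for its hash function:

```python
import itertools
import zlib

class RoundRobin:
    """Strategy 1: hand out backends one by one, in order."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self, client_ip=None):
        return next(self._cycle)

class WeightedRoundRobin:
    """Strategy 2: a backend with weight 3 appears 3 times in the rotation."""
    def __init__(self, weighted_backends):
        expanded = [b for b, w in weighted_backends for _ in range(w)]
        self._cycle = itertools.cycle(expanded)

    def pick(self, client_ip=None):
        return next(self._cycle)

class IpHash:
    """Strategy 3: the same client IP always maps to the same backend,
    which keeps a user's session on one server."""
    def __init__(self, backends):
        self._backends = backends

    def pick(self, client_ip):
        idx = zlib.crc32(client_ip.encode()) % len(self._backends)
        return self._backends[idx]
```

For example, RoundRobin(["127.0.0.1:8000", "127.0.0.1:8001"]) alternates between the two Tornado ports, exactly as our tornado.conf does.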

Next, create supervisord.conf, the configuration file for Supervisor:

[supervisord]
nodaemon=true
loglevel=info

[program:nginx]
command=/usr/sbin/nginx

[group:tornadoes]
programs=tornado-8000,tornado-8001

[program:tornado-8000]
command=python3.8 /root/mytornado/main.py --port=8000
directory=/root/mytornado
; restart the program automatically if it exits
autorestart=true
; start the program automatically when supervisord starts
autostart=true
redirect_stderr=true
stdout_logfile=/var/log/tornado-8000.log

[program:tornado-8001]
command=python3.8 /root/mytornado/main.py --port=8001
directory=/root/mytornado
autorestart=true
autostart=true
redirect_stderr=true
stdout_logfile=/var/log/tornado-8001.log

Supervisor, first released in 2004, is a tool for monitoring and managing processes on Unix-like systems. It consists of two parts: supervisord and supervisorctl. supervisord is the server side; its main job is to launch the configured programs, respond to commands sent from the client, and restart any child process that exits. supervisorctl is the client; it provides a command-line interface through which the user sends instructions to supervisord. Commonly used commands include start, stop, remove, and update.

Here, we mainly use Supervisor to monitor and manage the Tornado services, and the default project directory is /root/mytornado/.

Two Tornado processes are configured, listening on ports 8000 and 8001 respectively, matching the upstream servers in the Nginx configuration.

Finally, write the container configuration file Dockerfile:

FROM yankovg/python3.8.2-ubuntu18.04

RUN sed -i "s@/archive.ubuntu.com/@/mirrors.163.com/@g" /etc/apt/sources.list \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get update --fix-missing -o Acquire::http::No-Cache=True

RUN apt install -y nginx supervisor pngquant

# application
RUN mkdir /root/mytornado
WORKDIR /root/mytornado
COPY main.py /root/mytornado/
COPY requirements.txt /root/mytornado/
RUN pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/

# nginx
RUN rm /etc/nginx/sites-enabled/default
COPY tornado.conf /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/tornado.conf /etc/nginx/sites-enabled/tornado.conf
RUN echo "daemon off;" >> /etc/nginx/nginx.conf

# supervisord
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# run
CMD ["/usr/bin/supervisord"]

The base image is Ubuntu 18.04 with Python 3.8 preinstalled, which is both reasonably small and extensible. After switching the apt sources to a faster mirror, we install Nginx and Supervisor.

Then, create the project directory /root/mytornado/ inside the container, as specified in the Supervisor configuration file.

Copy the main.py and requirements.txt written above into the container, and run pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ to install all of the project's dependencies.

Finally, copy tornado.conf and supervisord.conf to their corresponding configuration paths, so that the Nginx and Supervisor services can be started.

Once everything is written, build the image by running the following command from a terminal in the project root directory:

docker build -t 'mytornado' .

The first build takes a little while because the base image needs to be downloaded:

liuyue:docker_tornado liuyue$ docker build -t mytornado .
[+] Building 16.2s (19/19) FINISHED
 => [internal] load build definition from Dockerfile  0.1s
 => [internal] load .dockerignore  0.0s
 => => transferring context: 2B  0.0s
 => [internal] load metadata for docker.io/yankovg/python3.8.2-ubuntu18.04:latest  15.9s
 => [internal] load build context  0.0s
 => => transferring context: 132B  0.0s
 => [ 1/14] FROM docker.io/yankovg/python3.8.2-ubuntu18.04@sha256:811ad1ba536c1bd2854a42b5d6655fa9609dce1370a6b6d48087b3073c8f5fce  0.0s
 => CACHED [ 2/14] RUN sed -i "s@/archive.ubuntu.com/@/mirrors.163.com/@g" /etc/apt/sources.list && rm -rf /var/lib/apt/lists/*  0.0s
 => CACHED [ 3/14] RUN apt install -y nginx supervisor pngquant  0.0s
 => CACHED [ 4/14] RUN mkdir /root/mytornado  0.0s
 => CACHED [ 5/14] WORKDIR /root/mytornado  0.0s
 => CACHED [ 6/14] COPY main.py /root/mytornado/  0.0s
 => CACHED [ 7/14] COPY requirements.txt /root/mytornado/  0.0s
 => CACHED [ 8/14] RUN pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/  0.0s
 => CACHED [ 9/14] RUN rm /etc/nginx/sites-enabled/default  0.0s
 => CACHED [10/14] COPY tornado.conf /etc/nginx/sites-available/  0.0s
 => CACHED [11/14] RUN ln -s /etc/nginx/sites-available/tornado.conf /etc/nginx/sites-enabled/tornado.conf  0.0s
 => CACHED [12/14] RUN echo "daemon off;" >> /etc/nginx/nginx.conf  0.0s
 => CACHED [13/14] RUN mkdir -p /var/log/supervisor  0.0s
 => CACHED [14/14] COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf  0.0s
 => exporting to image  0.0s
 => => exporting layers  0.0s
 => => writing image sha256:2dd8f260882873b587225d81f7af98e1995032448ff3d51cd5746244c249f751  0.0s
 => => naming to docker.io/library/mytornado  0.0s

After the package is complete, run the following command to view the image information:

docker images

You can see that the total image size is under 1 GB:

liuyue:docker_tornado liuyue$ docker images  
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE  
mytornado                            latest    2dd8f2608828   4 hours ago     828MB

Let’s start the container:

docker run -d -p 80:80 mytornado

Here port mapping is used: port 80 inside the container is mapped to port 80 on the host.

Enter the command to view the service process:

docker ps

Show running:

liuyue:docker_tornado liuyue$ docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED         STATUS         PORTS                               NAMES
60e071ba2a36   mytornado   "/usr/bin/supervisord"   6 seconds ago   Up 2 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp   frosty_lamport

At this point we open a browser and visit http://127.0.0.1

No problem at all.

You can also enter the running container using its container ID:

liuyue:docker_tornado liuyue$ docker exec -it 60e071ba2a36 /bin/sh  
#

Inside the container we can see all the files for the project:

# pwd  
/root/mytornado  
# ls  
main.py  requirements.txt  
#

Importantly, Supervisor can be used to manage the running Tornado processes.

View all processes:

supervisorctl status

According to the configuration file, we have three services running inside the container:

# supervisorctl status  
nginx                            RUNNING   pid 10, uptime 0:54:28  
tornadoes:tornado-8000           RUNNING   pid 11, uptime 0:54:28  
tornadoes:tornado-8001           RUNNING   pid 12, uptime 0:54:28

Stop the service based on the service name:

# supervisorctl stop tornadoes:tornado-8001  
tornadoes:tornado-8001: stopped  
# supervisorctl status  
nginx                            RUNNING   pid 10, uptime 0:55:52  
tornadoes:tornado-8000           RUNNING   pid 11, uptime 0:55:52  
tornadoes:tornado-8001           STOPPED   Dec 28 08:47 AM  
#

Start again:

# supervisorctl start tornadoes:tornado-8001                         
tornadoes:tornado-8001: started  
# supervisorctl status  
nginx                            RUNNING   pid 10, uptime 0:57:09  
tornadoes:tornado-8000           RUNNING   pid 11, uptime 0:57:09  
tornadoes:tornado-8001           RUNNING   pid 34, uptime 0:00:08  
#

If a service process terminates unexpectedly, Supervisor automatically pulls it back up, reviving it at full health:

# ps -aux | grep python
root         1  0.0  0.1  55744 20956 ?        Ss   07:58   0:01 /usr/bin/python /usr/bin/supervisord
root        11  0.0  0.1 102148 22832 ?        S    07:58   0:00 python3.8 /root/mytornado/main.py --port=8000
root        34  0.0  0.1 102148 22556 ?        S    08:55   0:00 python3.8 /root/mytornado/main.py --port=8001
root        43  0.0  0.0  11468  1060 pts/0    S+   08:51   0:00 grep python
# kill -9 34
# supervisorctl status
nginx                            RUNNING   pid 10, uptime 1:00:27
tornadoes:tornado-8000           RUNNING   pid 11, uptime 1:00:27
tornadoes:tornado-8001           RUNNING   pid 44, uptime 0:00:16
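The autorestart behaviour demonstrated above can be sketched in pure Python. This is a deliberately simplified, hypothetical illustration of what supervisord's restart loop does; the real Supervisor also handles signals, backoff, and logging:

```python
import subprocess
import sys

def supervise(cmd, max_restarts=3):
    """Run cmd and, every time it exits, start it again, up to
    max_restarts restarts. Returns the total number of launches."""
    launches = 0
    while launches <= max_restarts:
        proc = subprocess.Popen(cmd)
        proc.wait()  # block until the child process dies
        launches += 1
    return launches

if __name__ == "__main__":
    # A child that exits immediately stands in for a crashing worker.
    total = supervise([sys.executable, "-c", "pass"], max_restarts=2)
    print("child was launched", total, "times")  # 1 initial run + 2 restarts
```

In production, the loop would be unbounded and the command would be the Tornado worker itself, which is exactly what autorestart=true in supervisord.conf gives us.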

If you want, you can also push the built image to DockerHub, so that it can be pulled and used at any time without rebuilding. I have already pushed this image to the cloud, and you can pull it directly:

docker pull zcxey2911/mytornado:latest

For details about how to use DockerHub, see my earlier article on deploying a standalone Gunicorn + Flask architecture with DockerHub on CentOS 7.7.

Conclusion: admittedly, Docker container technology eliminates the environmental differences between development and production and guarantees environmental consistency and standardization across the service life cycle. Developers use images to build a standard development environment; once development is complete, the entire environment and application are packaged into an image and migrated, so testers and operations staff can deploy that image directly for testing and release, greatly simplifying continuous integration, testing, and deployment. But we also have to face the drawback of container technology at its current stage: the performance penalty. Docker containers add almost no overhead to CPU and memory usage, but they do affect I/O and OS interaction. The overhead takes the form of extra cycles per I/O operation, so small I/Os suffer much more than large ones. By increasing I/O latency and consuming CPU cycles that could be doing useful work, this overhead limits throughput. Perhaps in the near future, as kernel technology improves, this defect will gradually be solved. Finally, here is the project address: github.com/zcxey2911/D…
