I. Container Technology

Before virtual machines (e.g. VMware) emerged, new applications were often deployed on dedicated servers to avoid dependency conflicts, which left hardware resources badly underutilized. Virtualization largely solved this problem.

However, VMs also have drawbacks. First, each VM runs a full, independent OS, which consumes extra CPU, RAM, and storage. Second, because a VM boots an entire OS, startup is slow. Finally, every standalone OS needs its own patching and monitoring, and commercial deployments need a license per OS instance.

For these reasons, large Internet companies such as Google explored container technology as a replacement for the virtual-machine model.

In the virtual-machine model, the hypervisor performs hardware virtualization: it partitions hardware resources into virtual resources and packages them into a software construct called a VM. The container model, by contrast, is OS virtualization.

Containers share the host OS and achieve isolation at the process level. Compared with VMs, containers start faster, consume fewer resources, and are easier to migrate.

1.1 Linux Containers

Modern container technology originated from Linux and relies on core technologies such as Kernel Namespace, Control Group and Union File System.

  • Kernel Namespace

    Responsible for isolation (PID, network, mount, UTS, IPC, user)

  • Control Group (cgroup)

    Limits a process group's resource usage (CPU, memory, I/O)

  • Union File System

    Stacks read-only image layers beneath a writable container layer
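These kernel primitives can be observed from inside any running container. A minimal sketch, assuming Docker is installed and the public `alpine` image is available:

```shell
# PID namespace: inside the container, the command runs as PID 1
docker container run --rm alpine ps aux

# UTS namespace: the container gets its own hostname (its short container ID)
docker container run --rm alpine hostname

# cgroups: cap this container's memory at 128 MB
docker container run --rm --memory 128m alpine sleep 1
```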

The complexity of container technology made it difficult to popularize until the advent of Docker, which made it easy to use container technology.

There is no macOS-native container yet; on Windows, Windows 10 and Windows Server 2016 already support Windows containers.

Note that because containers share the host operating system, Windows containers do not run on Linux (and vice versa), but Docker for Mac and Docker for Windows support Linux containers by launching a lightweight Linux VM.

On Windows, the system must be 64-bit Windows 10 with the Hyper-V and Containers features enabled.

1.2 Docker installation

For Mac installation, see here. For Windows, see here.

1.3 Basic Concepts

1.3.1 Docker Engine Architecture

After the installation succeeds, run docker version; the output lists both the Client and the Server (daemon) components.

  • Docker daemon

    • Image management, image build, API, authentication, core network, etc.

    Early Docker engines consisted of just the daemon and LXC, with the daemon coupling the client, API, runtime, image building, and other functions into one monolith. The daemon has since been decomposed, and its core now focuses on image management and the API.

  • containerd

    • Encapsulates the execution logic of the container and is responsible for the lifecycle management of the container.
  • shim

    • Used to decouple running containers from daemons (daemons can be upgraded independently)
    • Keep all STDIN and STDOUT streams open
    • Synchronize the container exit status to the daemon
  • runc (container runtime)

    • Reference implementation of the OCI container-runtime specification (hence also called the OCI layer)
    • A lightweight command-line wrapper around libcontainer
    • Its only function is to create containers
1.3.2 Images

An image is made up of multiple layers, containing a slimmed-down OS plus the application and its dependency packages as separate objects. Images are stored in an image registry; Docker Hub is the official registry service.

You can pull an image with the docker image pull command. A complete image name has the form:

<registry>/<repository>:<tag>
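For example, the fully qualified and shorthand forms of the same official image (the defaults shown are Docker Hub conventions):

```shell
# Fully qualified: registry, repository (namespace/name), and tag
docker image pull docker.io/library/ubuntu:20.04

# Shorthand: registry defaults to docker.io, namespace to library, tag to latest
docker image pull ubuntu
```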

  • Common operations

    • Pull an image: docker image pull repository:tag
    • List local images: docker image ls
    • View image details: docker image inspect repository:tag
    • Remove an image: docker image rm repository:tag (note that an image with a running container cannot be deleted)
  • Ways to build an image

    • From a running container: docker container commit
    • From a Dockerfile
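The two build approaches side by side; the image name here is hypothetical:

```shell
# 1. Build from a Dockerfile in the current directory
docker image build -t myimage:1.0 .

# 2. Snapshot a running container into a new image
docker container commit <containerId> myimage:snapshot
```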
1.3.3 Containers

A container is a runtime instance of an image.

  • Run the container

    $ docker container run [OPTIONS] <IMAGE> <APP>

    If the app process exits, the container exits; in other words, killing the main process in the container kills the container.

    Use the Ctrl-P Q key combination to detach from the container and return to the host. docker container exec -it <containerId> bash can be used to enter the container again.

    Because running containers involves fairly complex parameters, such as port mapping, volume mounts, environment variables, and networking, and this gets harder when running multiple containers, Docker Compose is generally used to organize and manage them. Recent Docker releases install Docker Compose by default.

  • View running containers

    $ docker container ls [-a]
    $ docker ps [-a]
  • Start or stop the container

    $ docker container start <containerId>
    $ docker container stop <containerId>
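The run options mentioned above can pile up quickly. A hedged sketch of one run command combining them (container name, ports, and image are made up):

```shell
# -p maps host port 8080 to container port 80
# -v mounts the current directory into the container
# -e sets an environment variable
# --network attaches the container to a user-defined network
docker container run -d \
  --name web \
  -p 8080:80 \
  -v "$(pwd)":/var/www \
  -e NODE_ENV=development \
  --network mynet \
  myimage:1.0
```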

II. The Significance of Containerization

2.1 Unified Development Environment

2.2 Automated Testing

2.3 Deployment (CI)

The container is the application.

III. Building the Base Environment (Image)

3.1 Containerizing

The process of integrating applications into containers to run is called containerization.

General steps for containerization:

  • Application code development
  • Create Dockerfile
  • Execute docker image build in the Dockerfile context
  • Start the container
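The four steps above can be sketched end to end (all names are hypothetical; assumes the public node base image is available):

```shell
# 1. Application code
mkdir myapp && cd myapp
echo "console.log('hello')" > index.js

# 2. Create a Dockerfile
cat > Dockerfile <<'EOF'
FROM node:12.9.1
COPY . /var/www
WORKDIR /var/www
CMD ["node", "index.js"]
EOF

# 3. Build in the Dockerfile context
docker image build -t myapp:1.0 .

# 4. Start the container
docker container run --rm myapp:1.0
```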

Let's get down to business and build a custom full-stack development base image through containerization.

3.2 Customizing development images

3.2.1 Environment Requirements
  • CentOS7.x

  • gcc gcc-c++ pcre-devel openssl-devel make perl zlib zlib-devel

  • vim/openssh-server

  • NodeJS 12.9.1

  • Set the npm registry to the team's private repository

  • Mount the project directory

  • Ports 80 and 22 are exposed

3.2.2 Creating the Dockerfile

FROM centos:7

MAINTAINER xuhui120 <[email protected]>


# Install dependencies and SSH
RUN yum -y install \
        gcc \
        gcc-c++ \
        pcre-devel \
        openssl-devel \
        make \
        perl \
        zlib \
        vim \
        zlib-devel \
        openssh-server \
    && ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key \
    && ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key \
    && ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key \
    && /bin/sed -i "s/#UsePrivilegeSeparation.*/UsePrivilegeSeparation no/g" /etc/ssh/sshd_config \
    && /bin/sed -i "s/#PermitRootLogin.*/PermitRootLogin yes/g" /etc/ssh/sshd_config \
    && /bin/sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config \
    && /bin/echo "cd /var/www" >> /root/.bashrc


#############################
# Install Node.js
#############################
USER root
ARG NODE_VERSION=12.9.1
RUN cd /usr/local/src \
    && curl -O -s https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.gz \
    && mkdir -p /usr/local/nodejs \
    && tar zxf node-v${NODE_VERSION}-linux-x64.tar.gz -C /usr/local/nodejs --strip-components=1 --no-same-owner \
    && ls -al /usr/local/nodejs \
    && ln -s /usr/local/nodejs/bin/* /usr/local/bin/ \
    && npm config set registry http://registry.xxx.com \
    && cd /usr/local/src \
    && rm -rf *.tar.gz


#Changing the Root Password
ARG SSH_ROOT_PWD=root
RUN /bin/echo "${SSH_ROOT_PWD}" | passwd --stdin root


# Passwordless login
ARG LOCAL_SSH_KEY
RUN if [ -n "${LOCAL_SSH_KEY}" ]; then \
        mkdir -p /root/.ssh \
        && chmod 750 /root/.ssh \
        && echo ${LOCAL_SSH_KEY} > /root/.ssh/authorized_keys \
        && sed -i 's/"//g' /root/.ssh/authorized_keys \
        && chmod 600 /root/.ssh/authorized_keys \
        ; fi
# Expose ports
EXPOSE 22
EXPOSE 80

COPY startup.sh /
RUN chmod +x /startup.sh

WORKDIR /var/www

CMD /startup.sh

Create a container and run the script startup.sh:

#!/bin/bash

source /etc/profile

# Run sshd in the foreground so it remains the container's main process
/usr/sbin/sshd -D

Finally, run docker image build in the directory containing the Dockerfile (the build context).
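A hedged sketch of building and using the image defined above (the tag name and host ports are arbitrary):

```shell
# Build, optionally overriding the build args declared in the Dockerfile
docker image build -t dev-env:latest --build-arg SSH_ROOT_PWD=root .

# Run with SSH and HTTP exposed and the project directory mounted
docker container run -d -p 2222:22 -p 8080:80 -v "$(pwd)":/var/www dev-env:latest

# Connect over SSH (password is the SSH_ROOT_PWD build arg, default "root")
ssh -p 2222 root@localhost
```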