
🌈 Previous posts

Thank you for reading; I hope this helps you. If you find any flaw in this post, please leave a message in the comments or reach me through the private chat on my home page. I appreciate every reader's generous advice. I'm XiaoLin, a boy who can both write bugs and sing rap.

  • 💖 Take 20 minutes a day to get started with ElasticSearch, Part 4
  • 💖 Take 20 minutes a day to get started with ElasticSearch, Part 3
  • 💖 Take 20 minutes a day to get started with ElasticSearch, Part 2

Overview of Docker

1.1 History of virtualization Technology

Before virtualization, if we wanted to build a server, we needed to do the following:

  1. Purchase a hardware server.
  2. Install and configure the operating system on the hardware server.
  3. Set up the application runtime environment on the operating system.
  4. Deploy and run the application.

The disadvantages of this approach are:

  1. Deploying applications is slow.
  2. The cost involved is very high (time cost, server cost).
  3. Migrating an application is troublesome: to migrate it, you have to repeat the whole deployment.

So, to solve this problem, virtualization technology followed.

1.2 virtualization technology

Virtualization is a computer resource management technology that abstracts various hardware resources, such as servers, networks, CPUs, memory, and storage, and presents them as a new virtual hardware environment on which we can install operating systems and deploy application environments. It breaks down the rigid barriers between physical hardware resources, so that we can combine and use those resources more flexibly than the original physical structure of the hardware allows.

1.3 Advantages and disadvantages of virtualization technology

1.3.1 Advantages of virtualization technology

A single physical server can be virtualized into multiple virtual servers, making full use of its computing resources.

1.3.2 Disadvantages of virtualization technology

  1. Each time a VM is created, a full operating system must be installed in it, which consumes a lot of resources; the more VMs you install, the more resources are consumed.
  2. Even if an application runs properly in the development environment, errors may still occur when it is deployed to the VM-based test environment, because the two environments can differ.

1.4 virtualization classification

Virtualization is generally divided into:

  1. Hardware-level virtualization
  2. Operating-system-level (OS-level) virtualization

1.4.1 hardware-level virtualization

Hardware-level virtualization is a virtualization technology that runs directly on hardware. Its core component is the hypervisor, a software layer that runs on the underlying physical server and virtualizes its hardware resources, such as CPUs, hard disks, and memory. We can then install an operating system on top of these virtualized resources; such an installation is called a virtual machine. Products like VMware and VirtualBox use this technology; the desktop VMware we often use is an example of it.

With the hypervisor layer, we can create multiple virtual machines, each separate and independent, so that a single hardware server and its local operating system can host several virtualized servers for our applications.

1.4.2 Operating system-level Virtualization

Operating-system-level virtualization, also known as containerization, is a feature of the operating system itself that allows multiple isolated user-space instances to exist. These user-space instances are called containers. A normal process can see all of the computer's resources, while a process in a container can see only the resources allocated to that container.

In plain English, operating-system-level virtualization groups the computer resources managed by the operating system (processes, files, devices, and networks) and allocates each group to a container. A process running in a container can see only the resources allocated to that container, which achieves isolation and virtualization.
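On Linux, this isolation is implemented with kernel namespaces (plus cgroups for resource limits). As a minimal sketch, assuming a Linux host: each process's namespace memberships can be inspected under /proc, and the unshare command (shown commented out, since it requires root) drops a shell into a fresh PID namespace where ps sees only that namespace's processes:

```shell
# A container is essentially a process placed in its own namespaces.
# Every Linux process exposes its namespace memberships under /proc:
ls -l /proc/self/ns/
# Typical entries include pid, net, mnt, uts, and ipc -- exactly the
# resource groups that OS-level virtualization partitions per container.

# With root privileges, unshare(1) can start a shell in a new PID
# namespace; inside it, ps sees only that namespace's processes:
#   sudo unshare --pid --fork --mount-proc sh -c 'ps aux'
```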

1.5. Development of container technology

Because of these shortcomings of hardware-level virtualization, another virtualization technology was developed: operating-system-level virtualization.

Operating-system-level virtualization is a virtualization technology that runs on top of an operating system. It runs multiple isolated instances on a single OS and encapsulates their processes in closed containers, which is why it is also called containerization technology.

Docker is the most popular implementation of containerization technology. Released in 2013, Docker was originally based on LXC, a container technology implemented on the Linux platform.

LXC, short for Linux Containers, is a kernel-level virtualization technology that provides lightweight virtualization to isolate processes and resources. Containers share the host's kernel, so the performance cost is low. Linux had offered this technology for years, but it did not take off until the advent of Docker.

1.6 History of Docker

In 2010, a group of young people in San Francisco founded a PaaS (Platform as a Service) startup called dotCloud, backed by Y Combinator. Although dotCloud raised some funding during its existence, it struggled once the IT giants (Microsoft, Google, Amazon, and others) also entered the PaaS market.

In 2013, Solomon Hykes, dotCloud's 28-year-old founder, made the difficult decision to open-source dotCloud's core engine, a technology that packages application code with Linux containers and easily moves it from server to server.

Once this core management engine, based on LXC (Linux Container) technology, was opened to the public, technicians around the world were amazed at how convenient it was. It was an all-or-nothing decision by dotCloud's founders, and one that sent shivers through the IT giants.

After Docker was open-sourced in 2013, it became popular all over the world, so dotCloud decided to make Docker its main business, renamed itself Docker Inc., and devoted itself to Docker's development. In August 2014, Docker announced the sale of its dotCloud PaaS business to cloudControl, a Platform-as-a-Service provider based in Berlin, Germany. From then on, Docker could travel light and focus on developing Docker itself.

It took only one month from the decision to open-source in February 2013 to the release of Docker 0.1 on March 20, 2013. (As of this writing, the latest version of Docker is 18.03.)

Docker grew rapidly. On June 9, 2014, the Docker team announced the release of Docker 1.0, indicating that the platform was mature enough to be used in production (with some support options requiring payment).

Within a year, an ecosystem of small startups gradually formed around Docker. Docker won the favor of IT giants such as Google, Microsoft, Amazon, and VMware, all of whom pledged compatibility between their platforms and Docker's container technology.

On February 29, 2016, cloudControl announced on its official blog that it had gone bankrupt, and its subsidiary dotCloud announced that it would shut down its service on the same day. As Docker's predecessor, dotCloud witnessed Docker's rise into a new star of the cloud platform world but was powerless to follow. Docker's prosperity indirectly contributed to the decline of dotCloud's PaaS business; this may be a classic case of disruptive innovation.

1.7. What is Docker

Docker is an open-source application container engine written in Go, a language developed by Google; the project's code is hosted and maintained on GitHub.

Docker lets developers package their applications and dependencies into a portable container, which can then be published and run on any popular Linux server. This solves the problem of inconsistency between development and production environments, and with it the friction between development and operations: development can focus on development and operations on operations, without being disturbed by environment issues.

Docker unleashed the power of virtualization and greatly reduced the cost of supplying computing resources. It redefined the process of developing, testing, delivering, and deploying software, and its concept of "build once, run everywhere" made application development, testing, deployment, and distribution unprecedentedly efficient and easy.

Docker is a lightweight operating-system virtualization solution based on Linux Containers (LXC). On top of LXC, Docker adds further encapsulation so that users do not need to manage containers by hand, making operation much more convenient; users can manipulate Docker containers as easily as a fast, lightweight virtual machine. Docker has attracted wide attention since it was open-sourced. It was first developed on Ubuntu, but CentOS, Debian, Fedora, and other mainstream Linux distributions have since added support for it.

To put it simply, Docker is a standardized package of software together with its dependent environment. Applications are isolated from each other yet share one OS kernel (which avoids wasting resources), and Docker runs on many mainstream operating systems.
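As an illustration of such a standardized package, here is a minimal, hypothetical Dockerfile; the base image, file names, and command are made-up examples, not anything from this article. It bundles an application together with its dependencies so that the resulting image runs the same wherever Docker is available:

```dockerfile
# Hypothetical example: package a small Python app with its dependencies.
# Base image providing the language runtime:
FROM python:3.9-slim
# Working directory inside the image:
WORKDIR /app
# Install dependencies first, so this layer is cached across code changes:
COPY requirements.txt .
RUN pip install -r requirements.txt
# Add the application itself:
COPY app.py .
# Command executed when a container starts from this image:
CMD ["python", "app.py"]
```

Building with `docker build -t myapp .` and running with `docker run myapp` would then behave the same on any host with Docker installed.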

1.8. Differences between containers and virtual machines

  1. A container is a packaged collection of code and its environment, while a virtual machine is a separate operating system on top of virtualized hardware.
  2. Multiple containers can run on the same physical server and share the kernel of the same operating system; multiple virtual machines can also run on one machine, but each needs a full operating system of its own.
  3. A container is virtualized at the OS level and directly reuses the host OS; in traditional virtualization, an OS must be installed separately for each VM.
| Comparison | Container | Virtual machine |
| --- | --- | --- |
| Startup time | Seconds | Minutes |
| Disk space | Usually tens of MB | Usually around 10 GB |
| Performance | Close to native | Weaker than native |
| Density per machine | Thousands of containers | Usually dozens of VMs |
| Operating system | Shares the host's OS | Runs a guest OS on the host |
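The operating-system row of this comparison can be observed directly. A container reuses the host's kernel, so (assuming Docker and, say, the alpine image are available, which this article has not yet set up) the kernel version reported inside a container matches the host's, whereas a VM reports its own guest kernel:

```shell
# Kernel version of the host:
uname -r
# The same command inside a container (requires Docker and the alpine
# image; shown commented out for illustration) prints the identical
# version, because containers share the host kernel instead of booting
# their own:
#   docker run --rm alpine uname -r
```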

1.9. Why Docker

As an emerging virtualization technology, Docker has many advantages compared with traditional virtualization methods.

  1. Docker containers start in seconds, much faster than traditional virtual machines.
  2. Docker uses system resources efficiently; thousands of Docker containers can run simultaneously on one host.
  3. Apart from running the applications inside them, containers consume almost no additional system resources, so application performance is high. To run 10 completely different applications on traditional virtual machines, you might deploy 10 VMs, while Docker only needs to start 10 isolated containers.
  4. Docker enables faster delivery and deployment, saving a great deal of development, testing, and deployment time. For developers and operations staff, the ideal is to create or configure something once and have it run correctly anywhere.
  5. Virtualization is more efficient: Docker containers need no extra hypervisor support because they use kernel-level virtualization, achieving higher performance and efficiency.
  6. Migration and scaling are easier: Docker containers run on almost any platform, including physical machines, virtual machines, public clouds, private clouds, personal computers, and servers. This compatibility lets users easily migrate an application from one platform to another.

II. Setting up the Docker Environment

2.1 Docker version

Docker released Docker 0.1 on March 20, 2013, and has released multiple versions until now. Since March 2017, Docker has been divided into two branch versions on the basis of the original: Docker CE and Docker EE.

  1. Docker CE: the free Community Edition, free to use forever.
  2. Docker EE: the Enterprise Edition, with more features and a stronger emphasis on security, but paid.

2.2 Docker installation

On CentOS 7, Docker can be installed directly through yum.

Check whether Docker is already installed:

yum list installed | grep docker
#If so, delete the corresponding package
 yum remove docker...... 

Install using yum:

yum install docker -y

Set Docker to start automatically on boot:

#Start the Docker server
systemctl start docker

#Enable the docker service to start on boot
systemctl enable docker

Check the Docker version to verify that the installation succeeded:

docker -v

Uninstall Docker:

yum remove docker.x86_64 -y
yum remove docker-client.x86_64 -y
yum remove docker-common.x86_64 -y

2.3. Starting and Stopping Docker

Start:

systemctl start docker 
#Or you could use this one
service docker start

Stop:

systemctl stop docker
#Or use this one
service docker stop

Restart:

systemctl restart docker 
#Or you could use this one
service docker restart

Check the running status of the Docker process:

systemctl status docker
#Or this one could work
service docker status

View the Docker processes:

ps -ef | grep docker