What is Docker

Docker has no official Chinese documentation (at least not as of 2018/1/1), so I can only explain it based on search engines and my own understanding.

Docker is an open source application container engine that lets developers package their applications and dependencies into a portable container and deploy it to any popular Linux machine; it can also be used for virtualization. Containers are fully sandboxed and have no interfaces to each other.

The above is from Baidu Baike, but as far as I'm aware, Docker is already supported on Mac and Windows, so it should be deployable on just about any machine.

In fact, in terms of what it does, Docker is a virtualization technology created to solve the problems of mismatched development and operations environments, inconvenient releases, and cross-platform porting. (My humble opinion, not necessarily correct.)

So what scenarios is Docker used for?

  • Automated packaging and publishing of Web applications;
  • Automated testing and continuous integration, release;
  • Deploy and adjust databases or other backend applications in a service environment;
  • Building from scratch, or extending an existing OpenShift or Cloud Foundry platform, to create your own PaaS environment.

So What?

After all that talk, we still don't really know what Docker is. For the moment, I will set aside what technology Docker uses and what commands are available, and introduce a few concepts first.

The two most important concepts of Docker are images and containers. Besides, links and data volumes are also very important.

The image

If you've ever used a virtual machine, you've probably heard of images. If you've never used a virtual machine, you may still know the term from installing an operating system. In fact, a Docker image is similar to a virtual machine snapshot, only much more lightweight, very, very lightweight.

There are many ways to create a Docker image; the most common is to build a new image on top of an existing one, because there is already a public base image for almost everything we need. Each image has a unique ID that serves as its identifier.
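As a tiny illustration (the base image, package, and tag below are only placeholders, not anything from the original text), building on an existing image usually means writing a short Dockerfile:

# Dockerfile: start from an existing Ubuntu image and layer our changes on top
FROM ubuntu:latest
RUN apt-get update && apt-get install -y python3

Running docker build -t my-image . in the same directory then produces a new image with its own unique ID.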

The container

With images covered, let's talk about containers. Again, I'm going to compare them to virtual machines. Note that it is always only an analogy. Why? Because Docker is not really a virtual machine.

Creating a container from an image is like creating a VM from a snapshot, except that the former is much lighter. What is the same? Applications run inside containers, just as they run inside virtual machines.

For example! You can download an Ubuntu image, modify it by installing Django and the other applications and dependencies you need, and then create a container from that image which runs your application as soon as it starts.
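A rough sketch of that flow with the docker command line (the container and image names below are made up for illustration):

$ docker pull ubuntu:latest
$ docker run -ti --name django-box ubuntu /bin/bash
# inside the container: apt-get update && apt-get install -y python3-pip && pip3 install django
# then type exit, and save the modified container as a new image:
$ docker commit django-box my-django-image
$ docker run -ti my-django-image /bin/bash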

Containers, like virtual machines, are isolated from one another and have unique IDs and names. Containers also expose specific ports in order to make services available to the outside world.

Containers differ greatly from virtual machines in that they are designed to run a single process and do not try to emulate a complete environment. Multiple processes can be started with the right commands, but I don't think that is really necessary.

A container is designed to run an application, not a machine, and that’s what it’s all about.

Data volume

Data volumes persist regardless of the container life cycle. They appear to be space inside the container, but are actually kept outside of it, which allows you to manipulate the container without affecting the data.

Docker's designers separated the application part from the data part and provide tools to keep them apart. One of the mental shifts you need to make with Docker is that containers should be transient and disposable.

Volumes are per-container: you can create multiple containers from the same image and define different volumes for each. Volumes are stored in the host file system where Docker is running and can be used to share data between containers.
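A small sketch of both ideas (the volume and container names below are invented for the example): two containers created from the same image, sharing one named volume.

$ docker volume create mydata
$ docker run -d --name app1 -v mydata:/data ubuntu sleep infinity
$ docker run -d --name app2 -v mydata:/data ubuntu sleep infinity
# anything app1 writes under /data is immediately visible to app2
$ docker rm -f app1 app2
$ docker volume ls
# the volume still exists, even though both containers are gone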

Links

When a container is started, it is assigned a random private IP that other containers can use to communicate with it. This gives us two things: first, linking provides a way for containers to talk to each other; second, linked containers share a local network.
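For instance, the legacy --link flag does exactly this; the names below are just examples:

$ docker run -d --name db redis
$ docker run -ti --link db:db ubuntu /bin/bash
# inside the second container, the hostname db resolves to the Redis container's private IP
# cat /etc/hosts shows the entry that Docker added for db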

Portability

This is not one of the concepts listed, but it’s also important, and it’s one of the most important features of Docker.

To put it bluntly, Docker does not allow non-portable images.

How?

How does Docker implement these functions and requirements? It comes down to two key terms:

Cgroups

This is a Linux kernel feature that, together with kernel namespaces, makes two things possible:

  • Limit the resource usage of Linux process groups (memory, CPU)
  • Create PID, UTS, IPC, network, user, and mount namespaces for process groups

The most critical part is the namespaces. A PID namespace gives a process group its own isolated set of PIDs, separate from the main PID namespace, so inside a PID namespace you can have your own init process with PID 1. The other namespaces work the same way. With cgroups and namespaces you can therefore create an environment in which processes run isolated from the rest of the operating system. The key point is that these processes use the kernel that is already loaded and running, so the extra overhead is about the same as for any other running process.
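This has nothing to do with Docker specifically; on any modern Linux host you can get a feel for a PID namespace with the unshare tool:

$ sudo unshare --pid --fork --mount-proc bash
# the new shell is PID 1 inside its own namespace; ps only shows processes from this namespace
# type exit to return to the normal process tree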

Union file system

In a Union file system, file systems can be mounted on top of other file systems, resulting in a hierarchical accumulation of changes. Each mounted filesystem represents the set of changes since the previous filesystem, like a diff.

So, when you download an image, modify it, and then save it as a new version, you are really just creating a new union file system layer on top of the original layers that make up the underlying image. This is why Docker images are so lightweight. Generally speaking, your DB, Nginx, and Syslog images can all share the same Ubuntu base, with each image saving only the changes needed for its own functionality.
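You can inspect those layers for any image with docker history; for instance (my-django-image being the made-up name from the earlier sketch):

$ docker history ubuntu:latest
# each line of the output is one layer: the base filesystem plus the diffs stacked on top of it
$ docker history my-django-image
# an image built on top of ubuntu lists the shared ubuntu layers plus only its own additions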

Just do it!

First of all! Install Docker

Update your apt sources (apt-get update)

The installation

Check whether the curl package is installed.

$ which curl

We will use curl to retrieve the latest Docker installation package.

If curl is not installed, install the curl package after updating your APT source.

$ sudo apt-get update
$ sudo apt-get install curl

Get the latest Docker installation package.

$ sudo curl -sSL https://get.docker.com/ | sh 

The installation takes a while, so wait patiently or go have a cup of coffee!

Check whether Docker is installed successfully.

$ sudo docker run hello-world

This command downloads a test image and starts a container to run it.

Use the following command to download an image from the public registry.

$ docker pull ubuntu:latest

This public registry contains almost every image: Ubuntu, MySQL, Redis, and so on. Docker's developers maintain several images in this public registry, and you can also obtain images published by other users.

It is also possible to create a private Registry.
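A brief sketch of both (the user and image names below are placeholders, except registry:2, which is the official registry image):

$ docker search redis
$ docker pull someuser/someimage
# run a private registry locally with the official registry image
$ docker run -d -p 5000:5000 --name my-registry registry:2
$ docker tag ubuntu:latest localhost:5000/ubuntu
$ docker push localhost:5000/ubuntu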

List the images

$ docker images

Create a container from an image

$ docker run --rm -ti ubuntu /bin/bash

Description:

  • --rm tells Docker to remove the container as soon as the running process exits; it is often used during testing to avoid leaving clutter behind (see the quick check after this list).
  • -ti tells Docker to allocate a pseudo terminal and enter interactive mode.
  • ubuntu is the image the container is based on.
  • /bin/bash is the command to run.
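A quick way to see --rm in action after leaving the shell (just a sketch):

# type exit inside the container, then on the host:
$ docker ps -a
# the container is not listed: --rm removed it the moment bash exited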