Original address: docs.docker.com/get-started… (Official Docker document)

Docker is an open platform for developing, shipping, and running applications. Docker lets you separate your applications from your infrastructure, so you can deliver software quickly. With Docker, you can manage your infrastructure in the same way you manage your applications. By taking advantage of Docker's methodology for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

Docker platform

Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security let you run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you don't need to rely on what is currently installed on the host. You can easily share containers while you work, and be sure that everyone you share with gets the same container that works the same way.

Docker provides tooling and a platform to manage the entire life cycle of your containers:

  • Develop your application and its supporting components using containers.
  • The container becomes the unit for distributing and testing your application.
  • When you're ready, deploy your application into your production environment, as a container or an orchestrated service. This works the same whether your production environment is a local data center, a cloud provider, or a mix of the two.

What can I do with Docker?

Deliver your application quickly and consistently

Docker streamlines the development lifecycle by allowing developers to work in standardized environments, using local containers that provide your applications and services. Containers are great for continuous integration and continuous delivery (CI/CD) workflows.

Consider the following example scenario:

  • Your developers write code locally and share their work with colleagues using Docker containers.
  • They use Docker to push their applications into a test environment and run automated and manual tests.
  • When developers find bugs, they can fix them in the development environment and redeploy them to the test environment for testing and validation.
  • When testing is complete, getting the fix to the customer is as simple as pushing the updated image to the production environment.
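The scenario above can be sketched with a few CLI commands; the image name myapp, its tags, and the test script are all hypothetical placeholders:

```shell
# Build the application image from the Dockerfile in the current directory
# ("myapp" is a placeholder image name for illustration).
docker build -t myapp:1.0 .

# Share the work: push the image so colleagues and the test environment
# can pull exactly the same container image.
docker push myapp:1.0

# In the test environment, run the image and execute automated tests
# (./run-tests.sh is a hypothetical test entry point inside the image).
docker run --rm myapp:1.0 ./run-tests.sh

# After a fix: rebuild with a new tag, retest, then push the updated
# image toward production.
docker build -t myapp:1.1 .
docker push myapp:1.1
```

These are illustrative commands only; pushing requires that you are logged in to a registry where the image name resolves.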

Responsive deployment and scaling

Docker’s container-based platform allows for highly portable workloads. Docker containers can run on a developer’s local laptop, a physical or virtual machine in a data center, a cloud provider, or a mixed environment.

Docker's portability and lightweight nature also make it easy to dynamically manage workloads, scaling applications and services up or tearing them down in near real time as business needs dictate.

Running more workloads on the same hardware

Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so you can use more of your compute capacity to achieve your business goals. Docker is well suited to high-density environments and to small and medium deployments where you need to do more with fewer resources.

Docker architecture

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the work of building, running, and distributing your Docker containers. The client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers.
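As a sketch of what Docker Compose manages, a minimal hypothetical compose.yaml might define a two-container application, a web front end plus a database; the service names, images, and credential below are illustrative:

```yaml
# compose.yaml — hypothetical two-container application.
services:
  web:
    image: nginx:alpine        # the web front end
    ports:
      - "8080:80"              # expose container port 80 on host port 8080
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential, not for real use
```

Running docker compose up in the same directory would start both containers together and wire them onto a shared network.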

Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

The Docker client

The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends them to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
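For example, the same client can be pointed at a remote daemon with the -H flag or the DOCKER_HOST environment variable; the hostname below is a placeholder:

```shell
# Talk to the local daemon over its UNIX socket (the default).
docker version

# Point the same client at a remote daemon over TCP
# (docker.example.com is a hypothetical host).
docker -H tcp://docker.example.com:2376 version

# Equivalently, set an environment variable for the rest of the session.
export DOCKER_HOST=tcp://docker.example.com:2376
docker ps
```

This is an illustrative transcript; the remote daemon must be configured to accept (ideally TLS-protected) TCP connections.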

Docker registries

A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.

When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
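As a hedged example of this flow, the commands below pull an image from the default registry, retag it for a private registry, and push it there; the registry hostname and repository path are placeholders:

```shell
# Pull an image; with the default configuration this resolves to Docker Hub.
docker pull ubuntu:22.04

# Retag the image for a private registry
# (registry.example.com/team is a hypothetical location).
docker tag ubuntu:22.04 registry.example.com/team/ubuntu:22.04

# Push it to that registry instead of Docker Hub.
docker push registry.example.com/team/ubuntu:22.04
```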

Docker objects

When you use Docker, you are creating and using images, containers, networks, volumes, plug-ins, and other objects. This section briefly introduces some of these objects.

Images

An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you might build an image based on the Ubuntu image that also installs the Apache web server and your application, along with the configuration details needed to make your application run.

You can create your own images, or use only those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax that defines the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only the layers that have changed are rebuilt. This is part of what makes images so lightweight, small, and fast compared to other virtualization technologies.
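As a minimal sketch, a Dockerfile matching the Ubuntu-plus-Apache example above might look like this; each instruction produces one image layer, and the file paths are illustrative:

```dockerfile
# Start from the Ubuntu base image (one layer).
FROM ubuntu:22.04

# Install the Apache web server (another layer; this layer is rebuilt
# only if this instruction or an earlier one changes).
RUN apt-get update && apt-get install -y apache2

# Copy in the application and its site configuration (hypothetical paths).
COPY ./my-app /var/www/html/
COPY ./apache-site.conf /etc/apache2/sites-available/000-default.conf

# Describe how to run the application.
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```

Building it with docker build -t my-app . would produce an image whose unchanged layers are reused from cache on subsequent builds.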

Containers

A container is a runnable instance of an image. You can create, start, stop, move, or delete containers using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, and even create new images based on its current state.

By default, containers are isolated from other containers and their hosts. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or hosts.

A container is defined by its image and any configuration options provided to it when it is created or started. When the container is removed, any changes to its state that are not stored in persistent storage disappear.

Example of the docker run command

The following command runs an Ubuntu container, interactively connects to a local command line session, and runs /bin/bash.

$ docker run -i -t ubuntu /bin/bash 

When you run this command, the following happens (assuming you are using the default registry configuration):

  1. If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as though you had run docker pull ubuntu manually.
  2. Docker creates a new container, as though you had run a docker container create command manually.
  3. Docker allocates a read-write filesystem to the container, as its final layer. This allows a running container to create or modify files and directories in its local filesystem.
  4. Docker creates a network interface to connect the container to the default network because you did not specify any network options. This involves assigning an IP address to the container. By default, a container can connect to an external network using the host’s network connection.
  5. Docker starts the container and executes the /bin/bash command. Because the container runs interactively and is attached to your terminal (thanks to the -i and -t flags), you can provide input using your keyboard while Docker logs the output to your terminal.
  6. When you enter exit to terminate /bin/bash, the container is stopped, but not deleted. You can restart or delete it.
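Step 6 can be checked with a few follow-up commands: the stopped container still exists until you explicitly remove it (replace <container-id> with the ID shown by docker ps):

```shell
# The stopped container still appears when listing with the -a (all) flag.
docker ps -a

# Restart it and reattach interactively, or remove it for good.
docker start -a -i <container-id>
docker rm <container-id>
```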

The underlying technology

Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality. Docker uses a technique called namespaces to provide separate workspaces called containers. When you run a container, Docker creates a set of namespaces for that container.

These namespaces provide an isolation layer. Each aspect of the container runs in a separate namespace, and its access is limited to that namespace.
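On a Linux host you can observe these kernel namespaces directly: every process, including one running inside a container, exposes its set of namespace handles under /proc. This is a read-only inspection, not something Docker-specific:

```shell
# List the namespaces (mount, network, PID, and so on) that the current
# process belongs to; inside a container, several of these handles differ
# from the host's, which is what the isolation layer consists of.
ls -l /proc/self/ns
```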