What is a Docker container, and what does it offer? Containers are lightweight: multiple Docker containers running on one machine share that machine's operating system kernel, so they start quickly and require very little CPU and memory. Images are constructed from file-system layers and share common files, which minimizes disk usage and makes images faster to download.

Let’s start with a few concepts.

  • Docker runs each application, such as MySQL, in its own container.

    This is a lightweight, virtual-machine-like package containing an operating system, the application files, and all their dependencies.

  • A web application may require several containers: the code (and its runtime), a database, a web server, and so on.

  • Containers are started from images.

    An image is essentially a container template: a Dockerfile configuration defines its operating system, installation steps, settings, and so on. Any number of containers can be started from the same image.

  • A container starts in a clean state (its image), so it does not store data permanently.

    You can mount Docker volumes or bind host folders to preserve state.

  • Containers are isolated from the host and from other containers.

    You can define networks and open TCP/IP ports to allow communication.

  • Each container is started with a Docker command.

    Docker Compose is a utility that can start multiple containers at once using a configuration file, usually named docker-compose.yml.

  • Optionally, orchestration tools such as Docker Swarm and Kubernetes can manage and replicate containers on production systems.
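
To make these concepts concrete, here is a minimal command-line sketch using the official mysql image (the container name and password are illustrative assumptions):

```bash
# Download the official MySQL image from Docker Hub
docker pull mysql

# Start a container named "testdb" from that image
# (the official image requires a root password environment variable)
docker run -d --name testdb -e MYSQL_ROOT_PASSWORD=mysecret mysql

# List running containers, then stop and remove this one;
# the image remains on disk for reuse
docker ps
docker stop testdb
docker rm testdb
```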

Containers

Recall how a web application and its dependencies are installed using a virtual machine (VM). VM software such as VMware, Parallels Desktop, and VirtualBox is known as a hypervisor. A hypervisor creates a new virtual machine, into which you install an operating system and the required application stack (web server, runtime, database, etc.), all inside a single VM.

In some cases, it may not be possible to install every application in a single VM, so multiple VMs are required.

Each VM is a full OS running on hardware emulated by the host OS, with access to resources (such as networks) through the hypervisor. This is a considerable overhead, especially when a dependency is small.

Docker starts each dependency in a separate container. It helps to think of the container as a tiny VM with its own operating system, libraries, and application files.

In fact:

  • A virtual machine hypervisor emulates hardware, so it can run a full operating system
  • Docker emulates an operating system, so it can run separate applications on separate file systems.

Therefore, Docker uses far fewer host OS resources than a VM.

Technically, it is possible to run all of your application’s dependencies in a single container, but there is no real benefit to doing so, and it becomes more difficult to manage. Therefore, separate containers should be used for the application, database, and any other dependencies required.

Docker uses the Linux kernel and kernel features such as cgroups and namespaces to separate processes so that they run independently of each other. This independence is the point of containers: many processes and applications can run independently, making full use of the infrastructure while maintaining the security of individual systems.

Container tools, including Docker, provide image-based deployment. This makes it easy to share an application or service, along with all of its dependencies, across multiple environments. Docker can also automate deploying applications (or sets of processes that make up an application) in this container environment.

Containers are isolated

Each running container is reachable from the host, such as localhost (127.0.0.1), but you must expose a TCP port to use it, for example:

  • 80 or 443 for an HTTP or HTTPS web server
  • 3306 for MySQL
  • 27017 for MongoDB

Docker also allows access to the container shell and exposes other ports so that you can attach a debugger to look for problems.
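
As a hedged sketch of port exposure and shell access (the container name and password are illustrative):

```bash
# Publish the container's port 3306 on the host (host:container),
# so a MySQL client on the host can connect to localhost:3306
docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=mysecret -p 3306:3306 mysql

# Open an interactive shell inside the running container
docker exec -it mysql bash
```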

Containers are stateless and disposable

Once a container shuts down, any data written to its file system is lost!

Any number of containers can be started from the same base image (see below). Because each container instance is the same and disposable, this makes scaling easy.

This could change the way applications are developed, especially if you want to use Docker on production servers. Suppose your application has a variable that counts the number of logged-in users. If it runs in two containers, either container could handle a login, so each would hold a different count.

Therefore, Dockerized web applications should avoid keeping state in variables and local files. Instead, an application can store data in a database such as Redis, MySQL, or MongoDB, so that state persists across container instances.

It may not be practical to redesign an existing application to be stateless from the outset. However, you can still run it in Docker containers during development.

Which raises the question: What if the database is running in a container?

A database container also loses its data when it restarts, so Docker provides volumes and host folder bind mounts to persist it.

You might think, “Ah, I can solve the state problem by not stopping the container!” That’s true. Assume that your application is 100% error free. And your runtime is 100% reliable. And the operating system never crashes. And you never need to update the host operating system or the container itself.

Containers run on Linux

It doesn’t matter which host operating system you’re using: Docker containers run natively on Linux. Therefore, Windows and macOS run Docker containers inside a Linux VM!

The macOS version of Docker runs this VM with the lightweight HyperKit hypervisor (the older Docker Toolbox used VirtualBox).

Docker for Windows requires one of:

  1. Hyper-V.

    A free Microsoft virtual machine manager for Windows 10 Professional and Enterprise.

  2. Windows Subsystem for Linux (WSL) 2.

    The Windows May 2020 update provides this feature, which is essentially a highly integrated, seamless Linux VM available on all editions of Windows.

Docker Desktop for Windows lets you switch between the two options.

Running Docker directly on Linux is more efficient, but that hardly matters on a development PC. Use whichever operating system and tools you prefer.

However, if you deploy your application using Docker, Linux is the best choice for the live server.

Images

A Docker image is a snapshot of a file system containing the operating system libraries and application executables. Essentially, an image is a recipe or template for creating containers. (In much the same way, some programming languages let you define a reusable class from which any number of objects of the same type can be instantiated.)

A single image can start any number of containers. Although you’re unlikely to start multiple containers from the same image during development, this enables scaling on production servers.

Docker Hub provides ready-made images for popular applications such as NGINX, MySQL, MongoDB, Elasticsearch, and Redis.

There are also runtime images for Node.js, PHP, Python, Ruby, Rust, and any other language you’ve heard of.

Note: If you want to publish your own images, sign up for a Docker Hub account.

Dockerfile

An image is configured using a Dockerfile. It defines:

  1. The initial base image, usually the operating system
  2. Working directory and user permissions
  3. All necessary installation steps, such as defining environment variables, copying files from the host, running the installation process, etc.
  4. Whether the container should attach one or more volumes for data storage
  5. Whether containers should join the network to communicate with others
  6. Which ports (if any) are exposed to the host’s localhost
  7. Application start command.
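
As a minimal sketch, a Dockerfile for a hypothetical Node.js application might look like this (the base image, port, and file names are assumptions, not from the original article):

```dockerfile
# Base image: an official Node.js runtime
FROM node:14

# Working directory inside the image
WORKDIR /home/node/app

# Copy dependency manifests and install first, so this layer is cached
COPY package*.json ./
RUN npm install

# Copy the remaining application files from the host
COPY . .

# Document the TCP port the application listens on
EXPOSE 3000

# Application start command
CMD ["node", "index.js"]
```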

In some cases, an image from Docker Hub, such as mysql, will be used as-is. However, your application will need its own custom Dockerfile.

Development and production Dockerfiles

You can create two Dockerfile configurations for your application:

  • A development environment

    Typically, it will enable logging, debugging, and remote access. For example, during Node.js development, you might want to start the application using Nodemon to automatically restart it when a file is changed.

  • A production environment

    This will operate in a more efficient and secure mode. For Node.js deployments, standard runtime commands may be used.
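
For example (an assumption rather than a prescribed setup), a development variant of the Node.js Dockerfile sketched above might install Nodemon and swap the start command:

```dockerfile
# Development variant: nodemon restarts the app whenever a file changes
FROM node:14
WORKDIR /home/node/app
RUN npm install -g nodemon
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["nodemon", "index.js"]
```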

Image tags

Docker Hub is to Docker images what GitHub is to Git repositories.

Any image you create can be pushed to Docker Hub. Few developers do this, but it can be practical for deployment purposes or for sharing applications with others.

Images are namespaced with your Docker Hub ID to ensure that no one else can use the same name. They also have a tag, so you can publish multiple versions of the same image, e.g. 1.0, 1.1, 2.0, latest, and so on, using the name:tag format.

Example:

yourname/yourapp:latest, craigbuckler/myapp:1.0.

An official image on Docker Hub does not require a Docker ID, for example mysql (presumably mysql:latest), mysql:5, or mysql:8.0.20.
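
A hedged sketch of building, tagging, and publishing an image (the IDs, names, and versions are illustrative):

```bash
# Build an image and tag it with a Docker Hub ID and version
docker build -t yourname/yourapp:1.0 .

# Add a second tag pointing at the same image
docker tag yourname/yourapp:1.0 yourname/yourapp:latest

# Log in and push both tags to Docker Hub
docker login
docker push yourname/yourapp:1.0
docker push yourname/yourapp:latest
```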

Volumes

A container does not retain state between restarts. That’s usually a good thing: any number of containers can be started from the same base image, and each can handle incoming requests regardless of how or when it was started (see Orchestration below).

However, some containers (such as databases) absolutely must retain data, so Docker provides two types of storage mechanism:

  1. Volumes: a file system managed by Docker
  2. Bind mounts: a file or directory on a host.

Either can be mounted at a directory inside a container, such as /data/db for MongoDB storage.

Volumes are the recommended way to retain data. In some cases they are the only option; for example, MongoDB does not currently support bind mounts on Windows or macOS file systems.

However, bind mounts are useful during development. The application folder on the host OS can be mounted inside a container, so any file change can trigger an application restart, a browser refresh, and so on.
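
A minimal sketch of both mechanisms, assuming the official mongo image and a hypothetical application image:

```bash
# Create a named volume managed by Docker
docker volume create mongodata

# Mount it at MongoDB's data directory so data survives container restarts
docker run -d --name mongo -v mongodata:/data/db mongo

# Development alternative: bind-mount the current host folder into a
# hypothetical application container so file edits are visible inside it
docker run -d --name myapp -v "$PWD":/home/node/app yourname/yourapp
```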

Networks

Any TCP/IP port can be exposed from a container, such as 3306 for MySQL. This allows applications on the host to communicate with the database system at localhost:3306.

Another container cannot communicate with MySQL this way, because localhost resolves to that container itself. For this reason, Docker creates a virtual network and assigns a unique IP address to each running container. One container can then communicate with another using its address.

Unfortunately, a container’s Docker IP address can change every time it starts. An easier option is to create your own Docker virtual network: any container added to it can communicate with another container by name, so mysql:3306 resolves to the correct address.

Container TCP/IP ports can be exposed:

  • Only in the virtual network
  • Between the virtual network and the host.

Suppose you’re running two containers on the same Docker network:

  1. A container called phpapp exposes a web application on port 80
  2. A container named mysql exposes a database on port 3306.

During development, you might want to expose both ports to the host. The application can be opened in a web browser at http://localhost/ (port 80 being the default), and a MySQL client can connect to localhost:3306.

In a production environment, the mysql port does not need to be exposed to the host. The phpapp container can still communicate with mysql:3306, but unscrupulous hackers cannot probe port 3306 on the host.
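
A hedged sketch of this production setup (yourname/phpapp is a hypothetical image; the network name is illustrative):

```bash
# Create a user-defined network; containers on it resolve each other by name
docker network create webnet

# Start the database on the network WITHOUT publishing a port to the host
docker run -d --name mysql --network webnet -e MYSQL_ROOT_PASSWORD=mysecret mysql

# Start the web application, publishing only port 80 to the host;
# it reaches the database at mysql:3306 across the virtual network
docker run -d --name phpapp --network webnet -p 80:80 yourname/phpapp
```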

With careful planning, complex Docker networks can be created to improve security. For example, the mysql and Redis containers could be reachable from phpapp but not from each other.

Docker Compose

A single docker command starts a single container. An application requiring Node.js, NGINX, and MongoDB containers needs three commands, possibly executed in the correct order in three terminals (MongoDB first, then the Node.js application, then NGINX).

Docker Compose is a tool for managing multiple containers with their associated volumes and networks. A single configuration file, usually named docker-compose.yml, defines the containers and can override Dockerfile settings where necessary.

It is practical to create a Docker Compose configuration for development (a minimal sketch follows). You can also create one for production, but there are better options…
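
As a hedged sketch, a Compose file for a Node.js application with MongoDB might look like this (the service, port, and volume names are assumptions):

```yaml
# docker-compose.yml (a minimal sketch)
version: "3"

services:
  app:
    build: .               # build the application image from its Dockerfile
    ports:
      - "3000:3000"        # expose the application to the host
    depends_on:
      - mongo              # start the database container first

  mongo:
    image: mongo
    volumes:
      - mongodata:/data/db # named volume so data survives restarts

volumes:
  mongodata:
```

The whole stack then starts with docker-compose up and stops with docker-compose down.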

Orchestration

Containers are portable and replicable. This way, you can scale a single application by starting the same container on the same server, on another server, or even in a different data center on the other side of the world.

The process of managing, scaling, and maintaining containers is known as orchestration. Docker Compose can be used for rudimentary orchestration, but it is better to use dedicated tools such as:

  • Docker Swarm
  • Kubernetes

Cloud hosts also offer their own orchestration solutions, such as AWS Fargate, Microsoft Azure, and Google Cloud. These are often based on Kubernetes, but may have custom options or tools.

Docker development strategy

How you use Docker containerization is up to you.

Use Docker only for development

Docker is used to replicate the live production environment on development PCs. A production system with Node.js, MongoDB, and NGINX can be emulated in development with three Docker containers.

Use Docker where feasible

Production servers use Docker for some applications. The Node.js process is an ideal candidate, but the MongoDB database could be provided by a cloud service, and NGINX could be installed on the host OS as a load balancer.

The development PC can emulate this environment with three Docker containers. Alternatively, you might run Node.js and NGINX in containers but access a test database on the same MongoDB cloud service, to eliminate compatibility issues.

Concurrent processing

Node.js applications typically run on a single processing thread, so an application running on a server with 16 CPU cores, for example, will leave 15 of them idle! The same is true for other runtimes, although Apache and similar web servers start further threads as requests increase (which has its own resource issues).

A Node.js application can implement clustering or start additional threads using a process manager such as PM2. However, when resources allow, it is often more practical to let Docker start and manage multiple containers (a sketch follows).
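
For example, with a Compose file like the sketch above, several identical containers can be started with one command (the app service name is an assumption):

```bash
# Start four instances of the "app" service defined in docker-compose.yml
docker-compose up -d --scale app=4
```

Note that scaled instances cannot all publish the same fixed host port, so a load balancer such as NGINX is typically placed in front of them.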

Use Docker for development and production

You can use almost identical Docker containers in development and production, though a slightly different startup configuration may be needed for each environment.

This is easier when:

  1. The application’s Dockerfile configures only the production environment.
  2. Docker Compose is used to override that base configuration for development (see the sketch below).

This way, images can be used as-is on the production server, regardless of which orchestration or deployment option you choose.
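
As a hedged sketch of this pattern: docker-compose automatically merges a file named docker-compose.override.yml over docker-compose.yml, so development overrides can live there (the paths, command, and ports are assumptions):

```yaml
# docker-compose.override.yml, read automatically alongside docker-compose.yml
version: "3"

services:
  app:
    command: nodemon index.js  # replace the production start command
    volumes:
      - .:/home/node/app       # bind-mount the source so edits restart the app
    ports:
      - "9229:9229"            # expose the Node.js inspector/debugger port
```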

When not to use Docker

There are few drawbacks to using Docker during development. It allows you to install dependencies and emulate live systems on any PC. You can easily share this isolated environment with others, while preserving your favorite editors and tools.

But Docker isn’t the magic solution to all your production problems! In some cases, Docker may not be appropriate.

  1. Applications are not stateless

Docker can be difficult to retrofit onto an existing monolithic application that was not designed for container-based deployment. Programs that store state in variables or files will need to be modified to use other data stores.

  2. You are using Windows Server

Docker is native on Linux; Windows runs containers in a Hyper-V virtual machine or WSL 2 (effectively another VM). This is additional overhead and, while Docker lets you run Linux dependencies, it is probably more efficient to use a Linux server.

  3. Performance is critical

CPU and RAM limits can be imposed on Docker containers. These are configurable, but an application running directly on the host OS will always be faster.

That said, if your application typically runs on a single CPU core, Docker makes it easy to scale horizontally for parallel processing.

  4. Stability is important

Docker is mature, but it is another dependency to install, update, and manage. Do you have in-house container management expertise?

Because containers can be scaled and restarted automatically, your application may seem more robust. That doesn’t mean it crashes any less often than it used to!

  5. You store mission-critical data

Volumes and bind mounts can store persistent data, but these are more difficult to manage and back up than standard file system options.

  6. You need stronger security

Containers are isolated, but unlike real VMs they are not fully sandboxed from the host OS. Docker provides security options, but it is no substitute for strong security practices.

  7. You’re creating a GUI application

Someone somewhere will use containers to build a cross-platform graphical application, but it’s far from an ideal use of Docker!