Docker

Origin

  • Docker is an open source container project implemented in the Go language by dotCloud, a company founded in 2010 that provided developer services on a Platform as a Service (PaaS) platform. On a PaaS platform all service environments are pre-configured: developers only need to select a service type and upload their code, with no time spent building servers and configuring environments. dotCloud's PaaS platform was well built. It supported almost all major web programming languages and databases, letting developers freely choose whatever languages, databases, and frameworks they needed; setup was simple, and after each round of coding a single command deployed the entire site. Thanks to its multi-level platform design, its applications could in theory run on all kinds of cloud services. Two or three years later, although dotCloud had earned a good reputation in the industry, the PaaS market as a whole was still in its infancy, and dotCloud never saw explosive growth.
  • Docker initially ran mainly on Ubuntu, and later added support for RHEL/CentOS. All the big cloud computing companies, such as Azure, Google, and Amazon, support Docker technology, which has effectively made Docker an important part of the cloud computing field.
  • Docker blurred the boundary between IaaS and PaaS and opened up endless possibilities for cloud computing service models. With its container concept, Docker broke through and rose rapidly, a great initiative in the cloud computing movement.

    https://www.ruanyifeng.com/bl…

Concepts and advantages

  • Docker, as currently defined, is an open source container engine that makes it easy to manage containers. By packaging environments into images and managing those images centrally through the Docker Registry, it builds a convenient, fast "Build, Ship and Run" workflow that unifies the environment and process across development, testing, and deployment, greatly reducing operation and maintenance costs.
  • Docker containers are fast: they can be started and stopped in seconds, much faster than traditional virtual machines. The core problem Docker solves is using containers to provide functionality similar to virtual machines, so that more computing resources can be served with less hardware. Beyond the applications running inside them, containers consume essentially no additional system resources, which preserves application performance, keeps system overhead low, and makes it possible to run thousands of Docker containers on a single host at the same time.
  • Consistent running environment
  • Resources, networks, libraries, and so on are isolated and do not have dependency issues
  • Provides a variety of standardized operations, making it very suitable for automation
  • Lightweight, fast startup and migration
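These advantages come together in the "Build, Ship and Run" workflow mentioned above; a minimal sketch (image name, tag, and ports are illustrative):

docker build -t myorg/myapp:1.0 .            # Build: package the app and its environment into an image
docker push myorg/myapp:1.0                  # Ship: publish the image to a registry
docker run -d -p 8080:8080 myorg/myapp:1.0   # Run: start a container on any Docker host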

Installation

  • CentOS install Docker
[root@localhost ~]# curl -fsSL https://get.docker.com | bash -s docker --mirror aliyun
  • The Docker service is not started automatically after installation; start it manually
[root@localhost ~]# systemctl start docker
  • Add Docker service to boot item
[root@localhost ~]# systemctl enable docker
  • View version number
[root@localhost ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun  2 11:58:10 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:56:35 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Basic components

  • Docker client: The most commonly used Docker client is the Docker command. Docker makes it easy to build and run containers on the Host.
  • Docker Server: Docker Daemon runs on Docker Host and is responsible for creating, running and monitoring containers, building and storing images. By default, Docker daemon can only respond to client requests from the local Host. To allow remote client requests, you need to turn on TCP listening in the configuration file.

    • Edit the configuration file /etc/systemd/system/multi-user.target.wants/docker.service and append -H tcp://0.0.0.0 to the ExecStart line to allow connections from clients at any IP (a sketch of the modified line follows the restart commands below)
    • Restart the Docker daemon
    systemctl daemon-reload
    systemctl restart docker.service
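A sketch of what the modified ExecStart line might look like (the flags besides -H depend on the installed unit file; 2375 is the conventional plain-TCP port):

ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375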
  • If the Docker server's IP is 192.168.9.140, a client on another machine adds the -H parameter to the command line to communicate with the remote server
docker -H 192.168.9.140 info
[root@xdja ~]# docker -H 192.168.9.140 info
Containers: 3
 Running: 3
 Stopped: 0
Images: 3
Server Version: 18.09.7
Storage Driver: devicemapper
 Pool Name: docker-…-67364689-pool
 Pool Blocksize: 65.54kB
 Base Device Size: …
 Udev Sync Supported: true
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
  • Image: a Docker image is the basis for creating a container, similar to a virtual machine snapshot, and can be thought of as a read-only template for the Docker container engine. For example, an image can be a complete CentOS environment, called a CentOS image; it could also be an application with MySQL installed, called a MySQL image; and so on.
  • Container: a Docker container is a running instance created from an image that can be started, stopped, and deleted. Containers are isolated from and invisible to one another, which ensures the security of the platform. Think of a container as a simplified Linux environment; Docker uses containers to run and isolate applications.
  • Repository: The Docker repository is where images are stored centrally. After developers create their own images, they can upload them to Public or Private repositories using the push command. The next time you want to use this image on another machine, you simply retrieve it from the repository.

    The official Docker repository address is
    https://hub.docker.com

  • Docker Host: A physical or virtual machine used to execute Docker daemons and containers.

Image build: create an image containing the environment, program code, and everything else needed for installation and running; a Dockerfile describes this process.

Container startup: a container ultimately runs by pulling the built image and starting the service with a series of run options (such as port mappings, external data mounts, environment variables, etc.). A single container can be run with docker run.

For runs involving more than one container (such as service composition), docker-compose can be used. It makes it easy to run multiple containers as a service (or just one of them) and provides scaling functionality.
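A minimal docker-compose.yml sketch for the single-service case (service name, image, and ports are illustrative):

version: "3"
services:
  web:
    image: wch/tomcat
    ports:
      - "8010:8080"

Running docker-compose up -d in the directory containing this file starts the service.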

Build the image from Dockerfile

  • Pull the CentOS image
[root@localhost docker]# docker pull centos
  • Upload the JDK and Tomcat installation packages
[root@localhost docker]# ll
total 151856
-rw-rw-rw- 1 root root  10559131 Jun 21 17:45 apache-tomcat-8.5.68.tar.gz
-rw-r--r-- 1 root root       696 Jun 22 09:32 Dockerfile
-rw-rw-rw- 1 root root 144935989 Jun 22 09:15 jdk-8u291-linux-x64.tar.gz
  • Create the Dockerfile
[root@localhost wch]# pwd
/home/wch
[root@localhost wch]# mkdir docker
[root@localhost docker]# touch Dockerfile
  • Type the following
# Base image
FROM centos:latest
MAINTAINER wch
# Add the Tomcat and JDK archives from the current directory into /usr/local/ in the image
ADD jdk-8u291-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-8.5.68.tar.gz /usr/local/
# Set environment variables
ENV JAVA_HOME /usr/local/jdk1.8.0_291/
ENV PATH $JAVA_HOME/bin:$PATH
ENV CLASSPATH .:$JAVA_HOME/lib
# Make the Tomcat scripts executable
RUN chmod +x /usr/local/apache-tomcat-8.5.68/bin/*.sh
# Expose the Tomcat port
EXPOSE 8080
# Start Tomcat and tail the log to keep the container in the foreground
CMD /usr/local/apache-tomcat-8.5.68/bin/startup.sh && /bin/bash && tail -f /usr/local/apache-tomcat-8.5.68/logs/catalina.out
  • Build the image and return the image ID on success
[root@localhost docker]# docker build -f /home/wch/docker/Dockerfile -t wch/tomcat .
Sending build context to Docker daemon  155.5MB
Step 1/10 : FROM centos:latest
 ---> 300e315adb2f
Step 2/10 : MAINTAINER wch
 ---> Running in c9ff9c1277b4
Removing intermediate container c9ff9c1277b4
 ---> 3b8b3ffc8af3
Step 3/10 : ADD jdk-8u291-linux-x64.tar.gz /usr/local/
 ---> 988571412bac
Step 4/10 : ADD apache-tomcat-8.5.68.tar.gz /usr/local/
 ---> f160e9207148
Step 5/10 : ENV JAVA_HOME /usr/local/jdk1.8.0_291/
 ---> Running in 4574503f1307
Removing intermediate container 4574503f1307
 ---> af37b9368f59
Step 6/10 : ENV PATH $JAVA_HOME/bin:$PATH
 ---> Running in 30521e475681
Removing intermediate container 30521e475681
 ---> 98760e798091
Step 7/10 : ENV CLASSPATH .:$JAVA_HOME/lib
 ---> Running in 6efa1040eb62
Removing intermediate container 6efa1040eb62
 ---> e50226013e04
Step 8/10 : RUN chmod +x /usr/local/apache-tomcat-8.5.68/bin/*.sh
 ---> Running in 733a8f068adc
Removing intermediate container 733a8f068adc
 ---> 60ffde451605
Step 9/10 : EXPOSE 8080
 ---> Running in 024e2e19af04
Removing intermediate container 024e2e19af04
 ---> 52afaea4fc62
Step 10/10 : CMD /usr/local/apache-tomcat-8.5.68/bin/startup.sh && /bin/bash && tail -f /usr/local/apache-tomcat-8.5.68/logs/catalina.out
 ---> Running in 69e6fea9f1b7
Removing intermediate container 69e6fea9f1b7
 ---> 9b8179770e78
Successfully built 9b8179770e78
Successfully tagged wch/tomcat:latest

The . at the end of the command specifies the build context as the current directory. By default, Docker looks for the Dockerfile in the build context; the -f parameter can also specify the Dockerfile's location explicitly.

docker build -f /home/wch/docker/Dockerfile -t wch/tomcat .

Or the following

cd /home/wch/docker

docker build -t wch/tomcat .
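Because the entire build context is sent to the Docker daemon (155.5MB above), files the Dockerfile does not reference can be excluded with a .dockerignore file in the context directory; a small sketch with illustrative entries:

# .dockerignore — entries are illustrative
*.log
tmp/
.git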

  • View the image of a successful build
[root@localhost docker]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED          SIZE
wch/tomcat                         latest    9b8179770e78   25 minutes ago   584MB
grafana/grafana                    latest    b53df981d3aa   7 days ago       206MB
prom/prometheus                    latest    86ea6f86fc57   4 weeks ago      185MB
quay.io/prometheus/node-exporter   latest    c19ae228f069   3 months ago     26MB
centos                             latest    300e315adb2f   6 months ago     209MB
  • Start the container with the image built
docker run -d -p 8010:8080 wch/tomcat
b43861a53e3206650d57107c869f538cc3384630957fcb8bff1cc40bb92610e0
  • Browser access

  • Check the container
[root@localhost ~]# docker exec -it b43861a53e32 /bin/bash
[root@b43861a53e32 /]# cd /usr/local/
[root@b43861a53e32 local]# ls
apache-tomcat-8.5.68  bin  etc  games  include  jdk1.8.0_291  lib  lib64  libexec  sbin  share  src

RUN vs CMD vs ENTRYPOINT

  • RUN: executes the command and creates a new image layer. RUN is often used to install software packages.
  • CMD: sets the command and its arguments to be executed by default after the container starts, but CMD can be replaced by command-line arguments that follow docker run.

    • If docker run specifies another command, the default command specified by CMD is ignored
    • If there are multiple CMD instructions in a Dockerfile, only the last one takes effect
  • ENTRYPOINT: configures the command to run when the container starts.

    • CMD can provide default arguments for ENTRYPOINT, and arguments given on the docker run command line replace those CMD defaults
    • ENTRYPOINT in shell format ignores any arguments provided by CMD or docker run
  • Shell format: when executed, the command is wrapped as /bin/sh -c [command] and parsed by the shell.

    • RUN apt-get install python3
    • CMD echo “hello world”
    • ENTRYPOINT echo “hello world”
  • Exec format: when executed, [command] is invoked directly and is not parsed by a shell.

    • RUN ["apt-get", "install", "python3"]
    • CMD ["/bin/echo", "hello world"]
    • ENTRYPOINT ["/bin/echo", "hello world"]
    • ENTRYPOINT ["/bin/echo", "hello"] combined with CMD ["world"]
    • ENV name Cloud Man combined with ENTRYPOINT ["/bin/sh", "-c", "echo hello, $name"]

For CMD and ENTRYPOINT the exec format is recommended, because the instructions are more readable and easier to understand. RUN can use either format.
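A small Dockerfile sketch of the exec-format interplay described above (built on the /bin/echo examples; the image name used at run time is illustrative):

FROM busybox:latest
ENTRYPOINT ["/bin/echo", "hello"]
CMD ["world"]

# docker run <image>          prints "hello world"  (CMD supplies the default argument)
# docker run <image> docker   prints "hello docker" (the command-line argument replaces CMD)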

Distributing images

Using the Public Registry

  • Docker Hub: first register an account through the web page, then log in from the command line
[root@localhost ~]# docker login -u wholegale39
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
  • View local image
[root@localhost ~]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED        SIZE
wch/tomcat                         latest    9b8179770e78   5 hours ago    584MB
grafana/grafana                    latest    b53df981d3aa   7 days ago     206MB
prom/prometheus                    latest    86ea6f86fc57   4 weeks ago    185MB
quay.io/prometheus/node-exporter   latest    c19ae228f069   3 months ago   26MB
centos                             latest    300e315adb2f   6 months ago   209MB
  • Modify the image name
[root@localhost ~]# docker tag wch/tomcat wholegale39/tomcat
[root@localhost ~]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED        SIZE
wch/tomcat                         latest    9b8179770e78   5 hours ago    584MB
wholegale39/tomcat                 latest    9b8179770e78   5 hours ago    584MB
grafana/grafana                    latest    b53df981d3aa   7 days ago     206MB
prom/prometheus                    latest    86ea6f86fc57   4 weeks ago    185MB
quay.io/prometheus/node-exporter   latest    c19ae228f069   3 months ago   26MB
centos                             latest    300e315adb2f   6 months ago   209MB
  • Upload the image
[root@localhost ~]# docker push wholegale39/tomcat:latest
The push refers to repository [docker.io/wholegale39/tomcat]
711749be7df9: Pushed
579be2cb5f3b: Pushed
015815b60df5: Pushed
2653d992f4ef: Mounted from library/centos
latest: digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3 size: 1163
  • View the image after success

  • Download and use this image; any user can pull and use it
[root@localhost ~]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED        SIZE
wch/tomcat                         latest    9b8179770e78   6 hours ago    584MB
wholegale39/tomcat                 latest    9b8179770e78   6 hours ago    584MB
grafana/grafana                    latest    b53df981d3aa   7 days ago     206MB
prom/prometheus                    latest    86ea6f86fc57   4 weeks ago    185MB
quay.io/prometheus/node-exporter   latest    c19ae228f069   3 months ago   26MB
centos                             latest    300e315adb2f   6 months ago   209MB
[root@localhost ~]# docker rmi wholegale39/tomcat
Untagged: wholegale39/tomcat:latest
Untagged: wholegale39/tomcat@sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
[root@localhost ~]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED        SIZE
wch/tomcat                         latest    9b8179770e78   6 hours ago    584MB
grafana/grafana                    latest    b53df981d3aa   7 days ago     206MB
prom/prometheus                    latest    86ea6f86fc57   4 weeks ago    185MB
quay.io/prometheus/node-exporter   latest    c19ae228f069   3 months ago   26MB
centos                             latest    300e315adb2f   6 months ago   209MB
[root@localhost ~]# docker pull wholegale39/tomcat
Using default tag: latest
latest: Pulling from wholegale39/tomcat
Digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
Status: Downloaded newer image for wholegale39/tomcat:latest
[root@localhost ~]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED        SIZE
wch/tomcat                         latest    9b8179770e78   6 hours ago    584MB
wholegale39/tomcat                 latest    9b8179770e78   6 hours ago    584MB
grafana/grafana                    latest    b53df981d3aa   7 days ago     206MB
prom/prometheus                    latest    86ea6f86fc57   4 weeks ago    185MB
quay.io/prometheus/node-exporter   latest    c19ae228f069   3 months ago   26MB
centos                             latest    300e315adb2f   6 months ago   209MB

Setting up the local Registry

  • Set up the local Registry service
docker run -d -p 5000:5000 -v /home/wch/localRegistry:/var/lib/registry registry
Unable to find image 'registry:latest' locally
latest: Pulling from library/registry
ddad3d7c1e96: Pull complete
6eda6749503f: Pull complete
363ab70c2143: Pull complete
5b94580856e6: Pull complete
12008541203a: Pull complete
Digest: sha256:aba2bfe9f0cff1ac0618ec4a54bfefb2e685bbac67c8ebaf3b6405929b3e616f
Status: Downloaded newer image for registry:latest
b7d56c751422ec434dd5217db4afac626fcf452b2d86554ea08126d8ee226cfb
[root@localhost wch]# docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED         STATUS         PORTS                                        NAMES
b7d56c751422   registry                                  "/entrypoint.sh /etc…"   8 seconds ago   Up 4 seconds   0.0.0.0:5000->5000/tcp                       happy_mclean
b43861a53e32   wch/tomcat                                "/bin/sh -c '/usr/lo…"   6 hours ago     Up 6 hours     0.0.0.0:8010->8080/tcp                       inspiring_rubin
2649b0f316c3   quay.io/prometheus/node-exporter:latest   "/bin/node_exporter…"    5 days ago      Up 24 hours                                                 node_exporter
314026ddbcc3   grafana/grafana:latest                    "/run.sh"                5 days ago      Up 24 hours    0.0.0.0:26->26/tcp, 0.0.0.0:3000->3000/tcp   grafana
407fd7fc14a6   prom/prometheus:latest                    "/bin/prometheus --c…"   5 days ago      Up 24 hours    8086/tcp, 0.0.0.0:9090->9090/tcp             prometheus
  • Tag the image with the registry address
[root@localhost docker]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED        SIZE
wch/tomcat                         latest    9b8179770e78   6 hours ago    584MB
wholegale39/tomcat                 latest    9b8179770e78   6 hours ago    584MB
grafana/grafana                    latest    b53df981d3aa   7 days ago     206MB
prom/prometheus                    latest    86ea6f86fc57   4 weeks ago    185MB
registry                           latest    1fd8e1b0bb7e   2 months ago   26.2MB
quay.io/prometheus/node-exporter   latest    c19ae228f069   3 months ago   26MB
centos                             latest    300e315adb2f   6 months ago   209MB
docker tag wholegale39/tomcat 192.168.9.140:5000/wholegale39/tomcat
[root@localhost docker]# docker images
REPOSITORY                              TAG       IMAGE ID       CREATED        SIZE
192.168.9.140:5000/wholegale39/tomcat   latest    9b8179770e78   6 hours ago    584MB
wch/tomcat                              latest    9b8179770e78   6 hours ago    584MB
wholegale39/tomcat                      latest    9b8179770e78   6 hours ago    584MB
grafana/grafana                         latest    b53df981d3aa   7 days ago     206MB
prom/prometheus                         latest    86ea6f86fc57   4 weeks ago    185MB
registry                                latest    1fd8e1b0bb7e   2 months ago   26.2MB
quay.io/prometheus/node-exporter        latest    c19ae228f069   3 months ago   26MB
centos                                  latest    300e315adb2f   6 months ago   209MB
  • Upload the image
[root@localhost docker]# docker push 192.168.9.140:5000/wholegale39/tomcat:latest
The push refers to repository [192.168.9.140:5000/wholegale39/tomcat]
711749be7df9: Pushed
579be2cb5f3b: Pushed
015815b60df5: Pushed
2653d992f4ef: Pushed
latest: digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3 size: 1163
  • Download and use this image; any user on the intranet can pull and use it
[root@localhost docker]# docker images
REPOSITORY                              TAG       IMAGE ID       CREATED        SIZE
192.168.9.140:5000/wholegale39/tomcat   latest    9b8179770e78   7 hours ago    584MB
wch/tomcat                              latest    9b8179770e78   7 hours ago    584MB
wholegale39/tomcat                      latest    9b8179770e78   7 hours ago    584MB
grafana/grafana                         latest    b53df981d3aa   7 days ago     206MB
prom/prometheus                         latest    86ea6f86fc57   4 weeks ago    185MB
registry                                latest    1fd8e1b0bb7e   2 months ago   26.2MB
quay.io/prometheus/node-exporter        latest    c19ae228f069   3 months ago   26MB
centos                                  latest    300e315adb2f   6 months ago   209MB
[root@localhost docker]# docker rmi 192.168.9.140:5000/wholegale39/tomcat
Untagged: 192.168.9.140:5000/wholegale39/tomcat:latest
Untagged: 192.168.9.140:5000/wholegale39/tomcat@sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
[root@localhost docker]# docker pull 192.168.9.140:5000/wholegale39/tomcat
Using default tag: latest
latest: Pulling from wholegale39/tomcat
Digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
Status: Downloaded newer image for 192.168.9.140:5000/wholegale39/tomcat:latest
[root@localhost docker]# docker images
REPOSITORY                              TAG       IMAGE ID       CREATED        SIZE
192.168.9.140:5000/wholegale39/tomcat   latest    9b8179770e78   7 hours ago    584MB
wch/tomcat                              latest    9b8179770e78   7 hours ago    584MB
wholegale39/tomcat                      latest    9b8179770e78   7 hours ago    584MB
grafana/grafana                         latest    b53df981d3aa   7 days ago     206MB
prom/prometheus                         latest    86ea6f86fc57   4 weeks ago    185MB
registry                                latest    1fd8e1b0bb7e   2 months ago   26.2MB
quay.io/prometheus/node-exporter        latest    c19ae228f069   3 months ago   26MB
centos                                  latest    300e315adb2f   6 months ago   209MB
  • View the Image information in Registry
[root@localhost docker]# curl http://192.168.9.140:5000/v2/_catalog
{"repositories":["wholegale39/tomcat"]}
[root@localhost docker]# curl http://192.168.9.140:5000/v2/wholegale39/tomcat/tags/list
{"name":"wholegale39/tomcat","tags":["latest"]}
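Note that the Docker daemon talks to registries over HTTPS by default; to push to or pull from a plain-HTTP registry like this one, each client host normally has to whitelist the address first. A sketch:

# /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.9.140:5000"]
}
# then restart the daemon
systemctl restart docker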

Common commands

  • View the currently running container
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE      COMMAND                  CREATED        STATUS        PORTS                    NAMES
b7d56c751422   registry   "/entrypoint.sh /etc…"   25 hours ago   Up 24 hours   0.0.0.0:5000->5000/tcp   happy_mclean
  • View containers in all states
[root@localhost ~]# docker ps -a
CONTAINER ID   IMAGE        COMMAND                  CREATED        STATUS                      PORTS                    NAMES
b7d56c751422   registry     "/entrypoint.sh /etc…"   25 hours ago   Up 24 hours                 0.0.0.0:5000->5000/tcp   happy_mclean
b43861a53e32   wch/tomcat   "/bin/sh -c '/usr/lo…"   31 hours ago   Exited (137) 24 hours ago                            inspiring_rubin
  • Enter a container
[root@localhost ~]# docker exec -it CONTAINERID /bin/bash
  • Start the container
[root@localhost ~]# docker start CONTAINERID
  • Stop the container
[root@localhost ~]# docker stop CONTAINERID
  • Restart the container
[root@localhost ~]# docker restart CONTAINERID
  • View the logs
[root@localhost ~]# docker logs -f CONTAINERID
  • Pause a container
[root@localhost ~]# docker pause CONTAINERID
  • Restore the paused container
[root@localhost ~]# docker unpause CONTAINERID
  • Delete containers that are not running
[root@localhost ~]# docker rm CONTAINERID
  • Deletes the specified unused image
[root@localhost ~]# docker rmi IMAGEID
  • Delete all unused images
[root@xdja wch]# docker image prune -a
WARNING! This will remove all images without at least one container associated to them.
Are you sure you want to continue? [y/N] y
  • Save the image
[root@localhost ~]# docker save -o tomcat wholegale39/tomcat
  • Other machines load the image
[root@xdja wch]# docker load -i tomcat
2653d992f4ef: Loading layer [==================================================>]  216.5MB/216.5MB
015815b60df5: Loading layer [==================================================>]  360.4MB/360.4MB
579be2cb5f3b: Loading layer [==================================================>]  15.27MB/15.27MB
711749be7df9: Loading layer [==================================================>]  65.02kB/65.02kB
Loaded image: wholegale39/tomcat:latest
  • Batch-delete orphan volumes
[root@localhost ~]# docker volume rm $(docker volume ls -q)
  • Copy files between the host and a container
[root@localhost ~]# docker cp /home/wch containerID:/home/
[root@localhost ~]# docker cp containerID:/home/ /home/wch

Docker network

  • Check the network
[root@localhost docker]# docker network ls
NETWORK ID          NAME                         DRIVER              SCOPE
0a6e7337301f        bridge                       bridge              local
e558d63e1ee8        host                         host                local
c7da7be15130        none                         null                local
4965012c623e        prometheus_grafana_monitor   bridge              local

none network: the container has only the lo interface; applications with high security requirements can use it

Host network: the container shares the network stack of the Docker host, so its network configuration is exactly the same as the host's. The biggest benefit is better performance, but port conflicts must be considered

Bridge network: the Docker daemon creates a virtual Ethernet bridge, docker0, that automatically forwards packets between any interfaces attached to it. By default, the daemon creates a pair of peer interfaces, setting one as the container's eth0 and placing the other in the host's namespace, thereby connecting all containers on the host to an internal network. The daemon also assigns the container an IP address and subnet from the bridge's private address space. Bridge mode is Docker's default setting
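Besides the default docker0 bridge, user-defined bridge networks can be created and containers attached to them; a sketch (network and container names are illustrative):

# Create a user-defined bridge network and attach a container to it
docker network create --driver bridge my_net
docker run -d --network my_net --name web httpd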

  • Dynamic port mapping: container port 80 is mapped to a dynamically assigned host port
[root@localhost docker]# docker run -p 80 httpd
  • Specify a port map that maps port 80 to port 8080 on the host
[root@localhost docker]# docker run -p 8080:80 httpd
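For a dynamic mapping, docker port shows which host port was actually assigned; a sketch (the container ID is illustrative):

[root@localhost docker]# docker port <containerID> 80
# e.g. 0.0.0.0:32768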

For each mapped port, the host starts a docker-proxy process to handle the traffic to the container

[root@localhost docker]# ps -ef | grep docker-proxy
root      910 16786  0 Jun23 ?      00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 5000 -container-ip 172.17.0.1 -container-port 5000
root    17024 16786  0 Jun22 ?      00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 3000 -container-ip 172.26.0.2 -container-port 3000
root    17038 16786  0 Jun22 ?      00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 26 -container-ip 172.26.0.2 -container-port 26
root    17068 16786  0 Jun22 ?      00:01:57 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9090 -container-ip 172.26.0.3 -container-port 9090
root    27721 17810  0 09:59 pts/0  00:00:00 grep --color=auto docker-proxy

Cross-host network solutions include:

1. Docker native overlay and MacVLAN;

2. Third-party programs: Common ones include Flannel, Weave and Calico;

An overlay network uses tunneling: data packets are encapsulated in UDP for transmission, which adds CPU and network overhead for encapsulation and decapsulation. Although almost all overlay solutions use the Linux kernel's VXLAN module as the underlying layer to minimize this overhead, it still exists compared with an underlay network. Therefore macvlan, Flannel host-gw, and Calico perform better than Docker overlay, Flannel VXLAN, and Weave.

Compared with underlay, overlay can support more layer-2 network segments, makes better use of the existing network, and avoids exhausting physical switch MAC tables; choosing a solution therefore requires weighing all of these factors.

Docker storage

Docker provides two resources for the container to store data:

1. The image layers and container layer managed by the storage driver.

2. Data Volume.

Storage driver

A container consists of a writable container layer on top, where the container's data lives, and several read-only image layers below. The biggest feature of this layered structure is copy-on-write:

1. New data is stored directly in the top container layer.

2. To modify the existing data, the data will be copied from the mirror layer to the container layer first, and the modified data will be stored directly in the container layer, while the mirror layer will remain unchanged.

3. If multiple layers have files with the same name, the user can only see the files in the top layer.

The layered structure makes the creation, sharing, and distribution of images and containers very efficient, thanks to the Docker storage driver. It is the storage driver that stacks multiple layers of data and presents them to the user as a single unified view.

Docker supports a variety of storage drivers, including AUFS, Device Mapper, Btrfs, OverlayFS, VFS, and ZFS. All of them implement the layered architecture while having their own characteristics. For Docker users, choosing exactly which storage driver to use is a challenge because:

1. No driver fits all scenarios.

2. Drivers themselves are evolving and iterating rapidly.

Officially, though, Docker has a simple answer: Use the Linux distribution’s default Storage Driver first.

  • CentOS 7.4 system
[root@localhost docker]# docker info
Containers: 5
 Running: 3
 Paused: 1
 Stopped: 1
Images: 14
Server Version: 18.09.7
Storage Driver: devicemapper
  • Ubuntu 18 system
wch@ubuntu:~$ sudo docker info
Client:
 Debug Mode: false
Server:
 Containers: 0
  Running: 0
  Paused: 0
 Images: 0
 Server Version: 19.03.13
 Storage Driver: overlay2

For some containers, placing data directly in a layer maintained by the Storage Driver is a good choice, such as those with stateless applications. Stateless means that the container has no data to persist and can be created directly from the image at any time.

BusyBox, for example, is a toolbox that is started to execute commands such as wget and ping. There is no need to save the data for later use. When the container is deleted, the working data stored in the container layer will also be deleted.

However, this is not appropriate for other applications where there is a need to persist data, load existing data when the container is started, and retain new data when the container is destroyed. In other words, these containers are stateful.

This requires the use of another storage mechanism of Docker: Data Volume.

Data Volume

Data Volume is essentially a directory or file in the Docker Host file system that can be mounted directly into the container’s file system.

Data Volume has the following features:

1. Data Volume is a directory or file, not an unformatted disk (block device).

2. The container can read and write data in the volume.

3. Volume data persists even after the container that used it has been destroyed.

Docker provides two types of volumes: bind mount and docker managed volume.

  • bind mount

    • Bind mount is to mount a directory or file that already exists on the host into the container.
    • The format of -v is

      <host path>:<container path>

      /usr/local/apache2/htdocs is where the Apache server keeps its static files. Because /usr/local/apache2/htdocs already exists in the container, the original data is hidden and replaced by the data in the host's /home/wch/docker/httpd/ directory
[root@localhost httpd]# pwd
/home/wch/docker/httpd
[root@localhost httpd]# ll
total 4
-rw-r--r-- 1 root root 72 Jun 24 15:17 index.html
[root@localhost httpd]# cat index.html
<html><body><h1>This is a file in host file system !</h1></body></html>
[root@localhost httpd]# docker run -d -p 80:80 -v /home/wch/docker/httpd:/usr/local/apache2/htdocs httpd
275953f4f8bcc276dc83c63147a5d05582c4b216eb80855d12a1eb3d7da5baae
[root@localhost httpd]# curl 127.0.0.1:80
<html><body><h1>This is a file in host file system !</h1></body></html>
[root@localhost httpd]# echo "update index page" > index.html
[root@localhost httpd]# cat index.html
update index page
[root@localhost httpd]# curl 127.0.0.1:80
update index page
# The default mount is read-write
[root@localhost httpd]# docker run -d -p 80:80 -v /home/wch/docker/httpd:/usr/local/apache2/htdocs httpd
# With :ro the mount is read-only; the container cannot modify the bind-mounted data
[root@localhost httpd]# docker run -d -p 80:80 -v /home/wch/docker/httpd:/usr/local/apache2/htdocs:ro httpd
  • docker managed volume

The main difference between a docker managed volume and a bind mount is that you do not need to specify a mount source; specifying the mount point is enough

If the mount point points to an existing directory in the container, the original data is copied into the volume on the host

[root@localhost httpd]# docker run -d -p 80:80 -v /usr/local/apache2/htdocs httpd
6c0c6c8e15ebc5e99ff53d60a9e59994dc79909b80f1020f15271e9012958c64
[root@localhost httpd]# docker inspect 6c0c6c8e15eb
"Mounts": [
    {
        "Type": "volume",
        "Name": "02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154",
        "Source": "/var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data",
        "Destination": "/usr/local/apache2/htdocs",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]
[root@localhost httpd]# docker volume ls
DRIVER              VOLUME NAME
local               02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154
local               0449d527e57c9b7b48789449fb02ae9c598db4d982a6c9af4f56cddea57a1b49
[root@localhost httpd]# docker inspect 02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154
[
    {
        "CreatedAt": "2021-06-24T15:35:00+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data",
        "Name": "02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154",
        "Options": null,
        "Scope": "local"
    }
]
[root@localhost httpd]# ls -l /var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data
total 4
-rw-r--r-- 1 mysql mysql 45 Jun 12  2007 index.html
[root@localhost httpd]# cat /var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data/index.html
<html><body><h1>It works!</h1></body></html>
[root@localhost httpd]# curl 127.0.0.1:80
<html><body><h1>It works!</h1></body></html>
# docker rm -v removes the volumes used by the container, but only if no other container mounts those volumes

Data sharing

  • The container shares data with the host

    • Bind mount: directly mount the directory to be shared into the container
    • docker managed volume
    [root@localhost httpd]# curl 127.0.0.1:80
    <html><body><h1>It works!</h1></body></html>
    [root@localhost httpd]# docker cp /home/wch/docker/httpd/index.html 6c0c6c8e15eb:/usr/local/apache2/htdocs/
    [root@localhost httpd]# curl 127.0.0.1:80
    This is a new index page for web cluster
    [root@localhost httpd]# cat /var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data/index.html
    This is a new index page for web cluster
  • Data is shared between containers
[root@localhost httpd]# docker run --name web1 -d -p 80 -v /home/wch/docker/httpd/:/usr/local/apache2/htdocs httpd
2126366ffe2cb5aca7b97012b41779b7963ca41c4afd797a992d8a3c2e471ab4
[root@localhost httpd]# docker run --name web2 -d -p 80 -v /home/wch/docker/httpd/:/usr/local/apache2/htdocs httpd
03a859cfda48a472ff28c313638c6054633e30e7ed77d17d0919a6e95ecd164f
[root@localhost httpd]# docker run --name web3 -d -p 80 -v /home/wch/docker/httpd/:/usr/local/apache2/htdocs httpd
27483f6f7ccccce086594501d21e0b9eef1fdcc9f3145dd1a36e0c9c7910322a
[root@localhost httpd]# docker ps
CONTAINER ID   IMAGE   COMMAND              CREATED          STATUS          PORTS                  NAMES
27483f6f7ccc   httpd   "httpd-foreground"   8 seconds ago    Up 5 seconds    0.0.0.0:1026->80/tcp   web3
03a859cfda48   httpd   "httpd-foreground"   17 seconds ago   Up 14 seconds   0.0.0.0:1025->80/tcp   web2
2126366ffe2c   httpd   "httpd-foreground"   29 seconds ago   Up 26 seconds   0.0.0.0:1024->80/tcp   web1
[root@localhost httpd]# curl 127.0.0.1:1024
update index page
[root@localhost httpd]# curl 127.0.0.1:1025
update index page
[root@localhost httpd]# curl 127.0.0.1:1026
update index page
[root@localhost httpd]# echo "This is a new index page for web cluster" > index.html
[root@localhost httpd]# curl 127.0.0.1:1024
This is a new index page for web cluster
[root@localhost httpd]# curl 127.0.0.1:1025
This is a new index page for web cluster
[root@localhost httpd]# curl 127.0.0.1:1026
This is a new index page for web cluster
  • volume container

    • Bind mount, which holds a static file for the Web Server
    • A docker managed volume that holds some utilities (empty for now, just for an example)
# docker create is used because the purpose of a volume container is to provide data; it does not need to be running
[root@localhost httpd]# docker create --name vc_data -v /home/wch/docker/httpd:/usr/local/apache2/htdocs -v /other/useful/tools busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
b71f96345d44: Pull complete
Digest: sha256:930490f97e5b921535c153e0e7110d251134cc4b72bbb8133c6a5065cc68580d
Status: Downloaded newer image for busybox:latest
948a7dd94baf96c7b6291d4830df7d314a65680c687bad52ece2432e1190ee55
[root@localhost httpd]# docker inspect vc_data
"Mounts": [
    {
        "Type": "bind",
        "Source": "/home/wch/docker/httpd",
        "Destination": "/usr/local/apache2/htdocs",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Type": "volume",
        "Name": "9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f",
        "Source": "/var/lib/docker/volumes/9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f/_data",
        "Destination": "/other/useful/tools",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]
# Start a container with --volumes-from vc_data to mount the volume container's volumes
[root@localhost httpd]# docker run --name web4 -d -p 80 --volumes-from vc_data httpd
c9e05ea4c552687c79f00698ae56f1ab2c4654192105db309d09dd41eb3fcbee
[root@localhost httpd]# docker inspect web4
"Mounts": [
    {
        "Type": "bind",
        "Source": "/home/wch/docker/httpd",
        "Destination": "/usr/local/apache2/htdocs",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Type": "volume",
        "Name": "9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f",
        "Source": "/var/lib/docker/volumes/9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f/_data",
        "Destination": "/other/useful/tools",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],
  • data-packed volume container

The idea is to package the data into an image and then share it through Docker Managed Volume

Containers created this way read the data from the volume. A data-packed volume container is self-contained and does not rely on the host to provide data, so it is highly portable and well suited to scenarios that use only static data, such as application configuration information and a web server's static files.

[root@localhost httpd]# pwd
/home/wch/httpd
[root@localhost httpd]# ll
total 4
-rw-r--r-- 1 root root 91 Jun 24 17:00 Dockerfile
drwxr-xr-x 2 root root 23 Jun 24 16:57 htdocs
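The Dockerfile's contents are not shown above; a sketch reconstructed from the three build steps below:

# Package the static files into the image and expose them as a volume
FROM busybox:latest
ADD htdocs /usr/local/apache2/htdocs
VOLUME /usr/local/apache2/htdocs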
[root@localhost httpd]# docker build -t datapacked .
Sending build context to Docker daemon  3.584kB
Step 1/3 : FROM busybox:latest
 ---> 69593048aa3a
Step 2/3 : ADD htdocs /usr/local/apache2/htdocs
 ---> aa1f4298814e
Step 3/3 : VOLUME /usr/local/apache2/htdocs
 ---> Running in 71362c795108
Removing intermediate container 71362c795108
 ---> cb8ced11e74c
Successfully built cb8ced11e74c
Successfully tagged datapacked:latest
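The run below mounts volumes from a container named vc_data2, whose creation is not shown; presumably it was created from the image just built, along the lines of:

docker create --name vc_data2 datapacked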
[root@localhost httpd]# docker run -d -p 80 --volumes-from vc_data2 httpd
b9da47ebcf64477c77fed8bb85613765485624b20161daf1508b56e326880447
[root@localhost httpd]# curl 127.0.0.1:1028
This is a new index page for web cluster

Multi-host management

Docker Machine is a tool that allows you to install Docker on a virtual host and manage the host using the docker-machine command.

Docker Machine can also centrally manage all Docker hosts, for example, quickly install Docker on 100 servers.

Installation

[root@localhost httpd]# curl -L https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-`uname -s`-`uname -m` > /tmp/docker-machine && chmod +x /tmp/docker-machine && sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
[root@localhost httpd]# docker-machine -v
docker-machine version 0.16.2, build bd45ab13
# Install bash completion
[root@localhost httpd]# yum -y install bash-completion

Configuration management

  • Check the current Machine
[root@localhost httpd]# docker-machine ls
NAME   ACTIVE   DRIVER   STATE   URL   SWARM   DOCKER   ERRORS
  • Configure password-free login
[root@localhost httpd]# ssh-keygen
# Copy the key to client1
[root@localhost httpd]# ssh-copy-id 192.168.9.31
# SSH login to client1 now needs no password
  • Create the machine
[root@localhost httpd]# docker-machine create --driver generic --generic-ip-address=192.168.9.31 client1
Running pre-create checks...
Creating machine...
(client1) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with centos...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env client1
  • Check the current Machine
[root@localhost httpd]# docker-machine ls
NAME      ACTIVE   DRIVER    STATE     URL                        SWARM   DOCKER        ERRORS
client1   -        generic   Running   tcp://192.168.9.31:2376           v18.06.3-ce
  • View all of client1's environment variables
[root@localhost docker]# docker-machine env client1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.9.31:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/client1"
export DOCKER_MACHINE_NAME="client1"
# Run this command to configure your shell:
# eval $(docker-machine env client1)
  • Switch to client1 for action
[root@localhost docker]# eval $(docker-machine env client1)
[root@localhost docker]# docker images
REPOSITORY           TAG      IMAGE ID       CREATED      SIZE
wholegale39/tomcat   latest   9b8179770e78   2 days ago   584MB
  • Other commands
[root@localhost docker]# docker-machine version client1
18.06.3-ce
[root@localhost docker]# docker-machine status client1
Running

Container monitoring

Built-in command tools

[root@client1 docker]# docker ps
[root@client1 docker]# docker container ls
[root@localhost ~]# docker container ls -a
[root@localhost ~]# docker container top containerID
[root@localhost ~]# docker stats
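docker stats streams live usage continuously; for a one-shot snapshot (handy in scripts), the --no-stream flag can be added:

[root@localhost ~]# docker stats --no-stream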

sysdig

Sysdig is an open source system analysis tool developed by Sysdig Cloud, implemented mainly in Lua. Sysdig can filter and analyze the state and behavior of a running system, making it more capable than many other open source tools; it can be thought of as a combination of strace + tcpdump + lsof + htop + iftop and other system analysis tools.

  • Installation
[root@localhost ~]# docker run -i -t --name sysdig --privileged -v /var/run/docker.sock:/host/var/run/docker.sock -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro sysdig/sysdig
root@6d57b899e866:/# csysdig

Weave Scope

Weave Scope is used to monitor, visualize, and manage Docker and Kubernetes.

Weave Scope automatically generates a map of the relationships between containers, helping you understand those relationships and monitor containerized, microservice-oriented applications.

  • Installation
# Download the latest version
[root@localhost ~]# sudo curl -L https://github.com/weaveworks/scope/releases/download/latest_release/scope -o /usr/local/bin/scope
[root@localhost ~]# sudo chmod a+x /usr/local/bin/scope
# scope launch starts Weave Scope as a container; adding a username and password improves security
[root@localhost ~]# scope launch -app.basicAuth -app.basicAuth.password 123456 -app.basicAuth.username user -probe.basicAuth -probe.basicAuth.password 123456 -probe.basicAuth.username user
  • Browse to http://[Host IP]:4040/; containers can be operated on arbitrarily from there, which is very powerful

  • Multi-host monitoring: after Scope has been installed on each machine with the command above
# Stop the Weave Scope container service
[root@client1 ~]# docker stop 1215c4a1d22e
# Launch Scope on multiple machines
[root@client1 ~]# scope launch 192.168.9.31 192.168.9.140
5023feeda6c0e299c6c56cf7f1e1a4be1c9b8532a591f1aa326fbf8c75c4d561
Scope probe started
Weave Scope is listening at the following URL(s):

cAdvisor

  • Please refer to the Prometheus&Grafana Performance Monitoring article for details

Prometheus

  • Please refer to the Prometheus&Grafana Performance Monitoring article for details

Comparison of monitoring tools

Concern / Solution                   docker ps/top/stats   sysdig   Weave Scope   cAdvisor   Prometheus
Ease of deployment                   sssss                 sssss    ssss          sssss      sss
Data detail                          sss                   sssss    sssss         sss        sssss
Multi-host monitoring                none                  none     sssss         none       sssss
Alerting                             none                  none     none          none       ssss
Monitoring non-container resources   none                  sss      sss           ss         sssss

Each s represents one degree of strength; more s's mean stronger support.

Container log management

Docker logs

  • attach: you cannot see earlier logs, only output produced after attaching, and detaching is awkward
[root@localhost ~]# docker attach containerID
  • logs
[root@localhost ~]# docker logs -f containerID

Docker logging driver

Sending container logs to STDOUT and STDERR is Docker’s default logging behavior. In fact, Docker provides a variety of logging mechanisms to help users extract logging information from the running container, called logging drivers.

Docker's default logging driver is json-file.

[root@localhost ~]# cat /var/lib/docker/containers/03a859cfda48a472ff28c313638c6054633e30e7ed77d17d0919a6e95ecd164f/03a859cfda48a472ff28c313638c6054633e30e7ed77d17d0919a6e95ecd164f-json.log
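The logging driver and its options can also be set per container at run time; a sketch that caps json-file log size (the values are illustrative):

docker run -d --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 httpd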

ELK

Filebeat is a lightweight shipper for forwarding and centralizing log data. Filebeat monitors the log files or locations you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing. Other members of the Beats family collect network traffic data, CPU and memory usage at the system, process, and file-system levels, Windows event logs, audit logs, system runtime data, and so on.

Logstash reads raw logs, analyzes and filters them, and forwards them to another component (such as Elasticsearch) for indexing or storage. Logstash supports a rich variety of input and output types and can handle logs from a wide range of applications; it runs on the JVM and its resource consumption is comparatively high.

ElasticSearch is a full-text search engine for near real-time queries. Elasticsearch was designed to be able to handle and search huge amounts of log data.

Kibana is a JavaScript-based web graphical interface designed specifically to visualize Elasticsearch data. Kibana can query Elasticsearch and display the results with rich charts, and users can create dashboards to monitor system logs.

Filebeat → Kafka cluster → Logstash cluster → Elasticsearch cluster → Kibana

  • Download the project with git clone
[root@localhost docker-elk]# git clone https://github.com/deviantony/docker-elk.git
  • Installation
[root@localhost docker-elk]# docker-compose up
Building elasticsearch
Sending build context to Docker daemon  3.584kB
Step 1/2 : ARG ELK_VERSION
Step 2/2 : FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
7.13.2: Pulling from elasticsearch/elasticsearch
ddf49b9115d7: Already exists
815a15889ec1: Pull complete
ba5d33fc5cc5: Pull complete
976d4f887b1a: Extracting [==============>]  104.7MB/354.9MB
9b5ee4563932: Download complete
ef11e8f17d0c: Download complete
3c5ad4db1e24: Download complete
  • Reset the passwords
[root@localhost docker-elk]# docker-compose exec -T elasticsearch bin/elasticsearch-setup-passwords auto --batch
Changed password for user apm_system
PASSWORD apm_system = 4OHYCFm7yZhsVG5tQDfl
Changed password for user kibana_system
PASSWORD kibana_system = oksG2cfrYEFDFqzPLpu3
Changed password for user kibana
PASSWORD kibana = oksG2cfrYEFDFqzPLpu3
Changed password for user logstash_system
PASSWORD logstash_system = nHU6m8iuBoGKpHI4Yt1p
Changed password for user beats_system
PASSWORD beats_system = YTjhnmgKxLlTVOY8V9PJ
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = eihRRu2eDt05zY7AbqYu
Changed password for user elastic
PASSWORD elastic = fpgKWAI6tkQKkS8c8zzD
  • Change the password for the elastic user in the following configuration files
kibana/config/kibana.yml
logstash/config/logstash.yml
logstash/pipeline/logstash.conf
  • Restart the service
[root@localhost docker-elk]# docker-compose restart
Restarting docker-elk_logstash_1      ... done
Restarting docker-elk_kibana_1        ... done
Restarting docker-elk_elasticsearch_1 ... done

https://blog.csdn.net/soultea…

Graylog

Graylog is an open source log aggregation, analysis, auditing, display, and alerting tool. Functionally similar to ELK but simpler, it quickly became popular because it is more concise, efficient, and easy to deploy and use.

  • Creating configuration files

https://raw.githubusercontent…

https://raw.githubusercontent…

  • Create the graylog.conf file
#############################
# GRAYLOG CONFIGURATION FILE
#############################
# This is the stock graylog.conf; the long upstream comment blocks are trimmed here
# and only the active settings are kept. The file has to use ISO 8859-1/Latin-1
# character encoding.

# If you are running more than one instance of Graylog server, you have to select
# one of these instances as master.
is_master = true

# The auto-generated node ID will be stored in this file and read after restarts.
node_id_file = /usr/share/graylog/data/config/node-id

# You MUST set a secret to secure/pepper the stored user passwords. Use at least
# 64 characters. Generate one with, for example: pwgen -N 1 -s 96
# This value must be the same on all Graylog nodes in the cluster.
password_secret = replacethiswithyourownsecret!

# You MUST specify a hash password for the root user ('admin' by default).
# Create one with, for example: echo -n yourpassword | shasum -a 256
root_password_sha2 = 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918

# Directories used by the Graylog server
bin_dir = /usr/share/graylog/bin
data_dir = /usr/share/graylog/data
plugin_dir = /usr/share/graylog/plugin

# The network interface used by the Graylog HTTP interface (port 9000 by default)
http_bind_address = 0.0.0.0:9000

# Comma-separated list of Elasticsearch hosts Graylog should connect to
elasticsearch_hosts = http://elasticsearch:9200

# Index rotation strategy: "count" of messages per index, "size" per index, or "time"
rotation_strategy = count

# (Approximate) maximum number of documents in an Elasticsearch index before a new
# index is created (used with rotation_strategy = count)
elasticsearch_max_docs_per_index = 20000000
After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.#elasticsearch_max_size_per_index = 1073741824# (Approximate) maximum time before a new Elasticsearch index is being created, also see# no_retention and elasticsearch_max_number_of_indices. Default is 1 day.# Configure this if you used 'rotation_strategy = time' above.# Please note that this rotation period does not look at the time specified in the received messages, but is# using the real clock value to decide when to rotate the index!# Specify the time using a duration and a suffix indicating which unit you want:#  1w  = 1 week#  1d  = 1 day#  12h = 12 hours# Permitted suffixes are: d for day, h for hour, m for minute, s for second.## ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these#            to your previous 1.x settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.#elasticsearch_max_time_per_index = 1d# Disable checking the version of Elasticsearch for being compatible with this Graylog release.# WARNING: Using Graylog with unsupported and untested versions of Elasticsearch may lead to data loss!#elasticsearch_disable_version_check = true# Disable message retention on this node, i. e. disable Elasticsearch index rotation.#no_retention = false# How many indices do you want to keep?## ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these#            to your previous 1.x settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.elasticsearch_max_number_of_indices = 5# Decide what happens with the oldest indices when the maximum number of indices is reached.# The following strategies are availble:#   - delete # Deletes the index completely (Default)#   - close # Closes the index and hides it from the system. Can be re-opened later.## ATTENTION: These settings have been moved to the database in 2.0. When you upgrade, make sure to set these#            to your previous 1.x settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.retention_strategy = delete# How many Elasticsearch shards and replicas should be used per index? Note that this only applies to newly created indices.# ATTENTION: These settings have been moved to the database in Graylog 2.2.0. 
When you upgrade, make sure to set these#            to your previous settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.elasticsearch_shards = 1elasticsearch_replicas = 0# Prefix for all Elasticsearch indices and index aliases managed by Graylog.## ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these#            to your previous settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.elasticsearch_index_prefix = graylog# Name of the Elasticsearch index template used by Graylog to apply the mandatory index mapping.# Default: graylog-internal## ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these#            to your previous settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.#elasticsearch_template_name = graylog-internal# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and should only# be enabled with care. See also: http://docs.graylog.org/en/2.1/pages/queries.htmlallow_leading_wildcard_searches = false# Do you want to allow searches to be highlighted? Depending on the size of your messages this can be memory hungry and# should only be enabled after making sure your Elasticsearch cluster has enough memory.allow_highlighting = false# Analyzer (tokenizer) to use for message and full_message field. The "standard" filter usually is a good idea.# All supported analyzers are: standard, simple, whitespace, stop, keyword, pattern, language, snowball, custom# Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/2.3/analysis.html# Note that this setting only takes effect on newly created indices.## ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these#            to your previous settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. 
After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.elasticsearch_analyzer = standard# Global timeout for index optimization (force merge) requests.# Default: 1h#elasticsearch_index_optimization_timeout = 1h# Maximum number of concurrently running index optimization (force merge) jobs.# If you are using lots of different index sets, you might want to increase that number.# Default: 20#elasticsearch_index_optimization_jobs = 20# Time interval for index range information cleanups. This setting defines how often stale index range information# is being purged from the database.# Default: 1h#index_ranges_cleanup_interval = 1h# Time interval for the job that runs index field type maintenance tasks like cleaning up stale entries. This doesn't# need to run very often.# Default: 1h#index_field_type_periodical_interval = 1h# Batch size for the Elasticsearch output. This is the maximum (!) number of messages the Elasticsearch output# module will get at once and write to Elasticsearch in a batch call. If the configured batch size has not been# reached within output_flush_interval seconds, everything that is available will be flushed at once. Remember# that every outputbuffer processor manages its own batch and performs its own batch write calls.# ("outputbuffer_processors" variable)output_batch_size = 500# Flush interval (in seconds) for the Elasticsearch output. This is the maximum amount of time between two# batches of messages written to Elasticsearch. It is only effective at all if your minimum number of messages# for this time period is less than output_batch_size * outputbuffer_processors.output_flush_interval = 1# As stream outputs are loaded only on demand, an output which is failing to initialize will be tried over and# over again. To prevent this, the following configuration options define after how many faults an output will# not be tried again for an also configurable amount of seconds.output_fault_count_threshold = 5output_fault_penalty_seconds = 30# The number of parallel running processors.# Raise this number if your buffers are filling up.processbuffer_processors = 5outputbuffer_processors = 3# The following settings (outputbuffer_processor_*) configure the thread pools backing each output buffer processor.# See https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html for technical details# When the number of threads is greater than the core (see outputbuffer_processor_threads_core_pool_size),# this is the maximum time in milliseconds that excess idle threads will wait for new tasks before terminating.# Default: 5000#outputbuffer_processor_keep_alive_time = 5000# The number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set# Default: 3#outputbuffer_processor_threads_core_pool_size = 3# The maximum number of threads to allow in the pool# Default: 30#outputbuffer_processor_threads_max_pool_size = 30# UDP receive buffer size for all message inputs (e. g. SyslogUDPInput).#udp_recvbuffer_sizes = 1048576# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)# Possible types:#  - yielding#     Compromise between performance and CPU usage.#  - sleeping#     Compromise between performance and CPU usage. 
Latency spikes can occur after quiet periods.#  - blocking#     High throughput, low latency, higher CPU usage.#  - busy_spinning#     Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.processor_wait_strategy = blocking# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.# Must be a power of 2. (512, 1024, 2048, ...)ring_size = 65536inputbuffer_ring_size = 65536inputbuffer_processors = 2inputbuffer_wait_strategy = blocking# Enable the disk based message journal.message_journal_enabled = true# The directory which will be used to store the message journal. The directory must be exclusively used by Graylog and# must not contain any other files than the ones created by Graylog itself.## ATTENTION:#   If you create a seperate partition for the journal files and use a file system creating directories like 'lost+found'#   in the root directory, you need to create a sub directory for your journal.#   Otherwise Graylog will log an error message that the journal is corrupt and Graylog will not start.message_journal_dir = data/journal# Journal hold messages before they could be written to Elasticsearch.# For a maximum of 12 hours or 5 GB whichever happens first.# During normal operation the journal will be smaller.#message_journal_max_age = 12h#message_journal_max_size = 5gb#message_journal_flush_age = 1m#message_journal_flush_interval = 1000000#message_journal_segment_age = 1h#message_journal_segment_size = 100mb# Number of threads used exclusively for dispatching internal events. Default is 2.#async_eventbus_processors = 2# How many seconds to wait between marking node as DEAD for possible load balancers and starting the actual# shutdown process. Set to 0 if you have no status checking load balancers in front.lb_recognition_period_seconds = 3# Journal usage percentage that triggers requesting throttling for this server node from load balancers. The feature is# disabled if not set.#lb_throttle_threshold_percentage = 95# Every message is matched against the configured streams and it can happen that a stream contains rules which# take an unusual amount of time to run, for example if its using regular expressions that perform excessive backtracking.# This will impact the processing of the entire server. To keep such misbehaving stream rules from impacting other# streams, Graylog limits the execution time for each stream.# The default values are noted below, the timeout is in milliseconds.# If the stream matching for one stream took longer than the timeout value, and this happened more than "max_faults" times# that stream is disabled and a notification is shown in the web interface.#stream_processing_timeout = 2000#stream_processing_max_faults = 3# Since 0.21 the Graylog server supports pluggable output modules. This means a single message can be written to multiple# outputs. 
The next setting defines the timeout for a single output module, including the default output module where all# messages end up.## Time in milliseconds to wait for all message outputs to finish writing a single message.#output_module_timeout = 10000# Time in milliseconds after which a detected stale master node is being rechecked on startup.#stale_master_timeout = 2000# Time in milliseconds which Graylog is waiting for all threads to stop on shutdown.#shutdown_timeout = 30000# MongoDB connection string# See https://docs.mongodb.com/manual/reference/connection-string/ for details#mongodb_uri = mongodb://localhost/graylogmongodb_uri = mongodb://mongo/graylog# Authenticate against the MongoDB server# '+'-signs in the username or password need to be replaced by '%2B'#mongodb_uri = mongodb://grayloguser:secret@localhost:27017/graylog# Use a replica set instead of a single host#mongodb_uri = mongodb://grayloguser:secret@localhost:27017,localhost:27018,localhost:27019/graylog?replicaSet=rs01# DNS Seedlist https://docs.mongodb.com/manual/reference/connection-string/#dns-seedlist-connection-format#mongodb_uri = mongodb+srv://server.example.org/graylog# Increase this value according to the maximum connections your MongoDB server can handle from a single client# if you encounter MongoDB connection problems.mongodb_max_connections = 1000# Number of threads allowed to be blocked by MongoDB connections multiplier. Default: 5# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5,# then 500 threads can block. More than that and an exception will be thrown.# http://api.mongodb.com/java/current/com/mongodb/MongoOptions.html#threadsAllowedToBlockForConnectionMultipliermongodb_threads_allowed_to_block_multiplier = 5# Email transport#transport_email_enabled = false#transport_email_hostname = mail.example.com#transport_email_port = 587#transport_email_use_auth = true#transport_email_auth_username = [email protected]#transport_email_auth_password = secret#transport_email_subject_prefix = [graylog]#transport_email_from_email = [email protected]# Encryption settings## ATTENTION:#    Using SMTP with STARTTLS *and* SMTPS at the same time is *not* possible.# Use SMTP with STARTTLS, see https://en.wikipedia.org/wiki/Opportunistic_TLS#transport_email_use_tls = true# Use SMTP over SSL (SMTPS), see https://en.wikipedia.org/wiki/SMTPS# This is deprecated on most SMTP services!#transport_email_use_ssl = false# Specify and uncomment this if you want to include links to the stream in your stream alert mails.# This should define the fully qualified base url to your web interface exactly the same way as it is accessed by your users.#transport_email_web_interface_url = https://graylog.example.com# The default connect timeout for outgoing HTTP connections.# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).# Default: 5s#http_connect_timeout = 5s# The default read timeout for outgoing HTTP connections.# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).# Default: 10s#http_read_timeout = 10s# The default write timeout for outgoing HTTP connections.# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).# Default: 10s#http_write_timeout = 10s# HTTP proxy for outgoing HTTP connections# ATTENTION: If you configure a proxy, make sure to also configure the "http_non_proxy_hosts" option so internal#            HTTP connections with other nodes does not go through the 
proxy.# Examples:#   - http://proxy.example.com:8123#   - http://username:[email protected]:8123#http_proxy_uri =# A list of hosts that should be reached directly, bypassing the configured proxy server.# This is a list of patterns separated by ",". The patterns may start or end with a "*" for wildcards.# Any host matching one of these patterns will be reached through a direct connection instead of through a proxy.# Examples:#   - localhost,127.0.0.1#   - 10.0.*,*.example.com#http_non_proxy_hosts =# Disable the optimization of Elasticsearch indices after index cycling. This may take some load from Elasticsearch# on heavily used systems with large indices, but it will decrease search performance. The default is to optimize# cycled indices.## ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these#            to your previous settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.#disable_index_optimization = true# Optimize the index down to <= index_optimization_max_num_segments. A higher number may take some load from Elasticsearch# on heavily used systems with large indices, but it will decrease search performance. The default is 1.## ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these#            to your previous settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.#index_optimization_max_num_segments = 1# The threshold of the garbage collection runs. If GC runs take longer than this threshold, a system notification# will be generated to warn the administrator about possible problems with the system. Default is 1 second.#gc_warning_threshold = 1s# Connection timeout for a configured LDAP server (e. g. ActiveDirectory) in milliseconds.#ldap_connection_timeout = 2000# Disable the use of SIGAR for collecting system stats#disable_sigar = false# The default cache time for dashboard widgets. (Default: 10 seconds, minimum: 1 second)#dashboard_widget_default_cache_time = 10s# For some cluster-related REST requests, the node must query all other nodes in the cluster. This is the maximum number# of threads available for this. Increase it, if '/cluster/*' requests take long to complete.# Should be http_thread_pool_size * average_cluster_size if you have a high number of concurrent users.proxied_requests_thread_pool_size = 32# The server is writing processing status information to the database on a regular basis. This setting controls how# often the data is written to the database.# Default: 1s (cannot be less than 1s)#processing_status_persist_interval = 1s# Configures the threshold for detecting outdated processing status records. 
Any records that haven't been updated# in the configured threshold will be ignored.# Default: 1m (one minute)#processing_status_update_threshold = 1m# Configures the journal write rate threshold for selecting processing status records. Any records that have a lower# one minute rate than the configured value might be ignored. (dependent on number of messages in the journal)# Default: 1#processing_status_journal_write_rate_threshold = 1# Configures the prefix used for graylog event indices# Default: gl-events#default_events_index_prefix = gl-events# Configures the prefix used for graylog system event indices# Default: gl-system-events#default_system_events_index_prefix = gl-system-events# Automatically load content packs in "content_packs_dir" on the first start of Graylog.#content_packs_loader_enabled = false# The directory which contains content packs which should be loaded on the first start of Graylog.#content_packs_dir = /usr/share/graylog/data/contentpacks# A comma-separated list of content packs (files in "content_packs_dir") which should be applied on# the first start of Graylog.# Default: empty#content_packs_auto_install = grok-patterns.json# The allowed TLS protocols for system wide TLS enabled servers. (e.g. message inputs, http interface)# Setting this to an empty value, leaves it up to system libraries and the used JDK to chose a default.# Default: TLSv1.2,TLSv1.3  (might be automatically adjusted to protocols supported by the JDK)#enabled_tls_protocols= TLSv1.2,TLSv1.3
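  • For reference: the official graylog/graylog image also lets any option in graylog.conf be supplied as an environment variable named GRAYLOG_ plus the upper-cased option name, which is how GRAYLOG_PASSWORD_SECRET works in the compose file later in this document. A minimal sketch reusing values that appear above:
# Each -e variable overrides the corresponding graylog.conf option
docker run --rm \
  -e GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0:9000 \
  -e GRAYLOG_ELASTICSEARCH_HOSTS=http://elasticsearch:9200 \
  graylog/graylog:4.1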
  • Create the log4j2.xml file
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="org.graylog2.log4j" shutdownHook="disable">
    <Appenders>
        <Console name="STDOUT" target="SYSTEM_OUT">
            <PatternLayout pattern="%d %-5p: %c - %m%n"/>
        </Console>
        <!-- Internal Graylog log appender. Please do not disable. This makes internal log messages available via REST calls. -->
        <Memory name="graylog-internal-logs" bufferSize="500"/>
    </Appenders>
    <Loggers>
        <!-- Application Loggers -->
        <Logger name="org.graylog2" level="info"/>
        <Logger name="com.github.joschi.jadconfig" level="warn"/>
        <!-- Prevent DEBUG message about Lucene Expressions not found. -->
        <Logger name="org.elasticsearch.script" level="warn"/>
        <!-- Disable messages from the version check -->
        <Logger name="org.graylog2.periodical.VersionCheckThread" level="off"/>
        <!-- Silence chatty natty -->
        <Logger name="com.joestelmach.natty.Parser" level="warn"/>
        <!-- Silence Kafka log chatter -->
        <Logger name="org.graylog.shaded.kafka09.log.Log" level="warn"/>
        <Logger name="org.graylog.shaded.kafka09.log.OffsetIndex" level="warn"/>
        <Logger name="org.apache.kafka.clients.consumer.ConsumerConfig" level="warn"/>
        <!-- Silence useless session validation messages -->
        <Logger name="org.apache.shiro.session.mgt.AbstractValidatingSessionManager" level="warn"/>
        <Root level="warn">
            <AppenderRef ref="STDOUT"/>
            <AppenderRef ref="graylog-internal-logs"/>
        </Root>
    </Loggers>
</Configuration>
  • Create the docker-compose_graylog.yml file
version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    container_name: mongo
    image: mongo:3
    volumes:
      - mongo_data:/data/db
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/6.x/docker.html
  elasticsearch:
    container_name: es
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - TZ=Asia/Shanghai
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 4g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    container_name: graylog
    image: graylog/graylog:4.1
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal
      - ./graylog/config:/usr/share/graylog/data/config
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      #- GRAYLOG_HTTP_EXTERNAL_URI=http://1.1.1.1:9000/
      # - TZ=Asia/Shanghai
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201-12205:12201-12205/udp
# Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
  • The installation
[root@localhost graylog]# docker-compose -f docker-compose_graylog.yml up -d
Creating network "graylog_default" with the default driver
Creating volume "graylog_mongo_data" with local driver
Creating volume "graylog_es_data" with local driver
Creating volume "graylog_graylog_journal" with local driver
Pulling mongodb (mongo:3)...
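  • Before opening the web interface, it is worth confirming that all three containers are up; the standard compose status and log commands work here, for example:
[root@localhost graylog]# docker-compose -f docker-compose_graylog.yml ps
[root@localhost graylog]# docker logs -f graylog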
  • After a successful start, open http://192.168.9.140:9000/system/inputs in a browser and create an input (the example below posts to a GELF HTTP input on port 12201)

  • To send data
[root@localhost ~]# curl -XPOST http://127.0.0.1:12201/gelf -p0 -d '{"message":"hello Tinywan222", "host":"127.0.0.1", "facility":"test", "topic":"meme"}'
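  • The compose file also maps GELF UDP ports (12201-12205/udp). Assuming a GELF UDP input has been created as well, a test message can be sent with nc instead of curl; GELF UDP expects at least the version, host, and short_message fields:
echo -n '{"version":"1.1","host":"127.0.0.1","short_message":"hello udp","facility":"test"}' | nc -w1 -u 127.0.0.1 12201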

https://www.cnblogs.com/tinyw…

https://www.cnblogs.com/jonny…

Container Platform Technology

  • Container orchestration typically includes container management, scheduling, cluster definition, and service discovery. Through an orchestration engine, containers are organically combined into microservice applications that fulfill business requirements.
  • The container management platform is a more general platform built on top of a container orchestration engine. Such platforms typically support multiple orchestration engines, abstract away the engines' underlying implementation details, and provide users with more convenient features such as an application catalog and one-click application deployment.
  • Container-based PaaS provides microservice application developers and companies with a platform to develop, deploy, and manage applications, allowing users to focus on application development without worrying about the underlying infrastructure.

Container Support Technology

  • Containers make network topologies more dynamic and complex. Users need specialized solutions to manage the connectivity and isolation between containers and between containers and other entities.
  • Dynamic change is one of the characteristics of microservice applications: as load increases, the cluster automatically creates new containers; as load decreases, excess containers are destroyed. Containers can also be migrated between hosts based on resource usage, with a container's IP and port changing accordingly.
  • Monitoring is important to the infrastructure, and the dynamic nature of the container makes monitoring even more challenging.
  • Containers are often migrated between hosts; data management tools such as REX-Ray ensure that persistent data migrates along with them.
  • Logs provide an important basis for problem detection and event management.
  • Security has long been a focus of industry debate for the still-young container ecosystem; OpenSCAP is one container security tool.

The Docker accelerator

  • daocloud.io
  • aliyun
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your own Aliyun mirror ID>.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
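  • Whether the mirror is active can then be checked in the docker info output:
docker info | grep -A 1 "Registry Mirrors"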

The problem

WARNING: Found orphan containers

  • Issue: docker-compose prints the following warning when starting containers

    • WARNING: Found orphan containers (prometheus, grafana) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
  • Reason: docker-compose uses the current directory name as the default project name, and every container instance it creates carries that name as a prefix. If the same directory holds compose files for other services, all of their containers belong to the same project, so running one of the files makes docker-compose warn about the project's containers that this file does not define.

<img title="" src="https://gitee.com/wholegale39/pictures_markdown/raw/master/20210518185307.png" alt="" data-align="center">

  • The solution

    • 1. Give the project a distinct name at startup with the -p option
    docker-compose -p node_exporter -f docker-compose_node-exporter.yml up -d
    • 2. Or move each compose file into its own directory and run it from there
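    • 3. Or, as the warning itself suggests, start with the --remove-orphans flag to clean up the leftover containers:
    docker-compose -f docker-compose_node-exporter.yml up -d --remove-orphans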

Uploading images to a private repository

  • Symptom: operations against the private registry fail with the following error
[root@localhost docker]# docker pull 192.168.9.140:5000/wholegale39/tomcat:latest
Error response from daemon: Get https://192.168.9.140:5000/v2/: http: server gave HTTP response to HTTPS client
  • Reason: by default, Docker refuses to push or pull images over a non-HTTPS registry
  • Solution: add the registry address to "insecure-registries" in /etc/docker/daemon.json, restart the Docker service, and upload again
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://dnw6qtuv.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.9.140:5000"]
}
[root@localhost docker]# systemctl restart docker
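  • After the restart, tag the image for the registry and push it again; a sketch assuming a local tomcat:latest image:
[root@localhost docker]# docker tag tomcat:latest 192.168.9.140:5000/wholegale39/tomcat:latest
[root@localhost docker]# docker push 192.168.9.140:5000/wholegale39/tomcat:latest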

Reference books

Docker Technology Introduction and Practice (3rd Edition)

Docker Container Technology with High Availability Practices

Docker: Containers and Container Clouds (2nd Edition)

Docker Advanced Topics and Practice

Learn Docker step by step

Easy-to-Understand Docker

Play with Docker Container Technology for 5 minutes a day