This is the third article in the Ceph practice series. It assumes Docker is already installed on the virtual machines, and it walks through starting each component of the Ceph file system and assembling them into a cluster.

Environment

Create three virtual machines. This tutorial uses CentOS Linux 7.6, Docker 19.03.13, and Ceph Nautilus. The three VMs are as follows:

Hostname   IP                 Description
ceph1      192.168.161.137    Master node (Dashboard, MON, RGW, MGR, OSD)
ceph2      192.168.161.135    Worker node (MON, RGW, MGR, OSD)
ceph3      192.168.161.136    Worker node (MON, RGW, MGR, OSD)

Pre-checks

Before deploying Ceph we need to check and prepare the machines' environment. This mainly involves the firewall, SELinux, host names, hosts entries, and NTP.

  1. Disabling the Firewall
systemctl stop firewalld
systemctl disable firewalld
  2. Turn off SELinux (the Linux security subsystem)
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

PS: In an actual deployment, you are advised to whitelist the required IP addresses rather than disable the firewall entirely.

  3. Set the host names of the three VMs to ceph1, ceph2, and ceph3, running the corresponding command on each machine.

hostnamectl set-hostname ceph1
hostnamectl set-hostname ceph2
hostnamectl set-hostname ceph3
  4. Configure password-free SSH login from ceph1 to ceph2 and ceph3. Run the following commands on ceph1 (if ceph1 does not yet have an SSH key pair, see the sketch just below):
ssh-copy-id ceph2
ssh-copy-id ceph3
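If ceph1 does not already have an SSH key pair, ssh-copy-id will fail. A minimal sketch to generate one first and verify the login (the key path and empty passphrase here are assumptions; adjust to your own policy):

    # Generate an RSA key pair with no passphrase
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    # Confirm password-free login works
    ssh ceph2 hostname
    ssh ceph3 hostname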
  5. Run the following command on each of the three nodes to configure /etc/hosts.
cat >> /etc/hosts <<EOF
192.168.161.137 ceph1
192.168.161.135 ceph2
192.168.161.136 ceph3
EOF
  6. Enabling the NTP service

The NTP service keeps the clocks of the different machines synchronized. If NTP is not enabled, warnings such as "clock skew detected on mon.ceph1, mon.ceph2" may appear. A quick way to verify synchronization is shown after the commands.

# Check the NTP service status; "inactive" means it has not been started
systemctl status ntpd
# Start the NTP service
systemctl start ntpd
# Enable the NTP service to start automatically at boot
systemctl enable ntpd
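After ntpd has been running for a while, you can verify that the nodes are actually synchronizing. A minimal check (ntpq ships with the ntp package; the peer list will vary with your time sources):

    # List the time sources ntpd is using and their sync status
    ntpq -p
    # Show the overall system clock status, including "NTP synchronized"
    timedatectl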
  7. Other configuration

Add a local alias for the ceph command to make it easier to use.

echo 'alias ceph="docker exec mon ceph"' >> /etc/profile
source /etc/profile
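The alias simply runs ceph inside the mon container, so it only takes effect once the mon container (started later in this article) is running. After that, for example:

    # Equivalent to: docker exec mon ceph -s
    ceph -s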

With the pre-checks above complete, let's move on to the actual deployment.

Deployment

1. Create the Ceph directory

Create a Ceph directory on the host and map it into the container, so that the Ceph configuration files can be managed directly from the host. Create the following four folders on ceph1 as root.

mkdir -p /usr/local/ceph/{admin,etc,lib,logs}

This command creates the four directories in one go; note they are separated by commas with no spaces. The admin folder stores the startup scripts, the etc folder stores configuration files such as ceph.conf, the lib folder stores the key files of each component, and the logs folder stores the logs. Then grant ownership of these directories to the ceph user used inside the container (UID/GID 167):

chown -R 167:167 /usr/local/ceph/
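A quick way to confirm the ownership change; the owner should be shown as 167 (or a matching user name if one happens to exist on the host):

    ls -ld /usr/local/ceph/*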

2. Install Docker (omitted; as noted at the start, Docker must be installed on all three machines)

3. Create an OSD disk

  1. The OSD service is the object storage daemon; it stores objects on the local file system, and an independent disk should normally be used as its storage.

  2. If there is no independent disk, we can create a virtual disk under Linux and mount it, as follows:
     2.1. Initialize a 10 GB image file:

    mkdir -p /usr/local/ceph-disk
    dd if=/dev/zero of=/usr/local/ceph-disk/ceph-disk-01 bs=1G count=10

     2.2. Map the image file to a loop device:

        losetup -f /usr/local/ceph-disk/ceph-disk-01

     2.3. Format it as XFS (confirm the device name with fdisk -l; below it is assumed to be /dev/loop0):

        mkfs.xfs -f /dev/loop0

     2.4. Mount the file system, here mounting the loop0 device to the /dev/osd directory:

     ```
        mkdir -p /dev/osd
        mount /dev/loop0  /dev/osd
     ```
  3. If you have a separate disk (for a virtual machine, just add a hard disk in the VM settings):

     3.1. Format the disk:

        mkfs.xfs -f /dev/sdb

     3.2. Mount the file system:

        mkdir -p /dev/osd
        mount /dev/sdb /dev/osd
  4. You can check the mounting result with the df -h command, as in the sketch below.
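If you used the loop-device approach, you can confirm which loop device was assigned and that the mount succeeded. A minimal check; the device name and sizes below are examples only and will differ on your machine:

    # List loop devices and the files backing them
    losetup -a
    # Confirm the OSD mount
    df -h | grep osd
    # e.g.  /dev/loop0  10G  33M  10G  1%  /dev/osd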

4. Pull ceph

Here we use the most popular ceph/daemon image on Docker Hub (we need to pull the latest-nautilus tag):

docker pull ceph/daemon:latest-nautilus
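A quick check that the image is present (run on each of the three machines):

    docker images | grep ceph/daemon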

5. Write scripts (all scripts are placed in the admin folder)

1. start_mon.sh

#!/bin/bash
docker run -d --net=host \
  --name=mon \
  -v /etc/localtime:/etc/localtime \
  -v /usr/local/ceph/etc:/etc/ceph \
  -v /usr/local/ceph/lib:/var/lib/ceph \
  -v /usr/local/ceph/logs:/var/log/ceph \
  -e MON_IP=192.168.161.137,192.168.161.135,192.168.161.136 \
  -e CEPH_PUBLIC_NETWORK=192.168.161.0/24 \
  ceph/daemon:latest-nautilus mon

This script starts the monitor, which maintains the global state of the whole Ceph cluster. A cluster needs at least one monitor, and preferably an odd number of them, so that a new leader can be elected when one goes down. Notes on the startup script:

  1. The --name parameter sets the container name, here mon.
  2. -v host_dir:container_dir sets up the directory mappings between the host and the container, covering the etc, lib, and logs directories.
  3. MON_IP is the IP address of the host running Docker (query it with ifconfig and take the inet address of ens33). Since we have 3 servers, MON_IP must list all 3 addresses, separated by commas. If the addresses span network segments, CEPH_PUBLIC_NETWORK must cover all of those segments.
  4. CEPH_PUBLIC_NETWORK configures the network segment(s) of all the hosts running Docker.

You must specify the nautilus tag here, otherwise the latest Ceph version is pulled by default, and the trailing argument mon must match the container name defined earlier.

2. start_osd.sh

#!/bin/bash
docker run -d \
  --name=osd \
  --net=host \
  --restart=always \
  --privileged=true \
  --pid=host \
  -v /etc/localtime:/etc/localtime \
  -v /usr/local/ceph/etc:/etc/ceph \
  -v /usr/local/ceph/lib:/var/lib/ceph \
  -v /usr/local/ceph/logs:/var/log/ceph \
  -v /dev/osd:/var/lib/ceph/osd \
  ceph/daemon:latest-nautilus osd_directory

This script starts the OSD (Object Storage Device) component, the RADOS component that actually stores the data. Notes on the startup script:

  1. --net is set to host, so the container uses the host network.
  2. --restart is set to always so that the OSD container restarts automatically if it goes down.
  3. --privileged specifies that the OSD container runs with extended privileges.
  4. Here we use the image's osd_directory mode.

3. start_mgr.sh

#!/bin/bash
docker run -d --net=host \
  --name=mgr \
  -v /etc/localtime:/etc/localtime \
  -v /usr/local/ceph/etc:/etc/ceph \
  -v /usr/local/ceph/lib:/var/lib/ceph \
  -v /usr/local/ceph/logs:/var/log/ceph \
  ceph/daemon:latest-nautilus mgr

This script starts the MGR component, which offloads and extends part of the monitor's functionality and provides a graphical management interface for easier administration of the Ceph storage system. The startup script is straightforward and is not described further here.

4. start_rgw.sh

#!/bin/bash
docker run -d --net=host \
  --name=rgw \
  -v /etc/localtime:/etc/localtime \
  -v /usr/local/ceph/etc:/etc/ceph \
  -v /usr/local/ceph/lib:/var/lib/ceph \
  -v /usr/local/ceph/logs:/var/log/ceph \
  ceph/daemon:latest-nautilus rgw

This script starts the RGW component. As the gateway of the object storage service, RGW acts as a client of the RADOS cluster to provide data storage for object storage applications, and as an HTTP server to accept and parse data transmitted over the network.

6. Execute the scripts

Start the mon

  1. Execute the start_mon.sh script on the primary node ceph1, then check the startup result with docker ps -a | grep mon. The configuration data is generated after startup; append the following to the main Ceph configuration file:
cat >> /usr/local/ceph/etc/ceph.conf <<EOF
# Tolerate more clock drift between monitors
mon clock drift allowed = 2
mon clock drift warn backoff = 30
# Allow pools to be deleted
mon_allow_pool_delete = true
[mgr]
# Enable the web dashboard module
mgr modules = dashboard
[client.rgw.ceph1]
# Set the web access port of the RGW gateway
rgw_frontends = "civetweb port=20003"
EOF
  2. Copy all the data (including the scripts) to the other 2 servers
scp -r /usr/local/ceph ceph2:/usr/local/
scp -r /usr/local/ceph ceph3:/usr/local/
  3. Start mon on ceph2 and ceph3 over SSH (ceph.conf does not need to be modified before starting)
ssh ceph2 bash /usr/local/ceph/admin/start_mon.sh
ssh ceph3 bash /usr/local/ceph/admin/start_mon.sh

After the cluster is started, check its status with ceph -s. If ceph2 and ceph3 appear, the cluster has been created successfully and the state should be HEALTH_OK.
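For reference, the ceph -s output at this stage should contain a mon line roughly like the sketch below (a warning about having no OSDs yet may also appear until the next step):

    ceph -s | grep mon
    # mon: 3 daemons, quorum ceph1,ceph2,ceph3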

Start the OSD

Before running the start_osd.sh script, you need to generate the OSD key information on the MON node. Otherwise, an error message will be displayed if you directly start the osd. The command is as follows:

docker exec -it mon ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring

Then run the following commands on the primary node ceph1:

bash /usr/local/ceph/admin/start_osd.sh
ssh ceph2 bash /usr/local/ceph/admin/start_osd.sh
ssh ceph3 bash /usr/local/ceph/admin/start_osd.sh

After all osd nodes are started, run the ceph -s command to check the status of all OSD nodes.

  osd: 3 osds: 3 up, 3 in 


PS: The number of OSD nodes must be an odd number.
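To see how the OSDs are distributed across the hosts, you can also check the OSD tree; the weights shown depend on your disk sizes:

    docker exec mon ceph osd tree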

Start the MGR

Run the following commands on ceph1:

bash /usr/local/ceph/admin/start_mgr.sh
ssh ceph2 bash /usr/local/ceph/admin/start_mgr.sh
ssh ceph3 bash /usr/local/ceph/admin/start_mgr.sh

Start the RGW

Similarly, we need to generate the key information of RGW in the MON node first. The command is as follows:

docker exec mon ceph auth get client.bootstrap-rgw -o /var/lib/ceph/bootstrap-rgw/ceph.keyring

Then execute the following three commands on the primary node ceph1:

bash /usr/local/ceph/admin/start_rgw.sh
ssh ceph2 bash /usr/local/ceph/admin/start_rgw.sh
ssh ceph3 bash /usr/local/ceph/admin/start_rgw.sh

After the startup is complete, check the cluster status with ceph -s.
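You can also probe the RGW gateway directly over HTTP. A minimal sketch, assuming the civetweb port 20003 configured in ceph.conf earlier; an anonymous request should return an S3-style XML response:

    curl http://192.168.161.137:20003
    # Expect something like: <?xml version="1.0" ...?><ListAllMyBucketsResult ...>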

Install the Dashboard management background

First determine the primary node: run ceph -s to check the cluster status and find the node whose MGR is active, for example:

  mgr: ceph1(active), standbys: ceph2, ceph3

The primary node here is the ceph1 node.

  1. Enabling the Dashboard Function
docker exec mgr ceph mgr module enable dashboard
  2. Create a login user and password
docker exec mgr ceph dashboard set-login-credentials admin test

Set the user name to admin and the password to test.

  3. Configure the external access port. Here the port is 18080; it can be customized.

docker exec mgr ceph config set mgr mgr/dashboard/server_port 18080
  4. Configure the external access address. Here my primary node IP is 192.168.161.137; change it to your own IP address.
docker exec mgr ceph config set mgr mgr/dashboard/server_addr 192.168.161.137
  5. Disable HTTPS (you can disable it if there is no certificate or only intranet access)
docker exec mgr ceph config set mgr mgr/dashboard/ssl false
  6. Restart the MGR dashboard service
docker restart mgr
  7. View the MGR dashboard service
docker exec mgr ceph mgr services
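If the dashboard is configured correctly, the command above should report its URL, roughly like this (the exact output format depends on the Ceph version):

    {
        "dashboard": "http://192.168.161.137:18080/"
    }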

Finally, access http://192.168.161.137:18080/#/dashboard in a browser.

View information about the entire cluster

At this point, the entire cluster has been set up. Use the ceph -s command to view information about the whole cluster; all the planned nodes have been created and added to the cluster.

Conclusion

This article walked through the detailed steps of deploying a Ceph cluster with Docker. Only the core required components were started: MON, OSD, MGR, and RGW. MON, OSD, and MGR must be started; RGW, as the gateway of the object storage service, does not need to be started if object storage is not used, and other components such as MDS are only needed when CephFS is used. The purpose of these components will be described in more detail in later articles; this one focuses on setting up the cluster.
