
Prerequisites

Cephadm uses containers and Systemd to install and manage Ceph clusters and is tightly integrated with the CLI and dashboard GUI.

  • Cephadm supports only Octopus V15.2.0 and later versions.

  • Cephadm is fully integrated with the new orchestrator API and fully supports the new CLI and dashboard features for managing cluster deployment.

  • Cephadm requires container support (Podman or Docker) and Python 3.

  • Time synchronization (such as chrony or NTP) is required (a quick prerequisite check is sketched below).
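
Before going further, it is worth confirming these prerequisites on every node. A minimal check sketch; it assumes Docker as the container runtime and chrony for time sync, so substitute podman or another NTP client if that is what you use:

# quick prerequisite check on each node (assumes Docker and chrony)
python3 --version                      # Python 3 must be available
docker --version || podman --version   # at least one container runtime
systemctl is-active chronyd            # time synchronization should be running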

Basic configuration

I am using CentOS 8, which already ships with Python 3, so there is no need to install it separately. CentOS 7 requires installing Python 3 manually, for example as shown below.
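
For CentOS 7, a minimal sketch (recent CentOS 7 releases carry python3 in the base repository):

# CentOS 7 only: CentOS 8 already ships with Python 3
yum install -y python3
python3 --version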

Configure host name resolution

cat >> /etc/hosts <<EOF
192.168.93.70 node1
192.168.93.71 node2
192.168.93.72 node3
EOF

Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Set the host name on each of the three nodes

hostnamectl set-hostname node1   # run on node1
hostnamectl set-hostname node2   # run on node2
hostnamectl set-hostname node3   # run on node3

Configure time synchronization for the host

systemctl restart chronyd.service && systemctl enable chronyd.service
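
To confirm that time is actually being synchronized, you can query chrony (a quick check):

# list time sources and show the current synchronization status
chronyc sources -v
chronyc tracking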

Install docker-ce

dnf config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
dnf install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
dnf -y install docker-ce --nobest
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://s7owcmp8.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
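
A quick check that Docker is running and that the registry mirror was picked up:

# verify the Docker service and the configured registry mirror
systemctl is-active docker
docker info | grep -A1 "Registry Mirrors"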

Install cephadm

The cephadm command can:

  1. Bootstrap a new cluster
  2. Launch a containerized shell with a working Ceph CLI
  3. Help debug containerized Ceph daemons

The following operations only need to be performed on one node.

Use curl to fetch the latest version of the standalone script. If your network connection is poor, you can copy the script directly from GitHub.

curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm

Install cephadm

./cephadm add-repo --release octopus
./cephadm install
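
A quick check that cephadm is installed and usable (note that cephadm version may pull the Ceph container image on its first run):

which cephadm
cephadm version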

Boot a new cluster

To bootstrap the cluster, first create the /etc/ceph directory:

mkdir -p /etc/ceph

Then run the cephadm bootstrap command:

cephadm bootstrap --mon-ip 192.168.2.16

This command does the following:

  • Creates monitor and manager daemons for the new cluster on the local host.
  • Generates a new SSH key for the Ceph cluster and adds it to the root user's /root/.ssh/authorized_keys file.
  • Writes the minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf.
  • Writes a copy of the client.admin administrative (privileged!) key to /etc/ceph/ceph.client.admin.keyring.
  • Writes a copy of the public key to /etc/ceph/ceph.pub.

When the installation completes, a dashboard URL and initial credentials are printed. Afterwards, we can check that ceph.conf has been written, as shown below.
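
For example, assuming the default paths, the bootstrap result can be confirmed like this (a quick check sketch):

# the minimal config, admin keyring and public key written by bootstrap
cat /etc/ceph/ceph.conf
ls -l /etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.pub
# check cluster health from inside the cephadm shell
cephadm shell -- ceph -s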

Enable the Ceph CLI

The cephadm shell command starts a bash shell in a container with all the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so that the shell is fully functional.

cephadm shell

You can also install the package containing all the Ceph commands on the nodes, including ceph, rbd, mount.ceph (for mounting CephFS file systems), and so on:

cephadm add-repo --release octopus
cephadm install ceph-common

The installation process is slow; you can manually switch the repository to the Aliyun mirror, as sketched below.
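
One possible way to do that, assuming cephadm wrote the repository file to /etc/yum.repos.d/ceph.repo pointing at download.ceph.com (verify the path and mirror layout before running):

# point the Ceph repository at the Aliyun mirror (paths are assumptions)
sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
dnf clean all && dnf makecache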

Add hosts to the cluster

Add the cluster's public key to the new hosts:

ssh-copy-id -f -i /etc/ceph/ceph.pub node2
ssh-copy-id -f -i /etc/ceph/ceph.pub node3

Tell Ceph that the new nodes are part of the cluster:

[root@localhost ~]# ceph orch host add node2
Added host 'node2'
[root@localhost ~]# ceph orch host add node3
Added host 'node3'

After the hosts are added, the mon and mgr daemons are automatically extended to them; you can verify this as shown below.
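
A quick way to see where the daemons ended up:

# service-level view: mon/mgr counts and placement
ceph orch ls
# daemon-level view: which daemon runs on which host
ceph orch ps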

Deploy additional monitors (optional)

A typical Ceph cluster has three or five mon daemons distributed across different hosts. If there are five or more nodes in the cluster, deploying five mons is recommended. When Ceph knows which IP subnet the mons should use, it can automatically deploy and scale mons as the cluster grows (or shrinks). By default, Ceph assumes that the other mons use the same subnet as the first mon's IP address. In the single-subnet case, adding hosts to the cluster will add at most five mons by default. If the mons should use a specific IP subnet, you can configure it in CIDR format:

ceph config set mon public_network 10.1.2.0/24

Cephadm only deploys mon daemons on hosts whose IP addresses fall within the configured subnet. To adjust the default number of mons, run the following command:

ceph orch apply mon *<number-of-monitors>*

To deploy MON on a specific set of hosts, run the following command:

ceph orch apply mon *<host1,host2,host3,...>*

To view the current host and label, run the following command:

 
[root@node1 ~]# ceph orch host ls
HOST   ADDR   LABELS  STATUS  
node1  node1                  
node2  node2                  
node3  node3  

To disable automatic MON deployment, run the following command:

ceph orch apply mon --unmanaged

To add mons with an explicit IP address or CIDR network, run the following commands:

ceph orch apply mon --unmanaged
ceph orch daemon add mon newhost1:10.1.2.123
ceph orch daemon add mon newhost2:10.1.2.0/24

If you want to add mons on multiple hosts, you can also use the following command:

ceph orch apply mon "host1,host2,host3"

Deploy OSDs

You can use the following command to display a list of storage devices in the cluster

ceph orch device ls

The storage device is considered available if all of the following conditions are met:

  • The device must have no partitions.

  • The device must not have any LVM state.

  • The device must not be mounted.

  • The device cannot contain a file system.

  • The device must not contain Ceph BlueStore OSD.

  • The device must be larger than 5 GB.

Ceph refuses to provision an OSD on a device that is not available. To ensure the OSDs can be added successfully, I have just added a new disk to each node. You can create OSDs in either of the following ways:
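
If a disk fails these checks because of leftover partitions or LVM metadata, one way to wipe it is the orchestrator's zap command (a sketch; node2 and /dev/sdb are placeholders, and zapping destroys all data on the device):

# DANGER: wipes the device; host name and device path are placeholders
ceph orch device zap node2 /dev/sdb --force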

Automatically create OSDs on all unused devices

[root@node1 ~]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...

OSDs have now been created on the three disks; you can verify this as shown below.
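
A quick check of the new OSDs and the overall cluster state:

# each disk should appear as an OSD that is up and in
ceph osd tree
ceph -s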

Create an OSD from a specific device on a specific host

ceph orch daemon add osd host1:/dev/sdb

Deploy the MDS

Using the CephFS file system requires one or more MDS daemons. If you create a new file system with the newer ceph fs volume interface, the required metadata servers are created automatically. Otherwise, deploy them explicitly:

ceph orch apply mds *<fs-name>* --placement="*<num-daemons>* [*<host1>* ...]"

CephFS requires two pools, cephfs_data and cephfs_metadata, to store file data and file metadata respectively:

[root@node1 ~]# ceph osd pool create cephfs_data 64 64
[root@node1 ~]# ceph osd pool create cephfs_metadata 64 64
# Create a CephFS file system named cephfs
[root@node1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
[root@node1 ~]# ceph orch apply mds cephfs --placement="3 node1 node2 node3"
Scheduled mds.cephfs update...

Verify that at least one MDS is in the active state. By default, Ceph runs only one active MDS; the others act as standbys.

ceph fs status cephfs
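
If you later want more than one active MDS for this file system, the limit is controlled by the max_mds setting (a sketch, assuming the file system created above is named cephfs):

# allow up to two active MDS daemons; the rest remain standbys
ceph fs set cephfs max_mds 2
ceph fs status cephfs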

Deploy RGW

Cephadm deploys radosgw as a collection of daemons that manage a particular realm and zone. RGW is short for RADOS Gateway, the Ceph object storage gateway service: a service built on top of the librados interface that provides RESTful APIs for accessing and managing object storage data.

With cephadm, the radosgw daemons are configured through the mon configuration database rather than through ceph.conf or the command line. If that configuration is not yet in place, the radosgw daemons start with default settings (binding to port 80 by default). Here, three RGW daemons serving the myorg realm and the cn-east-1 zone are deployed on node1, node2, and node3. If the supplied realm and zone do not exist yet, they are created automatically before the RGW daemons are deployed:

ceph orch apply rgw myorg cn-east-1 --placement="3 node1 node2 node3"

Alternatively, you can manually create the realm, zonegroup, and zone with the radosgw-admin command:

radosgw-admin realm create --rgw-realm=myorg --default
radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default
radosgw-admin period update --rgw-realm=myorg --commit
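
A quick check that the gateways are up (node1 and port 80 follow the defaults described above):

# the rgw service and its daemons should now appear in the orchestrator
ceph orch ls
ceph orch ps | grep rgw
# RGW answers anonymous requests with an XML bucket listing
curl http://node1:80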