1 Environment Introduction and Configuration

1.1 Introduction to Ceph

Ceph provides three types of interfaces:
1 Object: has a native API and is also compatible with the Swift and S3 APIs.
2 Block: supports thin provisioning, snapshots, and cloning.
3 File: a POSIX-compliant file system interface that supports snapshots.

Each of the three interface types has its own advantages and disadvantages.

1.2 Environment Introduction

[root@ceph131 ~]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)

# Ceph Nautilus 14.2.10-0.el7
# Network design
172.16.1.0/24 #Management Network
172.16.2.0/24 #Public Network
172.16.3.0/24 #Cluster Network

In addition to the system disk, each Ceph node has one 32 GB data disk attached.

ceph131  eth0: 172.16.1.131  eth1: 172.16.2.131  eth2: 172.16.3.131  1U2G
ceph132  eth0: 172.16.1.132  eth1: 172.16.2.132  eth2: 172.16.3.132  1U2G
ceph133  eth0: 172.16.1.133  eth1: 172.16.2.133  eth2: 172.16.3.133  1U2G

1.3 Preparing the Basic Environment

1.3.1 Disabling selinux and the Firewall

# Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
firewall-cmd --state
# Disable SELinux
sed -i '/^SELINUX=.*/c SELINUX=disabled' /etc/selinux/config
grep --color=auto '^SELINUX' /etc/selinux/config
setenforce 0
reboot

1.3.2 Setting the host name on each host

hostnamectl set-hostname ceph131    # use ceph132 / ceph133 on the other nodes
su -

1.3.3 Setting the NIC IP Address (adjust the NIC name and IP address as needed)

#vim /etc/sysconfig/network-scripts/ifcfg-eth0

NetName=eth0
rm -f /etc/sysconfig/network-scripts/ifcfg-$NetName
nmcli con add con-name $NetName ifname $NetName autoconnect yes type ethernet \
  ip4 172.16.1.131/24 ipv4.dns "114.114.114.114" ipv4.gateway "172.16.1.254"
# After setting, restart the network service
systemctl restart network

#vim /etc/sysconfig/network-scripts/ifcfg-eth0

IPV4_ROUTE_METRIC=0

1.3.4 Adding ceph node information to hosts

#vim /etc/hosts

#[ceph14]
172.16.2.131 ceph131
172.16.2.132 ceph132
172.16.2.133 ceph133

1.3.5 Adding the Ceph Nautilus version source

# Change system yum source to Ali source, and update the yum file cache

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo 
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all && yum makecache

# Add the Nautilus source

cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM

Update the installed packages

yum update -y

1.3.6 Time synchronization

I prefer to synchronize time in the following way:

ntpdate ntp3.aliyun.com 

echo "*/3 * * * * ntpdate ntp3.aliyun.com &> /dev/null" > /tmp/crontab

crontab /tmp/crontab
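ntpdate may not be present on a minimal CentOS 7 installation; if the command is missing, install it first:

yum install ntpdate -y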

1.3.7 (Optional) Installing base software

yum install net-tools wget vim bash-completion lrzsz unzip zip -y

2 Ceph installation and configuration

2.1 Installing ceph-deploy

# Install ceph-deploy on the ==ceph131== node (the deployment node)

yum install ceph-deploy -y

# This installs ceph-deploy-2.0.1-0.noarch

# ceph-deploy must be able to log in to each Ceph node as a user with passwordless sudo privileges, because it installs software and configuration files without prompting for a password.
#== Create the cephdeploy user on each Ceph node, with the password ceph.123==

useradd -d /home/cephdeploy -m cephdeploy
passwd cephdeploy
usermod -G root cephdeploy
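To set the password ceph.123 non-interactively, the --stdin option of passwd (specific to RHEL/CentOS) can be used; this is a convenience, not one of the original steps:

echo "ceph.123" | passwd --stdin cephdeploy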

For the user just added on each Ceph node, grant passwordless sudo privileges:

echo "cephdeploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephdeploy
chmod 0440 /etc/sudoers.d/cephdeploy

On ceph131, switch to the cephdeploy user and generate an SSH key pair (leave the passphrase empty):

[cephdeploy@ceph131 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:CsvXYKm8mRzasMFwgWVLx5LvvfnPrRc5S1wSb6kPytM root@ceph131
The key's randomart image is:
+---[RSA 2048]----+
| +o.             |
| =oo. .          |
|. oo o .         |
| .. ... =        |
|... + S . *      |
| + o.=.+ O       |
| + * oo.. + *    |
| B *o .+.E .     |
| o * ... ++.     |
+----[SHA256]-----+

Copy the key to each Ceph node

ssh-copy-id cephdeploy@ceph131
ssh-copy-id cephdeploy@ceph132
ssh-copy-id cephdeploy@ceph133

# Verify: the setup is successful if no password is required

[cephdeploy@ceph131 ~]$ ssh 'cephdeploy@ceph133'
Last failed login: Thu Jul  2 11:18:06 +08 2020 from 172.16.1.131 on ssh:notty
There were 2 failed login attempts since the last successful login.
Last login: Thu Jul  2 10:32:51 2020 from 172.16.1.131
[cephdeploy@ceph133 ~]$ exit
logout
Connection to ceph133 closed.

2.2 Creating and Configuring the Ceph cluster

2.2.1 Creating a Ceph configuration directory and cluster

# Create a cluster directory to maintain the configuration files and keys generated by ceph-deploy for the cluster.

su cephdeploy
mkdir ~/cephcluster && cd ~/cephcluster
[cephdeploy@ceph131 cephcluster]$ pwd
/home/cephdeploy/cephcluster

#== Note ==
#1 Do not call ceph-deploy with sudo, and do not run it as root if you are logged in as another user, because it will not issue the sudo commands required on the remote hosts.
#2 ceph-deploy writes its output files to the current directory. ==Make sure you are in the cluster directory when you execute ceph-deploy==.

If at any time you run into trouble and want to start over, run the following to purge the Ceph packages and erase all data and configuration:

ceph-deploy purge ceph131 ceph132 ceph133
ceph-deploy purgedata ceph131 ceph132 ceph133
ceph-deploy forgetkeys
rm ceph.*

# Create the cluster from the cephcluster directory

[cephdeploy@ceph131 cephcluster]$ ceph-deploy new ceph131 ceph132 ceph133
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephdeploy/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy new ceph131 ceph132 ceph133
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f9f54674e60>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9f541fa9e0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph131', 'ceph132', 'ceph133']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph131][DEBUG ] connection detected need for sudo
[ceph131][DEBUG ] connected to host: ceph131
[ceph131][DEBUG ] detect platform information from remote host
[ceph131][DEBUG ] detect machine type
[ceph131][DEBUG ] find the location of an executable
[ceph131][INFO  ] Running command: sudo /usr/sbin/ip link show
[ceph131][INFO  ] Running command: sudo /usr/sbin/ip addr show
[ceph131][DEBUG ] IP addresses found: [u'172.16.1.131', u'172.16.2.131', u'172.16.3.131']
[ceph_deploy.new][DEBUG ] Resolving host ceph131
[ceph_deploy.new][DEBUG ] Monitor ceph131 at 172.16.1.131
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph132][DEBUG ] connected to host: ceph131
[ceph132][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph132
[ceph132][DEBUG ] connection detected need for sudo
[ceph132][DEBUG ] connected to host: ceph132
[ceph132][DEBUG ] detect platform information from remote host
[ceph132][DEBUG ] detect machine type
[ceph132][DEBUG ] find the location of an executable
[ceph132][INFO  ] Running command: sudo /usr/sbin/ip link show
[ceph132][INFO  ] Running command: sudo /usr/sbin/ip addr show
[ceph132][DEBUG ] IP addresses found: [u'172.16.2.132', u'172.16.1.132', u'172.16.3.132']
[ceph_deploy.new][DEBUG ] Resolving host ceph132
[ceph_deploy.new][DEBUG ] Monitor ceph132 at 172.16.1.132
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph133][DEBUG ] connected to host: ceph131
[ceph133][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph133
[ceph133][DEBUG ] connection detected need for sudo
[ceph133][DEBUG ] connected to host: ceph133
[ceph133][DEBUG ] detect platform information from remote host
[ceph133][DEBUG ] detect machine type
[ceph133][DEBUG ] find the location of an executable
[ceph133][INFO  ] Running command: sudo /usr/sbin/ip link show
[ceph133][INFO  ] Running command: sudo /usr/sbin/ip addr show
[ceph133][DEBUG ] IP addresses found: [u'172.16.2.133', u'172.16.1.133', u'172.16.3.133']
[ceph_deploy.new][DEBUG ] Resolving host ceph133
[ceph_deploy.new][DEBUG ] Monitor ceph133 at 172.16.1.133
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph131', 'ceph132', 'ceph133']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['172.16.1.131', '172.16.1.132', '172.16.1.133']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

2.2.2 Modifying the Ceph configuration file

# Configure the public and cluster networks: vim /root/cephcluster/ceph.conf

[global]
fsid = 76235629-6feb-4f0c-a106-4be33d485535
mon_initial_members = ceph131, ceph132, ceph133
mon_host = 172.16.1.131,172.16.1.132,172.16.1.133
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.16.1.0/24
cluster_network = 172.16.2.0/24
# Set the number of replicas
osd_pool_default_size = 3
# Set the minimum number of replicas
osd_pool_default_min_size = 2
# Allow up to 0.5 s of clock drift
mon_clock_drift_allowed = .50
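If ceph.conf is modified later, push the updated file to all nodes and restart the affected daemons; the same command is used in problem eg3 at the end of this article:

ceph-deploy --overwrite-conf config push ceph{131..133}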

2.2.3 Installing basic packages for each node

# Install the Ceph packages on every node, using the Tsinghua mirror to speed up the download

ceph-deploy install --repo-url https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/ --gpg-url https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc ceph131 ceph132 ceph133

2.2.4 Deploying the initial MON and generating keys:

# Initialize the monitors; no errors should be reported

ceph-deploy mon create-initial

Additional keyrings should now be present in the directory:

[cephdeploy@ceph131 cephcluster]$ ll
total 220
-rw------- 1 cephdeploy cephdeploy    113 Jul  2 14:16 ceph.bootstrap-mds.keyring
-rw------- 1 cephdeploy cephdeploy    113 Jul  2 14:16 ceph.bootstrap-mgr.keyring
-rw------- 1 cephdeploy cephdeploy    113 Jul  2 14:16 ceph.bootstrap-osd.keyring
-rw------- 1 cephdeploy cephdeploy    113 Jul  2 14:16 ceph.bootstrap-rgw.keyring
-rw------- 1 cephdeploy cephdeploy    151 Jul  2 14:16 ceph.client.admin.keyring
-rw-rw-r-- 1 cephdeploy cephdeploy    453 Jul  2 14:05 ceph.conf
-rw-rw-r-- 1 cephdeploy cephdeploy 177385 Jul  2 14:16 ceph-deploy-ceph.log
-rw------- 1 cephdeploy cephdeploy     73 Jul  2 14:03 ceph.mon.keyring

# Copy the configuration file and administration key to your administration node and Ceph node using ceph-deploy

[cephdeploy@ceph131 cephcluster]$ ceph-deploy admin ceph131 ceph132 ceph133
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephdeploy/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy admin ceph131 ceph132 ceph133
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ff46c014638>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph131', 'ceph132', 'ceph133']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7ff46c8b52a8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph131
[ceph131][DEBUG ] connection detected need for sudo
[ceph131][DEBUG ] connected to host: ceph131
[ceph131][DEBUG ] detect platform information from remote host
[ceph131][DEBUG ] detect machine type
[ceph131][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph132
[ceph132][DEBUG ] connection detected need for sudo
[ceph132][DEBUG ] connected to host: ceph132
[ceph132][DEBUG ] detect platform information from remote host
[ceph132][DEBUG ] detect machine type
[ceph132][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph133
[ceph133][DEBUG ] connection detected need for sudo
[ceph133][DEBUG ] connected to host: ceph133
[ceph133][DEBUG ] detect platform information from remote host
[ceph133][DEBUG ] detect machine type
[ceph133][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

2.2.5 Deploying the MGR service

# Create the MGR daemons; no errors should be reported

ceph-deploy mgr create ceph131 ceph132 ceph133

2.2.6 Adding OSDs

# Create the OSDs; no errors should be reported

ceph-deploy osd create --data /dev/sdb ceph131
ceph-deploy osd create --data /dev/sdb ceph132
ceph-deploy osd create --data /dev/sdb ceph133

2.2.7 Verifying the Cluster Status

[root@ceph132 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
-1       0.09357 root default
-3       0.03119     host ceph131
 0   hdd 0.03119         osd.0        up  1.00000 1.00000
-5       0.03119     host ceph132
 1   hdd 0.03119         osd.1        up  1.00000 1.00000
-7       0.03119     host ceph133
 2   hdd 0.03119         osd.2        up  1.00000 1.00000

3 Expanding Cluster Services ==(enable services as required)==

==All ceph-deploy operations must be performed as the cephdeploy user and from the configuration directory!!==

[cephdeploy@ceph131 cephcluster]$ pwd
/home/cephdeploy/cephcluster

3.1 Adding a Metadata Server (MDS)

ceph-deploy mds create ceph132 
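A quick way to check the MDS state; it will report a standby daemon until a file system is created in section 4.1:

ceph mds stat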

3.2 Adding a Monitoring End (MON)

ceph-deploy mon add ceph132 ceph133

# Once you have added new Ceph monitors, Ceph begins synchronizing the monitors and forming a quorum. You can check the quorum status with:

ceph quorum_status --format json-pretty

3.3 Adding Manager Daemons (MGR)

# The Ceph Manager daemon operates in active/standby mode. Deploying additional manager daemons ensures that if one daemon or host fails, another can take over without interrupting service.

ceph-deploy mgr create ceph132 ceph133

# validation

ceph -s

3.4 Adding an Object Storage Gateway (RGW)

ceph-deploy rgw create ceph131

By default, RGW instances will listen on port 7480. This can be changed by editing ceph.conf on the node where RGW is running, as shown below:

[client]
rgw frontends = civetweb port=80

Restart the service after modifying the port

systemctl restart ceph-radosgw.service
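A quick check that the gateway answers on the new port (this assumes the port was changed to 80 as above; any HTTP client works, curl is used here):

curl http://172.16.1.131:80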

4 Storage Deployment

4.1 Enabling CephFS

4.1.1 Ensure that the MDS service is enabled on at least one node

==The number of PGs can also be calculated with the calculator on the official website.==

==Estimated number of PGs per pool: Total PGs = (number of OSDs × 100) / maximum replica count / number of pools; round the result to the nearest power of 2.==
# Run the pool-creation commands below on one of the Ceph nodes (ceph132 here); a worked example of the PG formula comes first.
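A rough illustration of the formula using this lab's numbers (3 OSDs, 3 replicas, 2 pools); this is only a sketch, not one of the original steps:

# (OSDs * 100) / replicas / pools, then round to a power of 2
osds=3; replicas=3; pools=2
raw=$(( osds * 100 / replicas / pools ))                      # 50
pg=1; while (( pg * 2 <= raw )); do pg=$(( pg * 2 )); done    # nearest lower power of 2
echo "raw=$raw -> ${pg} (or $(( pg * 2 )) if rounding up)"
# A tiny lab cluster like this one simply uses 16 PGs per pool.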

[root@ceph132 ~]# ceph osd pool create cephfs_data 16
pool 'cephfs_data' created
[root@ceph132 ~]# ceph osd pool create cephfs_metadata 16
pool 'cephfs_metadata' created
[root@ceph132 ~]# ceph fs new cephfs_storage cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
[root@ceph132 ~]# ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED
    hdd       96 GiB     93 GiB     11 MiB      3.0 GiB          3.14
    TOTAL     96 GiB     93 GiB     11 MiB      3.0 GiB          3.14

POOLS:
    POOL                ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    cephfs_data          1         0 B           0         0 B         0        29 GiB
    cephfs_metadata      2     2.2 KiB          22     1.5 MiB         0        29 GiB

4.1.2 Mounting CephFS

Create the secret (key) file on the client

[root@ceph133 ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
	key = AQC5e/1eXm9WExAAmlD9aZoc2dZO6jbU8UXSqg==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
[root@ceph133 ~]# vim admin.secret
[root@ceph133 ~]# ll
total 8
-rw-r--r--  1 root root   41 Jul  2 15:17 admin.secret

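Instead of pasting the key by hand, the secret file can also be generated non-interactively. A sketch (it assumes the admin keyring shown above is readable on this node):

ceph auth get-key client.admin > admin.secret
chmod 600 admin.secret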

Mount the cephfs folder and verify

[root@ceph133 ~]# mkdir /mnt/cephfs_storage
[root@ceph133 ~]# mount -t ceph 172.16.1.133:6789:/ /mnt/cephfs_storage -o name=admin,secretfile=admin.secret
[root@ceph133 ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  983M     0  983M   0% /dev
tmpfs               tmpfs     995M     0  995M   0% /dev/shm
tmpfs               tmpfs     995M  8.6M  987M   1% /run
tmpfs               tmpfs     995M     0  995M   0% /sys/fs/cgroup
/dev/sda1           xfs        20G  2.2G   18G  11% /
tmpfs               tmpfs     199M     0  199M   0% /run/user/0
tmpfs               tmpfs     995M   52K  995M   1% /var/lib/ceph/osd/ceph-2
172.16.1.133:6789:/ ceph       30G     0   30G   0% /mnt/cephfs_storage
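To make the CephFS mount persistent across reboots, an /etc/fstab entry can be added. A sketch (it assumes the secret file is stored at the absolute path /root/admin.secret):

# /etc/fstab
172.16.1.133:6789:/  /mnt/cephfs_storage  ceph  name=admin,secretfile=/root/admin.secret,_netdev,noatime  0 0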

4.2 Enabling block Storage

4.2.1 Run the following on one of the Ceph cluster nodes (ceph132 here)

==Estimated number of PGs per pool: Total PGs = (number of OSDs × 100) / maximum replica count / number of pools; round the result to the nearest power of 2 (see the worked example in 4.1.1).==

[root@ceph132 ~]# ceph osd pool create rbd_storage 16 16 replicated
pool 'rbd_storage' created

Create a block device

[root@ceph132 ~]# rbd create --size 1024 rbd_image -p rbd_storage
[root@ceph132 ~]# rbd ls rbd_storage
rbd_image

# Delete command

rbd rm rbd_storage/rbd_image

# Additional RBD pool operation commands are listed below.
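The following are a few commonly used ones (listed for reference, not from the original; the pool and image names are the ones used in this lab):

rbd info rbd_storage/rbd_image                  # show image details
rbd du -p rbd_storage                           # show space usage of images in the pool
rbd resize --size 2048 rbd_storage/rbd_image    # grow the image to 2 GiB
rbd snap create rbd_storage/rbd_image@snap1     # create a snapshot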

4.2.2 Mounting the RBD Block Device

Map block devices to the system kernel

[root@ceph133 ~]# rbd map rbd_storage/rbd_image
/dev/rbd0
[root@ceph133 ~]# lsblk
NAME                                            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
rbd0                                            252:0    0    1G  0 disk
sdb                                               8:16   0   32G  0 disk
└─ceph--376ebd83--adf0--4ee1--b4b79bae048e-osd--block--4b0444fa--535e--40c7--b55a--167ab21dbf9b
                                                253:0    0   32G  0 lvm
sr0                                              11:0    1 1024M  0 rom
sda                                               8:0    0   20G  0 disk
└─sda1                                            8:1    0   20G  0 part /

Format the RBD device

[root@ceph133 ~]# mkfs.ext4 -m0 /dev/rbd/rbd_storage/rbd_image
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=16 blocks, Stripe width=16 blocks
65536 inodes, 262144 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Mount the RBD device

[root@ceph133 ~]# mkdir /mnt/rbd_storage
[root@ceph133 ~]# mount /dev/rbd/rbd_storage/rbd_image /mnt/rbd_storage
[root@ceph133 ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  983M     0  983M   0% /dev
tmpfs               tmpfs     995M     0  995M   0% /dev/shm
tmpfs               tmpfs     995M  8.6M  987M   1% /run
tmpfs               tmpfs     995M     0  995M   0% /sys/fs/cgroup
/dev/sda1           xfs        20G  2.2G   18G  11% /
tmpfs               tmpfs     199M     0  199M   0% /run/user/0
tmpfs               tmpfs     995M   52K  995M   1% /var/lib/ceph/osd/ceph-2
172.16.1.133:6789:/ ceph       30G     0   30G   0% /mnt/cephfs_storage
/dev/rbd0           ext4      976M  2.6M  958M   1% /mnt/rbd_storage

# Unmap the device from the kernel

rbd unmap /dev/rbd0
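To re-create the kernel mapping automatically at boot, the rbdmap service shipped with ceph-common can be used. A sketch (the image and keyring paths are the ones used in this lab):

# /etc/ceph/rbdmap
rbd_storage/rbd_image id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# enable the boot-time mapping service
systemctl enable rbdmap.service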

4.3 Enabling RGW Object Storage

4.3.1 Ensure that the RGW service is enabled on at least one node

# To enable RGW, see section 3.4.
# Open http://172.16.1.131:7480/ in a browser; if a page is returned, RGW is enabled successfully.

5 Enable the Ceph dashboard

5.1 Enabling the dashboard

# The Ceph Dashboard is a built-in web-based management and monitoring application for the cluster.
# The ceph-mgr-dashboard module is not installed by default, so install it first (on the MGR nodes).

yum install ceph-mgr-dashboard -y

# Enable the dashboard (can be run from any node)

[root@ceph132 ~]# ceph mgr module enable dashboard

Configure login authentication

[root@ceph132 ~]# ceph dashboard create-self-signed-cert
Self-signed certificate created

Configure the login account

[root@ceph132 ~]# ceph dashboard ac-user-create admin admin.123 administrator
{"username": "admin"."lastUpdate": 1593679011, "name": null, "roles": ["administrator"]."password": "$2b$12$kQYtMXun1jKdTDKwjfuNj.WYyJcr3vSHLTMWfXIi.wrKkFRCmmC1."."email": null}


# Test the login in a browser: https://172.16.1.131:8443/  username: admin  password: admin.123

[root@ceph132 ~]# ceph mgr services
{
    "dashboard": "https://ceph131:8443/"
}


5.2 Enabling RGW in the Dashboard

# Create user for RGW

[root@ceph131 ~]# radosgw-admin user create --uid=rgwadmin --display-name=rgwadmin --system
{
    "user_id": "rgwadmin",
    "display_name": "rgwadmin",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "rgwadmin",
            "access_key": "VHK8BZYA3F5BDWMJFUER",
            "secret_key": "WVBdt66ZbmGe0l6Wu3LoPKM1GlzF6V35JCpNKPJw"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "system": "true",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

Set the dashboard credentials to the keys generated for the new user

[root@ceph131 ~]# ceph dashboard set-rgw-api-access-key VHK8BZYA3F5BDWMJFUER
Option RGW_API_ACCESS_KEY updated
[root@ceph131 ~]# ceph dashboard set-rgw-api-secret-key WVBdt66ZbmGe0l6Wu3LoPKM1GlzF6V35JCpNKPJw
Option RGW_API_SECRET_KEY updated

# Disable SSL verification for the RGW API

[root@ceph131 ~]# ceph dashboard set-rgw-api-ssl-verify False
Option RGW_API_SSL_VERIFY updated

# Enable RGW dashboard

[root@ceph131 ~]# ceph dashboard set-rgw-api-host 172.16.1.131
Option RGW_API_HOST updated
[root@ceph131 ~]# ceph dashboard set-rgw-api-port 7480
Option RGW_API_PORT updated
[root@ceph131 ~]# ceph dashboard set-rgw-api-scheme http
Option RGW_API_SCHEME updated
[root@ceph131 ~]# ceph dashboard set-rgw-api-admin-resource admin
Option RGW_API_ADMIN_RESOURCE updated
[root@ceph131 ~]# ceph dashboard set-rgw-api-user-id rgwadmin
Option RGW_API_USER_ID updated
[root@ceph131 ~]# systemctl restart ceph-radosgw.target

X. Problems encountered during the deployment

eg1.[root@ceph131 cephcluster]# ceph-deploy mon create-initial
[ceph131][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph131.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph131 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[ceph131][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph131.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph131 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[ceph131][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph131.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph131 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[ceph131][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph131.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph131 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[ceph131][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph131.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph131 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] ceph131
Solution: check that public_network and mon_host in ceph.conf are correct, and check whether an IPv6 entry was added to the hosts file.

eg2. [ceph_deploy][ERROR ] IOError: [Errno 13] Permission denied: '/root/cephcluster/ceph-deploy-ceph.log'
Solution:
[root@ceph131 cephcluster]# echo "cephdeploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephdeploy
cephdeploy ALL = (root) NOPASSWD:ALL
[root@ceph131 cephcluster]# chmod 0440 /etc/sudoers.d/cephdeploy
[root@ceph131 cephcluster]# cat /etc/sudoers.d/cephdeploy
cephdeploy ALL = (root) NOPASSWD:ALL
[root@ceph131 cephcluster]# usermod -G root cephdeploy

eg3. [root@ceph131 ~]# ceph -s
  cluster:
    id:     76235629-6feb-4f0c-a106-4be33d485535
    health: HEALTH_WARN
            clock skew detected on mon.ceph133
Cause: the clocks on the monitors are skewed.
Solution:
1 On the deploy node, go to the deployment directory
  [cephdeploy@ceph131 cephcluster]$ pwd
  /home/cephdeploy/cephcluster
2 Edit ceph.conf and add the following fields
  # Set the allowed clock drift
  mon clock drift allowed = 2
  mon clock drift warn backoff = 30
3 Push the configuration again
  [cephdeploy@ceph131 cephcluster]$ ceph-deploy --overwrite-conf config push ceph{131..133}
4 Restart the mon
  [root@ceph133 ~]# systemctl restart [email protected]

eg4.[root@ceph132 ~]# ceph mgr module enable dashboard
Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement
Cause: ceph-mgr-dashboard is not installed on the node. Install it on the MGR node:
yum install ceph-mgr-dashboard

eg5. [root@ceph133 ~]# ceph -s
  cluster:
    id:     76235629-6feb-4f0c-a106-4be33d485535
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
[root@ceph133 ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'rbd_storage'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Solution:
ceph osd pool application enable rbd_storage rbd