1 Environment Introduction and Configuration

1.1 Introduction to Ceph

Ceph supports three types of interfaces:
1. Object: has a native API and is also compatible with the Swift and S3 APIs.
2. Block: supports thin provisioning, snapshots, and cloning.
3. File: a POSIX-compliant interface that supports snapshots.

In brief, the trade-offs between the three: object storage scales well and is accessed over HTTP (S3/Swift) but cannot be mounted as a filesystem; block storage gives the best performance for VM and database workloads but is normally attached to one client at a time; the file interface provides shared POSIX access but requires additional MDS daemons.
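To make the three interfaces concrete, here are hypothetical client-side commands (the host, pool, bucket, and image names are placeholders, not taken from this deployment) showing how each one is typically consumed:

# Object: upload a file through the S3-compatible RGW endpoint
s3cmd put ./backup.tar.gz s3://my-bucket/backup.tar.gz
# Block: create a thin-provisioned RBD image and map it as a local block device
rbd create mypool/vm-disk1 --size 10G
rbd map mypool/vm-disk1        # shows up as /dev/rbd0
# File: mount CephFS through the kernel POSIX client
mount -t ceph ceph135:6789:/ /mnt/cephfs -o name=admin,secret=<admin-key>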

1.2 Environment Introduction

[root@ceph135 ~]# cat /etc/redhat-release
CentOS Linux release 8.1.1911 (Core)

# Ceph release
Octopus 15.2.3

# Systems supported by this release
CentOS 8
CentOS 7 (partial - see below)
Ubuntu 18.04 (Bionic)
Debian Buster
Container image

# Partial support
The dashboard, prometheus, and restful manager modules cannot be used on the CentOS 7 build because CentOS 7 lacks the required Python 3 modules.

# Network design
172.16.1.0/24 #Management Network
172.16.2.0/24 #Public Network
172.16.3.0/24 #Cluster Network

In addition to the system disk, two 30 GB data disks are attached to each Ceph node:

ceph135  eth0: 172.16.1.135  eth1: 172.16.2.135  eth2: 172.16.3.135  1C1G
ceph136  eth0: 172.16.1.136  eth1: 172.16.2.136  eth2: 172.16.3.136  1C1G
ceph137  eth0: 172.16.1.137  eth1: 172.16.2.137  eth2: 172.16.3.137  1C1G

1.3 Preparing the Basic Environment

1.3.1 Disabling SELinux and the Firewall
# Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
firewall-cmd --state
# Disable SELinux
sed -i '/^SELINUX=.*/c SELINUX=disabled' /etc/selinux/config
grep --color=auto '^SELINUX' /etc/selinux/config
setenforce 0
reboot
1.3.2 Setting the Hostname on Each Host
hostnamectl set-hostname ceph135
su -
1.3.3 Setting the NIC IP Addresses (change the NIC name and IP address as needed for each node)

# Either edit /etc/sysconfig/network-scripts/ifcfg-eth0 directly, or recreate the connection with nmcli:

NetName=eth0
rm -f /etc/sysconfig/network-scripts/ifcfg-$NetName
nmcli con add con-name $NetName ifname $NetName autoconnect yes type ethernet \
  ip4 172.16.1.135/24 ipv4.dns "114.114.114.114" ipv4.gateway "172.16.1.254"
# After setting, reload the network connections
nmcli c reload
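The public and cluster network NICs can be configured the same way. A sketch for the remaining interfaces on ceph135 (adjust the last octet on the other nodes; no gateway is needed on these internal networks):

# Public network
nmcli con add con-name eth1 ifname eth1 autoconnect yes type ethernet ip4 172.16.2.135/24
# Cluster network
nmcli con add con-name eth2 ifname eth2 autoconnect yes type ethernet ip4 172.16.3.135/24
nmcli c reload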

# If several NICs are active at the same time, pin the default route to eth0 by adding the following line to /etc/sysconfig/network-scripts/ifcfg-eth0:

IPV4_ROUTE_METRIC=0
1.3.4 Adding Ceph Node Information to /etc/hosts

#vim /etc/hosts

#[ceph]
172.16.2.135 ceph135
172.16.2.136 ceph136
172.16.2.137 ceph137
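The same hosts entries need to exist on every node. One simple way to propagate the file (assuming root SSH login is still allowed at this stage):

scp /etc/hosts root@ceph136:/etc/hosts
scp /etc/hosts root@ceph137:/etc/hosts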
1.3.5 Adding the yum Source for Octopus

#vim /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/SRPMS
enabled=1
gpgcheck=0
type=rpm-md

# Change the system yum source to the Aliyun mirror and rebuild the yum cache

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-8.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

yum clean all && yum makecache
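Optionally, confirm that the Ceph repositories are now visible before installing anything:

dnf repolist enabled | grep -i ceph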
1.3.6 Time Synchronization

I prefer to synchronize time in the following way:

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm

dnf install wntp

ntpdate ntp3.aliyun.com 

echo "*/3 * * * * ntpdate ntp3.aliyun.com &> /dev/null" > /tmp/crontab

crontab /tmp/crontab
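Alternatively, chrony can be used; cephadm's host check (visible later in the bootstrap output) expects a running time-synchronization daemon such as chronyd. A sketch, assuming the stock CentOS 8 chrony package and the same upstream NTP server:

dnf install -y chrony
sed -i 's/^pool .*/pool ntp3.aliyun.com iburst/' /etc/chrony.conf
systemctl enable --now chronyd
chronyc sources -v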
1.3.7 (Optional) Installing Basic Software
yum install net-tools wget vim bash-completion lrzsz unzip zip -y

2 Ceph installation and configuration

2.1 Cephadm deployment

cephadm-based deployment is supported starting with version 15 (Octopus); ceph-deploy is only supported up to version 14.

curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm

Use cephadm to add the Octopus repository and install the latest cephadm package:

[root@ceph135 ~]# dnf install -y python3 podman    # install on every node
[root@ceph135 ~]# ./cephadm add-repo --release octopus

INFO:root:Writing repo to /etc/yum.repos.d/ceph.repo...
INFO:cephadm:Enabling EPEL...
[root@ceph135 ~]# ./cephadm install
INFO:cephadm:Installing packages ['cephadm']...
[root@ceph135 ~]# which cephadm
/usr/sbin/cephadm

2.2 Creating a Ceph Cluster

2.2.1 Specifying the Management Node

Bootstrap the cluster on the first node. Specify a mon IP on a network reachable by any host that will access the Ceph cluster; the generated configuration files are written to /etc/ceph.

[root@ceph135 ~]# mkdir -p /etc/ceph
[root@ceph135 ~]# cephadm bootstrap --mon-ip 172.16.2.135
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/podman) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: b3add0aa-aee7-11ea-a3e4-5e7ce92c6bef
INFO:cephadm:Verifying IP 172.16.2.135 port 3300 ...
INFO:cephadm:Verifying IP 172.16.2.135 port 6789 ...
INFO:cephadm:Mon IP 172.16.2.135 is in CIDR network 172.16.2.0/24
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:Extracting ceph user uid/gid from container image...
INFO:cephadm:Creating initial keys...
INFO:cephadm:Creating initial monmap...
INFO:cephadm:Creating mon...
INFO:cephadm:Waiting for mon to start...
INFO:cephadm:Waiting for mon...
INFO:cephadm:Assimilating anything we can from ceph.conf...
INFO:cephadm:Generating new minimal ceph.conf...
INFO:cephadm:Restarting the monitor...
INFO:cephadm:Setting mon public_network...
INFO:cephadm:Creating mgr...
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start...
INFO:cephadm:Waiting for mgr...
INFO:cephadm:mgr not available, waiting (1/10)...
INFO:cephadm:mgr not available, waiting (2/10)...
INFO:cephadm:Enabling cephadm module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 5...
INFO:cephadm:Setting orchestrator backend to cephadm...
INFO:cephadm:Generating ssh key...
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost's authorized_keys...
INFO:cephadm:Adding host ceph135...
INFO:cephadm:Deploying mon service with default placement...
INFO:cephadm:Deploying mgr service with default placement...
INFO:cephadm:Deploying crash service with default placement...
INFO:cephadm:Enabling mgr prometheus module...
INFO:cephadm:Deploying prometheus service with default placement...
INFO:cephadm:Deploying grafana service with default placement...
INFO:cephadm:Deploying node-exporter service with default placement...
INFO:cephadm:Deploying alertmanager service with default placement...
INFO:cephadm:Enabling the dashboard module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 12...
INFO:cephadm:Generating a dashboard self-signed certificate...
INFO:cephadm:Creating initial admin user...
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:

	     URL: https://ceph135:8443/
	    User: admin
	Password: vcxbz7cubp

INFO:cephadm:You can access the Ceph CLI with:

	sudo /usr/sbin/cephadm shell --fsid b3add0aa-aee7-11ea-a3e4-5e7ce92c6bef -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

	ceph telemetry on

For more information see:

	https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.

# At this point, log in at https://ceph135:8443/. On first login you are required to change the password; then verify that the dashboard is reachable.

2.2.2 Making the ceph Command Available Locally

# cephadm does not require any Ceph packages to be installed on the host, but it is convenient to have easy access to the ceph command. The cephadm shell command starts a bash shell inside a container that has all the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so that the shell is fully functional.

[root@ceph135 ~]# cephadm shell
INFO:cephadm:Inferring fsid 9849edac-a547-11ea-a767-12702e1b568d
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
[ceph: root@ceph135 /]# alias ceph='cephadm shell -- ceph'
[ceph: root@ceph135 /]# exit
exit

[root@ceph135 ~]# cephadm install ceph-common
INFO:cephadm:Installing packages ['ceph-common']...

[root@ceph135 ~]# ceph -v
ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)
[root@ceph135 ~]# ceph status
  cluster:
    id:     b3add0aa-aee7-11ea-a3e4-5e7ce92c6bef
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph135 (age 19m)
    mgr: ceph135.omlfxo(active, since 15m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown

[root@ceph135 ~]# ceph health
HEALTH_WARN Reduced data availability: 1 pg inactive; OSD count 0 < osd_pool_default_size 3
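Note that an alias defined inside the container shell (as in the transcript above) is lost when the shell exits. If you would rather not install ceph-common on the host, the alias can instead be defined in the host's own shell, so that every ceph invocation runs in a temporary container:

alias ceph='cephadm shell -- ceph'
ceph status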
2.2.3 Adding Servers to the Ceph Cluster
[root@ceph135 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph136
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
The authenticity of host 'ceph136 (172.16.2.136)' can't be established.
ECDSA key fingerprint is SHA256:UiF5sLefJuaY6uueUxyu0t0Xdeha8BPZXGvQHZrco1M.
ECDSA key fingerprint is MD5:87:59:6e:b5:42:6d:c4:02:d8:ef:29:56:4e:0d:1d:09.
Are you sure you want to continue connecting (yes/no)? yes
root@ceph136's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@ceph136'"
and check to make sure that only the key(s) you wanted were added.

[root@ceph135 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph137
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
The authenticity of host 'ceph137 (172.16.2.137)' can't be established.
ECDSA key fingerprint is SHA256:UiF5sLefJuaY6uueUxyu0t0Xdeha8BPZXGvQHZrco1M.
ECDSA key fingerprint is MD5:87:59:6e:b5:42:6d:c4:02:d8:ef:29:56:4e:0d:1d:09.
Are you sure you want to continue connecting (yes/no)? yes
root@ceph137's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@ceph137'"
and check to make sure that only the key(s) you wanted were added.

[root@ceph135 ~]# ceph orch host add ceph136
Added host 'ceph136'
[root@ceph135 ~]# ceph orch host add ceph137
Added host 'ceph137'
2.2.4 Deploying Additional Monitors

Set the public_network segment that clients use to access the cluster:

ceph config set mon public_network 172.16.2.0/24

Label the nodes on which mon daemons should run:

[root@ceph135 ~]# ceph orch host label add ceph135 mon
Added label mon to host ceph135
[root@ceph135 ~]# ceph orch host label add ceph136 mon
Added label mon to host ceph136
[root@ceph135 ~]# ceph orch host label add ceph137 mon
Added label mon to host ceph137
[root@ceph135 ~]# ceph orch host ls
HOST     ADDR     LABELS  STATUS
ceph135  ceph135  mon
ceph136  ceph136  mon
ceph137  ceph137  mon

# Tell cephadm to deploy mon daemons according to the label; this takes a while because each node has to pull the container image and start the daemon

[root@ceph135 ~]# ceph orch apply mon label:mon
Scheduled mon update...
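# As an alternative to label-based placement, the mon hosts can also be listed explicitly; this form (not used in this walkthrough) is equivalent here:
ceph orch apply mon "ceph135,ceph136,ceph137"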
# To verify that the deployment is complete, check the other two nodes:
[root@ceph136 ~]# podman ps -a
CONTAINER ID  IMAGE                                COMMAND               CREATED         STATUS             PORTS  NAMES
a24ab51b5f62  docker.io/prom/node-exporter:latest  --no-collector.ti...  5 minutes ago   Up 5 minutes ago          ceph-b3add0aa-ae
37ef832554fd  docker.io/ceph/ceph:v15              -n mon.ceph136 -f...  6 minutes ago   Up 6 minutes ago          ceph-b3add0aa-ae
10122c06ad1a  docker.io/ceph/ceph:v15              -n mgr.ceph136.iy...  7 minutes ago   Up 7 minutes ago          ceph-b3add0aa-ae
df5275a6684f  docker.io/ceph/ceph:v15              -n client.crash.c...  12 minutes ago  Up 12 minutes ago         ceph-b3add0aa-ae
[root@ceph136 ~]# podman images
REPOSITORY                    TAG     IMAGE ID      CREATED      SIZE
docker.io/ceph/ceph           v15     d72755c420bc  2 weeks ago  1.13 GB
docker.io/prom/node-exporter  latest  14191dbfb45b  2 weeks ago  27.7 MB
2.2.5 Deploying OSDs

# View available hard drives

[root@ceph135 ~]# ceph orch device ls
HOST     PATH      TYPE   SIZE  DEVICE                     AVAIL  REJECT REASONS
ceph135  /dev/sdb  hdd   32.0G  QEMU_HARDDISK_drive-scsi1  True
ceph135  /dev/sdc  hdd   32.0G  QEMU_HARDDISK_drive-scsi2  True
ceph135  /dev/sda  hdd   20.0G  QEMU_HARDDISK_drive-scsi0  False  locked
ceph136  /dev/sdb  hdd   32.0G  QEMU_HARDDISK_drive-scsi1  True
ceph136  /dev/sdc  hdd   32.0G  QEMU_HARDDISK_drive-scsi2  True
ceph136  /dev/sda  hdd   20.0G  QEMU_HARDDISK_drive-scsi0  False  locked
ceph137  /dev/sdb  hdd   32.0G  QEMU_HARDDISK_drive-scsi2  True
ceph137  /dev/sdc  hdd   32.0G  QEMU_HARDDISK_drive-scsi1  True
ceph137  /dev/sda  hdd   20.0G  QEMU_HARDDISK_drive-scsi0  False  locked

For convenience, I use all available hard disks directly here

[root@ceph135 ~]# ceph orch apply osd --all-available-devices
NAME                  HOST    DATA     DB WAL
all-available-devices ceph135 /dev/sdb -  -
all-available-devices ceph135 /dev/sdc -  -
all-available-devices ceph136 /dev/sdb -  -
all-available-devices ceph136 /dev/sdc -  -
all-available-devices ceph137 /dev/sdb -  -
all-available-devices ceph137 /dev/sdc -  -
# Add a single disk
ceph orch daemon add osd ceph135:/dev/sdb

# Verify deployment

[root@ceph135 ~]# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META   AVAIL    %USE  VAR   PGS  STATUS
 0  hdd    0.03119  1.00000   32 GiB   1.0 GiB  5.4 MiB  0 B   1 GiB  31 GiB   3.14  1.00    1  up
 1  hdd    0.03119  1.00000   32 GiB   1.0 GiB  5.4 MiB  0 B   1 GiB  31 GiB   3.14  1.00    1  up
 2  hdd    0.03119  1.00000   32 GiB   1.0 GiB  5.4 MiB  0 B   1 GiB  31 GiB   3.14  1.00    1  up
 3  hdd    0.03119  1.00000   32 GiB   1.0 GiB  5.4 MiB  0 B   1 GiB  31 GiB   3.14  1.00    1  up
 4  hdd    0.03119  1.00000   32 GiB   1.0 GiB  5.4 MiB  0 B   1 GiB  31 GiB   3.14  1.00    1  up
 5  hdd    0.03119  1.00000   32 GiB   1.0 GiB  5.4 MiB  0 B   1 GiB  31 GiB   3.14  1.00    1  up
                      TOTAL   192 GiB  6.0 GiB   32 MiB  0 B   6 GiB  186 GiB  3.14
MIN/MAX VAR: 1.00/1.00  STDDEV: 0

3 Storage Deployment

3.1 CephFS Deployment

Deploy the CephFS MDS service, specifying the service name and the number of MDS daemons:

[root@ceph135 ~]# ceph orch apply mds fs-cluster --placement=3
Scheduled mds.fs-cluster update...

# validation:

[root@ceph135 ~]# ceph -s
  cluster:
    id:     b3add0aa-aee7-11ea-a3e4-5e7ce92c6bef
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph135,ceph136,ceph137 (age 47s)
    mgr: ceph135.omlfxo(active, since 89m), standbys: ceph136.iyehke, ceph137.fywkvw
    mds:  3 up:standby
    osd: 6 osds: 6 up (since 20m), 6 in (since 20m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 186 GiB / 192 GiB avail
    pgs:     1 active+clean
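Note that all three MDS daemons are shown as up:standby because no CephFS filesystem exists yet. A minimal sketch of creating one for them to serve (the pool names and PG counts below are assumptions, not part of the original deployment):

ceph osd pool create cephfs_metadata 32
ceph osd pool create cephfs_data 64
ceph fs new fs-cluster cephfs_metadata cephfs_data
ceph fs status fs-cluster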

3.2 RGW Deployment

Create a realm:

[root@ceph135 ~]# radosgw-admin realm create --rgw-realm=rgw-org --default
{
    "id": "31424ff4-38a1-48d9-bab4-fcfe8d75efcc",
    "name": "rgw-org",
    "current_period": "06f0511d-58cd-4acd-aac1-da25ea785454",
    "epoch": 1
}

Create a zoneGroup

[root@ceph135 ~]# radosgw-admin zonegroup create --rgw-zonegroup=rgwgroup --master --default
{
    "id": "35dcfee7-fa47-4e53-b41d-9718fd029782",
    "name": "rgwgroup",
    "api_name": "rgwgroup",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "",
    "zones": [],
    "placement_targets": [],
    "default_placement": "",
    "realm_id": "31424ff4-38a1-48d9-bab4-fcfe8d75efcc",
    "sync_policy": {
        "groups": []
    }
}

# Create a zone

[root@ceph135 ~]# radosgw-admin zone create --rgw-zonegroup=rgwgroup --rgw-zone=zone-dc1 --master --default
{
    "id": "ec441ad3-1167-459d-9d1c-cf21e5625cbf",
    "name": "zone-dc1",
    "domain_root": "zone-dc1.rgw.meta:root",
    "control_pool": "zone-dc1.rgw.control",
    "gc_pool": "zone-dc1.rgw.log:gc",
    "lc_pool": "zone-dc1.rgw.log:lc",
    "log_pool": "zone-dc1.rgw.log",
    "intent_log_pool": "zone-dc1.rgw.log:intent",
    "usage_log_pool": "zone-dc1.rgw.log:usage",
    "roles_pool": "zone-dc1.rgw.meta:roles",
    "reshard_pool": "zone-dc1.rgw.log:reshard",
    "user_keys_pool": "zone-dc1.rgw.meta:users.keys",
    "user_email_pool": "zone-dc1.rgw.meta:users.email",
    "user_swift_pool": "zone-dc1.rgw.meta:users.swift",
    "user_uid_pool": "zone-dc1.rgw.meta:users.uid",
    "otp_pool": "zone-dc1.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "zone-dc1.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "zone-dc1.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "zone-dc1.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "realm_id": "31424ff4-38a1-48d9-bab4-fcfe8d75efcc"
}


# Deploy a set of radosgw daemons for the realm and zone created above; here RGW is enabled on only two of the nodes

[root@ceph135 ~]# ceph orch apply rgw rgw-org zone-dc1 --placement="2 ceph136 ceph137"
Scheduled rgw.rgw-org.zone-dc1 update...

# validation

[root@ceph135 ~]# ceph -s
  cluster:
    id:     b3add0aa-aee7-11ea-a3e4-5e7ce92c6bef
    health: HEALTH_WARN
            1 daemons have recently crashed

  services:
    mon: 3 daemons, quorum ceph135,ceph136,ceph137 (age 9m)
    mgr: ceph135.omlfxo(active, since 108m), standbys: ceph136.iyehke, ceph137.fywkvw
    mds:  3 up:standby
    osd: 6 osds: 6 up (since 39m), 6 in (since 39m)
    rgw: 2 daemons active (rgw-org.zone-dc1.ceph136.ddujbi, rgw-org.zone-dc1.ceph137.mnfhhp)

  task status:

  data:
    pools:   5 pools, 129 pgs
    objects: 105 objects, 5.4 KiB
    usage:   6.1 GiB used, 186 GiB / 192 GiB avail
    pgs:     129 active+clean

  progress:
    PG autoscaler decreasing pool 5 PGs from 32 to 8 (0s)
      [............................]

Create an admin user for RGW:

[root@ceph135 ~]# radosgw-admin user create --uid=admin --display-name=admin --system
{
    "user_id": "admin",
    "display_name": "admin",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "admin",
            "access_key": "XY518C4I2RO51D4S2JGT",
            "secret_key": "e9akFxQwOM8Y9zxDum4CLCQEOXaImVomGiqIsutC"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "system": "true",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

# Set dashboard credentials

[root@ceph135 ~]# ceph dashboard set-rgw-api-access-key XY518C4I2RO51D4S2JGT
Option RGW_API_ACCESS_KEY updated
[root@ceph135 ~]# ceph dashboard set-rgw-api-secret-key e9akFxQwOM8Y9zxDum4CLCQEOXaImVomGiqIsutC
Option RGW_API_SECRET_KEY updated

Disable certificate verification, use plain HTTP, and point the dashboard at the RGW API host, port, and admin user:

[root@ceph135 ~]# ceph dashboard set-rgw-api-ssl-verify False
Option RGW_API_SSL_VERIFY updated
[root@ceph135 ~]# ceph dashboard set-rgw-api-scheme http
Option RGW_API_SCHEME updated
[root@ceph135 ~]# ceph dashboard set-rgw-api-host 172.16.2.137
Option RGW_API_HOST updated
[root@ceph135 ~]# ceph dashboard set-rgw-api-port 80
Option RGW_API_PORT updated
[root@ceph135 ~]# ceph dashboard set-rgw-api-user-id admin
Option RGW_API_USER_ID updated

# restart RGW

[root@ceph135 ~]# ceph orch restart rgw
restart rgw.rgw-org.zone-dc1.ceph136.ddujbi from host 'ceph136'
restart rgw.rgw-org.zone-dc1.ceph137.mnfhhp from host 'ceph137'
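To confirm that RGW is actually serving S3 requests, a quick smoke test can be run with the AWS CLI (assuming awscli is installed on a machine with access to the cluster and that RGW listens on port 80, as configured above; the keys are the ones generated for the admin user):

aws configure set aws_access_key_id XY518C4I2RO51D4S2JGT
aws configure set aws_secret_access_key e9akFxQwOM8Y9zxDum4CLCQEOXaImVomGiqIsutC
aws --endpoint-url http://ceph136:80 s3 mb s3://test-bucket
aws --endpoint-url http://ceph136:80 s3 ls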

X. Problems encountered during the deployment

eg1. Multiple cluster fsids under /var/lib/ceph
[root@ceph135 ~]# cephadm shell
ERROR: Cannot infer an fsid, one must be specified: ['00482894-a564-11ea-8617-12702e1b568d', '9849edac-a547-11ea-a767-12702e1b568d']
Solution: delete the data of the old cluster and leave only the new cluster's folder.
[root@ceph135 ~]# cd /var/lib/ceph
[root@ceph135 ceph]# ls
00482894-a564-11ea-8617-12702e1b568d  9849edac-a547-11ea-a767-12702e1b568d
[root@ceph135 ceph]# rm -rf 9849edac-a547-11ea-a767-12702e1b568d/
[root@ceph135 ceph]# ll

eg2. cephadm cannot find Python 3
[root@ceph135 ~]# ./cephadm add-repo --release octopus
-bash: ./cephadm: /usr/bin/python3: bad interpreter: No such file or directory
Solution: dnf install python3

eg3. No container runtime installed
[root@ceph135 ~]# ./cephadm install
Unable to locate any of ['podman', 'docker']
Solution: dnf install -y podman

eg4. LVM tools missing
ERROR: lvcreate binary does not appear to be installed
Solution: dnf install -y lvm2 (lvcreate is provided by the lvm2 package)