Introduction to Ceph

Ceph is an open-source distributed storage system that scales to the petabyte level and provides object storage, block storage, and file storage with high performance, high availability, and scalability.

Recommended deployment network architecture diagram

Deployment

Deployment architecture diagram. This experiment deploys the Jewel release.

Experimental environment (Vagrantfile)

The lab1 node serves as both the admin node and a regular node, lab2 and lab3 serve only as nodes, and lab4 is used as the client node for testing Ceph.

# -*- mode: ruby -*-
# vi: set ft=ruby :

ENV["LC_ALL"] = "en_US.UTF-8"

Vagrant.configure("2") do |config|
    (1..4).each do |i|
      config.vm.define "lab#{i}" do |node|
        node.vm.box = "centos-7.4-docker-17"
        node.ssh.insert_key = false
        node.vm.hostname = "lab#{i}"
        node.vm.network "private_network", ip: "11.11.11.11#{i}"
        node.vm.provision "shell", inline: "echo hello from node #{i}"
        node.vm.provider "virtualbox" do |v|
          v.cpus = 3
          v.customize ["modifyvm", :id, "--name", "lab#{i}", "--memory", "3096"]
          file_to_disk = "lab#{i}_vdb.vdi"
          unless File.exist?(file_to_disk)
            # 50GB data disk
            v.customize ['createhd', '--filename', file_to_disk, '--size', 50 * 1024]
          end
          v.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', file_to_disk]
        end
      end
    end
end

Configure the Aliyun Ceph repository

Perform the following operations on all nodes

cat >/etc/yum.repos.d/ceph.repo<<EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
EOF
yum makecache
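Optionally, a quick check that yum now sees the Ceph repositories before continuing:

# The ceph and ceph-noarch repos should be listed as enabled
yum repolist enabled | grep -i ceph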

Install ceph-deploy on the admin node

Lab1 node

# The official source
# If you have already configured the Aliyun source above, there is no need to configure this one
# The Aliyun source is recommended because the official source is very slow
cat >/etc/yum.repos.d/ceph.repo<<EOF
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF

# Update system software
# This operation can be omitted
# yum update -y

# Install ceph-deploy
yum install -y ceph-deploy
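A quick way to verify the installation is to print the tool's version:

# Should print the installed ceph-deploy version
ceph-deploy --version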

Connect the admin node to the other nodes

After the installation, configure the admin node so that it can log in to every node (including the test node) over SSH without a password, using an account that has passwordless sudo permission.

# Execute on each node
useradd ceph
echo 'ceph' | passwd --stdin ceph
echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph
# Allow password login via sshd
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl reload sshd
# Do not require a TTY for sudo
sed -i 's/Default requiretty/#Default requiretty/' /etc/sudoers

# Configure hosts on all nodes
# Include the machine used for testing Ceph
# Be careful when experimenting with Vagrant:
# Vagrant automatically resolves the hostname to 127.0.0.1,
# so on every machine in the Ceph cluster
# comment out the line that resolves the local hostname to 127.0.0.1, as shown below
# 127.0.0.1	lab1	lab1
cat >>/etc/hosts<<EOF
11.11.11.111 lab1
11.11.11.112 lab2
11.11.11.113 lab3
11.11.11.114 lab4
EOF

# Execute on the admin node
# Create a ceph user and configure SSH keys
# The ceph user was already created when lab1 was set up as a node,
# so the first command may fail; ignore the error
useradd ceph
su - ceph
ssh-keygen
ssh-copy-id ceph@lab1
ssh-copy-id ceph@lab2
ssh-copy-id ceph@lab3
ssh-copy-id ceph@lab4
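Before creating the cluster, it is worth confirming that passwordless SSH and passwordless sudo actually work from the admin node; a minimal check could look like this:

# Run as the ceph user on lab1; each node should print its hostname and "root"
# without asking for any password
for h in lab1 lab2 lab3 lab4; do
  ssh ceph@$h 'hostname && sudo whoami'
done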

Create a cluster on the admin node

Perform the following operations on the lab1 node. Make sure the node hostnames are exactly lab1, lab2, and lab3; otherwise the deployment may fail.

# Do not use sudo or run the following command as root
su - ceph
mkdir my-cluster
cd my-cluster
# Create lab1 as the initial monitor
ceph-deploy new lab1

# View the generated configuration files
ls -l

# Edit ceph.conf
[global]
...
# If you have multiple NICs, configure the following options:
# public network is the public-facing network that carries the traffic of services the cluster provides to clients
# cluster network is the internal cluster network that carries data replication and other intra-cluster traffic
# The same NIC is used for both networks in this experiment; separate NICs are recommended in production
public network = 11.11.11.0/24
cluster network = 11.11.11.0/24

# Install the ceph packages
# If you follow the official documentation, ceph-deploy install will reconfigure the repos to the official Ceph source
# Due to network problems the installation may fail and need to be retried several times
# The ceph and ceph-radosgw packages will be installed
# ceph-deploy install lab1 lab2 lab3
# The Aliyun source is recommended, because installing through ceph-deploy is slow
# Install the packages manually with the following command instead of the official ceph-deploy install step
# Run the following on all nodes
yum install -y ceph ceph-radosgw

# Deploy the monitor and generate the keys
ceph-deploy mon create-initial
ls -l *.keyring

# Copy the configuration and admin keyring to the nodes
ceph-deploy admin lab1 lab2 lab3

# Deploy a manager daemon (required for Luminous, version 12, and later)
# Not needed when deploying Jewel, so the command below stays commented out
# ceph-deploy mgr create lab1

# Add OSD nodes using whole disks
# This experiment uses this method
# sdb is the name of the disk added to the VM
ceph-deploy osd create lab1:sdb lab2:sdb lab3:sdb

# Alternatively, add OSDs backed by a directory: first create the directory on each node
rm -rf /data/osd1
mkdir -pv /data/osd1
chmod 777 -R /data/osd1
chown ceph.ceph -R /data/osd1

# Add the OSD nodes backed by a file system directory
ceph-deploy osd prepare lab1:/data/osd1 lab2:/data/osd1 lab3:/data/osd1
ceph-deploy osd activate lab1:/data/osd1 lab2:/data/osd1 lab3:/data/osd1

# check status
ssh lab1 sudo ceph health
ssh lab1 sudo ceph -s
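Beyond ceph health and ceph -s, the OSD layout and space usage can also be inspected with standard commands:

# Show the CRUSH tree with hosts and OSDs
ssh lab1 sudo ceph osd tree
# Show overall and per-pool space usage
ssh lab1 sudo ceph df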

Clean up the cluster

If the installation fails, use the following commands to clean up and start over
ceph-deploy purge lab1 lab2 lab3
ceph-deploy purgedata lab1 lab2 lab3
ceph-deploy forgetkeys
rm ceph.*

Extending the cluster

Improve availability

  • Run a metadata server on lab1 for later use of CephFS
  • Run monitor and manager daemons on lab2 and lab3 to improve cluster availability
# A metadata server must be running in order to use CephFS
ceph-deploy mds create lab1

# Add monitors
ceph-deploy mon add lab2
ceph-deploy mon add lab3
ssh lab1 sudo ceph -s

# check the status of the monitor node.
ceph quorum_status --format json-pretty

# Add managers (required for Luminous, version 12, and later)
# Not needed when deploying Jewel, so the command below stays commented out
# ceph-deploy mgr create lab2 lab3
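A quick way to confirm that all three monitors have joined the quorum:

# Should list lab1, lab2 and lab3 in the quorum
ceph mon stat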

Deploy RGW (Ceph Object Gateway)

RGW provides object storage with S3- and Swift-compatible interfaces, so you can access Ceph with S3 or Swift command-line tools or SDKs.

# start RGW
ceph-deploy rgw create lab1

# Edit /etc/ceph/ceph.conf
# Make RGW listen on port 80
# lab1 is the hostname of the node running RGW
[client.rgw.lab1]
rgw_frontends = "civetweb port=80"

# restart RGW
systemctl restart [email protected]

# Access test
curl -i http://11.11.11.111/
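Since RGW also exposes a Swift-compatible interface, Swift access can be tested as well. This is only a rough sketch: it assumes an RGW user named foo already exists (created with radosgw-admin as in the S3 Object Storage section below) and that the python-swiftclient package is installed.

# Create a Swift subuser and generate a Swift secret key for it
radosgw-admin subuser create --uid=foo --subuser=foo:swift --access=full
radosgw-admin key create --subuser=foo:swift --key-type=swift --gen-secret

# Test with the swift CLI, replacing <swift_secret_key> with the generated key
swift -A http://11.11.11.111/auth/1.0 -U foo:swift -K '<swift_secret_key>' list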

Using Ceph storage

Application storage usage architecture diagram

Object storage

# Install ceph
yum install -y ceph

# Copy the configuration files to the machine that will be used as the ceph client
ceph-deploy admin lab4

# test
# Save file
echo 'hello ceph oject storage' > testfile.txt
ceph osd pool create mytest 8
rados put test-object-1 testfile.txt --pool=mytest

# List and read the file back
rados -p mytest ls
rados get test-object-1 testfile.txt.1 --pool=mytest
cat testfile.txt.1

# Check file location
ceph osd map mytest test-object-1

# delete file
rados rm test-object-1 --pool=mytest

# remove pool
ceph osd pool rm mytest mytest --yes-i-really-really-mean-it
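For reference, while the test pool still exists (before the final rm above) its usage and replication factor can be checked with standard commands:

# Per-pool object and space usage
rados df
# Replica count of the test pool
ceph osd pool get mytest size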

Block storage

# Install ceph
yum install -y ceph

# Copy the configuration files to the machine that will be used as the ceph client
ceph-deploy admin lab4

# Create a block device image
rbd create foo --size 4096 --image-feature layering
rbd info foo
rados -p rbd ls

# Map the image to a block device
sudo rbd map foo --name client.admin

# Create a file system on the block device
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo

# Mount and use it
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
cd /mnt/ceph-block-device
echo 'hello ceph block storage' > testfile.txt

# Clean up
cd ~
sudo umount -lf /mnt/ceph-block-device
sudo rbd unmap foo
rbd remove foo
rados -p rbd ls
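To map and mount the image automatically after a reboot, ceph-common ships an rbdmap helper; the following is only a sketch, and the keyring path shown is an assumption to adapt to your setup:

# /etc/ceph/rbdmap lists one image per line: <pool>/<image> id=<client>,keyring=<keyring path>
echo 'rbd/foo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring' | sudo tee -a /etc/ceph/rbdmap

# Map the listed images at boot, then add a normal /etc/fstab entry (with _netdev) for /dev/rbd/rbd/foo
sudo systemctl enable rbdmap.service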

S3 Object Storage

11.11.11.111 is the node where RGW is installed

# Install the packages
yum install -y ceph ceph-radosgw

# Copy the configuration files to the machine that will be used as the ceph client
ceph-deploy admin lab4

# Create the pools used by S3
ceph osd pool create .rgw 128 128
ceph osd pool create .rgw.root 128 128
ceph osd pool create .rgw.control 128 128
ceph osd pool create .rgw.gc 128 128
ceph osd pool create .rgw.buckets 128 128
ceph osd pool create .rgw.buckets.index 128 128
ceph osd pool create .rgw.buckets.extra 128 128
ceph osd pool create .log 128 128
ceph osd pool create .intent-log 128 128
ceph osd pool create .usage 128 128
ceph osd pool create .users 128 128
ceph osd pool create .users.email 128 128
ceph osd pool create .users.swift 128 128
ceph osd pool create .users.uid 128 128

# check
rados lspools

# Access test
curl -i http://11.11.11.111/

# Create an S3 user
# Save the access_key and secret_key returned by the following command
radosgw-admin user create --uid=foo --display-name=foo [email protected]

# Create an admin user
radosgw-admin user create --uid=admin --display-name=admin

# Allow admin to read and write user information
radosgw-admin caps add --uid=admin --caps="users=*"

# Allow admin to read and write usage information
radosgw-admin caps add --uid=admin --caps="usage=read,write"

# Install s3 test tools
yum install -y s3cmd

# Configure s3cmd with the access_key and secret_key saved above
s3cmd --configure

# Modify the generated configuration file
vim $HOME/.s3cfg
host_base = 11.11.11.111
host_bucket = 11.11.11.111/%(bucket)
use_https = False

# Create a bucket
s3cmd mb s3://mybucket
s3cmd ls

# Upload an object
echo 'hello ceph block storage s3' > hello.txt
s3cmd put hello.txt s3://mybucket

# List the objects
s3cmd ls s3://mybucket

# Download an object
cd /tmp
s3cmd get s3://mybucket/hello.txt
cd ~

# delete all objects under bucket
s3cmd del -rf s3://mybucket/
s3cmd ls -r s3://mybucket

# delete Bucket
s3cmd mb s3://mybucket1
s3cmd rb s3://mybucket1

# Delete an S3 user
radosgw-admin user rm --uid=foo
radosgw-admin user rm --uid=admin

# Remove the pools
ceph osd pool delete .rgw .rgw --yes-i-really-really-mean-it
ceph osd pool delete .rgw.root .rgw.root --yes-i-really-really-mean-it
ceph osd pool delete .rgw.control .rgw.control --yes-i-really-really-mean-it
ceph osd pool delete .rgw.gc .rgw.gc --yes-i-really-really-mean-it
ceph osd pool delete .rgw.buckets .rgw.buckets --yes-i-really-really-mean-it
ceph osd pool delete .rgw.buckets.index .rgw.buckets.index --yes-i-really-really-mean-it
ceph osd pool delete .rgw.buckets.extra .rgw.buckets.extra --yes-i-really-really-mean-it
ceph osd pool delete .log .log --yes-i-really-really-mean-it
ceph osd pool delete .intent-log .intent-log --yes-i-really-really-mean-it
ceph osd pool delete .usage .usage --yes-i-really-really-mean-it
ceph osd pool delete .users .users --yes-i-really-really-mean-it
ceph osd pool delete .users.email .users.email --yes-i-really-really-mean-it
ceph osd pool delete .users.swift .users.swift --yes-i-really-really-mean-it
ceph osd pool delete .users.uid .users.uid --yes-i-really-really-mean-it

CephFS storage

# Install ceph and ceph-fuse
yum install -y ceph ceph-fuse

# Copy the configuration files to the machine that will be used as the ceph client
ceph-deploy admin lab4

# CephFS requires two pools to store data and metadata separately
ceph osd pool create fs_data 128
ceph osd pool create fs_metadata 128
ceph osd lspools

# Create a CephFS file system
ceph fs new cephfs fs_metadata fs_data

# check
ceph fs ls

# Mount CephFS using the kernel client
# Kernel 4.0 or later is recommended because older kernels may have bugs
# The advantage is better performance than ceph-fuse
# name and secret come from the content of /etc/ceph/ceph.client.admin.keyring
mkdir /mnt/mycephfs
mount -t ceph lab1:6789,lab2:6789,lab3:6789:/ /mnt/mycephfs -o name=admin,secret=AQBoclRaiilZJBAACLjqg2OUOOB/FNa20UJXYA==
df -h
cd /mnt/mycephfs
echo 'hello ceph CephFS' > hello.txt
cd ~
umount -lf /mnt/mycephfs
rm -rf /mnt/mycephfs

# Mount CephFS using ceph-fuse
mkdir /mnt/mycephfs
ceph-fuse -m lab1:6789 /mnt/mycephfs
df -h
cd /mnt/mycephfs
echo 'hello ceph CephFS' > hello.txt
cd ~
umount -lf /mnt/mycephfs
rm -rf /mnt/mycephfs

# Clean up
# Stop the metadata server
# In this deployment it runs on lab1, so log in to lab1 and stop the service
systemctl stop ceph-mds@lab1
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd pool delete fs_data fs_data --yes-i-really-really-mean-it
ceph osd pool delete fs_metadata fs_metadata --yes-i-really-really-mean-it

# Start the metadata server again
# so that CephFS can be used later
systemctl start ceph-mds@lab1
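For a kernel CephFS mount that survives reboots, the secret is usually stored in a file and referenced from /etc/fstab; a minimal sketch (the secret-file path is an assumption):

# Store only the key (not the whole keyring) in a root-readable file
ceph auth get-key client.admin > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret

# Example /etc/fstab entry for the kernel CephFS client
# lab1:6789,lab2:6789,lab3:6789:/  /mnt/mycephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0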
