How RAID is implemented

Interaction: Do we configure hardware RAID before or after the operating system is installed?

A: The system is installed only after the array has been created. Generally, when the server starts there is a prompt to enter the RAID configuration, for example press CTRL+L/H/M to enter the RAID configuration interface.

Hardware RAID: requires a RAID card. The disks are connected to the RAID card, which manages and controls them centrally and also handles data distribution and maintenance. Because the card has its own CPU, processing is fast.

Link: https://pan.baidu.com/s/1AFY9… Extraction code: WO3M (silent demo video)

Soft RAID: implemented by the operating system.

The Linux kernel contains an md (Multiple Devices) module that manages RAID devices at the lowest level, and on top of it we are given an application-level tool: mdadm, the command for creating and managing software RAID under Linux.
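A quick way to check which kind of RAID a machine already uses (not part of the original walkthrough; these are standard tools): a hardware RAID card shows up on the PCI bus, while software RAID arrays appear in /proc/mdstat.

lspci | grep -i raid    # lists a hardware RAID controller if one is installed
cat /proc/mdstat        # lists any software (md) arrays the kernel is managing
mdadm --version         # confirms the mdadm management tool is installed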

Explanation of common mdadm parameters:
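As a rough reference, summarized from the mdadm man page rather than the original table, the options used later in this article are:

-C  create a new array                -D  print detailed array information
-v  verbose output                    -s  scan for arrays / use /etc/mdadm.conf
-l  RAID level                        -S  stop an array
-n  number of active member disks     -A  assemble (activate) an array
-x  number of hot spare disks         -f  mark a member device as faulty
-c  chunk size in KB (create mode)    -r  remove a member device
-a  hot-add a member device           -G  grow/reshape an array
--zero-superblock  erase the md superblock (RAID identifier) from a device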





Interaction: RAID 5 requires at least three hard disks. Can we build RAID 5 using 4 hard disks?

Yes, we can: either use all 4 disks as active members, or use 3 active members plus 1 hot spare.
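For example, assuming placeholder device names /dev/sd{b,c,d,e} and array name /dev/md5, either of the following sketches would build RAID 5 from 4 disks:

mdadm -C -v /dev/md5 -l 5 -n 4 /dev/sd{b,c,d,e}         # all 4 disks as active members: maximum usable capacity
mdadm -C -v /dev/md5 -l 5 -n 3 -x 1 /dev/sd{b,c,d,e}    # 3 active members + 1 hot spare: automatic recovery after a failure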

Experiment environment: 11 new hard disks are added. The role of each disk is as follows:



Interaction: how are device names assigned once the disks go past /dev/sdz?

/dev/sdaa, /dev/sdab, …



Experimental environment:



Note: in normal production, every RAID member should be a separate physical disk. To conserve resources in this lab, RAID 10 is built from multiple partitions on a single disk instead of multiple independent disks. A RAID built this way has no real redundancy: if that one disk fails, the entire RAID on it fails.

2 Create RAID 0

Experimental environment:

1. Create RAID 0

[root@xuegod63 ~]# yum -y install mdadm
[root@xuegod63 ~]# mdadm -C -v /dev/md0 -l 0 -n 2 /dev/sdb /dev/sdc    # -C create, -v verbose
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

2. View the array information

[root@xuegod63 ~]# mdadm -Ds
ARRAY /dev/md0 metadata=1.2 name=xuegod63.cn:0 UUID=cadf4f55:226ef97d:565eaba5:3a3c7da4
[root@xuegod63 ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu May 17 15:59:16 2018
        RAID Level : raid0
        Array Size : 41910272 (39.97 GiB 42.92 GB)
      Raid Devices : 2
       Persistence : Superblock is persistent
       Update Time : Thu May 17 15:59:16 2018
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
        Chunk Size : 512K            # the chunk is the smallest storage unit in the RAID
Consistency Policy : none
              Name : xuegod63.cn:0  (local to host xuegod63.cn)
              UUID : cadf4f55:226ef97d:565eaba5:3a3c7da4
            Events : 0
[root@xuegod63 ~]# mdadm -Dsv > /etc/mdadm.conf    # save the array configuration
[root@xuegod63 ~]# cat /proc/mdstat                # another way to view the array status
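Whenever an array is being built or rebuilt later in this article, a handy way to follow the progress (not shown in the original transcript) is to poll /proc/mdstat:

watch -n 1 cat /proc/mdstat    # refresh the kernel's RAID status every second; press Ctrl+C to exit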

3. Create a file system on the RAID 0 device and mount it

[root@xuegod63 ~]# mkfs.xfs /dev/md0
[root@xuegod63 ~]# mkdir /raid0
[root@xuegod63 ~]# mount /dev/md0 /raid0/
[root@xuegod63 ~]# df -Th /raid0/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs    40G    3M   40G   1% /raid0
[root@xuegod63 ~]# echo 324 > /raid0/a.txt

4. Automatic mount after boot

[root@xuegod63 ~]#   blkid /dev/md0
/dev/md0: UUID="3bf9c260-dc7b-4e37-a865-a8caa21ddf2c" TYPE="xfs"
[root@xuegod63 ~]# echo "UUID=5bba0862-c4a2-44ad-a78f-367f387ad001 /raid0 xfs
defaults 0 0" >> /etc/fstab

3 Create RAID 1

The experiment covers the following steps:



1) Create RAID 1

2) Add 1 hot spare disk

3) Simulate a disk failure so the spare automatically replaces the failed disk

4) Remove the failed disk from RAID 1

[root@xuegod63 ~]#   mdadm -C -v /dev/md1 -l 1 -n 2 -x 1 /dev/sd[d,e,f]

-C create, -v verbose, -l RAID level, -n number of active members, -x number of spare disks. Save the RAID information to the configuration file:

[root@xuegod63 ~]# mdadm   -Dsv > /etc/mdadm.conf

To view RAID array information:

[root@xuegod63 ~]# mdadm -D /dev/md1
        RAID Level : raid1
        Array Size : 20955136 (19.98 GiB 21.46 GB)
...



Create a file system on the RAID 1 device:

[root@xuegod63 ~]# mkfs.xfs /dev/md1
[root@xuegod63 ~]# mkdir /raid1
[root@xuegod63 ~]# mount /dev/md1 /raid1/
[root@xuegod63 ~]# cp /etc/passwd /raid1/
Now simulate a failure of /dev/sde and check whether the /dev/sdf spare disk automatically replaces it. First confirm that the mirror has finished synchronizing:
[root@xuegod63 ~]# mdadm -D /dev/md1
Consistency Policy : resync
[root@xuegod63 ~]# mdadm /dev/md1 -f /dev/sde    # -f set the device status to faulty

Look at the array status information

[root@xuegod63 ~]# mdadm   -D /dev/md1



The output shows /dev/sdf in the "spare rebuilding" state, which means /dev/sdd is synchronizing its data to /dev/sdf:

Rebuild Status : 13% complete    # synchronization in progress (the files in /dev/md1 remain usable because /dev/sdd keeps working)

/dev/sde is now marked faulty. Update the configuration file:

[root@xuegod63 ~]# mdadm   -Dsv > /etc/mdadm.conf

-D print detailed array information, -s scan for arrays, -v verbose. Check whether any data was lost:

[root@xuegod63 ~]# ls /raid1/ # Data normal, not lost

RAID 1 is suited to important data, for example databases or the system disk (install the system onto the RAID 1 device /dev/md1, then partition /dev/md1). Remove the damaged device:

[root@xuegod63 ~]# mdadm -r /dev/md1 /dev/sde    # -r: hot-remove /dev/sde from /dev/md1

View information:

[root@xuegod63 ~]# mdadm   -D /dev/md1

There is no hot spare disk left now. Add a new hot spare disk:

[root@xuegod63 ~]# mdadm -a /dev/md1 /dev/sde    # -a: hot-add /dev/sde back to /dev/md1 as the new spare
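To confirm that /dev/sde is attached as a spare, /proc/mdstat can be checked as well; spares are marked with an (S) suffix in its output:

cat /proc/mdstat    # the new member should show up with (S), e.g. sde[3](S)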

4 Create RAID 5

Experimental environment:



1) Create RAID 5, add 1 hot spare disk, and specify a chunk size of 32K

-x specifies the number of spare disks in the array

-c or --chunk= sets the chunk size of the array in KB (512K by default). Use a larger chunk when the array will mostly store large files and a smaller chunk when it will mostly store small files. A chunk is similar in concept to a cluster or block: it is the smallest storage unit in the array.

2) Stop the array and reactivate the array

3) Use the hot spare disk to expand the array capacity from 3 disks to 4

(1) Create RAID 5

[root@xuegod63 ~]# mdadm -C -v /dev/md5 -l 5 -n 3 -x 1 -c 32 /dev/sd{g,h,i,j}
[root@xuegod63 ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Thu May 17 18:54:20 2018
        RAID Level : raid5
        Array Size : 41910272 (39.97 GiB 42.92 GB)
     Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu May 17 18:54:31 2018
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 2
            Layout : left-symmetric
        Chunk Size : 32K
Consistency Policy : resync
    Rebuild Status : 7% complete        # initial build in progress; wait for it to finish
              Name : xuegod63.cn:5  (local to host xuegod63.cn)
              UUID : fa685cea:38778d6a:0eb2c670:07ec5797
            Events : 2



(2) Extend the RAID5 disk array

Grow the hot spare disk into /dev/md5 so that /dev/md5 uses 4 active disks:

[root@xuegod63 /]# mdadm -G /dev/md5 -n 4 -c 32

-G or --grow changes the array size or shape

[root@xuegod63 ~]# mdadm -Dsv > /etc/mdadm.conf    # save the configuration file

Note: an array can only be expanded while it is in a normal state; expansion is not allowed while the array is degraded or rebuilding. With RAID 5 you can only add member disks, not remove them; with RAID 1 you can both add and remove member disks.

[root@xuegod63 ~]# mdadm -D /dev/md5
...
        Array Size : 41910272 (39.97 GiB 42.92 GB)
     Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
...
    Reshape Status : 3% complete    # reshape in progress; wait until it reaches 100% and the data has synchronized
...
After synchronization finishes, this line is replaced by:
Consistency Policy : resync         # consistency policy: resync, indicating the reshape has completed
...



After a while, when all data synchronization is complete, check the size of /dev/md5 again:

Array Size : 62865408
Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
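Note that mdadm -G only grows the block device; any file system on it keeps its old size. /dev/md5 was never formatted in this walkthrough, but if it carried a mounted XFS file system, it would be grown with something like:

xfs_growfs /raid5    # grow the XFS file system to fill the enlarged /dev/md5 (the /raid5 mount point is hypothetical)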

(3) Stop the MD5 array

[root@xuegod63 ~]# mdadm -Dsv > /etc/mdadm.conf    # save the configuration file before stopping the array
[root@xuegod63 ~]# mdadm -D /dev/md5               # confirm that the data has finished synchronizing
[root@xuegod63 ~]# mdadm -S /dev/md5               # -S stop the array
mdadm: stopped /dev/md5

(4) Activate the MD5 array

[root@xuegod63 ~]# mdadm -As    # -A assemble (activate), -s scan /etc/mdadm.conf for arrays
mdadm: /dev/md5 has been started with 3 drives and 1 spare.
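mdadm -As relies on /etc/mdadm.conf. If the configuration file were missing, the array could also be assembled by naming its member devices explicitly (an alternative not shown in the original):

mdadm -A /dev/md5 /dev/sd[g-j]    # assemble /dev/md5 directly from its member disks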

5 Create RAID 10

Experiment environment: RAID 10 is built from four partitions: /dev/sdk1, /dev/sdk2, /dev/sdk3, /dev/sdk4

[root@xuegod63 ~]# fdisk /dev/sdk                  # create four primary partitions on /dev/sdk
[root@xuegod63 ~]# ls /dev/sdk*
/dev/sdk  /dev/sdk1  /dev/sdk2  /dev/sdk3  /dev/sdk4
[root@xuegod63 ~]# mdadm -C -v /dev/md10 -l 10 -n 4 /dev/sdk[1-4]
[root@xuegod63 ~]# mkfs.xfs /dev/md10
[root@xuegod63 ~]# cat /proc/mdstat
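The original does not show mounting /dev/md10; a minimal sketch that mirrors the RAID 0 steps above (the /raid10 mount point is an assumption) would be:

mdadm -Dsv > /etc/mdadm.conf    # save the array configuration
mkdir /raid10
mount /dev/md10 /raid10/
df -Th /raid10/                 # verify the mounted capacity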

6 Delete all RAID information and precautions

[root@xuegod63 ~]# umount /dev/md0 /raid0              # if a RAID device is still mounted, unmount it first
[root@xuegod63 ~]# mdadm -Ss                           # -S stop all running RAID devices (-s scan)
[root@xuegod63 ~]# rm -rf /etc/mdadm.conf              # delete the RAID configuration file
[root@xuegod63 ~]# mdadm --zero-superblock /dev/sdb    # clear the RAID identifier (md superblock) from the physical disk
[root@xuegod63 ~]# mdadm --zero-superblock /dev/sdc
[root@xuegod63 ~]# mdadm --zero-superblock /dev/sd[d-j]
[root@xuegod63 ~]# mdadm --zero-superblock /dev/sdk[1-4]

mdadm: Unrecognized md component device    # this message means the md superblock has already been erased and no RAID identifier can be found on the device; erasing is fast, and running the command a second time on the same device produces this message. Parameter --zero-superblock: erase the md superblock from the device.
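After erasing the superblocks, a quick sanity check (not part of the original transcript) that no software RAID remains:

cat /proc/mdstat    # should list no md arrays
mdadm -Dsv          # should print nothing
lsblk               # the disks should no longer show any mdX children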
