1. Back up Elasticsearch data

The first snapshot is a full copy of the data; each subsequent snapshot stores only the difference between the existing snapshots and the new data. Subsequent backups are therefore fairly fast, because they transfer only a small amount of data.

The snapshot and restore module allows you to create snapshots of individual indexes or of an entire cluster in a variety of backend repositories. This article focuses on storing snapshots on a shared file system.

To create a snapshot for a shared file system, perform the following steps:

  1. Create a shared directory for the cluster.
  2. Modify the ES configuration to add settings for the shared directory.
  3. Create a backup repository.
  4. Create a snapshot.
  5. View the snapshot status.
  6. Restore data from the snapshot if necessary.

2. Set up the shared directory using NFS

NFS file sharing solves the problem of sharing files such as images and attachments in a cluster environment. Here it is used to create the shared snapshot folder for the Elasticsearch cluster.

2.1 Role Assignment

Host name      IP             Role
zk-master01    192.168.1.190  NFS server
zk-slaver01    192.168.1.224  NFS client
zk-slaver02    192.168.1.48   NFS client

2.2 Configuring the NFS Server

The following operations are performed only on zk-master01 (192.168.1.190).

2.2.1 Checking the NFS Service Installation

rpm -qa|grep nfs
rpm -qa|grep rpcbind

If the component is not installed, run the following command to install it:

yum install nfs-utils rpcbind

2.2.2 Setting automatic startup

On CentOS 6, run the following commands to start the services at boot:

chkconfig nfs on
chkconfig rpcbind on

On CentOS 7, run the following commands to enable automatic startup:

systemctl enable rpcbind.service    
systemctl enable nfs-server.service
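To confirm both services are actually set to start at boot, a quick check (assumes systemd, as on CentOS 7):

systemctl is-enabled rpcbind.service
systemctl is-enabled nfs-server.service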

2.2.3 Starting the Service

On CentOS 6, run the following commands:

service rpcbind start
service nfs start

On CentOS 7, run the following commands:

systemctl start rpcbind.service    
systemctl start nfs-server.service 

2.2.4 Creating a Shared Directory

mkdir -p /data/elastic/bak/backup_es
# Set the owner of the directory to the user that runs the ES process
chown -R luculent /data/elastic/bak/backup_es

2.2.5 Modifying the Configuration File

vi /etc/exports
# add the following statement
/data/elastic/bak/backup_es *(rw,sync,no_root_squash,no_subtree_check)
  • *: Allows access from all network segments
  • rw: Read and write permission
  • sync: Data is written synchronously to memory and disk
  • no_root_squash: The root user on an NFS client keeps root privileges on the shared directory

More configuration details are as follows:

  • ro: read-only access
  • rw: read/write access
  • sync: all data is written to the share when requested
  • async: NFS can respond to requests before data is written to disk
  • secure: NFS requests must come from TCP/IP ports below 1024 (default)
  • insecure: allows NFS requests from ports above 1024
  • wdelay: group writes if multiple users may write to the NFS directory (default)
  • no_wdelay: write immediately instead of grouping writes; not needed when async is used
  • hide: do not share subdirectories of the NFS shared directory
  • no_hide: share subdirectories of the NFS shared directory
  • subtree_check: if a subdirectory such as /usr/bin is shared, force NFS to check permissions of the parent directory
  • no_subtree_check: do not check parent directory permissions
  • all_squash: map the UID and GID of shared files to the anonymous user; suitable for public directories
  • no_all_squash: keep the UID and GID of shared files (default)
  • root_squash: map all requests from root to the anonymous user (default)
  • no_root_squash: the root user has full access to the shared directory
  • anonuid=xxx: UID of the anonymous user in /etc/passwd on the NFS server
  • anongid=xxx: GID of the anonymous user in /etc/passwd on the NFS server
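As an illustration, several of these options can be combined into a more restrictive export; the subnet and UID/GID below are placeholders, not values from this setup:

# Hypothetical example: limit access to one subnet and map root to an anonymous account
/data/elastic/bak/backup_es 192.168.1.0/24(rw,sync,root_squash,anonuid=1001,anongid=1001)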

2.2.6 Refreshing the Configuration

# refresh the configuration so that the changes take effect immediately
exportfs -a

# View the mountable directory
showmount -e 192.168.1.190
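exportfs can also print the active exports together with their options, which is a quick way to confirm that the settings took effect:

# List current exports with their options
exportfs -v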

2.3 Configuring the Client

The following commands are executed on zk-slaver01 (192.168.1.224) and zk-slaver02 (192.168.1.48).

Repeat steps 2.2.1 through 2.2.4 of the server configuration on each client: check that NFS is installed, set it to start automatically, start the services, and create the same backup directory.

2.3.5 Mounting a Directory

# View the mountable directory
showmount -e 192.168.1.190

# Mount the shared directory
mount -t nfs 192.168.1.190:/data/elastic/bak/backup_es /data/elastic/bak/backup_es
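After mounting, it is worth verifying that the ES user can actually write to the share; a minimal check (assuming the ES user is luculent, as on the server):

# Write a test file as the ES user, then remove it
sudo -u luculent touch /data/elastic/bak/backup_es/nfs_write_test
sudo -u luculent rm /data/elastic/bak/backup_es/nfs_write_test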

2.3.6 Setting Automatic Mounting upon Startup

# view the current mount
df -h

# Set automatic mounting upon startup
vi /etc/fstab
# add the following line
192.168.1.190:/data/elastic/bak/backup_es /data/elastic/bak/backup_es nfs defaults 0 0
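To test the fstab entry without rebooting, remount everything listed in /etc/fstab; if the command returns without errors, the automatic mount should work at startup:

# Mount all file systems listed in /etc/fstab
mount -a
# Confirm the NFS share is mounted
df -h | grep backup_es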

3. Modify ES configurations

After configuring the shared directory, modify the ES configuration on every node in the cluster and restart ES for the change to take effect.

# Add the following setting to elasticsearch.yml to set the backup repository path
path.repo: ["/data/elastic/bak/backup_es"]
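After editing elasticsearch.yml on each node, restart ES. The commands below are a sketch that assumes ES runs as a systemd service named elasticsearch; adjust them to however ES is started in your environment. The _nodes API can then confirm that the setting was picked up:

# Restart ES on each node (assumes a systemd service named "elasticsearch")
systemctl restart elasticsearch

# Confirm path.repo is visible in the node settings
curl -s 'http://es-ip:9200/_nodes/settings?pretty' | grep -A 1 repo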

4. Create a backup repository

4.1 Opening the Snapshot Management Page

The ES plug-in kopf provides a graphical interface for creating and managing snapshots. On a cluster where kopf is installed, you can open the snapshot management page directly at http://es-ip:9200/_plugin/kopf/#!/snapshot.

You can also reach this page from the kopf menu.

4.2 Creating a Backup Repository

Fill in the repository information in the text boxes on the left of the snapshot page, then click the create button to complete the creation. The fields are as follows:

  • repository name: name of the repository
  • type: be sure to select fs
  • location: enter the shared directory path /data/elastic/bak/backup_es
  • max_restore_bytes_per_sec: speed limit when restoring data (defaults to 40mb/s)
  • max_snapshot_bytes_per_sec: speed limit when creating a snapshot (defaults to 40mb/s)
  • chunk_size: chunk size, unlimited by default
  • compress: whether to enable compression

It is also possible to create a backup repository by performing the following request via the REST client.

POST _snapshot/es_bak_20180710
{
  "type": "fs",
  "settings": {
    "location": "/data/elastic/bak/backup_es",
    "max_restore_bytes_per_sec": "50mb",
    "max_snapshot_bytes_per_sec": "50mb",
    "compress": true
  }
}

5. Create a snapshot

5.1 Creating a Snapshot

In the snapshot name text box on the right of the snapshot page, enter the snapshot name. For repository, select the repository created above, es_bak_20180710; set ignore_unavailable to true and include_global_state to false. Finally, select the indexes to back up (if none are selected, all indexes are backed up; hold down Ctrl to select multiple) and click create.

It is also possible to create a snapshot by performing the following request from the REST client.

POST _snapshot/es_bak_20180710/ss_2018_07_10
{
  "indices": "img_face,lk_other",
  "include_global_state": false,
  "ignore_unavailable": true
}
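By default the request returns immediately and the snapshot runs in the background. As a sketch, appending the standard wait_for_completion parameter makes the request block until the snapshot finishes, and the _status endpoint reports per-shard progress:

# Block until the snapshot completes
POST _snapshot/es_bak_20180710/ss_2018_07_10?wait_for_completion=true

# Check progress of a running snapshot
GET _snapshot/es_bak_20180710/ss_2018_07_10/_status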

5.2 Viewing a Snapshot

You can view information about the ss_2018_07_10 snapshot by accessing the following address in the browser address bar.

http://es-ip:9200/_snapshot/es_bak_20180710/ss_2018_07_10

The response indicates that the snapshot was created successfully.

6. Restore data from the snapshot

# Restore all
POST /_snapshot/my_backup/snapshot_1/_restore

# Restore specified indexes
POST /_snapshot/my_backup/snapshot_1/_restore
{
  "indices": "index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": false,
  "rename_pattern": "index_(.+)",
  "rename_replacement": "restored_index_$1",
  "index_settings": {
    "index.number_of_replicas": 0
  },
  "ignore_index_settings": [
    "index.refresh_interval"
  ]
}

