Disaster recovery is a nerve-racking topic, but one that must be addressed. Although HBase is a distributed database, disaster recovery (DR) and data backup still need to be planned for. The goal of this article is to get comfortable with the common backup commands: it walks through CopyTable, Export/Import, and snapshots with examples, so that you come away with an intuitive understanding of how to use them.

CopyTable

  • Supports time ranges, row ranges, renaming the table, renaming column families, choosing whether to copy deleted data, and more
  • The CopyTable tool reads data with scan queries and writes the new table with Put and Delete operations; it is built on the HBase client API
  1. Create the backup table, making sure its column families match the original table.

  2. In another window, go to the hbase/bin directory and run the following command (fileTableNew is the backup table, fileTable is the original table):
hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=fileTableNew fileTable
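CopyTable's time-range options are what make it useful for incremental DR. Below is a minimal sketch that assembles such an invocation with the real `--starttime`/`--endtime` flags; the timestamps are illustrative, and the command is echoed rather than executed because running it needs a live cluster:

```shell
# Sketch: an incremental CopyTable run restricted to a one-day window.
# --starttime/--endtime take epoch milliseconds (values are illustrative).
START=1544400000000   # 2018-12-10 00:00:00 UTC, in ms
END=1544486400000     # 2018-12-11 00:00:00 UTC, in ms

CMD="hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
--starttime=$START --endtime=$END \
--new.name=fileTableNew fileTable"

# Echoed instead of executed: running it requires a live HBase cluster.
echo "$CMD"
```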

Export/Import

  1. Export writes table data out (to HDFS or to a target cluster), and Import loads that data back in. Export can take a start time and an end time, so incremental backups are possible
  2. Like CopyTable, the Export tool reads the data with HBase scans

Export syntax

bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> hdfs://namenode:9000/table_bak <versions> <startTime> <endTime>
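The trailing arguments are positional, so a concrete example helps. This sketch fills them with illustrative values (1 version per cell, a one-day window in epoch milliseconds) and echoes the result, since actually running it requires a live cluster:

```shell
# Sketch: export one version of each cell written in a given time window.
# Positional args after the output dir: <versions> <startTime> <endTime>;
# the timestamps below are illustrative epoch-millisecond values.
TABLE=fileTable
OUTDIR=hdfs://namenode:9000/table_bak
CMD="bin/hbase org.apache.hadoop.hbase.mapreduce.Export \
$TABLE $OUTDIR 1 1544400000000 1544486400000"

echo "$CMD"   # echoed only: needs a live cluster to actually run
```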

Import syntax

bin/hbase org.apache.hadoop.hbase.mapreduce.Import -Dhbase.import.version=0.94 <tablename> <inputdir>
  1. Initially, only the fileTable table exists in the HBase database

  2. Run the export statement:
# the output path below is stored on HDFS
./hbase org.apache.hadoop.hbase.mapreduce.Export fileTable /usr/local/hbase/fileTable.db

  3. Create the table to import into, making sure its structure matches the exported table (same column families):
create 'fileTableNew','fileInfo','saveInfo'
  4. Run the import statement:
./hbase org.apache.hadoop.hbase.mapreduce.Import fileTableNew /usr/local/hbase/fileTable.db
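The four steps above can be collected into one script. This is a sketch with a dry-run guard, so the commands are printed instead of executed (running them for real needs the cluster, and the target table must be created in the hbase shell first):

```shell
# Sketch: Export -> Import round trip, using the paths and table names
# from the steps above. With DRY_RUN=1 each command is echoed, not run.
set -e
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# 2. export fileTable to HDFS
run ./hbase org.apache.hadoop.hbase.mapreduce.Export fileTable /usr/local/hbase/fileTable.db

# 3. in the hbase shell, first: create 'fileTableNew','fileInfo','saveInfo'

# 4. import the exported data into the new table
run ./hbase org.apache.hadoop.hbase.mapreduce.Import fileTableNew /usr/local/hbase/fileTable.db
```

Setting DRY_RUN=0 switches the same script from printing the commands to running them.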

Handling snapshots

Create a snapshot

snapshot 'myTable','myTableSnapshot-181210'

Clone a snapshot

clone_snapshot 'myTableSnapshot-181210', 'myNewTestTable'

List snapshots

list_snapshots

Delete a snapshot

delete_snapshot 'myTableSnapshot-181210'

Restore data

disable 'myTable'
restore_snapshot 'myTableSnapshot-181210'
enable 'myTable'
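For cross-cluster DR, a snapshot can also be shipped to another cluster with the ExportSnapshot tool. A sketch follows; the target namenode URL and mapper count are illustrative, and the command is echoed because running it needs both clusters live:

```shell
# Sketch: copy a snapshot's files to a backup cluster's HDFS.
# hdfs://backup-namenode:9000/hbase and -mappers 4 are illustrative.
CMD="hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
-snapshot myTableSnapshot-181210 \
-copy-to hdfs://backup-namenode:9000/hbase -mappers 4"

echo "$CMD"   # echoed only: needs live source and target clusters
```

On the target cluster, the shipped snapshot can then be turned into a table with clone_snapshot, exactly as shown above.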

View Hadoop cluster information

  1. http://ip:50070

http://ip:50070/jmx returns the metrics in JSON format. You can read values from it programmatically, or filter to a single bean with http://ip:50070/jmx?qry=<value of name in the JSON>. For example: http://192.168.239.134:50070/jmx?qry=java.lang:type=MemoryPool,name=Survivor%20Space
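To see what the qry filter gives you, here is a trimmed, made-up sample of the JSON shape and one way to pull a single value out of it with standard shell tools (a real response has many more beans and fields):

```shell
# Sample of the /jmx?qry=... response shape (made up and trimmed).
JSON='{"beans":[{"name":"java.lang:type=MemoryPool,name=Survivor Space","Usage":{"used":524288}}]}'

# Extract the "used" value with sed; in practice a JSON parser is safer.
USED=$(echo "$JSON" | sed -n 's/.*"used":\([0-9]*\).*/\1/p')
echo "$USED"
```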

View HBase cluster information

  1. http://ip:16010

http://ip:16010/jmx can likewise be filtered with the qry parameter