This is the 14th day of my participation in Gwen Challenge

Hadoop framework overview of HDFS, xiaobian and you know HDFS together. But after all, we still need to work and do practical things. Today, WE will take you to Shell operation. This seemingly simple content is indeed the focus of development, we can not be ignored!

Here’s how to start today’s lesson: Common Shell commands for the Hadoop framework (development focus). For those who are not familiar with Shell, please refer to the following two articles:

The first: basic Shell programming introduction

The second chapter: Advanced Shell introduction programming

0. Preparations: Start the cluster

Start the cluster. Check the startup status each time you start the cluster

1. Basic grammar

Bin /hadoop fs command OR bin/ HDFS DFS command

DFS is an implementation class of FS.

2. Commands

Hadoop fs commands are as follows:

The HDFS DFS command is as follows:


3. Perform common commands

(0) Start the Hadoop cluster (for subsequent testing)

(1) -help: Prints the command parameters

(2) -ls: displays directory information

(3) -mkdir: create a directory on the HDFS. Add -p to the multi-level command

(4) -moveFromLocal: Cut and paste from local to HDFS

(5) -appendtoFile: Append a file to the end of an existing file

(6) -cat: displays file contents

(7) -chgrp, -chmod, -chown: the same as in the Linux file system, modify the owning permission of a file

(8) -copyfromLocal: copies files from the local file system to the HDFS path

(9) -copytolocal: copy data from HDFS to the local PC

(10) -cp: copies data from one HDFS path to another HDFS path

(11) -mv: Moves files in the HDFS directory

(12) -get: is the same as copyToLocal, which is to download the file from HDFS to the local

Log.1, log.2,log.3, log.

(14) -put: equivalent to copyFromLocal

(15) -tail: Displays the end of a file

(16) -rm: deletes a file or folder

(17) -rmdir: Deletes an empty directory

(18) -du Collects statistics about the folder size

(19) -setrep: sets the number of file copies in the HDFS

The number of copies set here is only recorded in NameNode metadata. Whether there are such copies depends on the number of Datanodes. Since there are only three devices, at most three copies, the number of copies can reach 10 only when the number of nodes increases to 10.

This is the end of today’s learning content, continue to learn, please stay tuned! More exciting content, please pay attention to the public number: Xiao Han senior take you to learn