Installation environment

VMware® Workstation 16 Pro

VM OS: CentOS 7.9 (minimal)

VM IP addresses: 192.168.153.11, 192.168.153.12, and 192.168.153.13

Preliminary planning

A Hadoop cluster consists of two logical clusters: an HDFS cluster and a YARN cluster. The two are logically separate but usually share the same hosts.

Both follow a standard master-slave architecture.

Roles (daemons) in the HDFS cluster:

  • Master role: NameNode
  • Slave role: DataNode
  • Auxiliary master role: SecondaryNameNode

Roles (daemons) in the YARN cluster:

  • Master role: ResourceManager
  • Slave role: NodeManager

Cluster planning:

Server            IP address      Running roles (daemons)
node1.hadoop.com  192.168.153.11  NameNode, DataNode, ResourceManager, NodeManager
node2.hadoop.com  192.168.153.12  SecondaryNameNode, DataNode, NodeManager
node3.hadoop.com  192.168.153.13  DataNode, NodeManager

Environment configuration

Perform the following configuration on each VM as the root user.

1. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

2. Synchronize time

yum -y install ntpdate
ntpdate ntp5.aliyun.com

3. Configure the host name

vi /etc/hostname

Set the host names of the three VMs to node1.hadoop.com, node2.hadoop.com, and node3.hadoop.com, respectively.

4. Configure the hosts file

vi /etc/hosts

Add the following:

192.168.153.11 node1 node1.hadoop.com
192.168.153.12 node2 node2.hadoop.com
192.168.153.13 node3 node3.hadoop.com
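The three entries differ only in the last octet and the node number, so they can be generated rather than typed. A small sketch (hosts_block is a throwaway helper name, not part of Hadoop):

```shell
# Hypothetical helper that emits the three /etc/hosts lines,
# so a copy-paste typo (e.g. node2 mapped to node1.hadoop.com) cannot slip in.
hosts_block() {
  for i in 1 2 3; do
    echo "192.168.153.1${i} node${i} node${i}.hadoop.com"
  done
}

hosts_block                    # review the generated lines first
# hosts_block >> /etc/hosts    # then append for real (run as root)
```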

5. Install JDK

yum -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel

Configure JAVA_HOME:

cat <<EOF | tee /etc/profile.d/hadoop_java.sh
export JAVA_HOME=\$(dirname \$(dirname \$(readlink \$(readlink \$(which javac)))))
export PATH=\$PATH:\$JAVA_HOME/bin
EOF
source /etc/profile.d/hadoop_java.sh
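The nested readlink/dirname calls walk the two symlinks that the CentOS alternatives system puts in front of javac (/usr/bin/javac → /etc/alternatives/javac → the real javac), then strip bin/javac to reach the JDK home. A sketch of the same resolution against a throwaway directory tree (all paths below are made up for the demo):

```shell
# Mimic the alternatives layout in a temp dir, then resolve it the same way
# hadoop_java.sh does.
demo=$(mktemp -d)
mkdir -p "$demo/jvm/java-1.8.0/bin" "$demo/bin" "$demo/alternatives"
touch "$demo/jvm/java-1.8.0/bin/javac"
ln -s "$demo/jvm/java-1.8.0/bin/javac" "$demo/alternatives/javac"
ln -s "$demo/alternatives/javac" "$demo/bin/javac"

# Same chain as in hadoop_java.sh, with $(which javac) replaced by the demo path.
jdk_home=$(dirname "$(dirname "$(readlink "$(readlink "$demo/bin/javac")")")")
echo "$jdk_home"    # the directory two levels above the real javac
```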

Confirmation:

echo $JAVA_HOME

6. Create a Hadoop user and set a password

adduser hadoop
usermod -aG wheel hadoop
passwd hadoop

Create the directory where HDFS will store its data locally:

mkdir /home/hadoop/data
chown hadoop: /home/hadoop/data

7. Configure environment variables

echo 'export HADOOP_HOME=/home/hadoop/hadoop-3.3.2' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin' >> /etc/profile
source /etc/profile

8. Configure SSH

yum install openssh

Switch to the hadoop user and run the following commands.

ssh-keygen
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
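If you prefer to script the key-generation step, ssh-keygen can run non-interactively: -N "" sets an empty passphrase and -f picks the output path. Shown here against a temp directory so an existing key is never overwritten:

```shell
# Non-interactive key generation (demo writes to a temp dir, not ~/.ssh).
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"
ls "$keydir"    # shows id_rsa and id_rsa.pub
```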

Run the commands on each VM. Sample output from node1:

[hadoop@node1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:gFs4NEpc6MIVv7/r5f2rUFdOi7ht11GceM3fd/Uq/nU [email protected]
The key's randomart image is:
+---[RSA 2048]----+
|       ..   +=   |
|      .o+.+  .oo |
| ..  o +.o . =*  |
| ... +..  . * B  |
|  . .. S o o +*  |
|   . . +      .= |
|  . o .. o..   E |
|   + o......  .  |
|    +.. o++o     |
+----[SHA256]-----+
[hadoop@node1 ~]$ ssh-copy-id node1
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
The authenticity of host 'node1 (192.168.153.11)' can't be established.
ECDSA key fingerprint is SHA256:BxdxJ5ONWI6xkPrFWxy9MIFs/B3IpEgjhFxiwI6KOLU.
ECDSA key fingerprint is MD5:78:ea:2d:36:7e:eb:83:47:8f:61:c6:70:b6:0f:20:d6.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.

[hadoop@node1 ~]$ ssh-copy-id node2
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
The authenticity of host 'node2 (192.168.153.12)' can't be established.
ECDSA key fingerprint is SHA256:BxdxJ5ONWI6xkPrFWxy9MIFs/B3IpEgjhFxiwI6KOLU.
ECDSA key fingerprint is MD5:78:ea:2d:36:7e:eb:83:47:8f:61:c6:70:b6:0f:20:d6.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.

[hadoop@node1 ~]$ ssh-copy-id node3
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
The authenticity of host 'node3 (192.168.153.13)' can't be established.
ECDSA key fingerprint is SHA256:BxdxJ5ONWI6xkPrFWxy9MIFs/B3IpEgjhFxiwI6KOLU.
ECDSA key fingerprint is MD5:78:ea:2d:36:7e:eb:83:47:8f:61:c6:70:b6:0f:20:d6.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@node3's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node3'"
and check to make sure that only the key(s) you wanted were added.
[hadoop@node1 ~]$

Download and install

Install and configure Hadoop on node1, then copy the installation directory to the other two VMs. (Perform these steps as the hadoop user.)

1. Download and unpack

Connect to node1 as the hadoop user and run the following commands to download the installation package to the /home/hadoop directory.

cd /home/hadoop
curl -Ok https://dlcdn.apache.org/hadoop/common/hadoop-3.3.2/hadoop-3.3.2.tar.gz

Extract:

tar zxf hadoop-3.3.2.tar.gz

Next, configure Hadoop through its configuration files.

Hadoop configuration files fall into three categories:

  • Default configuration files – core-default.xml, hdfs-default.xml, yarn-default.xml, and mapred-default.xml. These files are read-only and hold the default values of the parameters.
  • Custom configuration files – etc/hadoop/core-site.xml, etc/hadoop/hdfs-site.xml, etc/hadoop/yarn-site.xml, and etc/hadoop/mapred-site.xml. These hold custom configuration and override the defaults.
  • Environment configuration files – etc/hadoop/hadoop-env.sh, etc/hadoop/mapred-env.sh, and etc/hadoop/yarn-env.sh. These configure the Java runtime environment for each daemon.

2. Configure the hadoop-env.sh file

cd hadoop-3.3.2
vi etc/hadoop/hadoop-env.sh

Add the following:

export JAVA_HOME=$JAVA_HOME
export HDFS_NAMENODE_USER=hadoop
export HDFS_DATANODE_USER=hadoop
export HDFS_SECONDARYNAMENODE_USER=hadoop
export YARN_RESOURCEMANAGER_USER=hadoop
export YARN_NODEMANAGER_USER=hadoop

At a minimum, you need to configure the JAVA_HOME environment variable. You can also pass daemon-specific JVM options through the following variables:

Daemon                         Environment variable
NameNode                       HDFS_NAMENODE_OPTS
DataNode                       HDFS_DATANODE_OPTS
Secondary NameNode             HDFS_SECONDARYNAMENODE_OPTS
ResourceManager                YARN_RESOURCEMANAGER_OPTS
NodeManager                    YARN_NODEMANAGER_OPTS
WebAppProxy                    YARN_PROXYSERVER_OPTS
Map Reduce Job History Server  MAPRED_HISTORYSERVER_OPTS

For example, to configure the NameNode to use the parallel garbage collector and a 4 GB heap:

export HDFS_NAMENODE_OPTS="-XX:+UseParallelGC -Xmx4g"

3. Configure the core-site.xml file

This file will override the configuration in core-default.xml.

vi etc/hadoop/core-site.xml

Add the following:

<!-- Set the default file system -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:8020</value>
</property>

<!-- Local path where Hadoop stores data -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/data</value>
</property>


<!-- Set the Hadoop web UI user identity -->
<property>
    <name>hadoop.http.staticuser.user</name>
    <value>hadoop</value>
</property>

<!-- Proxy user setup (e.g. for Hive) -->
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>

<!-- Minutes that deleted files are kept in the trash -->
<property>
    <name>fs.trash.interval</name>
    <value>1440</value>
</property>
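Note that these property blocks belong inside the file's existing <configuration> root element; the same holds for the other *-site.xml files below. The assembled file has this overall shape (values as configured above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:8020</value>
    </property>
    <!-- ... remaining properties from above ... -->
</configuration>
```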

4. Configure the hdfs-site.xml file

This file will override the configuration in hdfs-default.xml.

vi etc/hadoop/hdfs-site.xml

Add the following:

<!-- Where the SecondaryNameNode (SNN) process runs -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node2:9868</value>
</property>

5. Configure the mapred-site.xml file

This file will override the configuration in mapred-default.xml.

vi etc/hadoop/mapred-site.xml

Add the following:

<!-- Default execution framework for MR jobs: yarn (cluster mode) or local (local mode) -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

<!-- MapReduce JobHistory server address -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>node1:10020</value>
</property>

<!-- MapReduce JobHistory server web UI address -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node1:19888</value>
</property>

<property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>

<property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>

<property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>

6. Configure the yarn-site.xml file

This file will override the configuration in yarn-default.xml.

vi etc/hadoop/yarn-site.xml

Add the following:

<!-- Host that runs the YARN cluster's master role (ResourceManager) -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node1</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

<!-- Disable physical memory checks for containers -->
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>

<!-- Disable virtual memory checks for containers -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

<!-- Enable log aggregation -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>

<!-- URL of the log server (the JobHistory server) -->
<property>
    <name>yarn.log.server.url</name>
    <value>http://node1:19888/jobhistory/logs</value>
</property>

7. Configure the Workers file

vi etc/hadoop/workers

Delete the original content and add the following:

node1.hadoop.com
node2.hadoop.com
node3.hadoop.com
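As a convenience, the workers list can also be generated from the shell; the printf loop below writes one FQDN per line (a demo path is used here instead of the real etc/hadoop/workers):

```shell
# Generate the workers list; redirect to etc/hadoop/workers for real use.
workers_file=$(mktemp)
printf 'node%d.hadoop.com\n' 1 2 3 > "$workers_file"
cat "$workers_file"
```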

8. Copy the configured installation directory to the node2 and node3 machines.

scp -r /home/hadoop/hadoop-3.3.2 hadoop@node2:/home/hadoop/
scp -r /home/hadoop/hadoop-3.3.2 hadoop@node3:/home/hadoop/

Start the cluster

Hadoop provides two startup modes:

  • Start the daemons one by one – run the commands manually on each machine, allowing precise control over each process.
  • Start everything with a script – available if you have configured passwordless SSH between the machines and the etc/hadoop/workers file.

Commands to start processes one by one:

# HDFS cluster (each invocation starts one daemon; pick one of the alternatives)
$HADOOP_HOME/bin/hdfs --daemon start namenode | datanode | secondarynamenode

# YARN cluster (each invocation starts one daemon; pick one of the alternatives)
$HADOOP_HOME/bin/yarn --daemon start resourcemanager | nodemanager | proxyserver

Scripts to start the cluster:

  • HDFS cluster – $HADOOP_HOME/sbin/start-dfs.sh starts all processes in the HDFS cluster.
  • YARN cluster – $HADOOP_HOME/sbin/start-yarn.sh starts all processes in the YARN cluster.
  • Hadoop cluster – $HADOOP_HOME/sbin/start-all.sh starts all processes in both the HDFS and YARN clusters.

1. Format the file system

Before starting the cluster, format HDFS (on the node1 machine only).

[hadoop@node1 ~]$ hdfs namenode -format
WARNING: /home/hadoop/hadoop-3.3.2/logs does not exist. Creating.
2022-03-17 23:22:55,296 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = node1/192.168.153.11
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.3.2
STARTUP_MSG:   classpath = /home/hadoop/hadoop-3.3.2/etc/hadoop:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/accessors-smart-2.4.7.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/asm-5.0.4.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/avro-1.7.7.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/checker-qual-2.5.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-codec-1.11.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-compress-1.21.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-io-2.8.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-net-3.6.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/commons-text-1.4.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/curator-client-4.2.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/curator-framework-4.2.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/curator-recipes-4.2.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/dnsjava-2.1.7.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/failureaccess-1.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/gson-2.8.9.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/guava-27.0-jre.jar:/home/hadoop/ha
doop-3.3.2/share/hadoop/common/lib/hadoop-annotations-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/hadoop-auth-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/hadoop-shaded-guava-1.1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/hadoop-shaded-protobuf_3_7-1.1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/httpclient-4.5.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/httpcore-4.4.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jackson-annotations-2.13.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jackson-core-2.13.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jackson-databind-2.13.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jersey-core-1.19.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jersey-json-1.19.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jersey-server-1.19.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jersey-servlet-1.19.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jettison-1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jetty-http-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jetty-io-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/com
mon/lib/jetty-security-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jetty-server-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jetty-servlet-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jetty-util-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jetty-util-ajax-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jetty-webapp-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jetty-xml-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jsch-0.1.55.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/json-smart-2.4.7.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jsr305-3.0.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/jul-to-slf4j-1.7.30.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerb-client-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerb-common-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerb-core-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerb-server-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerb-util-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerby-config-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerby-util-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/ha
doop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/metrics-core-3.2.4.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/netty-3.10.6.Final.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/nimbus-jose-jwt-9.8.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/paranamer-2.3.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/re2j-1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/slf4j-api-1.7.30.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/slf4j-log4j12-1.7.30.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/snappy-java-1.1.8.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/stax2-api-4.2.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/token-provider-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/woodstox-core-5.3.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/zookeeper-3.5.6.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/lib/zookeeper-jute-3.5.6.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/hadoop-common-3.3.2-tests.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/hadoop-common-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/hadoop-kms-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/hadoop-nfs-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/common/hadoop-registry-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/accessors-smart-2.4.7.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/asm-5.0.4.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/avro-1.7.7.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/home/hadoop/h
adoop-3.3.2/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-compress-1.21.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-io-2.8.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-net-3.6.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/commons-text-1.4.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/curator-client-4.2.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/curator-framework-4.2.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/curator-recipes-4.2.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/dnsjava-2.1.7.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/gson-2.8.9.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/hadoop-annotations-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/hadoop-auth-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_7-1.1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jackson
-annotations-2.13.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jackson-core-2.13.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jackson-databind-2.13.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jettison-1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jetty-http-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jetty-io-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jetty-security-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jetty-server-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jetty-servlet-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jetty-util-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jetty-webapp-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jetty-xml-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/home/hadoop/hadoo
p-3.3.2/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/json-smart-2.4.7.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/netty-3.10.6.Final.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/netty-all-4.1.68.Final.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.8.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/okio-1.6.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/paranamer-2.3.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-3.3.2/share/hado
op/hdfs/lib/re2j-1.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/snappy-java-1.1.8.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/woodstox-core-5.3.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/zookeeper-3.5.6.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/lib/zookeeper-jute-3.5.6.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/hadoop-hdfs-3.3.2-tests.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/hadoop-hdfs-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/hadoop-hdfs-client-3.3.2-tests.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/hadoop-hdfs-client-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/hadoop-hdfs-native-client-3.3.2-tests.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/hadoop-hdfs-native-client-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/hadoop-hdfs-nfs-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/hadoop-hdfs-rbf-3.3.2-tests.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/hdfs/hadoop-hdfs-rbf-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.2-tests.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/mapreduce/hadoop-mapreduce-client-
shuffle-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/asm-analysis-9.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/asm-commons-9.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/asm-tree-9.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/bcpkix-jdk15on-1.60.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/bcprov-jdk15on-1.60.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/fst-2.50.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/guice-4.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/jackson-jaxrs-base-2.13.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.13.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.13.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.3.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/java-util-1.9.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/javax.ws.rs-api-2.1.1.jar:/home/hadoop/hadoop-3.3.2/sh
are/hadoop/yarn/lib/jersey-client-1.19.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/jetty-annotations-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/jetty-client-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/jetty-jndi-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/jetty-plus-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/jline-3.9.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/jna-5.2.0.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/json-io-2.5.1.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/objenesis-2.6.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/snakeyaml-1.26.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/websocket-api-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/websocket-client-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/websocket-common-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/websocket-server-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/lib/websocket-servlet-9.4.43.v20210629.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-api-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-client-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-common-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-registry-3.3.2.jar:/home
/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-server-common-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-server-router-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-server-tests-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-services-api-3.3.2.jar:/home/hadoop/hadoop-3.3.2/share/hadoop/yarn/hadoop-yarn-services-core-3.3.2.jar
STARTUP_MSG:   build = [email protected]:apache/hadoop.git -r 0bcb014209e219273cb6fd4152df7df713cbac61; compiled by 'chao' on 2022-02-21T18:39Z
STARTUP_MSG:   java = 1.8.0_322
************************************************************/
2022-03-17 23:22:55,312 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2022-03-17 23:22:55,408 INFO namenode.NameNode: createNameNode [-format]
2022-03-17 23:22:55,800 INFO namenode.NameNode: Formatting using clusterid: CID-4271710c-605c-44fe-be87-6cbbcbb60338
2022-03-17 23:22:55,834 INFO namenode.FSEditLog: Edit logging is async:true
2022-03-17 23:22:55,870 INFO namenode.FSNamesystem: KeyProvider: null
2022-03-17 23:22:55,872 INFO namenode.FSNamesystem: fsLock is fair: true
2022-03-17 23:22:55,873 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2022-03-17 23:22:55,886 INFO namenode.FSNamesystem: fsOwner                = hadoop (auth:SIMPLE)
2022-03-17 23:22:55,886 INFO namenode.FSNamesystem: supergroup             = supergroup
2022-03-17 23:22:55,886 INFO namenode.FSNamesystem: isPermissionEnabled    = true
2022-03-17 23:22:55,886 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2022-03-17 23:22:55,886 INFO namenode.FSNamesystem: HA Enabled: false
2022-03-17 23:22:55,930 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2022-03-17 23:22:55,940 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2022-03-17 23:22:55,941 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2022-03-17 23:22:55,944 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2022-03-17 23:22:55,944 INFO blockmanagement.BlockManager: The block deletion will start around 2022 Mar 17 23:22:55
2022-03-17 23:22:55,947 INFO util.GSet: Computing capacity for map BlocksMap
2022-03-17 23:22:55,947 INFO util.GSet: VM type       = 64-bit
2022-03-17 23:22:55,950 INFO util.GSet: 2.0% max memory 839.5 MB = 16.8 MB
2022-03-17 23:22:55,950 INFO util.GSet: capacity      = 2^21 = 2097152 entries
2022-03-17 23:22:55,959 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2022-03-17 23:22:55,959 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2022-03-17 23:22:55,968 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2022-03-17 23:22:55,968 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2022-03-17 23:22:55,968 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: defaultReplication         = 3
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: maxReplication             = 512
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: minReplication             = 1
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2022-03-17 23:22:55,996 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2022-03-17 23:22:55,996 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2022-03-17 23:22:55,996 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2022-03-17 23:22:55,996 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2022-03-17 23:22:56,023 INFO util.GSet: Computing capacity for map INodeMap
2022-03-17 23:22:56,023 INFO util.GSet: VM type       = 64-bit
2022-03-17 23:22:56,023 INFO util.GSet: 1.0% max memory 839.5 MB = 8.4 MB
2022-03-17 23:22:56,023 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2022-03-17 23:22:56,024 INFO namenode.FSDirectory: ACLs enabled? true
2022-03-17 23:22:56,024 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2022-03-17 23:22:56,024 INFO namenode.FSDirectory: XAttrs enabled? true
2022-03-17 23:22:56,025 INFO namenode.NameNode: Caching file names occurring more than 10 times
2022-03-17 23:22:56,030 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2022-03-17 23:22:56,033 INFO snapshot.SnapshotManager: SkipList is disabled
2022-03-17 23:22:56,037 INFO util.GSet: Computing capacity for map cachedBlocks
2022-03-17 23:22:56,037 INFO util.GSet: VM type       = 64-bit
2022-03-17 23:22:56,037 INFO util.GSet: 0.25% max memory 839.5 MB = 2.1 MB
2022-03-17 23:22:56,037 INFO util.GSet: capacity      = 2^18 = 262144 entries
2022-03-17 23:22:56,047 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2022-03-17 23:22:56,047 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2022-03-17 23:22:56,047 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2022-03-17 23:22:56,051 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2022-03-17 23:22:56,051 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2022-03-17 23:22:56,053 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2022-03-17 23:22:56,053 INFO util.GSet: VM type       = 64-bit
2022-03-17 23:22:56,053 INFO util.GSet: 0.029999999329447746% max memory 839.5 MB = 257.9 KB
2022-03-17 23:22:56,053 INFO util.GSet: capacity      = 2^15 = 32768 entries
2022-03-17 23:22:56,080 INFO namenode.FSImage: Allocated new BlockPoolId: BP-571583129-192.168.153.11-1647530576071
2022-03-17 23:22:56,101 INFO common.Storage: Storage directory /home/hadoop/data/dfs/name has been successfully formatted.
2022-03-17 23:22:56,128 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/data/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2022-03-17 23:22:56,226 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/data/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 401 bytes saved in 0 seconds .
2022-03-17 23:22:56,241 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2022-03-17 23:22:56,259 INFO namenode.FSNamesystem: Stopping services started for active state
2022-03-17 23:22:56,260 INFO namenode.FSNamesystem: Stopping services started for standby state
2022-03-17 23:22:56,264 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2022-03-17 23:22:56,264 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.153.11
************************************************************/
[hadoop@node1 ~]$

2. Start the HDFS cluster

start-dfs.sh

This script starts the NameNode, DataNode, and SecondaryNameNode daemons:

[hadoop@node1 hadoop-3.3.2]$ start-dfs.sh
Starting namenodes on [node1]
Starting datanodes
node1.hadoop.com: Warning: Permanently added 'node1.hadoop.com' (ECDSA) to the list of known hosts.
node3.hadoop.com: ssh: Could not resolve hostname node3.hadoop.com: Name or service not known
node2.hadoop.com: ssh: Could not resolve hostname node2.hadoop.com: Name or service not known
Starting secondary namenodes [node2]
node2: WARNING: /home/hadoop/hadoop-3.3.2/logs does not exist.
[hadoop@node1 hadoop-3.3.2]$ jps
5001 DataNode
5274 Jps
4863 NameNode
[hadoop@node1 hadoop-3.3.2]$
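The "Could not resolve hostname" errors above indicate that node2.hadoop.com and node3.hadoop.com are not resolvable from node1, so their daemons were not started over SSH. Check that /etc/hosts on every node maps each IP address to its own hostname, for example:

```
192.168.153.11 node1 node1.hadoop.com
192.168.153.12 node2 node2.hadoop.com
192.168.153.13 node3 node3.hadoop.com
```

After correcting the file, stop and restart the cluster so the DataNode and NodeManager daemons come up on all three hosts.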

After successful startup, you can access the NameNode Web UI in a browser (default port 9870), e.g. http://192.168.153.11:9870.
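Beyond the Web UI, a quick command-line smoke test confirms that HDFS accepts reads and writes. This is a sketch: run it on node1 as user hadoop, with the cluster running and the Hadoop bin directory on PATH; the /tmp/smoketest path is just an illustrative choice.

```shell
# Create a directory, write a small file from stdin, and read it back.
hdfs dfs -mkdir -p /tmp/smoketest
echo "hello hdfs" | hdfs dfs -put - /tmp/smoketest/hello.txt
hdfs dfs -cat /tmp/smoketest/hello.txt

# Clean up afterwards.
hdfs dfs -rm -r /tmp/smoketest
```

If the `-cat` step prints the line back, the NameNode and at least one DataNode are working together correctly.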

3. Start the YARN cluster

start-yarn.sh

This script will start the ResourceManager daemon and NodeManager daemon:

[hadoop@node1 hadoop-3.3.2]$ start-yarn.sh
Starting resourcemanager
Starting nodemanagers
node3.hadoop.com: ssh: Could not resolve hostname node3.hadoop.com: Name or service not known
node2.hadoop.com: ssh: Could not resolve hostname node2.hadoop.com: Name or service not known
[hadoop@node1 hadoop-3.3.2]$ jps
5536 NodeManager
5395 ResourceManager
5001 DataNode
5867 Jps
4863 NameNode
[hadoop@node1 hadoop-3.3.2]$

After ResourceManager starts successfully, you can access its Web UI in a browser (default port 8088), e.g. http://192.168.153.11:8088.
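To verify that YARN can actually schedule work, you can submit one of the bundled MapReduce examples. This is a sketch: it assumes the cluster is up, HADOOP_HOME is set, and the examples jar ships under share/hadoop/mapreduce in the Hadoop 3.3.2 distribution; the pi arguments (2 maps, 4 samples per map) are kept small for a quick check.

```shell
# Submit the example pi-estimation job to the YARN cluster.
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.2.jar pi 2 4
```

The job should appear on the ResourceManager Web UI while it runs and print an estimate of pi when it finishes.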

In addition to the start-dfs.sh and start-yarn.sh scripts, you can also use the start-all.sh script to start all Hadoop processes at one time.
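Daemons can also be started one at a time with the same --daemon interface used in the stop commands later in this section. A sketch, assuming HADOOP_HOME is set; run each command on the host where that role is planned:

```shell
# HDFS daemons
$HADOOP_HOME/bin/hdfs --daemon start namenode            # node1
$HADOOP_HOME/bin/hdfs --daemon start datanode            # node1, node2, node3
$HADOOP_HOME/bin/hdfs --daemon start secondarynamenode   # node2

# YARN daemons
$HADOOP_HOME/bin/yarn --daemon start resourcemanager     # node1
$HADOOP_HOME/bin/yarn --daemon start nodemanager         # node1, node2, node3
```

This is useful when a single daemon has died and you do not want to restart the whole cluster.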

Stop the cluster

As with starting a cluster, Hadoop provides two ways to stop a cluster.

Commands to terminate processes one by one:

# HDFS cluster
$HADOOP_HOME/bin/hdfs --daemon stop namenode | datanode | secondarynamenode

# YARN cluster
$HADOOP_HOME/bin/yarn --daemon stop resourcemanager | nodemanager | proxyserver

Stop-cluster scripts:

  • HDFS cluster – $HADOOP_HOME/sbin/stop-dfs.sh stops all processes in the HDFS cluster.
  • YARN cluster – $HADOOP_HOME/sbin/stop-yarn.sh stops all processes in the YARN cluster.
  • Hadoop cluster – $HADOOP_HOME/sbin/stop-all.sh stops all processes in both the HDFS and YARN clusters.

For example, running the stop-all.sh script stops all Hadoop processes at once:

[hadoop@node1 hadoop-3.3.2]$ stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [node1]
Stopping datanodes
node2.hadoop.com: ssh: Could not resolve hostname node2.hadoop.com: Name or service not known
node3.hadoop.com: ssh: Could not resolve hostname node3.hadoop.com: Name or service not known
Stopping secondary namenodes [node2]
Stopping nodemanagers
node3.hadoop.com: ssh: Could not resolve hostname node3.hadoop.com: Name or service not known
node2.hadoop.com: ssh: Could not resolve hostname node2.hadoop.com: Name or service not known
Stopping resourcemanager
[hadoop@node1 hadoop-3.3.2]$

References

Hadoop: Setting up a Single Node Cluster

Hadoop Cluster Setup

How To Install Apache Hadoop / HBase on CentOS 7

Big Data Hadoop tutorial for programmers in 2022