1. Prepare the Linux environment

Configure the VMware host-only network:
Run vmnetcfg.exe -> select VMnet1 (host-only) -> set the subnet mask to 255.255.255.0 -> Apply -> OK.
Back in Windows: open Network and Sharing Center -> Change adapter settings -> right-click VMnet1 -> Properties -> double-click IPv4 -> set the Windows IP to 192.168.1.100, subnet mask 255.255.255.0 -> OK.
In the virtualization software: select the virtual machine -> right-click -> Settings -> Network Adapter -> Host-only -> OK.

1.1 Modify the host name
vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=itcast    ###

1.2 Modify the IP address
Option 1 (GUI): enter the Linux GUI -> right-click the two small computer icons in the upper right -> Edit Connections -> select the current connection "System eth0" -> click Edit -> select the IPv4 tab -> set Method to Manual -> click Add -> enter IP 192.168.1.101, subnet mask 255.255.255.0, gateway 192.168.1.1 -> Apply.
Option 2 (edit the config file): vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"        ###
HWADDR="00:0C:29:3C:BF:E7"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="ce22eeca-ecde-4536-8cc2-ef0dc36d4a8c"
IPADDR="192.168.1.101"    ###
NETMASK="255.255.255.0"   ###
GATEWAY="192.168.1.1"     ###

1.3 Modify the mapping between host names and IP addresses
vim /etc/hosts
192.168.1.101   itcast

1.4 Disable the firewall
# check the firewall status
service iptables status
# stop the firewall
service iptables stop
# list the firewall's boot-time setting
chkconfig iptables --list
# disable the firewall at boot
chkconfig iptables off

1.5 Restart Linux
reboot
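Step 1.2 (option 2) can also be scripted instead of edited by hand. A minimal sketch, writing the static-IP config to a temporary file for illustration; on a real CentOS 6 box the target would be /etc/sysconfig/network-scripts/ifcfg-eth0 (run as root, then `service network restart`), and HWADDR/UUID would be the machine's own values, so they are omitted here:

```shell
# Generate a minimal static-IP ifcfg-eth0 (values taken from the text above).
# CFG is a temp file stand-in for /etc/sysconfig/network-scripts/ifcfg-eth0.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
DEVICE="eth0"
BOOTPROTO="static"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.1.101"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
EOF
# sanity-check the file we just wrote
grep -q 'IPADDR="192.168.1.101"' "$CFG" && echo "static IP configured"
```

Keeping BOOTPROTO="static" is what stops DHCP from overwriting the address on the next network restart.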

2. Install the JDK

2.1 Upload the JDK
Press Alt+P to open the SFTP window, then run:
put d:\xxx\yy\ll\jdk-7u_65-i585.tar.gz

2.2 Extract the JDK
# create the target directory
mkdir -p /home/hadoop/app
# extract into it
tar -zxvf jdk-7u_65-i585.tar.gz -C /home/hadoop/app

2.3 Add Java to the environment variables
vim /etc/profile
# append at the end of the file:
export JAVA_HOME=/home/hadoop/app/jdk-7u_65-i585
export PATH=$PATH:$JAVA_HOME/bin
# reload the configuration
source /etc/profile
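The effect of the step-2.3 exports can be sketched and checked in a plain shell session (the JAVA_HOME path is the one from the text; adjust it to wherever the archive actually extracted):

```shell
# Reproduce the /etc/profile exports and confirm the JDK's bin
# directory really ended up on PATH.
export JAVA_HOME=/home/hadoop/app/jdk-7u_65-i585
export PATH="$PATH:$JAVA_HOME/bin"
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "JAVA_HOME/bin is on PATH" ;;
  *)                    echo "PATH not updated" ;;
esac
```

After `source /etc/profile` on the real machine, `java -version` should report the 1.7.0_65 JDK.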

3. Install Hadoop 2.4.1

Upload the hadoop installation package to the server under /home/hadoop/.
Note: the hadoop 2.x configuration files live in $HADOOP_HOME/etc/hadoop.
Pseudo-distributed mode requires modifying 5 configuration files.

3.1 Configure hadoop
First file: hadoop-env.sh
vim hadoop-env.sh
# line 27
export JAVA_HOME=/usr/java/jdk1.7.0_65
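The line-27 edit can also be done non-interactively with sed. A sketch against a stand-in copy of hadoop-env.sh (point it at $HADOOP_HOME/etc/hadoop/hadoop-env.sh on a real install; the JDK path is the one given in the text):

```shell
# ENVSH stands in for $HADOOP_HOME/etc/hadoop/hadoop-env.sh.
ENVSH=$(mktemp)
# the stock hadoop 2.x file defers to the environment:
echo 'export JAVA_HOME=${JAVA_HOME}' > "$ENVSH"
# pin it to an explicit JDK path instead
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.7.0_65|' "$ENVSH"
grep '^export JAVA_HOME=' "$ENVSH"
```

Pinning an explicit path matters because the hadoop daemons are often started over ssh, where ${JAVA_HOME} from an interactive profile may not be set.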

Second file: core-site.xml
<!-- Specify the file system schema (URI) used by HADOOP, i.e. the address of the HDFS namenode -->
<property>
	<name>fs.defaultFS</name>
	<value>hdfs://weekend-1206-01:9000</value>
</property>
<!-- Specify the directory where hadoop stores files generated at runtime -->
<property>
	<name>hadoop.tmp.dir</name>
	<value>/home/hadoop/hadoop-2.4.1/tmp</value>
</property>

Third file: hdfs-site.xml (defaults are in hdfs-default.xml)
<!-- Specify the number of HDFS replicas -->
<property>
	<name>dfs.replication</name>
	<value>1</value>
</property>

Fourth file: mapred-site.xml (rename the template first)
mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<!-- Specify that MapReduce runs on YARN -->
<property>
	<name>mapreduce.framework.name</name>
	<value>yarn</value>
</property>

Fifth file: yarn-site.xml
<!-- Specify the address of YARN's boss (the ResourceManager) -->
<property>
	<name>yarn.resourcemanager.hostname</name>
	<value>weekend-1206-01</value>
</property>
<!-- The way reducers fetch data -->
<property>
	<name>yarn.nodemanager.aux-services</name>
	<value>mapreduce_shuffle</value>
</property>

3.2 Add hadoop to the environment variables
vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_65
export HADOOP_HOME=/itcast/hadoop-2.4.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile

3.3 Format the namenode (initialize it)
hdfs namenode -format
(or the deprecated equivalent: hadoop namenode -format)

3.4 Start hadoop
# start HDFS first
sbin/start-dfs.sh
# then start YARN
sbin/start-yarn.sh

3.5 Verify the startup
Run the jps command; the following processes should be listed:
27408 NameNode
28218 Jps
27643 SecondaryNameNode
28066 NodeManager
27803 ResourceManager
27512 DataNode
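Step 3.5 can itself be scripted: capture the `jps` output and check that all five hadoop daemons are running. A sketch using the sample output from the text above; on a live machine replace JPS_OUT with "$(jps)":

```shell
# Sample jps output from the text (stand-in for "$(jps)" on a live node).
JPS_OUT='27408 NameNode
28218 Jps
27643 SecondaryNameNode
28066 NodeManager
27803 ResourceManager
27512 DataNode'
missing=""
# the five daemons a pseudo-distributed cluster should be running
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  # -w avoids "NameNode" matching inside "SecondaryNameNode"
  echo "$JPS_OUT" | grep -qw "$d" || missing="$missing $d"
done
if [ -z "$missing" ]; then
  echo "all daemons up"
else
  echo "missing:$missing"
fi
```

If DataNode or NodeManager is missing, the usual first stops are the daemon logs under $HADOOP_HOME/logs and a re-check of the five config files above.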