The official website describes three installation methods:

Mode A is automated installation for non-production environments, Mode B is package installation, and Mode C is tarball installation. Out of habit and other considerations, I chose the tarball installation. Below I record the problems I ran into during the installation, as a reference for anyone who needs it.

For the detailed procedure, see the official documentation… and the following reference articles:……

Note: some of the screenshots in this article are from a 5.13.0 installation, but the operations are the same, so don't worry about that detail.

The general process is as follows:

I. Preparation:

1. Change the hostname and configure the cluster hosts

Change the hostname: `vim /etc/sysconfig/network`

Configure the cluster hosts (modify the hosts file on every node in the cluster): `vim /etc/hosts`
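The hosts file might look like the following; host28/host29/host30 are the host names used later in this post, and the addresses are placeholders:

```
192.168.1.28  host28
192.168.1.29  host29
192.168.1.30  host30
```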

2. Time synchronization

Use the NTP service to synchronize the clocks of the nodes in the cluster. There are two synchronization modes: 2.1. Every node synchronizes with an external time server. 2.2. Master-slave mode: one node is set as the master and synchronizes with an external time source, while the other nodes synchronize with the master. If conditions permit, the latter is preferable.
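For the master-slave mode, the ntp.conf fragments might look like this (a sketch; the external server name is a placeholder, and host28 stands in for the master):

```
# On the master (host28): sync with an external time source
server 0.pool.ntp.org

# On every other node: sync with the master
server host28
```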

3. Set up a firewall

The common practice online is simply to disable the firewall, but in real deployments it often cannot be turned off. Alternatively, firewall rules can be set so that the cluster's internal network is not isolated; which way to go depends on your situation. Disable the firewall: `service iptables stop`, and keep it disabled after reboot: `chkconfig iptables off`. Or add an ACCEPT rule for the internal network to /etc/sysconfig/iptables and restart the firewall: `service iptables restart`
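As a sketch of the second option, a rule like the following in /etc/sysconfig/iptables accepts all traffic from the cluster's internal subnet (192.168.1.0/24 here is a placeholder for your own network):

```
-A INPUT -s 192.168.1.0/24 -j ACCEPT
```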

4. Set up passwordless SSH login across the cluster

4.1 If you can run `ssh localhost` without entering a password, passwordless login already works. 4.2 Generate a key pair: `ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa`; the public key is placed in ~/.ssh/id_dsa.pub and the private key in ~/.ssh/id_dsa. 4.3 Local passwordless login: append the public key to the authenticated keys: `cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys`. 4.4 Copy the public key to another node: `scp ~/.ssh/id_dsa.pub root@host29:~/.ssh/`, enter host29's password, then on host29 run `cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys` to append the public key to the authenticated keys. 4.5 Perform the operations above on all nodes. 4.6 Failure record: unable to log in after all the settings were done. /var/log/secure showed: Authentication refused: bad ownership or modes for directory /root
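The append step can be made idempotent so that re-running the setup does not duplicate keys. This is a hypothetical helper of my own, not from the original post; `authorize_key` and its arguments are my names:

```shell
# Append a public key to authorized_keys only if it is not already present,
# keeping the permissions sshd's StrictModes checks require.
authorize_key() {                       # $1 = public key file, $2 = ssh dir
  mkdir -p "$2" && chmod 700 "$2"
  touch "$2/authorized_keys"
  grep -qxF "$(cat "$1")" "$2/authorized_keys" \
    || cat "$1" >> "$2/authorized_keys"
  chmod 600 "$2/authorized_keys"
}

# e.g. on host29, after copying the key over:
# authorize_key ~/.ssh/id_dsa.pub ~/.ssh
```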

Investigating the problem, I found that the owner and group of the /root directory had been changed.

Change it back: `chown root:root /root/`

5. Install the JDK. Note that the Oracle JDK must be installed, not OpenJDK. For the specific steps, see…

II. Install CM

1. Download

Go to the official website to download the installation package. Official website address:…

Find the package address of the environment on the download page:

`wget http://…`

2. Unpack the archive: `tar -zxvf cloudera-manager-el6-cm5.13.1_x86_64.tar.gz`

Unpacking produces two directories, cloudera and cm-5.13.1.

Move both directories to the installation directory, e.g. /opt: `mv cloudera cm-5.13.1 /opt`

3. Prepare the CDH installation package (three files in total). Download address:…

`wget http://…` (one download for each of the three files)

Place the installation packages in /opt/cloudera/parcel-repo/: `mv CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel.sha1 manifest.json /opt/cloudera/parcel-repo/`, then rename CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel.sha1 to CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel.sha: `mv CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel.sha1 CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel.sha`
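The rename is easy to forget when new parcels are added later. A small helper (hypothetical, not from the post; `normalize_sha` is my own name) can normalize whatever .sha1 files sit in the repo directory:

```shell
# Rename every *.parcel.sha1 in the given directory to *.parcel.sha,
# the name CM's parcel-repo expects.
normalize_sha() {
  for f in "$1"/*.parcel.sha1; do
    [ -e "$f" ] || continue            # no .sha1 files: nothing to do
    mv "$f" "${f%.sha1}.sha"
  done
}

# normalize_sha /opt/cloudera/parcel-repo
```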

4. Prepare the MySQL Connector

Official website:… Download: `wget …`, then unpack: `tar -zxvf mysql-connector-java-5.1.45.tar.gz`

Copy the driver to /usr/share/java/: `cp mysql-connector-java-5.1.45-bin.jar /usr/share/java/mysql-connector-java.jar`, or to /opt/cm-5.13.1/share/cmf/lib/. The former can be used directly when installing Hive; with the latter, Hive cannot find the driver, so you need to run one more command: `cp /opt/cm-5.13.1/share/cmf/lib/mysql-connector-java-5.1.45-bin.jar /opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/hive/lib/`


5. Configure the Agent. In the Agent configuration file, server_host is the hostname or IP of the CM Server, and server_port is the CM Server's communication port, 7182 by default.
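The relevant lines in the Agent's config.ini would look like the following, with host28 standing in for the CM Server host:

```
[General]
server_host=host28
server_port=7182
```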

6. Copy Agent to other nodes

`scp -r /opt/cm-5.13.1 host30:/opt`

7. Create the user cloudera-scm on all nodes

`useradd --system --home=/opt/cm-5.13.1/run/cloudera-scm-server --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm`

8. Set database information

8.1 Create the databases in MySQL: cmf (the Cloudera Manager database), hive (the Hive database), amon (Cloudera Activity Monitor), and rman (Cloudera Reports Manager). 8.2 Initialize the CM database: `/opt/cm-5.13.1/share/cmf/schema/scm_prepare_database.sh mysql -h host29 --scm-host host28 cmf username password`. The command format is: /opt/cm-5.13.1/share/cmf/schema/scm_prepare_database.sh <database type> -h <database host> --scm-host <CM host> <database name> <username> <password>; the specific parameters are described on the official website. 8.3 Check the configuration file under cm-5.13.1/etc/cloudera-scm-server/ and confirm that the database information has been written.
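The database-creation step might look like this in the mysql shell (a sketch: the utf8 character set is a common choice rather than something the post specifies, and grants for the CM user are omitted):

```
CREATE DATABASE cmf  DEFAULT CHARACTER SET utf8;
CREATE DATABASE hive DEFAULT CHARACTER SET utf8;
CREATE DATABASE amon DEFAULT CHARACTER SET utf8;
CREATE DATABASE rman DEFAULT CHARACTER SET utf8;
```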

9. Start the services

Start the Server: `/opt/cm-5.13.1/etc/init.d/cloudera-scm-server start`

Start the Agent: `/opt/cm-5.13.1/etc/init.d/cloudera-scm-agent start`

III. Install the cluster

1. Log in to CM

Enter the CM address in the browser to log in (the IP is the CM Server host's IP; the port is the Server's HTTP service port, 7180 by default). The username and password are both admin.

2. Select the CM version to install

You can choose the free edition or the trial edition. If you don't use the advanced features, it doesn't matter when the trial expires; you can continue using it.

3. Cluster installation

Once the Agent on each node has started normally, the corresponding nodes appear in the list of currently managed hosts. Select the nodes you want to use, which is usually all of them.

If the parcel has been placed in /opt/cloudera/parcel-repo/ but is not detected, check whether CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel.sha1 was renamed to CDH-5.13.1-1.cdh5.13.1.p0.2-el6.parcel.sha.

At this step the host inspector raised three warnings. The first two can be fixed following the on-page instructions; the third is about the JDK version: OpenJDK cannot be used, the Oracle JDK is required.

Set this on each node:

Run `echo 10 > /proc/sys/vm/swappiness` to take effect immediately, then edit /etc/sysctl.conf (`vim /etc/sysctl.conf`) and add or modify `vm.swappiness = 10` so the setting persists across reboots.

Run: `echo never > /sys/kernel/mm/transparent_hugepage/defrag` and `echo never > /sys/kernel/mm/transparent_hugepage/enabled`, and add the same two lines to /etc/rc.local (`vim /etc/rc.local`) so they are applied at boot.
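A quick way to confirm both settings took effect on a node (a small check of my own, assuming the standard /proc and /sys paths):

```shell
# Print the current swappiness and transparent hugepage state.
swap=$(cat /proc/sys/vm/swappiness)
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo "n/a")
echo "swappiness=$swap thp=$thp"
```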

If the installation is interrupted and then resumed, the hosts may show up as already managed:

In that case, stop all CM services, drop the cmf database and re-initialize it: mysql> `drop database cmf;`. Unmount the mount points: `umount cm-5.13.1/run/cloudera-scm-agent/process`, then clear the Agent UUID on the affected nodes: `rm -rf cm-5.13.1/lib/cloudera-scm-agent/*`

Without the umount, rm fails with: rm: cannot remove "cm-5.13.1/run/cloudera-scm-agent/process": Device or resource busy

Select the services you want to install, either by choosing a predefined service group or by customizing the selection.

Fill in the database information prepared earlier.

Go ahead and keep the default options.

Note: if you modify the HDFS storage folder, make sure the folder already exists and has the correct access permissions; otherwise HDFS will not start, reporting that the folder does not exist.
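The directory preparation can be scripted; `prepare_dfs_dir` is a hypothetical helper of my own and /data/dfs/nn a placeholder path:

```shell
# Create an HDFS storage directory ahead of time and hand it to the hdfs
# user, so the role does not fail to start with "folder does not exist".
prepare_dfs_dir() {
  mkdir -p "$1"
  chmod 700 "$1"
  # The hdfs user only exists once the CDH parcel is activated; ignore
  # the failure when trying this sketch elsewhere.
  chown -R hdfs:hadoop "$1" 2>/dev/null || true
}

# prepare_dfs_dir /data/dfs/nn
```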

IV. Problems encountered during installation or use

1. No portmap or rpcbind service is running on this host. Please start portmap or rpcbind service before attempting to start the NFS Gateway role on this host.

Install and start rpcbind: `yum install rpcbind`, then `service rpcbind start`


2. Hive cannot find the MySQL driver (see the note above). On the machine where the Hive Metastore Server service is installed, run: `cp /opt/cm-5.13.1/share/cmf/lib/mysql-connector-java-5.1.45-bin.jar /opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/hive/lib/`