Ambari upgrade


Confirm the version

Version 2.4.2, which the company currently runs, was still relatively new in 2017. It is now three major versions behind; I can only sigh at how fast the technology iterates.

The upgrade documented for Ambari 2.7.3 requires starting from 2.6.x,

while the Ambari 2.6.2.2 upgrade is supported from 2.4.2. On the upgrade, the documentation says:

Ambari 2.6 does not support managing an HDP 2.3, 2.2, 2.1, or 2.0 cluster. If you are running an HDP 2.3, 2.2, 2.1, or 2.0 cluster, in order to use Ambari 2.6 you must first upgrade to HDP 2.4 or higher, using Ambari 2.5.2, 2.4.2, 2.2.2, 2.2.1, 2.2, 2.1, or 2.0, before upgrading to Ambari 2.6. Once completed, upgrade your current Ambari version to Ambari 2.6.

In other words, to upgrade from Ambari 2.4.2 to 2.7.3 you must first go from 2.4.2 to 2.6.x, and then to 2.7.3. The same applies to the HDP upgrade: Ambari 2.7.3 only supports an express upgrade from HDP 2.6.x to HDP 3.x, which means going from HDP 2.5.0 to HDP 2.6.x first, and then to HDP 3.x.

Preparing to upgrade

When preparing to upgrade Ambari and the HDP cluster, we strongly recommend reviewing the following checklist to confirm that the cluster is operating properly. Attempting to upgrade a cluster that is in an unhealthy state can produce unexpected results.

Cluster checklist

  • Ensure that all services in the cluster are running
  • Run each of the service checks (found under the Service Operations menu) and verify that they executed successfully
  • Clear all alerts, or understand why they are being raised; repair where necessary.
  • Verify that starting and stopping all services is successfully executing
  • Time service starts and stops.
  • Download the packages before you upgrade. Put them in a local repository and/or consider using a proxy, since every node in the cluster needs to download thousands of megabytes.
  • Make sure you take point-in-time backups of all databases that back the cluster. This includes (among others) Ambari, Hive Metastore, Ranger, and Oozie.
  • Make sure you have enough disk space on /usr/hdp/

    (about 3GB for each additional HDP version).
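The disk-space item on this checklist is easy to script. A minimal sketch, using the ~3 GB-per-version figure above; the mount point is an assumption (set MOUNT to whichever filesystem holds /usr/hdp on your hosts):

```shell
# Pre-upgrade disk check: each additional HDP version needs roughly 3 GB
# under /usr/hdp. MOUNT defaults to / and should be overridden if /usr/hdp
# is on a separate filesystem.
REQUIRED_GB=3
MOUNT=${MOUNT:-/}
# column 4 of POSIX df output is the available space in KB
avail_kb=$(df -Pk "$MOUNT" | awk 'NR==2 {print $4}')
avail_gb=$((avail_kb / 1024 / 1024))
if [ "$avail_gb" -ge "$REQUIRED_GB" ]; then
    echo "OK: ${avail_gb} GB free on $MOUNT"
else
    echo "WARN: only ${avail_gb} GB free on $MOUNT (need ${REQUIRED_GB} GB)"
fi
```

Run it on every host, since any single full disk can stall the stack installation.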

Ambari upgrade preparation

  • You must have root, administrative, or root-equivalent access on the Ambari Server host and on all hosts in the cluster.
  • You must back up the Ambari Server database.

    <font color='red'>Must be backed up, or you cannot roll back!</font>

  • You must back up /etc/ambari-server/conf/ambari.properties

    In practice, the upgrade script was found to back up this file itself, as ambari.properties.rpmsave

  • Prepare to upgrade the Ambari Metrics Service

    • Before starting the upgrade process, record the location of the Metrics Collector component.
    • You must stop the Ambari Metrics service from Ambari Web.
    • After the Ambari upgrade, you must also upgrade the Ambari Metrics System and add the Grafana component.
  • After you upgrade Ambari, you must also upgrade SmartSense (paid software, which we have not installed).
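The database and properties backups above can be sketched as one script. This assumes a MySQL backend with database and user both named ambari (both names are assumptions; for a PostgreSQL backend use pg_dump instead):

```shell
# Sketch of the pre-upgrade backup step. -p makes mysqldump prompt for the
# password interactively; database/user names are assumptions.
backup_ambari() {
    local dest=${1:-/var/backups/ambari}
    local stamp
    stamp=$(date +%Y%m%d_%H%M%S)
    mkdir -p "$dest" || return 1
    # point-in-time dump of the Ambari Server database
    mysqldump -u ambari -p ambari > "$dest/ambari_${stamp}.sql" || return 1
    # keep a copy of the server configuration for rollback
    cp -a /etc/ambari-server/conf/ambari.properties "$dest/ambari.properties.${stamp}"
}
# usage: backup_ambari /var/backups/ambari
```

Keeping the timestamp in the file name lets you retain several restore points instead of overwriting the previous dump.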


Upgrade Ambari

  1. If you are running the Ambari Metrics service in a cluster, stop the service. From Ambari Web, browse to Services> Ambari Metrics, and select Stop from the Service Actions menu.
  2. Stop the Ambari server. On a host running Ambari Server:

    ambari-server stop

  3. Stop all Ambari agents. On each host in the cluster where the Ambari agent is running:

    ambari-agent stop

  4. Create the new ambari.repo and replace the old repository file with it on all hosts in the cluster

  5. Upgrade the Ambari server. On a host running Ambari Server:

    First check that the ambari-server version in yum is correct

    yum clean all
    yum info ambari-server
    [root@rm yum.repos.d]# yum info ambari-server
    Determining fastest mirrors
    Installed Packages
    Name        : ambari-server
    Arch        : x86_64
    Version     :
    Release     : 22
    Size        : 700 M
    Repo        : installed
    From repo   : installed
    Summary     : Ambari Server
    License     : (c) Apache Software Foundation
    Description : Maven Recipe: RPM Package.

    Available Packages
    Name        : ambari-server
    Arch        : x86_64
    Version     :
    Release     : 1
    Size        : 718 M
    Repo        : ambari-
    Summary     : Ambari Server
    License     : (c) Apache Software Foundation
    Description : Maven Recipe: RPM Package.

    yum upgrade ambari-server

    Backing up Ambari properties /etc/ambari-server/conf/ -> /etc/ambari-server/conf/
    Backing up Ambari properties /var/lib/ambari-server/ -> /var/lib/ambari-server/
    Backing up JAAS login file /etc/ambari-server/conf/krb5JAASLogin.conf -> /etc/ambari-server/conf/krb5JAASLogin.conf.rpmsave
    Backing up stacks directory /var/lib/ambari-server/resources/stacks -> /var/lib/ambari-server/resources/stacks_23_04_19_09_40.old
    Backing up common-services directory /var/lib/ambari-server/resources/common-services -> /var/lib/ambari-server/resources/common-services_23_04_19_09_40.old
    Backing up Ambari view jars /var/lib/ambari-server/resources/views/*.jar -> /var/lib/ambari-server/resources/views/backups/
    Backing up Ambari server jar /usr/lib/ambari-server/ambari-server-*.jar -> /usr/lib/ambari-server-backups/
  6. Check the success of the upgrade by noting the progress during the Ambari Server installation that started in Step 5.

    • While the process runs, the console displays output similar to the following

      Setting up Upgrade Process Resolving Dependencies --> Running transaction check
    • If the upgrade fails, the console displays output similar to the following

      Setting up Upgrade Process No Packages marked for Update

    • A successful upgrade displays output similar to the following

      Updated: ambari-server.noarch 0:2.6.2-155
      Complete!
  7. Upgrade all Ambari agents. On each host in the cluster where the Ambari agent is running

    yum upgrade ambari-agent

  8. After the upgrade process is complete, check each host to make sure the new files are installed

    rpm -qa | grep ambari-agent

  9. Upgrade the Ambari Server database schema. On the host on which Ambari Server is running

    ambari-server upgrade

  10. Start the Ambari server. On the host on which Ambari Server is running

    ambari-server start

  11. Activate all Ambari Agents. On each host where the Ambari agent is running in the cluster

    ambari-agent start

  12. On each host in the cluster where the Metrics Monitor is running, run the following command

    yum upgrade ambari-metrics-monitor ambari-metrics-hadoop-sink

  13. Execute the following command on all hosts running the Metrics Collector

    yum upgrade ambari-metrics-collector

  14. Execute the following command on the host on which the Grafana component is running

    yum upgrade ambari-metrics-grafana
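Several of the steps above (3, 7, and 11, plus the metrics upgrades) must run on every agent host. A sketch of driving that over ssh; the hosts file (one hostname per line) and passwordless ssh are assumptions:

```shell
# Sketch: run the agent upgrade across the cluster over ssh.
# Assumes a hosts file with one hostname per line, and passwordless ssh
# from the node this is run on.
upgrade_agents() {
    local hosts_file=$1 host
    while read -r host; do
        [ -z "$host" ] && continue
        echo "upgrading ambari-agent on $host"
        ssh "$host" 'yum -y upgrade ambari-agent' || echo "FAILED: $host"
    done < "$hosts_file"
}
# usage: upgrade_agents /path/to/cluster_hosts.txt
```

Collecting the "FAILED" lines gives you the list of hosts to fix before restarting the agents in step 11.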

Upgrade HDP

There are two ways to upgrade HDP with Ambari: rolling upgrade and fast upgrade.

  • Rolling Upgrades: Rolling Upgrades orchestrate HDP upgrades in order to maintain cluster operations and minimize service impact during the upgrade. This process has more stringent prerequisites (especially with regard to cluster high availability configurations) and may take longer to complete than Express Upgrade.
  • Express Upgrade: Express Upgrade orchestrates the HDP upgrade sequence in a way that causes a cluster outage, but its prerequisites are less stringent.

We adopted Express Upgrade this time.

  1. Register the version

  2. Install the HDP packages

  3. Run the upgrade checks

    Two items were flagged: first, disable the auto-start services; second, restart the services whose configuration changed and run their service checks.

  4. Disable Auto-Start Services

  5. Restart the services whose configuration changed, and run a check.

    Reason: The following service configurations have been updated and their Service Checks should be run again: HIVE, HDFS, MAPREDUCE2, TEZ, HBASE, ZOOKEEPER, YARN

    Go to the main dashboard, restart the services marked with the yellow restart indicator (Restart Required), and then run a Service Check for each.

  6. Upgrade the cluster

    The upgrade pauses at some intermediate steps for verification; confirm and continue.

  7. Upgrade succeeded

After upgrading to Ambari 2.6.2 and HDP 2.6.5, repeat the same process to finally reach Ambari 2.7.3.0 and HDP 3.1.0.

Errors encountered

Missing libtirpc-devel package

ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install hive_2_6_5_0_292' returned 1.
Error: Package: hadoop_2_6_5_0_292-hdfs-x86_64 (HDP-2.6)
       Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

yum install -y libtirpc-devel-0.2.4-0.0.el7.x86_64.rpm

snappy dependency conflict

Error: Package: snappy-devel-1.0.5-1.el6.x86_64 (@hdp-utils-
       Requires: snappy(x86-64) = 1.0.5-1.el6
       Removing: snappy-1.0.5-1.el6.x86_64 (@hdp-utils-
           snappy(x86-64) = 1.0.5-1.el6
       Updated By: snappy-1.1.0-3.el7.x86_64 (centos7-media)
           snappy(x86-64) = 1.1.0-3.el7

mv centos.repo centos.repo.backup

Executing ambari-server upgrade reported an error

The ambari-server upgrade command reported an error as follows

[root@rm yum.repos.d]# ambari-server upgrade
Using python  /usr/bin/python
Upgrading ambari-server
INFO: Upgrade Ambari Server
INFO: Updating Ambari Server properties in ...
INFO: Updating Ambari Server properties in ...
INFO: Original file kept
INFO: Fixing database objects owner
Ambari Server configured for MySQL. Confirm you have made a backup of the Ambari Server database [y/n] (n)? y
INFO: Upgrading database schema
ERROR: Unexpected ValueError: No JSON object could be decoded
For more info run ambari-server with -v or --verbose option

Checking ambari-server.log shows that an insert into user_authentication statement failed with a duplicate primary key; simply clearing that table fixes it. Presumably the cluster had previously been upgraded to this version and rolled back, but the rollback left this table populated because the lower version does not have it.
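A sketch of that cleanup, again assuming a MySQL backend with database and user both named ambari (hypothetical names; back the table up before clearing it):

```shell
# Hypothetical recovery for the duplicate-key failure: back up the
# conflicting table, clear it, then retry the schema upgrade.
# -p makes the MySQL tools prompt for the password.
fix_user_authentication() {
    mysqldump -u ambari -p ambari user_authentication > user_authentication.bak.sql || return 1
    mysql -u ambari -p ambari -e 'TRUNCATE TABLE user_authentication;' || return 1
    ambari-server upgrade
}
# usage: fix_user_authentication
```

The dump gives you a way to restore the rows if clearing the table turns out to be the wrong call for your cluster.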

Custom service error

The new version does not recognize the previous group parameter

In Ambari, we have some custom services embedded in Ambari's management, such as Redis.

resource_management.core.exceptions.Fail: Execute['rm -rf /opt/ambari/custom_service/app/redis*'] received unsupported argument group

There is a statement in the script that was not a problem in older versions:

if os.path.exists(params.install_dir):
    Execute(format('rm -rf {install_dir}/redis*'))

The new version no longer supports this way of writing it; removing the user and group arguments fixes it.

Spark Offline Module

The Spark code base has changed in the new version: DataFrame and Dataset have been merged (DataFrame is now an alias for Dataset[Row]), so Spark application code that references the old DataFrame class needs to be modified.

Error message:

java.lang.NoClassDefFoundError: org/apache/spark/sql/DataFrame

Hue module

  1. KeyError: u'hue'

    KeyError: u'hue'
    Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/', 'ANY', '/var/lib/ambari-agent/data/command-303.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-303.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']

    Fix it by setting ignore_groupsusers_create to true, using the following statements

    # cd /var/lib/ambari-server/resources/scripts
    # python -u admin -p admin -n $cluster_name -l $ambari_server -t 8080 -a get -c cluster-env |grep -i ignore_groupsusers_create
    # python -u admin -p admin -n $cluster_name -l $ambari_server -t 8080 -a set -c cluster-env -k ignore_groupsusers_create -v true
  2. IndexError: list index out of range

    It was suspected that the way the script obtains the ResourceManager host no longer works in the new version:

    rm_host = default("/clusterHostInfo/rm_host", ["rm.ambari"])
    histroryserver_host = default("/clusterHostInfo/hs_host", ["rm.ambari"])

    It turned out that the key names changed in the new version.

    The script can be changed to the following:

    resourcemanager_hosts = default("/clusterHostInfo/resourcemanager_hosts", [])
    histroryserver_host = default("/clusterHostInfo/historyserver_hosts", [])
  3. KeyError: ‘group’

    Remove the group argument from the Execute call that writes the HUE files in the service scripts.

  4. Error: chown hue:hue, invalid user hue

    The new version does not create the Hue user itself! Create it yourself: useradd hue -g hue (if the hue group does not exist yet, run groupadd hue first).

  5. Startup error

    Traceback (most recent call last):
      File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/HUE/package/scripts/", line 82, in <module>
        HueServer().execute()
      File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/", line 352, in execute
        method(env)
      File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/HUE/package/scripts/", line 34, in start
        self.configure(env)
      File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/HUE/package/scripts/", line 29, in configure
        setup_hue()
      File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/HUE/package/scripts/", line 35, in setup_hue
        recursive_chmod=True
      File "/usr/lib/ambari-agent/lib/resource_management/core/", line 166, in __init__
      File "/usr/lib/ambari-agent/lib/resource_management/core/", line 160, in run
        self.run_action(resource, action)
      File "/usr/lib/ambari-agent/lib/resource_management/core/", line 119, in run_action
        provider = provider_class(resource)
      File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/", line 617, in __init__
        self.assert_parameter_is_set('dfs_type')
      File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/", line 696, in assert_parameter_is_set
        raise Fail("Resource parameter '{0}' is not set.".format(parameter_name))
    resource_management.core.exceptions.Fail: Resource parameter 'dfs_type' is not set.

    Comment out the following block:

    #  params.HdfsResource(params.hue_hdfs_home_dir,
    #                      type="directory",
    #                      action="create_on_execute",
    #                      owner=params.hue_user,
    #                      mode=0755,
    #                      recursive_chmod=True
    #  )

Appendix

  • Ambari2.6.2.2 upgrade
  • Ambari2.7.3 upgrade
  • Supported list of Ambari
  • MySQL backup and restore
  • HDP2.6.5 “Error: Package: hadoop_2_6_0_3_8-hdfs- (HDP-2.6) Requires: libtirpc-devel” error
  • HDP upgrade
  • Function comparison between Hadoop 2.x and Hadoop 3.x
  • New features you should know about HBase 2.0
  • InfoQ HBase2.0 status and future
  • Hive 3.x feature changes
  • Spark Release 2.3.0 releases new features and optimizations
  • A summary of new features of Spark2.0
  • What’s New in Kafka (0.8.2-1.0.0) — What’s New in Kafka 1.0