At JFrog, we rely on Kubernetes and Helm to orchestrate our systems and keep our workloads running and up to date. Our JFrog Cloud service was initially deployed with Helm V2 and the Tillerless plug-in for enhanced security, but we have now successfully migrated thousands of releases to Helm V3.

Like many SaaS providers, JFrog Cloud runs in a number of Kubernetes clusters across different regions and cloud providers, including AWS, Azure, and Google Cloud.

We’ve learned some important lessons along the way, and we’re happy to share them with you.

Helm V3 was first released in November 2019, and Helm V2 continued to receive updates for about a year afterwards. With the final release of Helm 2.17.0 in November 2020, Helm V3 is now the only standard supported by the Helm developer community.

Helm V3 offers a number of significant improvements, most notably the removal of Tiller. Tiller, the in-cluster server that Helm V2 clients interact with, requires administrator privileges to perform its duties, which is considered a security risk in a shared Kubernetes cluster. This can be mitigated with the Tillerless plug-in, but Helm V3 no longer needs it at all.

In addition, Helm V3 offers some new features and greater stability. It is also currently the only version that will receive future feature and security updates.

Migration strategy

To make it easier to migrate a cluster from Helm V2 to V3, the Helm developer community created the helm-2to3 plug-in, which is used with the Helm V3 client. There is a Helm blog post that provides some good information on how to use it.

Installation is simple:

$ helm3 plugin install https://github.com/helm/helm-…
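You can confirm the plug-in is available to your Helm V3 client by listing the installed plug-ins:

$ helm3 plugin list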

How you proceed from there depends on the number of releases you need to migrate.

Manual migration

If you only need to migrate a few releases, you can convert them one at a time on the command line with the Helm V3 client.

First, use the Helm V2 list command to list all deployed or failed releases in the current namespace:

$ helm2 list
NAME      REVISION  UPDATED                   STATUS    CHART             APP VERSION  NAMESPACE
postgres  1         Thu Nov 14 15:01:00 2019  DEPLOYED  postgresql-7.1.0  11.5.0       postgres

You can then convert each release by name with the plug-in. We recommend using the --dry-run flag first as a rehearsal to make sure the subsequent conversion will go smoothly.

$ helm3 2to3 convert --dry-run postgres

$ helm3 2to3 convert postgres
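As a quick sanity check, you can then list the release with the Helm V3 client, using the namespace from the example above:

$ helm3 list -n postgres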

You can repeat this process for all of your releases, and you’re done!

Automated migration at the enterprise level

To migrate a large number of Helm V2 releases to V3, you will want to automate the process with a shell script.

Your script will need a list of all the releases to be converted. You can use the Helm V2 client to generate that list, in this case into a file called releases.log.

$ helm2 tiller run -- helm2 ls --max=0 | sed -n '1!p' | awk 'FNR > 1 { print $1 }' > releases.log

For a relatively small number of releases (about 200 at most), this produces quick results. With more than that, however, it takes the Helm V2 client a long time to fetch all the releases. In addition, we ran into Kubernetes API limits on our AWS EKS clusters.

The JFrog Cloud service runs thousands of Helm releases in each Kubernetes cluster, so we needed an alternative, faster approach.

Since all Tiller secrets are labeled, we found that we could extract the release names into the releases.log file using a kubectl command:

$ kubectl -n kube-system get secret -l OWNER=TILLER,STATUS=DEPLOYED | awk 'FNR > 1 { print $1 }' | cut -f1 -d"." > releases.log
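For context, Tiller stores each release revision in a Secret named after the release and its revision (for example, postgres.v1), which is why the final cut strips everything after the first dot. A hypothetical intermediate output, before the cut, might look like:

postgres.v1
another-release.v7

after which only the release names (postgres, another-release) end up in releases.log.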

Using our list of Helm V2 releases in releases.log, we can automate the migration steps with a bash script:

#!/bin/bash

# Get releases list
RELEASES=$(cat releases.log)

for r in $RELEASES
do
  echo
  echo "Processing release $r"
  helm3 2to3 convert --tiller-out-cluster --release-versions-max=5 \
    --delete-v2-releases --dry-run $r 2>&1 | tee -a convert.log
done
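If you save the script to a file, say convert.sh (the name is just an example), it can be run the usual way:

$ chmod +x convert.sh
$ ./convert.sh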

In this script, you may want or need to change these flags:

--tiller-out-cluster: use this if you are not running Tiller inside the Kubernetes cluster; if Tiller is installed, remove this flag.

--dry-run: lets you test that the migration script works without actually performing the conversion; remove this flag when running the actual migration.
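For example, once the dry run output looks clean, the actual conversion line in the loop is the same command with --dry-run removed:

helm3 2to3 convert --tiller-out-cluster --release-versions-max=5 \
  --delete-v2-releases $r 2>&1 | tee -a convert.log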

If you choose to omit the --delete-v2-releases flag and leave the Helm V2 release data in place, you can clean it up later with the following command:

$ helm3 2to3 cleanup --tiller-out-cluster --releases-cleanup --skip-confirmation

The script writes the migration results to the convert.log file, which you should review for any migration issues.
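A quick way to scan that log for problems (a minimal sketch; the exact messages depend on your releases) is:

$ grep -iE 'error|fail' convert.log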

When we migrated the JFrog Cloud service, not all releases were on the same chart version; each one used the chart that was current when it was first deployed. As a result, some of the older releases could not be upgraded with Helm V3 after migration.

The problem is that some Helm V3 labels and annotations had not been added to the migrated Kubernetes objects. When a check shows that they are missing, the problem is easily resolved by adding them before the Helm upgrade step:

$ kubectl -n ${NAMESPACE} label deployment -l "app.kubernetes.io/instance=${RELEASE}" "app.kubernetes.io/managed-by=Helm"

$ kubectl -n ${NAMESPACE} annotate deployment -l "app.kubernetes.io/instance=${RELEASE}" "meta.helm.sh/release-name=${RELEASE}" "meta.helm.sh/release-namespace=${NAMESPACE}"
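If you want to check whether a release’s deployments already carry the label before patching, a minimal sketch using the same ${NAMESPACE} and ${RELEASE} variables is:

$ kubectl -n ${NAMESPACE} get deployment -l "app.kubernetes.io/instance=${RELEASE}" --show-labels

The annotations can be inspected the same way with -o yaml output.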

Let’s have fun migrating 🙂

That’s all you need to migrate your releases to Helm V3! The process is simple, but keep in mind that it won’t necessarily be fast. When there are thousands of releases, as in many enterprise-level organizations, the migration does take time to complete.

Using these steps, you can create an automated tool that helps you migrate the large number of releases running in Kubernetes from Helm V2 to Helm V3, and keep your Kubernetes infrastructure up to date.