Preface

Helm is a package manager for Kubernetes (K8s) clusters: engineers use it to package an application as a whole, and users use it to install packaged applications, much the way apt-get works on Ubuntu or yum/dnf on Red Hat. This article covers how to package your application with Helm and how to deploy it with Helm. It assumes you know the K8s basics, such as Deployment, StatefulSet, Service, ConfigMap, Secret, and PV/PVC; if not, start with "Door of Docker, Kubernetes Container World" first.

Deploying the Application the Conventional Way

This section uses plain manifest files to deploy a test case consisting of two services, hello and greeter. hello depends on greeter; the invocation chain is as follows:

<http get> --------> (hello) --------> (greeter)

Execute the following commands to clone the repository¹ and deploy the sample in the demo namespace.

git clone https://github.com/zylpsrs/helm-example.git
cd helm-example
kubectl create namespace demo
kubectl -n demo apply -f k8s

For simplicity, the example in this article uses only Deployment, Ingress, and Service objects, so the content is quite simple. The k8s directory is shown below:

$ tree k8s/
k8s/
├── deploy-greeter.yaml   # Deployment for greeter
├── deploy-hello.yaml     # Deployment for hello
├── ing-hello.yaml        # Ingress for hello
├── svc-greeter.yaml      # Service for greeter
└── svc-hello.yaml        # Service for hello

As shown below, this section deploys the following objects in the demo namespace:

$ kubectl -n demo get deploy,svc,ingress,pod
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/greeter   1/1     1            1           20h
deployment.apps/hello     1/1     1            1           20h

NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/greeter   ClusterIP   10.100.44.79    <none>        9080/TCP   20h
service/hello     ClusterIP   10.106.116.73   <none>        9080/TCP   20h

NAME                       CLASS    HOSTS              ADDRESS         PORTS   AGE
ingress.extensions/hello   <none>   hello.app.zyl.io   192.168.120.6   80      20h

NAME                           READY   STATUS    RESTARTS   AGE
pod/greeter-7cbd47b5bd-6vtxk   1/1     Running   1          20h
pod/hello-6f8f75f799-z8kgj     1/1     Running   1          20h

If an Ingress controller is installed on the cluster, the hello service can be reached via the Ingress address; it can also be reached via the Service address from a compute node, as shown below. When we call the hello service, we pass the parameter helloTo=<name>, and it returns the string "Hello, <name>!".

$ curl http://hello.app.zyl.io/?helloTo=Ingress
Hello, Ingress!
$ curl http://10.106.116.73:9080/?helloTo=Service
Hello, Service!

Having learned what the test case does by deploying it from plain manifest files, this article next packages the two applications with Helm and deploys them to the cluster. To prepare for the Helm deployment tests that follow, execute the following command to remove the deployed application.

kubectl delete -f k8s

Packaging with Helm

First, install Helm by executing the following commands; here we choose version v3:

wget -O helm.tgz https://get.helm.sh/helm-v3.2.3-linux-amd64.tar.gz
tar -pxzf helm.tgz -C /usr/local/bin/ --strip-components=1
chmod 755 /usr/local/bin/helm
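You can quickly confirm the binary is in place; the short form prints just the client version (it should report v3.2.3 for this download):

$ helm version --short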

Execute the following commands to create charts² for hello and greeter; each command generates a directory scaffold.

helm create hello
helm create greeter

Note: Helm populates each directory with templates, which you can adjust as needed. Variables are defined in the values.yaml file and referenced from template files via {{ }}; _helpers.tpl can define named templates that support syntax such as conditionals (a short illustration follows the directory tree below).

$ tree hello
hello
├── charts                       # dependent charts (subcharts) live here
├── Chart.yaml                   # chart metadata: name, version, and so on
├── templates                    # deployment manifest templates; add or remove files as needed
│   ├── deployment.yaml
│   ├── _helpers.tpl             # named template definitions
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml                  # default values for the variables
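As a minimal illustration of that syntax (the tls value and the mychart.scheme helper here are hypothetical, not part of the generated charts):

# values.yaml: variables live here
tls:
  enabled: false

# templates/example.yaml: reference a named template with {{ }}
scheme: {{ include "mychart.scheme" . }}

# templates/_helpers.tpl: a named template using a conditional
{{- define "mychart.scheme" -}}
{{- if .Values.tls.enabled }}https{{- else }}http{{- end -}}
{{- end }}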

Optional: taking greeter as an example, configure the Chart.yaml file to add metadata, which templates can reference through .Chart.<field>.

# greeter dir:
$ cat > Chart.yaml <<'EOF'
apiVersion: v2
name: greeter
description: Receive messages from HTTP and return welcome message
type: application
version: 0.1.0
appVersion: 1.0.0
maintainers:
  - email: [email protected]
    name: yanlin.zhou
sources:
  - https://github.com/zylpsrs/helm-example.git
keywords:
  - helm
  - deployment
#home: https://github.com/zylpsrs/helm-example
#icon:
EOF

Configure the values.yaml file to add custom variable information; we add or remove entries from the template-generated defaults as needed. Note: to minimize changes to the template files, it is best not to delete configuration entries even if we do not use them.

  • The greeter configuration is as follows: deploy one replica with the image greeter:latest; the Service uses port 9080 and type ClusterIP:
# greeter dir:
$ vi values.yaml
replicaCount: 1

image:
  repository: registry.cn-hangzhou.aliyuncs.com/zylpsrs/example/greeter
  pullPolicy: IfNotPresent
  tag: latest

service:
  type: ClusterIP
  port: 9080
  • The hello configuration is similar to greeter's. We adjust the default ingress configuration (though it stays disabled), and because hello depends on greeter we add a greeter section; the variables set there override greeter's defaults, for example setting replicaCount to 2.
# hello dir:
$ vi values.yaml
replicaCount: 1

image:
  repository: registry.cn-hangzhou.aliyuncs.com/zylpsrs/example/hello
  pullPolicy: IfNotPresent
  tag: latest

service:
  type: ClusterIP
  port: 9080

ingress:
  enabled: false
  hosts:
    - host: hello.app.zyl.io
      paths: [/]

greeter:
  enabled: true
  replicaCount: 2
  service:
    port: 9080

templates/service.yaml is the Service template; we do not need to change the defaults for either application. The configuration references variables from values.yaml and named templates from _helpers.tpl, as follows:

# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "greeter.fullname" . }}          # template from the _helpers.tpl file
  labels:
    {{- include "greeter.labels" . | nindent 4 }}   # template from the _helpers.tpl file
spec:
  type: {{ .Values.service.type }}                  # service.type in values.yaml
  ports:
    - port: {{ .Values.service.port }}              # service.port in values.yaml
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "greeter.selectorLabels" . | nindent 4 }}   # template from the _helpers.tpl file

The Service template itself needs no changes, but since its name references the greeter.fullname template in _helpers.tpl, that template is worth understanding. The default definition of greeter.fullname is a bit involved, but its value generally equals <.Release.Name>-<.Chart.Name>: if the chart is installed with the release name test, greeter.fullname evaluates to test-greeter.

# templates/_helpers.tpl
...
{{- define "greeter.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
...

Looking back at the k8s/deploy-hello.yaml manifest, hello finds greeter through the environment variable GREETER=http://greeter:9080. When greeter is installed through Helm, its Service name is rendered from greeter.fullname, i.e. <.Release.Name>-greeter. So in the hello chart we define a template named hello.greeter.fullname in _helpers.tpl whose value equals <.Release.Name>-greeter:

# templates/_helpers.tpl
{{- define "hello.greeter.fullname" -}}
{{- $name := default "greeter" .Values.greeter.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

Both applications are deployed as stateless Deployments. Here we adjust their templates/deployment.yaml files, as shown in the snippets below:

  • Both applications listen on port 9080, so change the container port from the default 80 to 9080:
        ports:
          - name: http
            containerPort: 9080   # was 80
            protocol: TCP
  • For simplicity, we do not configure liveness and readiness probes, so delete the livenessProbe and readinessProbe entries:
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
  • hello depends on greeter, so the hello deployment template sets the GREETER environment variable: if the greeter subchart is enabled, the service name comes from the hello.greeter.fullname template; otherwise it falls back to greeter:
        env:
          - name: GREETER
        {{- if .Values.greeter.enabled }}
            value: 'http://{{ template "hello.greeter.fullname" . }}:9080'
        {{- else }}
            value: 'http://greeter:9080'
        {{- end }}
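After these edits you can check the rendered result without installing anything; a quick sketch from the hello chart directory (the release name test is arbitrary, and --show-only limits output to a single template):

$ helm template test . --show-only templates/deployment.yaml | grep -A1 'name: GREETER'
            - name: GREETER
              value: 'http://test-greeter:9080'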

templates/ingress.yaml is the Ingress template; setting ingress.enabled=true enables the Ingress. We leave this file at its default, and it can be removed entirely if the chart does not want to offer Ingress configuration.

When a chart is deployed, Helm renders templates/NOTES.txt and prints it as usage notes. Normally we do not need to modify this file, but for this example we change the container port from 80 to 9080.
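For instance, the port-forward hint near the end of the generated file would end up roughly like this (a sketch; the scaffolded NOTES.txt may differ slightly across Helm versions):

  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:9080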

Finally, add the following dependency to the hello chart's requirements.yaml or (recommended in Helm v3) Chart.yaml, so that installing the hello chart automatically installs the greeter application; we also need to copy the greeter directory into hello/charts, as shown below.

# hello dir:
$ cat >> Chart.yaml <<EOF
dependencies:
  - name: greeter
    version: 0.1.0
    # a Helm repository address; like images, charts can be pushed to a
    # repository, and we will set this repository up later in this article
    repository: http://chartmuseum.app.zyl.io
    # whether to pull in this dependency is decided by a condition
    condition: greeter.enabled
    # optionally attach some tags
    tags:
      - http
EOF
$ cp -a ../greeter charts/
$ helm dep list   # check the dependencies; "unpacked" is expected here
NAME     VERSION  REPOSITORY                      STATUS
greeter  0.1.0    http://chartmuseum.app.zyl.io   unpacked

At this point we have built charts for both hello and greeter. Because hello depends on greeter, we declared the dependency in Chart.yaml; the repository address given there points to a repository we will set up shortly.
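Once that repository is live, the manual copy becomes unnecessary: after a helm repo add for it, Helm can fetch the dependency itself. A sketch, run from the hello chart directory:

$ helm dep update   # downloads greeter-0.1.0.tgz into charts/ from the repository
$ helm dep list     # STATUS then reads "ok" instead of "unpacked"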

Deploying with Helm

We now use Helm to deploy the hello chart, which automatically deploys the dependent greeter application. In the hello chart directory, execute the following command to create a release named test:

$ helm -n demo install test .
NAME: test
LAST DEPLOYED: Thu Jun 18 15:51:13 2020
NAMESPACE: demo
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace demo -l "app.kubernetes.io/name=hello,app.kubernetes.io/instance=test" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace demo port-forward $POD_NAME 8080:9080

We can run helm list to see the releases deployed through Helm in the current namespace:

$ helm list -n demo
NAME  NAMESPACE  ...  STATUS    CHART        APP VERSION
test  demo       ...  deployed  hello-0.1.0  1.0.0
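To see which values a release was installed with (useful for checking overrides such as greeter.replicaCount), helm get values can be used; a brief sketch:

$ helm -n demo get values test          # user-supplied values only
$ helm -n demo get values test --all    # merged with the chart defaults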

As shown below, the greeter application was installed along with the hello chart, and because greeter.replicaCount=2 is set in the hello chart's values.yaml, greeter runs two pods.

$ kubectl -n demo get svc,ingress,deploy,pod
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/test-greeter   ClusterIP   10.99.226.226    <none>        9080/TCP   11m
service/test-hello     ClusterIP   10.105.187.216   <none>        9080/TCP   11m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/test-greeter   2/2     2            2           11m
deployment.apps/test-hello     1/1     1            1           11m

NAME                                READY   STATUS    RESTARTS   AGE
pod/test-greeter-695ffd859d-pwc7s   1/1     Running   0          11m
pod/test-greeter-695ffd859d-r2gcw   1/1     Running   0          11m
pod/test-hello-779565bb5d-rdwbh     1/1     Running   0          11m

$ curl http://10.105.187.216:9080/?helloTo=Helm
Hello, Helm!

As with container images, we can push Helm charts to a repository for distribution and deployment. Next we set up a private Helm repository, push our charts to it, and then install them from it; before that, execute the following command to remove the currently deployed release.

$ helm -n demo uninstall test
release "test" uninstalled

Setting up a Helm Repository

Helm's Chart Repository Guide shows there are many ways to host a Helm repository; this article installs ChartMuseum as the repository. Note that if you already run a Harbor image registry, it supports Helm repositories natively, so there is no need to deploy a separate one for Helm.

Let's use Helm to install ChartMuseum into a separate namespace, first adding the stable chart repository³:

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com

To persist the charts pushed to the repository, we need to provide a persistent volume for ChartMuseum. If you followed "Build a single-node K8s test cluster using kubeadm" to deploy a test cluster, the cluster already has a default StorageClass backed by NFS.

$ oc get storageclass
NAME            PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE ...
nfs (default)   cluster.local/nfs-server-provisioner   Delete          Immediate         ...

Note: if containers use NFS volumes, every cluster node needs an NFS client installed, otherwise the volumes cannot be mounted.

$ yum -y install nfs-utils

Execute the following commands to deploy ChartMuseum into a separate namespace named helm-repo. For simplicity, no authentication is configured for the repository: anyone can fetch and upload chart packages. Since the cluster has an Ingress controller, we also enable Ingress for the repository, with the host name chartmuseum.app.zyl.io.

$ kubectl create namespace helm-repo
$ cat > /tmp/values.yaml <<'EOF'
env:
  open:
    STORAGE: local
    DISABLE_API: false
    # allow uploading the same chart version again
    ALLOW_OVERWRITE: true
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 8Gi
ingress:
  enabled: true
  hosts:
    - name: chartmuseum.app.zyl.io
      path: /
      tls: false
EOF
$ helm -n helm-repo install -f /tmp/values.yaml mychart stable/chartmuseum

After the pod starts successfully, we follow the ChartMuseum documentation and upload a test chart through its API, as shown below:

$ helm create mychart
$ helm package mychart
Successfully packaged chart and saved it to: /tmp/mychart-0.1.0.tgz
# upload:
$ curl --data-binary "@mychart-0.1.0.tgz" http://chartmuseum.app.zyl.io/api/charts
{"saved":true}
# list all charts:
$ curl http://chartmuseum.app.zyl.io/api/charts
{"mychart":[{"name":"mychart","version":"0.1.0",...}]}

Driving the repository with curl against its API is clumsy, so instead we can install the helm-push plugin and run helm push to upload the charts built above, as shown below:

$ helm plugin install https://github.com/chartmuseum/helm-push.git
$ helm repo add mychart http://chartmuseum.app.zyl.io
$ cd hello && helm push . mychart
$ cd .. && helm push greeter mychart
# or:
$ helm push greeter http://chartmuseum.app.zyl.io
# list all charts:
$ helm search repo mychart
NAME             CHART VERSION  APP VERSION  DESCRIPTION
mychart/mychart  0.1.0          1.16.0       A Helm chart for Kubernetes
mychart/greeter  0.1.0          1.0.0        Receive messages from HTTP and return...
mychart/hello    0.1.0          1.0.0        Receive messages from HTTP and return...

Deploying from a Remote Repository

Now that we have set up the Helm repository, pushed the charts to it, and registered it locally via helm repo add, we use this repository to install the charts in this section. As shown below, we want to deploy an Ingress for the hello application, so we override the default configuration with --set key=value parameters or an -f file:

$ cat > /tmp/values.yaml <<EOF
ingress:
  enabled: true
  hosts:
    - host: hello.app.zyl.io
      paths: [/]
EOF
$ helm -n demo install -f /tmp/values.yaml test mychart/hello
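The same override can also be expressed inline with --set instead of a values file; a hedged equivalent (note the array-index syntax for list entries):

$ helm -n demo install test mychart/hello \
    --set ingress.enabled=true \
    --set ingress.hosts[0].host=hello.app.zyl.io \
    --set 'ingress.hosts[0].paths[0]=/'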

At this point, the Ingress object is deployed in the demo namespace, and we access the application through it:

$ oc get ing,pod
NAME                            CLASS    HOSTS              ADDRESS         PORTS   AGE
ingress.extensions/test-hello   <none>   hello.app.zyl.io   192.168.120.6   80      78s

NAME                                READY   STATUS    RESTARTS   AGE
pod/test-greeter-695ffd859d-5mkvk   1/1     Running   0          78s
pod/test-greeter-695ffd859d-hqqtc   1/1     Running   0          78s
pod/test-hello-779565bb5d-2z9jv     1/1     Running   0          78s

$  curl http://hello.app.zyl.io/?helloTo=helmRepo
Hello, helmRepo!
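Later configuration changes go through helm upgrade against the same release rather than editing manifests by hand; for example, a sketch that scales greeter (the value is illustrative):

$ helm -n demo upgrade test mychart/hello -f /tmp/values.yaml \
    --set greeter.replicaCount=3
$ helm -n demo history test   # each upgrade records a new REVISION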

Conclusion

In this article we learned how to package a multi-service application with Helm. Although the example is not complicated, it covers the essential capabilities. Helm, however, is best suited to stateless applications: it can install an application on day one, but it cannot handle day-two application maintenance operations (such as upgrade or backup of application state). The author will describe the operator pattern for handling such complex applications in a later article.


  1. repo: all the code for this article is available from this repository ↩
  2. chart: an application packaged with Helm is called a chart ↩
  3. repo: the Helm project's stable chart repository, at https://github.com/helm/chart… ↩