Introduction to Helm

Helm is the package manager for Kubernetes, analogous to yum on CentOS or APT on Ubuntu.

Helm has three core concepts:

  • Chart: the package format used by Helm. A chart is a collection of files that describe a set of related Kubernetes resources. A single chart can deploy something as simple as a Memcached Pod, or something as complex as a full web stack with HTTP servers, databases, caches, and so on.
  • Repo: a chart repository, where charts are collected and shared. The community's charts are hosted on Artifact Hub; you can also create and run your own private chart repository.
  • Release: an instance of a chart running in a Kubernetes cluster. A chart can typically be installed multiple times into the same cluster, with each installation creating a new release.

In short: Helm installs charts into a Kubernetes cluster, creating a new release for each installation. You can find new charts by searching Helm's chart repositories.
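For orientation, a chart is nothing more than a directory of files; a minimal one (the name mychart here is just a placeholder) is laid out like this:

mychart/
  Chart.yaml         # chart metadata: name, version, description
  values.yaml        # default configuration values, overridable at install time
  templates/         # Go-templated Kubernetes manifests
    deployment.yaml
    service.yaml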

Preparation stage

Have a Kubernetes cluster ready, configured as follows:

Role           IP address        OS version                              Spec
Master node    192.168.19.136    CentOS Linux release 7.9.2009 (Core)    4 cores, 4 GB
Work node 1    192.168.19.137    CentOS Linux release 7.9.2009 (Core)    4 cores, 4 GB
Work node 2    192.168.19.138    CentOS Linux release 7.9.2009 (Core)    4 cores, 4 GB

Choose a Helm version

Helm version    Supported Kubernetes versions
3.5.x           1.20.x to 1.17.x
3.4.x           1.19.x to 1.16.x
3.3.x           1.18.x to 1.15.x
3.2.x           1.18.x to 1.15.x
3.1.x           1.17.x to 1.14.x
3.0.x           1.16.x to 1.13.x
2.16.x          1.16.x to 1.15.x
2.15.x          1.15.x to 1.14.x
2.14.x          1.14.x to 1.13.x
2.13.x          1.13.x to 1.12.x
2.12.x          1.12.x to 1.11.x
2.11.x          1.11.x to 1.10.x
2.10.x          1.10.x to 1.9.x
2.9.x           1.10.x to 1.9.x
2.8.x           1.9.x to 1.8.x
2.7.x           1.8.x to 1.7.x
2.6.x           1.7.x to 1.6.x
2.5.x           1.6.x to 1.5.x
2.4.x           1.6.x to 1.5.x
2.3.x           1.5.x to 1.4.x
2.2.x           1.5.x to 1.4.x
2.1.x           1.5.x to 1.4.x
2.0.x           1.4.x to 1.3.x

Note: Helm 2 used a client/server architecture with a Helm client and a Tiller server; Helm 3 removes Tiller, so only the helm binary needs to be installed. For the full list of differences between Helm 2 and Helm 3, see the Helm documentation.
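To see which row of the table applies to you, check the cluster's server version first:

kubectl version --short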

Helm 3.4.2 is chosen for installation in this article.

Download and install Helm 3.4.2

Visit github.com/helm/helm/r… , select the desired version, and download it.

Copy the downloaded archive to the Master node of the cluster.

Unpack the archive:

tar -zxvf helm-v3.4.2-linux-amd64.tar.gz

Move the binary to /usr/local/bin:

mv linux-amd64/helm /usr/local/bin/helm

Check that the installation succeeded:

helm version
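If the installation succeeded, the command prints the client build information; the exact metadata varies by build, but for this version it looks roughly like:

version.BuildInfo{Version:"v3.4.2", GitCommit:"…", GitTreeState:"clean", GoVersion:"…"}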

Deploy the Consul cluster using Helm

Basic Usage of Helm

Before deploying Consul, let's look at basic Helm usage.

Find charts:

# Search for charts on Artifact Hub, which indexes many different repositories
helm search hub
# Search the repositories you have added to your local client (via helm repo add)
helm search repo

Add the HashiCorp Helm repository:

helm repo add hashicorp https://helm.releases.hashicorp.com
# View the list of added repositories
helm repo list

Search for the consul chart:

helm search repo consul
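By default only the latest chart version is shown; to list every available version, helm search accepts a --versions flag:

helm search repo hashicorp/consul --versions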

Prepare the environment for Consul

Namespace

Create a namespace; all subsequent operations are performed in it:

kubectl create namespace consul
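Optionally, to avoid passing -n consul to every command that follows, make it the default namespace of the current kubectl context:

kubectl config set-context --current --namespace=consul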

Storage

When Consul is deployed, its server Pods request storage through a PersistentVolumeClaim (PVC): the application persists data via the PVC, which in turn binds to a PersistentVolume (PV) that does the actual persistence. We therefore need to arrange a storage supply in advance; otherwise the consul-server Pods will sit in the Pending state because no storage can be bound. There are two options:

Solution 1: manually create static PVs for storage supply

cat <<EOF | kubectl apply -f -
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-volume-consul-0
  labels:
    type: local
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/consul/data"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-volume-consul-1
  labels:
    type: local
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/consul/data"
EOF

Check the result (PVs are cluster-scoped, so no namespace flag is needed):

kubectl get pv -o wide

Solution 2: dynamically provision volumes through a StorageClass

Install nfs-utils on all nodes:

yum install -y nfs-utils

Pick a node to act as the NFS server; here we use Work node 2 (192.168.19.138):

# Create the NFS directory
mkdir -p /mnt/nfs
# Configure NFS permissions
cat > /etc/exports <<EOF
/mnt/nfs 192.168.19.0/24(insecure,rw,anonuid=0,anongid=0,all_squash,sync)
EOF
# Start the NFS services
systemctl start rpcbind.service
systemctl start nfs-server.service
# Enable them at boot
systemctl enable rpcbind.service
systemctl enable nfs-server.service
# Reload the export configuration
exportfs -r
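To confirm the export is visible, query it from any node with showmount, which ships with nfs-utils:

showmount -e 192.168.19.138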

Install nfs-client-provisioner with Helm from the Master node:

# Add the repository source
helm repo add azure http://mirror.azure.cn/kubernetes/charts/
# Search for nfs-client-provisioner
helm search repo nfs-client-provisioner
# Install nfs-client-provisioner
helm install nfs-storage azure/nfs-client-provisioner -n consul \
  --set nfs.server=192.168.19.138 \
  --set nfs.path=/mnt/nfs \
  --set storageClass.name=nfs-storage \
  --set storageClass.defaultClass=true
# Check the StorageClass
kubectl get sc -n consul

From now on, whenever a PVC requests storage, the StorageClass automatically provisions a PV for it.
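As a quick sanity check, applying a throwaway PVC (the name test-claim below is hypothetical, used only for this verification) should show it bound automatically by the provisioner:

cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim          # hypothetical claim, only to verify provisioning
  namespace: consul
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-claim -n consul    # STATUS should become Bound
kubectl delete pvc test-claim -n consul # clean up afterwards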

Configuration file

Create config.yaml:

global:
  name: consul              # Prefix for all resources created by the chart
ui:
  service:
    type: NodePort          # Service type for the Consul UI
server:
  affinity: ""              # Allow more than one server Pod per node
  storage: '10Gi'           # Disk size for the server StatefulSet's storage
  securityContext:          # Security context for server Pods; run as root
    fsGroup: 2000
    runAsGroup: 2000
    runAsNonRoot: false
    runAsUser: 0

More configuration options can be found at: www.consul.io/docs/k8s/he…

Start the installation

helm install hi-consul hashicorp/consul -n consul -f config.yaml

Where:

  • hi-consul: the release name you choose
  • hashicorp/consul: the chart to install
  • -n: the target namespace
  • -f: the values file to apply

After running the install command, we can watch the Pod status:

kubectl get pods -n consul -o wide -w

Wait until all Pods are READY, then check the Service status:

kubectl get svc -n consul -o wide
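The UI is exposed through a NodePort Service; 30497 is simply the port the cluster assigned in this walkthrough, so look up your own. Assuming the chart's default naming (with global.name set to consul, the UI Service is called consul-ui), the port can be read directly:

kubectl get svc consul-ui -n consul -o jsonpath='{.spec.ports[0].nodePort}'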

Open http://master:30497/ui/ in a browser (with the hosts file already configured), or http://192.168.19.136:30497/ui/ directly.

A few extra commands

# List the releases installed in the namespace
helm list -n consul
# Uninstall the release
helm uninstall hi-consul -n consul
# More commands and options
helm help

Finally

Thanks for reading. For more posts, follow the WeChat official account 【searching Gopher】.