This is the 25th day of my participation in Gwen Challenge

22 K8S Package Manager

Helm is a package management tool for Kubernetes applications developed by Deis. It is mainly used to manage Charts, much like APT on Ubuntu or YUM on CentOS.

A Helm Chart is a set of YAML files that packages a native Kubernetes application. Some of the application's metadata can be customized when it is deployed, which makes the application easy to distribute.

For application publishers, Helm makes it possible to package applications, manage application dependencies and versions, and publish applications to a software repository.

For users, Helm removes the need to write complex deployment files: applications can be found, installed, upgraded, rolled back and uninstalled on Kubernetes in a simple way.

22.1 Basic Concepts

  • Helm

Helm is a command-line client tool. It is mainly used to create, package, and publish Kubernetes application Charts, and to create and manage local and remote Chart repositories.

  • Tiller

Tiller is Helm's server-side component and is deployed inside the Kubernetes cluster. Tiller receives requests from Helm, generates the Kubernetes deployment manifests (which Helm calls a Release) from the Chart, and submits them to Kubernetes to create the application. Tiller also provides upgrade, delete, rollback, and other operations on Releases.

  • Chart

A Helm software package, in TAR format. Like an APT DEB package or a YUM RPM package, it contains a set of YAML files, in this case defining Kubernetes resources.

  • Repository

Helm's software repository. A Repository is essentially a web server that stores Chart packages for users to download and serves an index of the Charts it holds so they can be queried. Helm can manage multiple repositories at the same time.

  • Release

A Chart deployed in the Kubernetes cluster with the helm install command is called a Release. The relationship between a Chart and a Release is similar to that between a class and an instance in object-oriented programming.
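For example (a minimal sketch; stable/mysql and the release names db1/db2 are arbitrary), installing the same Chart twice produces two independent Releases:

# one Chart, two independent Releases, each with its own values and revision history
helm install --name db1 stable/mysql
helm install --name db2 stable/mysql
helm list    # lists both db1 and db2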

22.2 Working principle of Helm

  • Chart install process
1. Helm parses the Chart structure from the specified directory or TAR file.
2. Helm passes the Chart structure and Values to Tiller over gRPC.
3. Tiller generates a Release based on the Chart and Values.
4. Tiller sends the Release to Kubernetes to create the resources.
  • Chart update process
1. Helm parses the Chart structure from the specified directory or TAR file.
2. Helm passes the name of the Release to be updated, the Chart structure, and the Values to Tiller.
3. Tiller generates a new Release and updates the History of the Release with that name.
4. Tiller sends the Release to Kubernetes to update the resources.
  • Chart rollback process
1. Helm passes the name of the Release to be rolled back to Tiller.
2. Tiller looks up the History of that Release by name.
3. Tiller gets the previous Release from the History.
4. Tiller sends the previous Release to Kubernetes to replace the current Release.
  • How Chart dependencies are handled
When Tiller processes a Chart, it merges the Chart and all Charts it depends on into a single Release and passes it to Kubernetes in one go. Tiller is therefore not responsible for managing the start-up order of dependencies; the applications in the Chart need to handle their dependencies themselves. Dependencies are declared in the chart, as sketched below.
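In practice (a sketch; the chart name, version, and repository below are only examples), a Helm 2 chart declares its dependencies in a requirements.yaml file at the chart root, and Helm downloads them into the charts/ directory:

# requirements.yaml (Helm 2 chart layout); entries below are examples
dependencies:
  - name: redis
    version: 9.5.0
    repository: https://kubernetes-charts.storage.googleapis.com

Running helm dep update <chart-dir> fetches the declared charts into charts/; at install time Tiller still renders everything into a single Release, as described above.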

22.3 Deploying Helm

Official Github address: github.com/helm/helm

  • Download the binary version, unpack it, and install Helm
$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
$ tar xf helm-v2.13.1-linux-amd64.tar.gz
$ mv linux-amd64/helm /usr/local/bin/
  • Configure RBAC for Tiller: create rbac-config.yaml and apply it
https://github.com/helm/helm/blob/master/docs/rbac.md    # rbac-config.yaml can be found on this page; save it locally (here as tiller-rbac.yaml)
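For reference, the manifest on that page boils down to a tiller ServiceAccount bound to the cluster-admin ClusterRole (reproduced roughly as it appears in the Helm docs; use a narrower role if your cluster policy requires it):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system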
$ kubectl apply -f tiller-rbac.yaml
  • The ~/.kube directory is read automatically when Tiller is initialized, so make sure the config file exists and authentication succeeds
$ helm init --service-account tiller
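  • Optionally verify that Tiller came up; helm init creates a tiller-deploy Deployment in kube-system
$ kubectl -n kube-system rollout status deployment/tiller-deploy
$ kubectl -n kube-system get pods | grep tiller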
  • Add the incubator source
$ helm repo add incubator https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/
$ helm repo update
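  • Check the repository list; the stable and local entries shown below are the defaults that helm init configures, so your list may differ
$ helm repo list
NAME        URL
stable      https://kubernetes-charts.storage.googleapis.com
local       http://127.0.0.1:8879/charts
incubator   https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/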
  • After the installation is complete, check the version
$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
  • Helm's official Chart repository hub
http://hub.kubeapps.com/
  • Basic Command Usage
completion   # Generate shell autocompletion scripts (bash or zsh)
create       # Create a new chart with the given name
delete       # Delete the release with the given name from Kubernetes
dependency   # Manage a chart's dependencies
fetch        # Download a chart from a repository and (optionally) unpack it into a local directory
get          # Download a named release
help         # List help information
history      # Fetch a release's history
home         # Display the location of HELM_HOME
init         # Initialize Helm on the client and the server
inspect      # Inspect a chart's details
install      # Install a chart archive
lint         # Run a syntax check on a chart
list         # List releases
package      # Package a chart directory into a chart archive
plugin       # Add, list, or remove Helm plugins
repo         # Add, list, remove, update, and index chart repositories
reset        # Uninstall Tiller from the cluster
rollback     # Roll back a release to a previous revision
search       # Search for a keyword in chart repositories
serve        # Start a local HTTP chart repository server
status       # Display the status of the named release
template     # Render templates locally
test         # Test a release
upgrade      # Upgrade a release
verify       # Verify that the chart at the given path has been signed and is valid
dep          # Analyze a chart and download its dependencies (alias of dependency)
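  • A typical way to combine these commands when looking for a chart (stable/mysql is only an example chart name)
helm search mysql                   # search the configured repositories
helm inspect values stable/mysql    # print the chart's default values.yaml
helm fetch stable/mysql --untar     # download the chart and unpack it locally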
  • Deploy a chart with a specified values.yaml
helm install --name els1 -f values.yaml stable/elasticsearch
  • Upgrade a Chart
helm upgrade --set mysqlRootPassword=passwd db-mysql stable/mysql
  • Roll back a chart
helm rollback db-mysql 1
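  • To pick the revision to roll back to, list the release history first (db-mysql is the release from the examples above)
helm history db-mysql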
  • Delete a release
helm delete --purge db-mysql
  • Only render and print the templates, without installing anything
helm install/upgrade xxx --dry-run --debug
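  • For example, to preview the manifests that the ElasticSearch install used later in this article would generate, without touching the cluster
helm install --name els1 -f values.yaml stable/elasticsearch --dry-run --debug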

22.4 Chart file organization

myapp/                        # Chart directory
├── charts/                   # Charts this chart depends on; always installed together with it
├── Chart.yaml                # Describes this chart: name, description, version, etc.
├── templates/                # Template directory
│   ├── deployment.yaml       # Go template for the Deployment controller
│   ├── _helpers.tpl          # Files beginning with _ are not deployed to K8S; they hold reusable helper templates
│   ├── ingress.yaml          # Go template for the Ingress
│   ├── NOTES.txt             # Information shown after the chart is deployed, e.g. how to use it and its default values
│   └── service.yaml          # Go template for the Service
└── values.yaml               # Values applied to the Go templates at install time to generate the deployment files
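The layout above is what helm create scaffolds, so a minimal authoring round trip looks like this (myapp matches the example directory name; the packaged version comes from Chart.yaml):

$ helm create myapp        # scaffold the directory layout shown above
$ helm lint myapp          # run a syntax/structure check on the chart
$ helm package myapp       # produce myapp-<version>.tgz, ready to be served from a repository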

22.5 Deploying EFK using Helm + Ceph

This article runs EFK on a K8S cluster, with a Ceph cluster as the persistent storage for the ElasticSearch cluster.

Knowledge used: StorageClass, PVC, Helm. In addition, many of the service images are hosted on registries that are blocked in some regions, so a proxy may be needed to pull them.

helm install may appear to block for a while because pulling the images can be slow.

There are many values that can be customized in these Helm charts. I don't customize them here; I have enough resources, so I'm too lazy to tune them.

22.6 Storage Class

---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.admin | base64
  key: QVFER3U5TmMQNXQ4SlJBAAhHMGltdXZlNFZkUXAvN2tTZ1BENGc9PQ==


---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.kube | base64
  key: QVFCcUM5VmNWVDdQCCCCWR1NUxFNfVKeTAiazdUWVhOa3N2UWc9PQ==


---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
reclaimPolicy: Retain
parameters:
  monitors: 172.16.100.9:6789
  pool: kube
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-secret
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
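Before handing the StorageClass to the charts, it can be exercised with a small test claim (a minimal sketch; the claim name and size are arbitrary):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi

If the claim stays Pending, check the rbd-provisioner pod and the two secrets above before continuing.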

22.7 Helm Elasticsearch

  • Download the StatefulSet-based ElasticSearch chart from the stable repository
helm fetch stable/elasticsearch
  • Unpack the chart archive, then edit values.yaml and point storageClass at the StorageClass created above
storageClass: "ceph-rbd"
  • Deploy ElasticSearch with Helm, specifying values.yaml
helm install --name els1 -f values.yaml stable/elasticsearch
  • Check the pods after installation and debug until they are all READY
$ kubectl get pods
NAME                                         READY   STATUS    RESTARTS   AGE
els1-elasticsearch-client-55696f5bdd-qczbf   1/1     Running   1          78m
els1-elasticsearch-client-55696f5bdd-tdwdc   1/1     Running   1          78m
els1-elasticsearch-data-0                    1/1     Running   1          78m
els1-elasticsearch-data-1                    1/1     Running   1          56m
els1-elasticsearch-master-0                  1/1     Running   1          78m
els1-elasticsearch-master-1                  1/1     Running   1          53m
els1-elasticsearch-master-2                  1/1     Running   1          52m
rbd-provisioner-9b8ffbcc-nxdjd               1/1     Running   2          81m
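  • It is also worth confirming that the data volumes were provisioned from Ceph; the claim name in the comment below is indicative of what the chart's volumeClaimTemplates generate and may differ in your deployment
$ kubectl get pvc
# expect one Bound claim per data/master pod, each with STORAGECLASS ceph-rbd,
# e.g. data-els1-elasticsearch-data-0; a Pending claim usually points at the
# rbd-provisioner or the Ceph secrets above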
  • You can also check the release with the helm status command
$ helm status els1
LAST DEPLOYED: Sun May 12 16:28:56 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                     DATA  AGE
els1-elasticsearch       4     88m
els1-elasticsearch-test  1     88m

==> v1/Pod(related)
NAME                                        READY  STATUS   RESTARTS  AGE
els1-elasticsearch-client-55696f5bdd-qczbf  1/1    Running  1         88m
els1-elasticsearch-client-55696f5bdd-tdwdc  1/1    Running  1         88m
els1-elasticsearch-data-0                   1/1    Running  1         88m
els1-elasticsearch-data-1                   1/1    Running  1         66m
els1-elasticsearch-master-0                 1/1    Running  1         88m
els1-elasticsearch-master-1                 1/1    Running  1         63m
els1-elasticsearch-master-2                 1/1    Running  1         62m

==> v1/Service
NAME                          TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
els1-elasticsearch-client     ClusterIP  10.98.197.185  <none>       9200/TCP  88m
els1-elasticsearch-discovery  ClusterIP  None           <none>       9300/TCP  88m

==> v1/ServiceAccount
NAME                       SECRETS  AGE
els1-elasticsearch-client  1        88m
els1-elasticsearch-data    1        88m
els1-elasticsearch-master  1        88m

==> v1beta1/Deployment
NAME                       READY  UP-TO-DATE  AVAILABLE  AGE
els1-elasticsearch-client  2/2    2           2          88m

==> v1beta1/StatefulSet
NAME                       READY  AGE
els1-elasticsearch-data    2/2    88m
els1-elasticsearch-master  3/3    88m

NOTES:
The elasticsearch cluster has been installed.

Elasticsearch can be accessed:

  * Within your cluster, at the following DNS name at port 9200:

    els1-elasticsearch-client.default.svc

  * From outside the cluster, run these commands in the same shell:

    export POD_NAME=$(kubectl get pods --namespace default -l "app=elasticsearch,component=client,release=els1" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
    kubectl port-forward --namespace default $POD_NAME 9200:9200
  • Start a temporary container, resolve the cluster address, test the cluster information, and view the cluster nodes
$ kubectl run cirros1 --rm -it --image=cirros -- /bin/sh

/ # nslookup els1-elasticsearch-client.default.svc
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      els1-elasticsearch-client.default.svc
Address 1: 10.98.197.185 els1-elasticsearch-client.default.svc.cluster.local

/ # curl els1-elasticsearch-client.default.svc.cluster.local:9200/_cat/nodes
10.244.2.28  7 96 2 0.85 0.26 0.16 di - els1-elasticsearch-data-0
10.244.1.37  7 83 1 0.04 0.06 0.11 di - els1-elasticsearch-data-1
10.244.2.25 19 96 2 0.85 0.26 0.16 i  - els1-elasticsearch-client-55696f5bdd-tdwdc
10.244.2.27 28 96 2 0.85 0.26 0.16 mi * els1-elasticsearch-master-2
10.244.1.39 19 83 1 0.04 0.06 0.11 i  - els1-elasticsearch-client-55696f5bdd-qczbf
10.244.2.29 21 96 2 0.85 0.26 0.16 mi - els1-elasticsearch-master-1
10.244.1.38 23 83 1 0.04 0.06 0.11 mi - els1-elasticsearch-master-0

22.8 Helm fluentd-elasticsearch

  • Add the kiwigrid source
helm repo add kiwigrid https://kiwigrid.github.io
  • Download fluentd-elasticsearch
helm fetch kiwigrid/fluentd-elasticsearch

  • Obtain the ElasticSearch cluster address
els1-elasticsearch-client.default.svc.cluster.local:9200

  • Modify values.yaml to point to the ElasticSearch cluster
elasticsearch:
  host: 'els1-elasticsearch-client.default.svc.cluster.local'
  port: 9200

  • Add a toleration for the master node taint so the DaemonSet also runs on master nodes and collects their logs
tolerations: 
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

  • If monitoring with Prometheus, enable the Prometheus-related settings (the scrape annotations and monitoring service below; the chart also has a prometheusRule option)
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "24231"
  
service:
  type: ClusterIP
  ports:
    - name: "monitor-agent"
      port: 24231

  • Deploy fluentd-elasticsearch with Helm, specifying values.yaml
helm install --name flu1 -f values.yaml kiwigrid/fluentd-elasticsearch

  • Check the status of the flu1 release
[root@master fluentd-elasticsearch]# helm status flu1
LAST DEPLOYED: Sun May 12 18:13:12 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME                        AGE
flu1-fluentd-elasticsearch  17m

==> v1/ClusterRoleBinding
NAME                        AGE
flu1-fluentd-elasticsearch  17m

==> v1/ConfigMap
NAME                        DATA  AGE
flu1-fluentd-elasticsearch  6     17m

==> v1/DaemonSet
NAME                        DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
flu1-fluentd-elasticsearch  3        3        3      3           3          <none>         17m

==> v1/Pod(related)
NAME                              READY  STATUS   RESTARTS  AGE
flu1-fluentd-elasticsearch-p49fc  1/1    Running  1         17m
flu1-fluentd-elasticsearch-q5b9k  1/1    Running  0         17m
flu1-fluentd-elasticsearch-swfvt  1/1    Running  0         17m

==> v1/Service
NAME                        TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)    AGE
flu1-fluentd-elasticsearch  ClusterIP  10.106.106.209  <none>       24231/TCP  17m

==> v1/ServiceAccount
NAME                        SECRETS  AGE
flu1-fluentd-elasticsearch  1        17m

NOTES:
1. To verify that Fluentd has started, run:

  kubectl --namespace=default get pods -l "app.kubernetes.io/name=fluentd-elasticsearch,app.kubernetes.io/instance=flu1"

THIS APPLICATION CAPTURES ALL CONSOLE OUTPUT AND FORWARDS IT TO elasticsearch . Anything that might be identifying,
including things like IP addresses, container images, and object names will NOT be anonymized.
2. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=fluentd-elasticsearch,app.kubernetes.io/instance=flu1" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
  • To check whether indices are being created, query ElasticSearch's RESTful API directly
$ kubectl run cirros1 --rm -it --image=cirros -- /bin/sh
/ # curl els1-elasticsearch-client.default.svc.cluster.local:9200/_cat/indices
green open logstash-2019.05.10 a2b-GyKsSLOZPqGKbCpyJw 5 1   158 0 84.2kb   460b
green open logstash-2019.05.09 CwYylNhdRf-A5UELhrzHow 5 1 71418 0 34.3mb 17.4mb
green open logstash-2019.05.12 5qrfpv46rGG_bwc4XBSYva 5 1 34496 0 26.1mb 13.2mb
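  • To look at an actual log document rather than just the index list, the standard ElasticSearch search API can be used from the same temporary container (logstash-* matches the indices above)
/ # curl 'els1-elasticsearch-client.default.svc.cluster.local:9200/logstash-*/_search?size=1&pretty'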

22.9 Helm kibana

  • Download the stable/kibana
helm fetch stable/kibana

  • Edit values.yaml and point the ElasticSearch address at the ElasticSearch cluster
elasticsearch.hosts: http://els1-elasticsearch-client.default.svc.cluster.local:9200

  • Change the service type so Kibana can be accessed from outside the cluster
service:
  type: NodePort

  • Deploy Kibana with Helm, specifying values.yaml
helm install --name kib1 -f values.yaml stable/kibana

  • Obtain the service port
$ kubectl get svc
NAME                           TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)        AGE
els1-elasticsearch-client      ClusterIP  10.98.197.185   <none>       9200/TCP       4h51m
els1-elasticsearch-discovery   ClusterIP  None            <none>       9300/TCP       4h51m
flu1-fluentd-elasticsearch     ClusterIP  10.101.97.11    <none>       24231/TCP      157m
kib1-kibana                    NodePort   10.103.7.215    <none>       443:31537/TCP  6m50s
kubernetes                     ClusterIP  10.96.0.1       <none>       443/TCP        3d4h
  • Since the service works in NodePort mode, it can be accessed outside the cluster
172.16.100.6:31537

Other

These notes are maintained at: github.com/redhatxl/aw… Likes, favorites and shares are welcome.