Preface

I installed Kubernetes, but had nothing running on it apart from ingress-nginx and the Dashboard. After looking at Helm 3, I decided to install Elasticsearch (ES) with it.

Persistent storage

Because a Pod can be destroyed and rebuilt at any time, data stored inside it is lost. If hostPath is used to mount the data, the data cannot follow the Pod when it is rescheduled to another node. Kubernetes has the concepts of PV and PVC for this, and a PVC can create a PV dynamically through a StorageClass.
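For reference, these are the generic kubectl checks I use to see which StorageClasses exist and whether PVCs got their PVs (not specific to this setup):

# List StorageClasses; the default one is marked "(default)"
kubectl get storageclass
# PVCs in all namespaces, plus the PVs that were provisioned for them
kubectl get pvc --all-namespaces
kubectl get pv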

Create an NFS storageclass

I started with the nfs-server-provisioner chart from Helm's stable repository.

Using nfs-server-provisioner

Github.com/helm/charts…

Check out the chart on GitHub to see how it works.

helm install storageclass-nfs stable/nfs-server-provisioner -f storageclass-config.yml
# storageclass-config.yml
persistence:
  ## Enable persistent storage
  enabled: true
  storageClass: "-"
  ## Storage size: 30Gi
  size: 30Gi

storageClass:
  ## Set as the default StorageClass
  defaultClass: true

nodeSelector:
  ## Which node to install on
  kubernetes.io/hostname: instance-8x864u54
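A quick way to see what the chart created (my own habit, not from the chart's docs):

kubectl get pods | grep nfs-server-provisioner
kubectl get pvc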

After installing in this mode, it did not work right away: the chart expects us to provide a PV to act as its backing storage volume, and that PV has to bind to the PVC that is generated automatically during the install.

# nfs-server-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs-server-provisioner-0
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    ## Bind to node location
    path: /data/k8s/volumes/data-nfs-server-provisioner-0
  claimRef:
    namespace: default
    ## Automatically generated PVC name
    name: data-storageclass-nfs-nfs-server-provisioner-0

Perform the following

kubectl apply -f nfs-server-pv.yml

Then check the pod that is trying to use the storage:

kubectl describe pod elasticsearch-01 -n elasticsearch

What, the volume can't be mounted? I tried a lot of fixes without luck and ended up searching Baidu; in the end the only thing that worked was running these commands on every node:

yum -y install nfs-utils
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs

I didn't want to have to do this, but the data couldn't be mounted without it (I don't know why). After the change I reinstalled the chart and tested it with the manifests below (a scheme I found online).

pvc:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Test deployment (a busybox pod that writes to the volume):

apiVersion: apps/v1
kind: Deployment
metadata:  
  name: busybox-test
  labels:
    app.kubernetes.io/name: busybox-deployment
spec:  
  replicas: 1  
  selector:
    matchLabels:    
      app.kubernetes.io/name: busybox-deployment
  template:    
    metadata:      
      labels:        
        app.kubernetes.io/name: busybox-deployment    
    spec:      
      containers:      
        - image: busybox        
          command:          
            - sh          
            - -c          
            - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep 5; done'        
          imagePullPolicy: IfNotPresent        
          name: busybox        
          volumeMounts:          
            - name: nfs            
              mountPath: "/mnt"      
      volumes:      
        - name: nfs        
          persistentVolumeClaim:          
            claimName: nfs-pvc

Apply both manifests with kubectl apply -f.
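For example (the file names are just what I happened to call them; nothing above fixes them):

kubectl apply -f nfs-pvc.yml
kubectl apply -f busybox-test.yml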

Then look on the node I selected, under the hostPath directory backing the NFS server.
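Roughly like this; the provisioner creates a subdirectory per PVC with a generated name, so treat the exact path as a sketch:

ls -R /data/k8s/volumes/data-nfs-server-provisioner-0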

The data is there, so the volume really is being persisted.

Using nfs-client-provisioner

nfs-server-provisioner deploys an NFS service itself and then needs a PV (I used hostPath on bare metal) bound to it as backing storage; every PVC created dynamically through its nfs StorageClass ends up stored inside that PV.

nfs-client-provisioner, by contrast, connects to an existing NFS service and just creates a StorageClass on top of it.

I chose to deploy the NFS service on a cloud server with plenty of disk space:

yum -y install nfs-utils
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
mkdir -p /data/k8s
vim /etc/exports
## Add this line
/data/k8s *(rw,no_root_squash,sync)

# Reload the export configuration
exportfs -r
# Check the current exports
exportfs
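You can verify the export from any machine that has nfs-utils installed (my own sanity check, not part of the original steps):

showmount -e <nfs-server-ip>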

Perform the deployment

helm install storageclass-nfs stable/nfs-client-provisioner -f nfs-client.yml

nfs-client.yml

storageClass:
  name: nfs
  defaultClass: true
nfs:
  server: *******  # the IP address of your NFS server
  path: /data/k8s  
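After installing, a quick check that the provisioner is running and the new StorageClass is marked as the default (again, just my own habit):

kubectl get pods | grep nfs-client-provisioner
kubectl get storageclass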

Done!

Create a chart for ES

I went straight to Helm 3 and never used Helm 2; version 3 is especially lightweight (it no longer needs Tiller), so I installed it directly on my Mac. See Helm's official website for installation.

Run the command below locally (I'm not using a ready-made chart from a Helm repository because I want to play with Helm myself):

helm create elasticsearch
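helm create scaffolds a chart skeleton roughly like the tree below (from memory of Helm 3's default output, so the exact file list may differ by version); I then filled in the values and templates with my own:

elasticsearch/
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── _helpers.tpl
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    ├── serviceaccount.yaml
    ├── NOTES.txt
    └── tests/
        └── test-connection.yaml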

values.yml

replicaCount: 3

image:
  repository: elasticsearch:7.5.2
  pullPolicy: IfNotPresent

ingress:
  host: es.xx.com
  name: es-xx

service:
  in:
    clusterIP: None
    port: 9300
    name: elasticsearch-in
  out:
    port: 9200
    name: elasticsearch-out

resources: 
  limits:
    cpu: 5
    memory: 5Gi
  requests:
    cpu: 1
    memory: 1Gi

deployment.yml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: {{ include "elasticsearch.fullname" . }}
  name: {{ include "elasticsearch.fullname" . }}
  labels:
    {{- include "elasticsearch.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  serviceName: {{ include "elasticsearch.name" .}}
  selector:
    matchLabels:
      {{- include "elasticsearch.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "elasticsearch.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          ports:
            - containerPort: 9200
              name: es-http
            - containerPort: 9300
              name: es-transport
          volumeMounts:  # Mount configuration and data
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
            - name: elasticsearch-config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml 
      volumes:
        - name: elasticsearch-config
          configMap:
            name: {{ include "elasticsearch.name" .}}
            items:
              - key: elasticsearch.yml
                path: elasticsearch.yml    
  volumeClaimTemplates:  # Allocate a PVC for each ES node (pod)
    - metadata:
        name: es-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: nfs     # The key to creating PVs dynamically
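The StatefulSet above mounts elasticsearch.yml from a ConfigMap, but I haven't shown that template here. Below is a minimal sketch of what such a configmap.yml could look like; the ES settings themselves (cluster name, discovery through the headless elasticsearch-in service, the initial master node list) are assumptions for a three-node ES 7 cluster, not copied from my chart:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "elasticsearch.name" . }}
  namespace: {{ include "elasticsearch.fullname" . }}
  labels:
    {{- include "elasticsearch.labels" . | nindent 4 }}
data:
  elasticsearch.yml: |
    cluster.name: es-cluster
    network.host: 0.0.0.0
    # Each pod takes its node name from the POD_NAME env var set in the StatefulSet
    node.name: ${POD_NAME}
    # Pods discover each other through the headless service (service.in.name)
    discovery.seed_hosts: ["{{ .Values.service.in.name }}"]
    cluster.initial_master_nodes:
      - {{ include "elasticsearch.fullname" . }}-0
      - {{ include "elasticsearch.fullname" . }}-1
      - {{ include "elasticsearch.fullname" . }}-2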

service.yml

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.service.out.name }}
  namespace: {{ include "elasticsearch.name" .}}
  labels:
    {{- include "elasticsearch.labels" . | nindent 4 }}
spec:
  ports:
    - port: {{ .Values.service.out.port }}
      protocol: TCP
      name: http
  selector:
    {{- include "elasticsearch.selectorLabels" . | nindent 4 }}

---

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.service.in.name }}
  namespace: {{ include "elasticsearch.name" .}}
  labels:
    {{- include "elasticsearch.labels" . | nindent 4 }}
spec:
  clusterIP: {{ .Values.service.in.clusterIP }}
  ports:
    - port: {{ .Values.service.in.port }}
      protocol: TCP
      name: http
  selector:
    {{- include "elasticsearch.selectorLabels" . | nindent 4 }}

ingress.yml (you can skip this if ES is only used inside the cluster)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.ingress.name }}
  namespace: {{ include "elasticsearch.name" .}}
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
        - path: /
          backend:
            serviceName: {{ .Values.service.out.name }}
            servicePort: {{ .Values.service.out.port }}

Check the chart for obvious errors (locally):

helm lint elasticsearch

Package it into an archive:

helm package elasticsearch

Upload it to the server; I use the FTP tool FileZilla.

The next step is to install it into the cluster:

helm install elasticsearch ./elasticsearch-0.1.0.tgz

Check whether the installation is successful

kubectl get pods -A

kubectl describe pod [pod] -n [namespace]

You can use describe to find out which node the pod is running on, and then use docker logs on that node to follow the startup messages.
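Concretely, something like this (pod and container IDs are placeholders):

# Find the node the pod landed on
kubectl get pod [pod] -n [namespace] -o wide
# On that node, locate the container and read its logs
docker ps | grep elasticsearch
docker logs [container-id]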

After the installation succeeds, access ES through the domain name configured in the Ingress (it can also be reached inside the cluster through the Service).
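A quick smoke test through the Ingress (assuming es.xx.com from values.yml points at the ingress controller); an ES node should answer with its name, cluster name and version:

curl http://es.xx.com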

Install Kibana

With ES installed I still wanted to try the rest of the ELK stack, but I ran into some problems after installing Kibana too; notes below.

helm create kibana

The chart's directory structure is the same as before; the key files are below.

deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "kibana.fullname" . }}
  labels:
    {{- include "kibana.labels" . | nindent 4 }}
  namespace: {{ .Values.namespace.name}}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "kibana.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "kibana.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: {{ .Values.image.repository }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: "ELASTICSEARCH_HOSTS" #es K8s internal access address
              value: {{ .Values.elasticsearch.host }}
            - name: "I18N_LOCALE"   # Chinese parameters
              value: "zh-CN"
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      # Mounting a custom kibana.yml from a ConfigMap (left commented out for now):
      # volumeMounts:
      # - name: kibana-config
      # mountPath: /usr/share/kibana/config/kibana.yml
      # subPath: kibana.yml
      # volumes: 
      # - name: kibana-config
      # configMap:
      # name: {{ include "kibana.name" .}}
      # items:
      # - key: kibana.yml
      # path: kibana.yml

values.yml

replicaCount: 1

image:
  repository: kibana:7.5.2
  pullPolicy: IfNotPresent

namespace:
  name: elasticsearch  # Use the same namespace as ES

service:
  port: 5601
  name: kibana
  
elasticsearch:
  host: http://elasticsearch-out:9200  # elasticsearch-out is the ES access Service created above

ingress:
  name: kibana-xx  
  host: kibana.xx.com


resources:
  limits:
    cpu: 1
    memory: 1Gi
  requests:
    cpu: 1
    memory: 512Mi

service.yml

apiVersion: v1
kind: Service
metadata:
  name: {{ include "kibana.fullname" . }}
  namespace: {{ .Values.namespace.name }}
  labels:
    {{- include "kibana.labels" . | nindent 4 }}
spec:
  ports:
    - port: {{ .Values.service.port }}
      protocol: TCP
      name: http
  selector:
    {{- include "kibana.selectorLabels" . | nindent 4 }}

ingress.yml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.ingress.name }}
  namespace: {{ .Values.namespace.name }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
        - path: /
          backend:
            serviceName: {{ include "kibana.fullname" . }}
            servicePort: {{ .Values.service.port }}

The Docker logs showed Kibana stuck waiting for Elasticsearch. I thought there was a problem with the ELASTICSEARCH_HOSTS setting and changed it many times, but it didn't help. I then suspected the ES installation itself, so I wrote a small test that writes and reads some data.

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>7.3.1</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.3.1</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.3.1</version>
</dependency>
try {
    RestHighLevelClient client = new RestHighLevelClient(
            RestClient.builder(
                    new HttpHost("es.xx.com")));
    /*
    // Index a test document (run once, then comment out)
    Map<String, Object> jsonMap = new HashMap<>();
    jsonMap.put("user", "zhanghua");
    jsonMap.put("postDate", new Date());
    jsonMap.put("message", "trying out Elasticsearch");
    IndexRequest indexRequest = new IndexRequest("posts")
            .id("1").source(jsonMap);
    IndexResponse response = client.index(indexRequest, RequestOptions.DEFAULT);
    System.out.println(response);
    */
    // Read the document back
    GetRequest getRequest = new GetRequest("posts", "1");
    GetResponse getResponse = client.get(getRequest, RequestOptions.DEFAULT);
    System.out.println(getResponse);
    client.close();
} catch (IOException e) {
    e.printStackTrace();
}

The returned data is

{"_index":"posts"."_type":"_doc"."_id":"1"."_version": 1,"_seq_no": 0."_primary_term": 1,"found":true."_source": {"postDate":"The 2020-01-29 T06: smote 136 z"."message":"trying out Elasticsearch"."user":"zhanghua"}}

Clearly ES itself was fine. When I opened Kibana again at that point, it suddenly worked. I honestly don't know the reason; maybe it was because ES had no data in it before. This still needs looking into.
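If it happens again, Kibana's status endpoint is a quicker way to see what it is waiting on than container logs (kibana.xx.com is the Ingress host from the values file):

curl http://kibana.xx.com/api/status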

Visit the Kibana interface