Related components

The following components are required for a complete Istio environment:

  • Prometheus — monitoring for the Kubernetes platform and the Istio platform
  • Jaeger — distributed tracing between services
  • Elasticsearch — stores the tracing data (you can also deploy Fluentd and Kibana alongside it to collect and monitor Kubernetes platform logs)
  • Istio — the service mesh platform itself

Prometheus and Jaeger are not taken from the Istio deployment package because this environment has to serve production: Prometheus is reused by other services on the platform, and the Jaeger bundled with Istio runs in all-in-one mode (intended for testing) and cannot meet production requirements, so both are deployed separately.

Prometheus

Prometheus is an open source monitoring and alerting system and time series database (TSDB) originally developed at SoundCloud. Written in Go, Prometheus is an open source counterpart of Google's internal BorgMon monitoring system.

In 2016, the Cloud Native Computing Foundation (CNCF), founded under the Linux Foundation with Google's backing, accepted Prometheus as its second hosted project, and the project remains very active in the open source community.

Compared with Heapster (a Kubernetes subproject for collecting cluster performance data), Prometheus is more complete and comprehensive, and it also scales to clusters with tens of thousands of nodes.

The basic principle is to periodically scrape the state of monitored components over HTTP. Any component can be brought under monitoring as long as it exposes the corresponding HTTP interface; no SDK or other integration work is required, which makes Prometheus well suited to virtualized environments such as VMs, Docker and Kubernetes. The HTTP endpoint that exposes a component's metrics is called an exporter. Most components commonly used by Internet companies already have a ready-made exporter, for example Varnish, HAProxy, Nginx, MySQL, and Linux system information (disk, memory, CPU, network, etc.).
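
As a rough illustration, a scrape is nothing more than an HTTP GET against the exporter's /metrics endpoint. Assuming a node_exporter instance on its default port 9100, the response looks roughly like this (names and values are only illustrative):

curl -s http://<node-ip>:9100/metrics | head

# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 312.4
node_cpu_seconds_total{cpu="0",mode="user"} 38.7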

Deployment

Prometheus is deployed here using kube-prometheus from CoreOS, which defines a set of CRDs and provides an operator that makes deployment easy.

After a git clone of the kube-prometheus repository, it can be deployed almost without thinking:

# create the CRDs and the operator
kubectl create -f manifests/setup
# wait until the ServiceMonitor CRD is registered
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
# create the monitoring components themselves
kubectl create -f manifests/

In kube-prometheus, the following CRDs are defined:

# commonly used
prometheusrules.monitoring.coreos.com   - defines alerting rules
servicemonitors.monitoring.coreos.com   - defines which services expose metrics and how to scrape them
# less commonly used
podmonitors.monitoring.coreos.com       - defines scrape configuration for pods
alertmanagers.monitoring.coreos.com     - defines the alerting (Alertmanager) configuration
prometheuses.monitoring.coreos.com      - defines the Prometheus instances themselves
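
A quick way to confirm that these CRDs were registered is to list them:

kubectl get crds | grep monitoring.coreos.com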

After the deployment succeeds, the following pods and services are created in the monitoring namespace:

# pods
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   2          4d17h
alertmanager-main-1                    2/2     Running   0          4d17h
alertmanager-main-2                    2/2     Running   0          4d17h
grafana-5db74b88f4-sfwl5               1/1     Running   0          4d17h
kube-state-metrics-54f98c4687-8x7b7    3/3     Running   0          4d17h
node-exporter-5p8b8                    2/2     Running   0          4d17h
node-exporter-65r4g                    2/2     Running   0          4d17h
node-exporter-946rm                    2/2     Running   0          4d17h
node-exporter-95x66                    2/2     Running   4          4d17h
node-exporter-lzgv7                    2/2     Running   0          4d17h
prometheus-adapter-8667948d79-hdk62    1/1     Running   0          4d17h
prometheus-k8s-0                       3/3     Running   1          4d17h
prometheus-k8s-1                       3/3     Running   1          4d17h
prometheus-operator-548c6dc45c-ltmv8   1/1     Running   2          4d17h
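
The services created alongside them can be listed the same way; prometheus-k8s is the one referenced later when pointing Kiali at Prometheus:

kubectl get svc -n monitoring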

The pod prometheus-k8s-0 periodically polls the Kubernetes platform APIs to collect metrics. The ServiceMonitors it uses can be listed with kubectl get -n monitoring servicemonitors.monitoring.coreos.com:

NAME                      AGE
alertmanager              4d17h
grafana                   4d17h
coredns                   4d17h   |
kube-apiserver            4d17h   |
kube-controller-manager   4d17h   |
kube-scheduler            4d17h   | --> monitoring configuration for the k8s components
kube-state-metrics        4d17h   |
kubelet                   4d17h   |
node-exporter             4d17h   |
prometheus                4d17h   | --> monitoring configuration for Prometheus's own components
prometheus-operator       4d17h   |

Note: at this point we have not yet added any monitoring configuration for the Istio components.
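
Before an ingress is set up, the simplest way to look at the Prometheus and Grafana UIs is a port-forward (9090 and 3000 are the default ports used by kube-prometheus):

kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090
kubectl -n monitoring port-forward svc/grafana 3000:3000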

elasticsearch

The role of Elasticsearch in this environment is to store the tracing data. For tracing we chose Jaeger, which supports Elasticsearch and Cassandra as backend storage; Elasticsearch is the more reusable of the two.

The official image gcr.io/fluentd-elasticsearch/elasticsearch:v6.6.1 is used here.

After a successful deployment, the service elasticsearch-logging is available.

You can inspect the ES cluster status and data with the ElasticSearch Head extension in Chrome.
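
Alternatively, a quick sanity check from the command line, assuming ES listens on its default port 9200 (substitute the namespace the ES deployment actually lives in):

kubectl -n <your namespace> port-forward svc/elasticsearch-logging 9200:9200
curl http://localhost:9200/_cluster/health?pretty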

jaeger

With ES in place, for the Jaeger deployment you can refer to my earlier post {% post_link Jaeger-istio Jaeger’s istio practice %}, so it will not be repeated here.

After the deployment we get the following three services in the jaeger namespace:

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
jaeger-collector   ClusterIP      10.233.15.246   <none>        14267/TCP,14268/TCP,9411/TCP   2d3h
jaeger-query       LoadBalancer   10.233.34.138   <pending>     80:31882/TCP                   2d3h
zipkin             ClusterIP      10.233.53.53    <none>        9411/TCP                       2d3h

The zipkin service collects trace data submitted in the Zipkin format (Jaeger is compatible with Zipkin), jaeger-collector collects trace data in Jaeger's own format, and jaeger-query is used to query the stored traces.
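
To confirm that the query UI is reachable before Istio starts reporting spans, a port-forward against jaeger-query is enough (80 is the service port shown above, 16686 is just an arbitrary local port):

kubectl -n jaeger port-forward svc/jaeger-query 16686:80
# then open http://localhost:16686 in a browser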

istio

Now that all the preparation is done, Istio can finally be deployed. The Istio version used here is 1.3.4. We install it from the Helm chart templates, so the first step is to customize the deployment parameters by editing install/kubernetes/helm/istio/values.yaml:

grafana:
  enabled: false
prometheus:
  enabled: false
tracing:
  enabled: false
# address of the service that tracing data is reported to
tracer:
  zipkin:
    address: zipkin.jaeger:9411   # format: servicename.namespace:port

Run the following commands to deploy:

# add the official helm repository
helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.3.4/charts/

# deploy the istio CRDs
helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system

# check the resources; wait until the count reaches 23
kubectl get crds | grep 'istio.io' | wc -l

# deploy istio; pilot.traceSampling sets the trace sampling rate (100 means 100%),
# and the kiali.* options point kiali at the external jaeger and prometheus
helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set pilot.traceSampling=100,kiali.dashboard.jaegerURL=http://jaeger-query.jaeger:80,kiali.prometheusAddr=http://prometheus-k8s.monitoring:9090
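
After both charts are applied, it is worth checking that the control-plane pods all come up before continuing:

kubectl get pods -n istio-system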

At this point Istio is deployed and Jaeger is wired into it, but Prometheus is not yet configured to collect Istio's metrics, so we need to create a ServiceMonitor of our own:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: istio-monitor
  labels:
    app: prometheus-istio
spec:
  selector:
    matchLabels:
      # collect /metrics from pods carrying the following label; in istio the metrics of the
      # business services are all aggregated by Mixer, so collecting that component is enough
      istio: mixer
  endpoints:
  - interval: 10s        # scrape interval
  namespaceSelector:
    matchNames:
    - istio-system       # scope of the collection

In addition, since Prometheus does not have permission to collect endpoints outside the monitoring and kube-system namespaces, RBAC rules need to be added to grant it that permission:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: istio-system
  labels:
    app: prometheus

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-istio
  labels:
    app: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  - nodes/proxy
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-istio
  namespace: istio-system
  labels:
    app: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-istio
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: monitoring
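
Assuming the ServiceMonitor and the RBAC objects above are saved to files (the file names below are placeholders) and the ServiceMonitor is created in a namespace watched by kube-prometheus, such as monitoring, they can be applied and verified like this:

kubectl apply -f istio-servicemonitor.yaml -n monitoring
kubectl apply -f prometheus-istio-rbac.yaml
# the istio-monitor job should now show up under Status -> Targets in the Prometheus UI
kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090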

At this point the platform is fully set up, and you can test it by deploying the Istio sample application bookinfo:

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -n <your namespace>
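
For the sample traffic to actually flow through the mesh, the target namespace needs automatic sidecar injection enabled before the apply (or the manifests have to be injected manually with istioctl kube-inject); afterwards each bookinfo pod should report 2/2 containers:

kubectl label namespace <your namespace> istio-injection=enabled
kubectl get pods -n <your namespace>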

If you liked this post, please visit my home page.