This article is from Rancher Labs

In past articles, we have devoted considerable space to the topic of monitoring. This is because everything changes very quickly in a Kubernetes cluster, so it is important to have tooling that tracks cluster health and resource metrics.

In Rancher 2.5, we introduced a new version of monitoring based on Prometheus Operator, which provides native Kubernetes deployment and management for Prometheus and related monitoring components. Prometheus Operator lets you monitor the status of cluster nodes, Kubernetes components, and application workloads. It also lets you define alerts and create custom dashboards, making the metrics collected by Prometheus easy to visualize through Grafana. You can visit the following link for more details about the new monitoring component:

Rancher.com/docs/ranche…

The new version of monitoring also deploys prometheus-adapter, which developers can use to autoscale their workloads with HPA based on custom metrics.

In this article, we explore how to use Prometheus Operator to scrape custom metrics and leverage them for advanced workload management.

Install Prometheus

Installing Prometheus in Rancher 2.5 is extremely simple: go to Cluster Explorer -> Apps and install rancher-monitoring.

You need to know the following defaults:

  • prometheus-adapter is enabled as part of the chart installation

  • serviceMonitorNamespaceSelector is empty, allowing Prometheus to collect ServiceMonitors from all namespaces
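These defaults correspond to Helm chart values along the lines of the following sketch. The key names follow the upstream kube-prometheus-stack chart that rancher-monitoring is based on; treat them as illustrative and check the chart's actual values file before relying on them:

```yaml
# Illustrative rancher-monitoring values (key names assumed from kube-prometheus-stack)
prometheus-adapter:
  enabled: true  # the adapter serves the custom.metrics.k8s.io API used by HPA
prometheus:
  prometheusSpec:
    # an empty selector means ServiceMonitors are picked up from every namespace
    serviceMonitorNamespaceSelector: {}
```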

Once installed, we can access the monitoring components from the Cluster Explorer.

Deploy the workload

Now let’s deploy a sample workload that exposes custom metrics from the application layer. The workload is a simple application instrumented with the Prometheus client_golang library; it exposes custom metrics at the /metrics endpoint.

It has two metrics:

  • http_requests_total

  • http_request_duration_seconds
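When scraped, the application's /metrics endpoint returns these metrics in the Prometheus exposition format, roughly like the following (label names and values here are illustrative, not actual output from this image):

```
# HELP http_requests_total Count of all HTTP requests
# TYPE http_requests_total counter
http_requests_total{code="200",method="get"} 17
# HELP http_request_duration_seconds Duration of all HTTP requests
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{code="200",method="get",le="0.1"} 15
http_request_duration_seconds_bucket{code="200",method="get",le="+Inf"} 17
http_request_duration_seconds_sum{code="200",method="get"} 0.42
http_request_duration_seconds_count{code="200",method="get"} 17
```

Note that a counter like http_requests_total only ever increases; rates and percentiles are computed later in PromQL.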

The following manifest deploys the workload, the associated Service, and an Ingress for accessing the workload:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: prometheus-example-app
  name: prometheus-example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus-example-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: prometheus-example-app
    spec:
      containers:
      - name: prometheus-example-app
        image: gmehta3/demo-app:metrics
        ports:
        - name: web
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-example-app
  labels:
    app.kubernetes.io/name: prometheus-example-app
spec:
  selector:
    app.kubernetes.io/name: prometheus-example-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: web
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prometheus-example-app
spec:
  rules:
  - host: hpa.demo
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-example-app
          servicePort: 8080

Deploy ServiceMonitor

The ServiceMonitor is a custom resource definition (CRD) that allows us to declaratively define how to monitor a set of dynamic services.

You can view the complete ServiceMonitor specification at the following link:

Github.com/prometheus-…

Now we deploy a ServiceMonitor, which tells Prometheus to scrape the pods backing the prometheus-example-app Kubernetes Service.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus-example-app
  endpoints:
  - port: web

Users can now browse the ServiceMonitor in the Rancher monitoring UI.

Soon, the new ServiceMonitor and the pods associated with the Service should show up in Prometheus service discovery.

We can also see metrics in Prometheus.
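For example, queries along these lines (the label matchers are illustrative) show the request rate and latency distribution for the sample app in the Prometheus UI:

```
# Per-second rate of successful requests over the last 5 minutes
rate(http_requests_total{service="prometheus-example-app",code="200"}[5m])

# Approximate 95th-percentile request latency from the histogram buckets
histogram_quantile(0.95,
  sum(rate(http_request_duration_seconds_bucket{service="prometheus-example-app"}[5m])) by (le))
```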

Deploy the Grafana dashboard

In Rancher 2.5, monitoring lets users store Grafana dashboards as ConfigMaps in the cattle-dashboards namespace.

Users or cluster administrators can now add more dashboards to this namespace to extend Grafana’s custom dashboards.

Dashboard ConfigMap example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-example-app-dashboard
  namespace: cattle-dashboards
  labels:
    grafana_dashboard: "1"
data:
  prometheus-example-app.json: |
    {
      "annotations": {
        "list": [
          {
            "builtIn": 1,
            "datasource": "-- Grafana --",
            "enable": true,
            "hide": true,
            "iconColor": "rgba(0, 211, 255, 1)",
            "name": "Annotations & Alerts",
            "type": "dashboard"
          }
        ]
      },
      "editable": true,
      "gnetId": null,
      "graphTooltip": 0,
      "links": [],
      "panels": [
        {
          "aliasColors": {},
          "bars": false,
          "dashLength": 10,
          "dashes": false,
          "datasource": null,
          "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] },
          "fill": 1,
          "fillGradient": 0,
          "gridPos": { "h": 9, "w": 12, "x": 0, "y": 0 },
          "hiddenSeries": false,
          "id": 2,
          "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false },
          "lines": true,
          "linewidth": 1,
          "nullPointMode": "null",
          "percentage": false,
          "points": false,
          "renderer": "flot",
          "seriesOverrides": [],
          "spaceLength": 10,
          "stack": false,
          "steppedLine": false,
          "targets": [
            {
              "expr": "rate(http_requests_total{code=\"200\",service=\"prometheus-example-app\"}[5m])",
              "instant": false,
              "interval": "",
              "legendFormat": "",
              "refId": "A"
            }
          ],
          "thresholds": [],
          "timeFrom": null,
          "timeRegions": [],
          "timeShift": null,
          "title": "http_requests_total_200",
          "tooltip": { "shared": true, "sort": 0, "value_type": "individual" },
          "type": "graph",
          "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] },
          "yaxes": [
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true },
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }
          ],
          "yaxis": { "align": false, "alignLevel": null }
        },
        {
          "aliasColors": {},
          "bars": false,
          "dashLength": 10,
          "dashes": false,
          "datasource": null,
          "description": "",
          "fieldConfig": { "defaults": { "custom": {} }, "overrides": [] },
          "fill": 1,
          "fillGradient": 0,
          "gridPos": { "h": 8, "w": 12, "x": 0, "y": 9 },
          "hiddenSeries": false,
          "id": 4,
          "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false },
          "lines": true,
          "linewidth": 1,
          "nullPointMode": "null",
          "percentage": false,
          "points": false,
          "renderer": "flot",
          "seriesOverrides": [],
          "spaceLength": 10,
          "stack": false,
          "steppedLine": false,
          "targets": [
            {
              "expr": "rate(http_requests_total{code!=\"200\",service=\"prometheus-example-app\"}[5m])",
              "interval": "",
              "legendFormat": "",
              "refId": "A"
            }
          ],
          "thresholds": [],
          "timeFrom": null,
          "timeRegions": [],
          "timeShift": null,
          "title": "http_requests_total_not_200",
          "tooltip": { "shared": true, "sort": 0, "value_type": "individual" },
          "type": "graph",
          "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] },
          "yaxes": [
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true },
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }
          ],
          "yaxis": { "align": false, "alignLevel": null }
        }
      ],
      "schemaVersion": 26,
      "style": "dark",
      "tags": [],
      "templating": { "list": [] },
      "time": { "from": "now-15m", "to": "now" },
      "timepicker": { "refresh_intervals": [ "5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d" ] },
      "timezone": "",
      "title": "prometheus example app",
      "version": 1
    }

Users should now be able to access the prometheus-example-app dashboard in Grafana.

HPA with custom metrics

This section assumes that you have prometheus-adapter installed as part of monitoring. By default, the monitoring installation does install prometheus-adapter.

Users can now create an HPA Spec as follows:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: prometheus-example-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: prometheus-example-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Object
    object:
        describedObject:
            kind: Service
            name: prometheus-example-app
        metric:
            name: http_requests
        target:
            averageValue: "5"
            type: AverageValue

You can check out the following links for more information about HPA:

kubernetes.io/docs/tasks/…

We will use the custom http_requests_total metric to autoscale the pods.
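Note that the HPA asks for a metric named http_requests even though the application exposes http_requests_total: the translation is done by prometheus-adapter's discovery rules, which expose a rate for counter metrics under a name with the _total suffix stripped. The default configuration contains a rule conceptually like the following sketch (the exact rules shipped with rancher-monitoring may differ):

```yaml
rules:
- seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
  resources:
    # map the namespace/pod labels back onto Kubernetes objects
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    # expose the counter as "http_requests" by stripping the _total suffix
    matches: "^(.*)_total$"
    as: "${1}"
  # the adapter serves a per-second rate, not the raw counter value
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```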

Now we can generate a sample load to see the HPA in action. We can use hey for this, assuming the hpa.demo hostname from the Ingress resolves to your ingress controller.

hey -c 10 -n 5000 http://hpa.demo
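While hey generates load, you can watch the HPA scale the deployment from another terminal:

```
# watch the HPA's observed metric value and replica count change
kubectl get hpa prometheus-example-app-hpa -w

# watch new pods appear as the deployment scales out
kubectl get pods -l app.kubernetes.io/name=prometheus-example-app -w
```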

Summary

In this article, we explore the flexibility of new monitoring in Rancher 2.5. Developers and cluster administrators can leverage this stack to monitor their workloads, deploy visualizations, and take advantage of the advanced workload management features available within Kubernetes.