“This is the 10th day of my participation in the November Gengwen (More-Writing) Challenge. Check out the details: The Last Gengwen Challenge of 2021.”

Introduction

Prometheus collects and stores real-time metrics from servers (such as CPU usage, disk status, service responses, and logs), and computes service performance indicators (such as CPU utilization, API response time, and the rate of HTTP 500 responses) through its query functions. The results can feed an integrated alerting and monitoring system.

Architecture diagram

The basic principle

The basic principle of Prometheus is to periodically scrape the status of monitored components over HTTP. Any component that exposes the corresponding HTTP interface can be monitored; no SDK or other integration work is required.
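As a minimal sketch of this principle, the following stand-alone Python script (standard library only) exposes a `/metrics` endpoint in the Prometheus text exposition format. The metric names `app_requests_total` and `app_cpu_usage` and the port 8000 are illustrative assumptions, not anything Prometheus mandates:

```python
# Minimal sketch of a component exposing metrics for Prometheus to scrape.
# Uses only the Python standard library; metric names are made up.
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUEST_COUNT = 0  # toy counter, incremented on each scrape

def render_metrics() -> str:
    """Render metrics in the Prometheus text exposition format."""
    return (
        "# HELP app_requests_total Total requests handled.\n"
        "# TYPE app_requests_total counter\n"
        f"app_requests_total {REQUEST_COUNT}\n"
        "# HELP app_cpu_usage Current CPU usage ratio.\n"
        "# TYPE app_cpu_usage gauge\n"
        "app_cpu_usage 0.42\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global REQUEST_COUNT
        if self.path == "/metrics":
            REQUEST_COUNT += 1
            body = render_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Prometheus would then be configured to scrape localhost:8000/metrics.
    HTTPServer(("", 8000), MetricsHandler).serve_forever()
```

Anything that answers a GET on such an endpoint with this text format is scrapeable; that is the whole integration contract.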

The key processes

  1. The Prometheus daemon periodically fetches metrics data from targets; each target exposes an HTTP endpoint from which metrics are scraped. Scrape targets can be specified via configuration files, text files, Zookeeper, Consul, or DNS SRV lookup. Prometheus monitors in a pull fashion: the server pulls data directly from targets, or indirectly through an intermediate gateway to which clients push.
  2. Prometheus stores all scraped data locally, cleans and aggregates it according to configured rules, and writes the results into new time series.
  3. Prometheus presents the collected data visually through PromQL and other APIs. It supports many visualization options, such as Grafana, its own Promdash, and its built-in template engine. It also provides an HTTP query API for custom output.
  4. PushGateway allows clients to actively push metrics to it, while Prometheus periodically scrapes the accumulated data from the gateway.
  5. Alertmanager, an independent component of Prometheus, supports Prometheus query expressions and provides flexible alert notification methods.
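The PushGateway flow in step 4 can be sketched as follows. This only constructs the HTTP PUT request; actually sending it requires a gateway running at the assumed address `localhost:9091`, and the job name `batch_demo` and metric `job_last_success` are illustrative:

```python
# Sketch: pushing a metric to a PushGateway, which Prometheus then scrapes.
# The gateway address, job name, and metric name are assumptions.
import urllib.request

def build_push_request(gateway: str, job: str, metric: str, value: float):
    """Build an HTTP PUT request carrying one gauge in the text format."""
    url = f"http://{gateway}/metrics/job/{job}"
    body = f"# TYPE {metric} gauge\n{metric} {value}\n".encode()
    req = urllib.request.Request(url, data=body, method="PUT")
    req.add_header("Content-Type", "text/plain; version=0.0.4")
    return req

req = build_push_request("localhost:9091", "batch_demo", "job_last_success", 1)
# urllib.request.urlopen(req)  # would perform the push; needs a live gateway
```

This pattern suits short-lived batch jobs that exit before Prometheus would have a chance to pull from them directly.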

The Exporter concept

Exporter is the umbrella term for Prometheus data-collection components. An exporter collects data from a target and converts it into a format supported by Prometheus. Unlike traditional data-collection agents, it does not send data to a central server; instead it waits for the server to come and scrape it. The default scrape address (for node_exporter) is http://current_ip:9100/metrics
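A scrape of such an endpoint returns plain text. The sketch below shows a few sample lines mimicking node_exporter output (real output has many more metrics) and a tiny parser for them; note that real exposition lines may also carry an optional trailing timestamp, which this toy parser ignores:

```python
# Sketch: what an exporter's /metrics endpoint returns, and a tiny parser.
# SAMPLE mimics node_exporter output; real output is far larger.
SAMPLE = """\
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 312.4
node_load1 0.25
"""

def parse_metrics(text: str) -> dict:
    """Map each metric line (name plus optional labels) to its value."""
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comments and blank lines
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

print(parse_metrics(SAMPLE))
```

In practice Prometheus itself does this parsing; the point is only that the wire format is human-readable text, which makes exporters easy to debug with curl or a browser.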

Node-exporter is the exporter officially recommended by Prometheus. Other official exporters include:

  • HAProxy exporter
  • Collectd exporter
  • SNMP exporter
  • MySQL server exporter


Run

1. Start the node_exporter container (node_exporter is the exporter officially recommended by Prometheus)

docker run --name node_exporter -d -v "/proc:/host/proc" -v "/sys:/host/sys" -v "/:/rootfs" prom/node-exporter --path.procfs /host/proc --path.sysfs /host/sys --collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"

2. Edit the configuration file prometheus/prometheus.yml

global:
  scrape_interval:     15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'webservers'
    static_configs:
    - targets: ['<node exporter node IP>:9100']

3. Start Prometheus

docker run --name prometheus -d -p 9090:9090 -v ~/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus

4. Access the Prometheus web UI at http://localhost:9090/
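Besides the web UI, Prometheus answers PromQL queries over its HTTP API at /api/v1/query. The sketch below only builds the request URL; actually running the query requires the Prometheus container from step 3 to be up. The query `up` is a real built-in metric reporting which scrape targets are reachable:

```python
# Sketch: querying Prometheus's HTTP API with PromQL.
# Assumes Prometheus is reachable at localhost:9090 (as started above).
import json
import urllib.parse
import urllib.request

def build_query_url(base: str, promql: str) -> str:
    """Build an /api/v1/query URL with the PromQL expression URL-encoded."""
    return f"{base}/api/v1/query?" + urllib.parse.urlencode({"query": promql})

url = build_query_url("http://localhost:9090", "up")

def run_query(query_url: str):
    """Execute the query; needs a live Prometheus instance."""
    with urllib.request.urlopen(query_url) as resp:
        return json.load(resp)["data"]["result"]
```

With the webservers job configured above, `run_query(url)` would show `up{job="webservers"} 1` once the node_exporter target is being scraped successfully.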