This is the 10th day of my participation in the August More Text Challenge

Introduction

Prometheus monitors applications in a very simple way: a process only needs to expose an HTTP endpoint that serves its current monitoring samples. Such a program is called an Exporter, and an instance of an Exporter is called a Target. Prometheus periodically polls these targets to pull monitoring samples from the HTTP endpoint, which serves the data in the Prometheus metrics text format.
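The metrics format is a plain-text exposition format: each sample is one line of `name{labels} value`, optionally preceded by `# HELP` and `# TYPE` comment lines. A small illustrative fragment (the metric value here is made up):

```
# HELP jvm_threads_live_threads The current number of live threads
# TYPE jvm_threads_live_threads gauge
jvm_threads_live_threads 23.0
```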

Spring Boot 2.x integrates Micrometer by default as its metrics facade, and Micrometer ships an implementation of its MeterRegistry abstraction for Prometheus. To integrate a Spring Boot application with Prometheus, add the micrometer-registry-prometheus dependency and expose the Actuator endpoints so the application's state can be monitored.

Spring Boot configuration

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<!-- Micrometer Prometheus registry -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

The configuration file

management.endpoint.metrics.enabled=true
management.endpoints.web.exposure.include=*
management.endpoint.prometheus.enabled=true
management.metrics.export.prometheus.enabled=true

After adding the above dependencies, Spring Boot auto-configures a PrometheusMeterRegistry and a CollectorRegistry to collect and export metric data in a format Prometheus can scrape (the metrics format mentioned above).

All of this data is exposed at the Actuator's /actuator/prometheus endpoint, which Prometheus can scrape periodically to obtain the metrics.
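For example, requesting http://localhost:8080/actuator/prometheus returns lines such as the following (values and tags here are illustrative; the exact set depends on your application):

```
# HELP http_server_requests_seconds
# TYPE http_server_requests_seconds summary
http_server_requests_seconds_count{method="GET",status="200",uri="/hello",} 3.0
http_server_requests_seconds_sum{method="GET",status="200",uri="/hello",} 0.0423
```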

Modify the Prometheus configuration file

Add a job named paw-service that scrapes the machine localhost:8080 and labels the target as the paw-kelk application:

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']
  - job_name: paw-service
    metrics_path: /actuator/prometheus
    static_configs:
    - targets: ['localhost:8080']
      labels:
        application: paw-kelk
        env: dev

Start the Prometheus service

./prometheus --config.file=prometheus.yml

http://localhost:9090/graph

Enter http_server_requests_seconds_count in the expression field to graph the metric in the UI.
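Since http_server_requests_seconds_count is a cumulative counter, it is usually more useful to graph its rate of change. A common PromQL pattern for this (not from the original post) is:

```
sum by (uri) (rate(http_server_requests_seconds_count[5m]))
```

This shows the per-second request rate for each URI, averaged over a 5-minute window.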

Out of the box this monitors the microservice's HTTP request metrics and JVM status; you can also add your own custom instrumentation points for business-level monitoring.
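For custom instrumentation in a Spring Boot app you would typically inject the auto-configured MeterRegistry and register a Micrometer Counter against it. Conceptually, a custom counter is just a named, tagged value rendered in the exposition format. The following stdlib-only sketch illustrates that idea (class and metric names are illustrative, and this is not Micrometer's API):

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: a real Spring Boot app would use Micrometer, e.g.
// Counter.builder("orders_total").tag("type", "web").register(meterRegistry),
// rather than hand-rolling a counter like this.
class RequestCounter {
    private final String name;
    private final Map<String, String> tags;
    private final AtomicLong value = new AtomicLong();

    RequestCounter(String name, Map<String, String> tags) {
        this.name = name;
        this.tags = tags;
    }

    void increment() {
        value.incrementAndGet();
    }

    // Render the counter as one line of the Prometheus text exposition
    // format, e.g. http_requests_total{uri="/hello",} 3.0
    String scrape() {
        StringBuilder sb = new StringBuilder(name).append('{');
        tags.forEach((k, v) -> sb.append(k).append("=\"").append(v).append("\","));
        return sb.append("} ").append((double) value.get()).toString();
    }
}
```

Each call to increment() bumps the value, and scrape() produces the line that a /actuator/prometheus-style endpoint would serve for this metric.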