Preface

This article came about at a friend's request: let's write up a Loki logging system. Duty calls, so let's do it properly!

Real needs

When an application or a node on the company's container cloud has a problem, what is the troubleshooting approach?


The problem is first caught by Prometheus monitoring: a metric records a current or historical value, and an alert fires when the metric crosses a configured threshold.

Metrics alone will not solve the problem, though; we also need to look at the application logs.

The basic scheduling unit in K8s is the pod, and pods write their logs to stdout and stderr.

Suppose the memory usage of a pod grows very large and triggers our alert. The administrator goes to the dashboard to find out which pod has the problem, and to confirm why its memory grew we also need to query that pod's logs.

Without a logging system, we have to dig through the console or run commands to query them.

And if the application has already died by that point, there is no way to query the relevant logs at all.

Alternative solutions to address this requirement

ELK

Advantage:

1. Feature rich, allowing complex operations

Disadvantages:

1. The mainstream ELK (full-text search) or EFK stack is heavyweight

2. Many of Elasticsearch's complex search features go unused in practice; most queries only cover a time range plus a few simple parameters (host, service, etc.), yet the cluster is complex to scale, resource-hungry, and hard to operate

3. Switching between Kibana and Grafana affects user experience

4. Sharding and storing the inverted index is costly

Loki

1. Minimizes the cost of switching between metrics and logs, which helps shorten the response time to abnormal events and improves the user experience

2. Trades some query-language richness for ease of operation

3. More cost effective

Loki component introduction

Promtail

  • A log collection agent that ships container logs to Loki (or to Grafana Cloud)

  • Its main job is discovering collection targets, attaching labels to log streams, and pushing them to Loki

  • Promtail’s service discovery is based on Prometheus’s service discovery mechanism

Loki

  • A log aggregation system inspired by Prometheus that can scale horizontally, be highly available, and support multi-tenancy

  • It uses the same service discovery mechanism as Prometheus and indexes only the labels attached to log streams, instead of building a full-text index

  • The log streams received from Promtail carry the same set of labels as the application's metrics in Prometheus

  • Not only does it provide better context switching between logs and metrics, but it also avoids full-text indexing of logs

Grafana

  • An open source platform for monitoring and observability visualization

  • Support for very rich data sources

  • In the Loki stack it is dedicated to presenting time series data from data sources such as Prometheus and Loki

  • Supports querying, visualization, alerting, and more

  • Can be used to create, explore, and share data dashboards

  • Encourages data-driven decision making

Loki architecture


1. Loki uses the same labels as Prometheus for its index, so you can query both log content and monitoring data with the same labels. This not only lowers the cost of switching between the two kinds of queries, it also greatly reduces the storage needed for the log index.

2. Promtail was written using the same service discovery and relabeling libraries as Prometheus. On K8s, Promtail runs on every node as a DaemonSet, obtains the correct metadata for each log via the Kubernetes API, attaches it as labels, and sends the logs to Loki.
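As an illustration, a trimmed-down sketch of such a Promtail scrape configuration is shown below. It is close to the example Kubernetes config that ships with Promtail, but it is only a sketch, not the exact configuration used in this article:

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                          # discover pods through the Kubernetes API
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace            # attach the namespace as a label
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod                  # attach the pod name as a label
      - source_labels: [__meta_kubernetes_pod_container_name]
        target_label: container            # attach the container name as a label
      - source_labels: [__meta_kubernetes_pod_uid, __meta_kubernetes_pod_container_name]
        separator: /
        replacement: /var/log/pods/*$1/*.log
        target_label: __path__             # the files Promtail should tail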

Log storage architecture


Distributor

1. Promtail collects logs and sends them to Loki; the distributor is the first component to receive them. Loki achieves batching and compression by packing log data into compressed chunks.

2. The ingester is a stateful component responsible for building and flushing chunks. When chunks reach a certain size or age, they are flushed to storage.

3. Each log stream corresponds to one ingester. When a log line reaches the distributor, it uses the stream's metadata and a consistent-hash algorithm to work out which ingester it should go to.


4. The stream is replicated n times (3 by default) for redundancy and resilience.

Ingester

The ingester receives the log lines and starts building chunks:

The logs are compressed and appended to a chunk.

Once the chunk is "filled up" (it has accumulated a certain amount of data or a certain period of time has passed), the ingester flushes it to the database.

We use separate databases for the chunks and for the index, because they store different types of data.

After flushing a chunk, the ingester creates a new empty chunk and appends new entries to it.
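How eagerly chunks are flushed is tuned in the ingester section of the Loki configuration. A minimal sketch, assuming option names from the Loki 1.x releases (the values are only examples, not the settings used in this article):

ingester:
  chunk_idle_period: 5m      # flush a chunk that has received no new logs for this long
  chunk_retain_period: 30s   # keep a flushed chunk in memory briefly so recent queries can still hit it
  chunk_target_size: 1048576 # aim for roughly this many compressed bytes before cutting a chunk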

Querier

1. Given a time range and a label selector, the querier looks at the index to determine which chunks match, then greps through them to produce the results. It also pulls in the latest data from the ingesters that has not been flushed yet.

2. For each query, the querier shows you all the relevant logs. Query execution is parallelized, effectively providing a distributed grep, so even very large queries can be handled.

Scaling

  • Loki's index can be stored in Cassandra, BigTable, or DynamoDB

  • Chunks can be stored in various object stores

  • The querier and distributor are stateless components

  • The ingester is stateful, but when nodes join or leave the cluster, chunks are redistributed across nodes to fit the new hash ring
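To make the pluggable storage concrete, here is a sketch of how the index store and the chunk store are declared in the Loki configuration. The store names and addresses below are hypothetical examples, not a recommendation:

schema_config:
  configs:
    - from: 2020-05-15
      store: cassandra              # index store: bigtable or dynamodb are also possible
      object_store: aws             # chunk store: an object store such as S3, GCS, or the local filesystem
      schema: v11
      index:
        prefix: loki_index_
        period: 168h

storage_config:
  cassandra:
    addresses: cassandra.example.local   # hypothetical Cassandra host
    keyspace: loki
  aws:
    s3: s3://us-east-1/loki-chunks       # hypothetical S3 bucket; credentials come from the environment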

Environment setup (using Docker Compose)

Install Loki, Promtail, and Grafana

  • Write the Docker Compose configuration file
vim docker-compose.yaml

version: "3"

networks:
  loki:

services:
  loki:
    image: grafana/loki:1.5.0
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - loki

  promtail:
    image: grafana/promtail:1.5.0
    volumes:
      - /var/log:/var/log
    command: -config.file=/etc/promtail/docker-config.yaml
    networks:
      - loki

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    networks:
      - loki
  • Start the installation
docker-compose up -d


docker-compose ps

  • Access the Grafana interface
http://localhost:3000

The default login account is admin/admin

Then add the Loki data source


Set the URL to http://loki:3100/


Click Explore, choose Loki as the data source, and select an appropriate label

You can also query logs using regular expressions
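For example, queries in the Explore view are written in LogQL: a label selector, optionally followed by filter expressions. The job="varlogs" label below assumes the default Promtail job shown later in this article:

{job="varlogs"}                          # all log lines from the varlogs job
{job="varlogs"} |= "error"               # only lines containing the text "error"
{job="varlogs"} |~ "level=(warn|error)"  # lines matching a regular expression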



The Loki architecture, how it works, and the environment setup have all been covered above.

Is that the end? No, that would be too plain, haha.

Let’s combine a specific case:

Use Loki to collect the logs of pods running on K8s

let’s go !

The specific process is as follows:

1. Nacos serves as the registry

2. user and order are two Spring Boot projects

3. Deploy user and order with K8s

4. Call the user interface; user calls order and prints a log

5. The logs are displayed through Loki

Steps 1 to 4 were covered in our previous article,

K8S deploys Nacos microservice

If you want to practice deploying the project on K8s yourself, you can read that article first.

The deployed result looks like this:

Nacos interface


Both the User and Order services are deployed on K8S


The IPs shown here are internal IPs from the K8s cluster

  • Check the pod
kubectl get pod

  • View the services
kubectl get svc

Both services are of NodePort type, so the user service interface can be accessed directly.


  • View the log of the user pod
kubectl logs -f user-5776d4b64c-xsb5c

Now that Loki is up and the K8s pod logs are available, let's see how to hook the two together so that the K8s logs can be queried through Loki.

All that is really needed is for Promtail to be able to read the K8s logs.

Promtail can either read the logs inside the K8s cluster directly, or the logs can be mounted from the cluster onto the host machine and Promtail can then read them from the host.

Here are four implementation approaches; let's go through them one by one.

Note: this article only sketches the ideas behind the four approaches; the next article will implement each of them.

Method 1: Change the default path /var/log/*log to /var/log/container/*log

In this method, Promtail reads the logs inside the K8s cluster directly.

The first thing to know is that the default directory for logs generated by pods in a K8s cluster is /var/log/containers

So first modify the promtail service in docker-compose.yaml to mount that directory:

promtail:
  image: grafana/promtail:1.5.0
  volumes:
    - /var/log/container:/var/log/container
  command: -config.file=/etc/promtail/docker-config.yaml
  networks:
    - loki

/etc/promtail/docker-config.yaml is the configuration file that Promtail reads inside the container.

Let's take a look inside the container.

A. View the container ID

docker ps |grep promtail

B. Enter the container

docker exec -it 71cb21c2d0a3 sh

You can see that the job is varlogs and that the log file path is /var/log/*log

Does that feel familiar?


Here, job corresponds to varlogs, and filename is the log file matched under the path /var/log/*log (inside the Promtail container, of course).

So all we need to do is change the /var/log/*log path to /var/log/container/*log

So how do we change it?

We know the configuration file lives inside the container, so to modify it we copy it out to the host, edit it there, and then use the edited file to replace the container's configuration.
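For reference, after the change the scrape_configs section of docker-config.yaml should look roughly like this (the server, positions, and clients sections of the file stay as shipped in the image):

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/container/*log   # changed from /var/log/*log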



Then restart the deployment with docker-compose.


Look at the interface. Do you spot the problems?

1. What is going on with the /var/log/container/*log path?

2. Why can’t the contents of a log file be displayed

These two questions will be answered in the next article (leaving a cliffhanger here, haha).

Method 2

- replacement: /var/log/container/*.log

I’ll save that for the next article

Method 3: Mount the logs inside the K8S cluster to the host machine and access the logs on the host machine through promtail

I tried this approach, and it achieved the desired effect.

1. Start by adding log output to the Spring Boot project; let's take the user project as an example

  • Add the log configuration

  • The output directory for the log files is /opt/logs

2. Build a Docker image from the Spring Boot project, push it to the Aliyun image registry, and reference that image in the K8s user deployment file.

All of this was covered in detail in the previous article and is not repeated here; only the differences are shown below.


The areas circled in red are the added mappings

VolumeMounts This is the internal docker /opt/logs directory as the mount directory

Volumes this is the volumes that will host the machine

/Users/mengfanxiao/Documents/third_software/loki/log

The directory acts as a mapping directory to which the logs under /opt/logs in docker are mapped

Note:

I originally wanted to use /opt/logs on the host side as well, but in my Mac environment the Docker log directory could not be mapped to that host path; on CentOS it works fine.
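For clarity, a sketch of the relevant part of the user Deployment is shown below. The container and volume names are illustrative; only the two paths come from the description above:

spec:
  template:
    spec:
      containers:
        - name: user
          image: user:latest              # the image pushed to the Aliyun registry
          volumeMounts:
            - name: app-logs
              mountPath: /opt/logs        # log directory inside the container
      volumes:
        - name: app-logs
          hostPath:
            path: /Users/mengfanxiao/Documents/third_software/loki/log   # directory on the host machine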

3. Finally, Promtail just needs to be able to read the /Users/mengfanxiao/Documents/third_software/loki/log directory, which now contains the log files.

Exactly how to point Promtail at that directory will be covered in the next post; the key fact is that, by default, the Promtail container reads its files from the /var/log directory, as shown below.
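One simple way to achieve that, to be verified in the next article, is to mount the host log directory into the Promtail container at /var/log, which the default configuration already watches. A sketch of the promtail service in docker-compose.yaml:

promtail:
  image: grafana/promtail:1.5.0
  volumes:
    - /Users/mengfanxiao/Documents/third_software/loki/log:/var/log   # host logs appear where Promtail already looks
  command: -config.file=/etc/promtail/docker-config.yaml
  networks:
    - loki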

Possible problems

  • If K8s fails to start, or a service's exposed NodePort cannot be accessed, try restarting K8s

For details, you can refer to my previous article on K8s principles and environment setup.

This article is formatted using MDNICE