Set up Node.js logging infrastructure using Elasticsearch, Fluentd, and Kibana.

By Abhinav Dhasmana


Original text: blog.bitsrc.io/setting-up-…


Setting up the right logging infrastructure helps you find problems, debug, and monitor your application. At the very least, the infrastructure should give us:

  • The ability to do free-text search in our logs
  • The ability to search the logs of a specific API
  • The ability to search across all APIs by statusCode
  • A system that scales as we add more data to the logs

Architecture

Tip: Reuse JavaScript components

Use Bit (GitHub) to share and reuse JavaScript components across projects. Teams that share components can build applications faster. Letting Bit do the heavy lifting allows you to easily publish, install, and update individual components without any overhead. Learn more here.

Local setup

We will use Docker to manage these services.

Elasticsearch

Use the following command to start and run Elasticsearch:

docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --name myES docker.elastic.co/elasticsearch/elasticsearch:7.4.1

To check that your container is up and running, run the following command

curl -X GET "localhost:9200/_cat/nodes?v&pretty"
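For a quick overall status check, you can also hit the cluster health endpoint (a standard Elasticsearch API, not part of the original post):

curl -X GET "localhost:9200/_cluster/health?pretty"

On a single node the status will often be yellow rather than green, because replica shards cannot be assigned; that is fine for local development.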

Kibana

You can start Kibana and get it running with another Docker run command.

docker run --link myES:elasticsearch -p 5601:5601 kibana:7.4.1

Note that we are using the --link flag to link Kibana to the Elasticsearch container.

If you go to http://localhost:5601/app/kibana, you will see the Kibana dashboard.

Now you can use Kibana to run queries against our Elasticsearch cluster. We can navigate to the Dev Tools console at

http://localhost:5601/app/kibana#/dev_tools/console?_g=()

and run the (slightly longer) query we ran earlier.
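The exact query isn't reproduced here, but as an illustration, a simple search against all indices can be run from the console like this:

GET /_search
{
  "query": {
    "match_all": {}
  }
}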

Fluentd

Fluentd is where all of the log data gets formatted before it is shipped to Elasticsearch.

Let’s start by building our Dockerfile. It does two things:

  • Install the necessary software packages
  • Copy the configuration file into the Docker image

Dockerfile for Fluentd:

FROM fluent/fluentd:latest
MAINTAINER Abhinav Dhasmana <[email protected]>

USER root

RUN apk add --no-cache --update --virtual .build-deps \
    sudo build-base ruby-dev \
  && sudo gem install fluent-plugin-elasticsearch \
  && sudo gem install fluent-plugin-record-modifier \
  && sudo gem install fluent-plugin-concat \
  && sudo gem install fluent-plugin-multi-format-parser \
  && sudo gem sources --clear-all \
  && apk del .build-deps \
  && rm -rf /home/fluent/.gem/ruby/2.5.0/cache/*.gem

COPY fluent.conf /fluentd/etc/

The Fluentd configuration file (fluent.conf):

# Receive events over HTTP on port 9880
<source>
  @type http
  port 9880
  bind 0.0.0.0
</source>

# Receive events from 24224/tcp
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# We need to massage the data before it goes into ES
<filter **>
  # We parse the input with key "log" (https://docs.fluentd.org/filter/parser)
  @type parser
  key_name log
  # Keep the original key-value pairs in the result
  reserve_data true
  <parse>
    # Use the apache2 parser plugin to parse the data
    @type multi_format
    <pattern>
      format apache2
    </pattern>
    <pattern>
      format json
      time_key timestamp
    </pattern>
    <pattern>
      format none
    </pattern>
  </parse>
</filter>

# Fluentd will decide what to do here if the event is matched
# In our case, we want all the data to be matched, hence **
<match **>
  # We want all the data to be copied to Elasticsearch using the built-in
  # copy output plugin https://docs.fluentd.org/output/copy
  @type copy
  <store>
    # We want to store our data in Elasticsearch using the out_elasticsearch plugin
    # https://docs.fluentd.org/output/elasticsearch. See the Dockerfile for installation
    @type elasticsearch
    time_key timestamp_ms
    host 0.0.0.0
    port 9200
    # Use the conventional index name format (logstash-%Y.%m.%d)
    logstash_format true
    # We will use this prefix when Kibana reads logs from ES
    logstash_prefix fluentd
    logstash_dateformat %Y-%m-%d
    flush_interval 1s
    reload_connections false
    reconnect_on_error true
    reload_on_failure true
  </store>
</match>

Let’s get this Docker image built and running:

docker build -t abhinavdhasmana/fluentd .
docker run -p 9880:9880 --network host abhinavdhasmana/fluentd
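Since fluent.conf defines an HTTP source on port 9880, you can send a test event straight to Fluentd as an optional sanity check (this uses Fluentd's standard in_http request format, where the URL path becomes the tag):

curl -X POST -d 'json={"log":"hello from curl"}' http://localhost:9880/test.log

The event should show up in the fluentd-* index in Elasticsearch a second or so later, since flush_interval is 1s.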

Node.js application

I’ve created a small Node.js application for demonstration purposes, which you can find at github.com/abhinavdhas… It is a small Express application created with the Express generator, and it uses Morgan to generate Apache-format logs. You can also use your own app; as long as the log output format stays the same, our infrastructure doesn’t care.
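The relevant pieces of such an app are tiny. Here is a minimal sketch (not the exact code from the repo, which is generated by the Express generator):

// index.js -- a minimal Express app that logs every request in Apache format
const express = require('express');
const morgan = require('morgan');

const app = express();

// morgan's 'combined' format is the Apache combined log format,
// which the apache2 parser in fluent.conf can understand
app.use(morgan('combined'));

app.get('/', (req, res) => {
  res.send('Hello from the logging demo');
});

app.listen(3000, () => console.log('Listening on port 3000'));

Let’s build and run the Docker image.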

docker build -t abhinavdhasmana/logging .
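The docker build above assumes a Dockerfile at the root of the app. It is not shown in the original post, but a minimal, hypothetical version could look like this:

# Hypothetical Dockerfile for the demo app
FROM node:12-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]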

Of course, we can also run all of these Docker containers from a single Docker Compose file, given below.

Docker compose file for EFK setup:

version: "3"
services:
  fluentd:
    build: "./fluentd"
    ports:
      - "9880:9880"
      - "24224:24224"
    network_mode: "host"
  web:
    build: .
    ports:
      - "3000:3000"
    links:
      - fluentd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
  elasticsearch:
    image: elasticsearch:7.4.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node
  kibana:
    image: kibana:7.4.1
    links:
      - "elasticsearch"
    ports:
      - "5601:5601"
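With the compose file in place, a single command (assuming Docker Compose is installed) builds and starts the whole stack:

docker-compose up --build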

That’s it. Our infrastructure is in place. You can now generate some logs by visiting http://localhost:3000.
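If you prefer the command line, a quick loop of curl requests generates a burst of log entries just as well:

for i in $(seq 1 20); do curl -s http://localhost:3000 > /dev/null; done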

Now let's go back to the Kibana dashboard and define the index pattern to use.

Note that we set logstash_prefix fluentd in fluent.conf, so we use the same string here. Here are some other basic Kibana settings.

Elasticsearch uses dynamic mapping to guess the types of its indexed fields.

Let’s examine how to meet the requirements mentioned at the beginning:

  • Free-text search in the logs: With the help of ES and Kibana, we can search on any field and get results.
  • The ability to search the logs of a specific API: In the "Available fields" section on the left side of Kibana, you can see the field path. Apply a filter on it to find the API we're interested in.
  • The ability to search across all APIs by statusCode: Same as above: use the code field and apply a filter (see the example query after this list).
  • The system should scale as we add more data to the logs: We started Elasticsearch in single-node mode using the environment variable discovery.type=single-node. We could instead run it in cluster mode, add more nodes, or use a managed solution from a cloud provider of our choice. I've tried AWS, and it's easy to set up; AWS's managed Elasticsearch also comes with a hosted Kibana instance at no extra cost.
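For example, assuming the log line was parsed by the apache2 format (which yields the code field for the HTTP status), a Dev Tools query for all 404 responses might look like this:

GET fluentd-*/_search
{
  "query": {
    "match": {
      "code": "404"
    }
  }
}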
