1. Decompress the Filebeat archive

Upload filebeat.tar to the server and decompress it into the target directory.
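The upload and extraction step can be sketched as follows; the target directory is an assumption, so adjust it to your environment:

```shell
# Hypothetical target directory; the archive name follows the text above.
TARGET=/data/filebeat
if [ -f filebeat.tar ]; then
  mkdir -p "$TARGET"
  tar -xf filebeat.tar -C "$TARGET"
  ls "$TARGET"   # expect the filebeat binary and filebeat.yml, among others
fi
```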


2. Configure filebeat.yml

Open the Filebeat directory and edit filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.


- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # Actual log path (glob)
    - /data/learning/learning*.log
  fields:
    # Tag that distinguishes the log types; used to route to the indices below
    type: "learning"
  fields_under_root: true
  # Encoding of the monitored files; utf-8 handles non-ASCII (e.g. Chinese) logs
  encoding: utf-8
  # Pattern that matches the first line of a multi-line entry (here: lines starting with "{")
  multiline.pattern: '^\{'
  # Negate the pattern: with negate set to true, consecutive lines that do NOT
  # match the pattern are merged into the matching line. [Recommended: true]
  multiline.negate: true
  # Where merged lines attach: "after" appends non-matching lines to the
  # preceding matching line, "before" prepends them to the following one
  multiline.match: after
  
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # Actual log path (glob)
    - /data/study/study*.log
  fields:
    type: "study"
  fields_under_root: true
  # Encoding of the monitored files; utf-8 handles non-ASCII (e.g. Chinese) logs
  encoding: utf-8
  # Pattern that matches the first line of a multi-line entry (a leading date such as 2021-01-01)
  multiline.pattern: '^\s*\d\d\d\d-\d\d-\d\d'
  # Negate the pattern: with negate set to true, consecutive lines that do NOT
  # match the pattern are merged into the matching line. [Recommended: true]
  multiline.negate: true
  # Where merged lines attach: "after" appends non-matching lines to the
  # preceding matching line, "before" prepends them to the following one
  multiline.match: after
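Both multiline patterns can be sanity-checked locally before restarting Filebeat. A minimal sketch with grep (the sample lines are hypothetical; grep's ERE syntax has no `\d`, so the character-class form is used, while Filebeat's own regex engine accepts both):

```shell
# Pattern for the second input: a new entry starts with a date like 2024-01-15.
pattern='^[[:space:]]*[0-9]{4}-[0-9]{2}-[0-9]{2}'

# A timestamped line starts a new log entry...
echo '2024-01-15 10:00:00 ERROR something failed' | grep -Eq "$pattern" \
  && echo "new entry"
# ...while an indented stack-trace frame does not match, so with
# negate: true and match: after it is merged into the previous entry.
echo '    at com.example.Service.run(Service.java:42)' | grep -Eq "$pattern" \
  || echo "continuation (merged into previous entry)"
```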

#============================= Filebeat modules ===============================

#filebeat.config.modules:
  # Glob pattern for configuration loading
  #path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  #reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

#================================ General =====================================

#============================== Dashboards =====================================

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana host address
  host: "localhost:5601"
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.
setup.ilm.enabled: false
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  enabled: true
  # Array of hosts to connect to
  hosts: ["localhost:9200"]
  # index: "logs-%{[beat.version]}-%{+yyyy.MM.dd}"
  indices:
    # Index naming: service name plus a -%{+yyyy.MM.dd} date suffix
    - index: "learning-%{+yyyy.MM.dd}"
      # Routed by the "type" field set on the matching input above
      when.contains:
        type: "learning"
    - index: "study-%{+yyyy.MM.dd}"
      when.contains:
        type: "study"
        
  # Optional protocol and basic auth credentials; replace the placeholders
  # below with your Elasticsearch username and password.
  #protocol: "https"
  username: "#name"
  password: "#pwd"

#----------------------------- Logstash output --------------------------------

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - drop_fields:
      # Remove redundant fields
      fields: ["agent.type", "agent.name", "agent.version", "log.file.path", "log.offset", "input.type", "ecs.version", "host.name", "agent.ephemeral_id", "agent.hostname", "agent.id", "_id", "_index", "_score", "_suricata.eve.timestamp", "cloud.availability_zone", "host.containerized", "host.os.kernel", "host.os.name", "host.os.version"]

#================================ Logging =====================================


#============================== Xpack Monitoring ===============================

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

3. Start Filebeat

nohup ./filebeat -c filebeat.yml -e >/dev/null 2>&1 &
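To verify that the process came up and that the conditional indices are being created, a quick check (hostname and port are assumed from the configuration above; the `[f]ilebeat` bracket trick keeps grep from matching its own process entry):

```shell
# Is the Filebeat process running?
ps -ef | grep '[f]ilebeat' || echo "filebeat is not running"

# Are the daily indices showing up in Elasticsearch?
curl -s 'http://localhost:9200/_cat/indices/learning-*,study-*?v' \
  || echo "Elasticsearch not reachable"
```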

4. Configure Kibana

If Filebeat started correctly, the configured index names will be visible in Kibana's management settings.

4.1 Configuring the Index Pattern

Index Pattern: points to one or more Elasticsearch indices and tells Kibana which indices you want to work with.


Enter an index pattern that matches the index names configured above; it can point to a single index, or to multiple indices via a wildcard such as learning-*.
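Besides creating it in the UI, the index pattern can also be created through Kibana's saved objects API; a sketch, assuming Kibana 7.x at the host configured above, a hypothetical object id of "learning", and that events carry a `@timestamp` field:

```shell
# Object id and time field name are assumptions; adjust to your setup.
curl -s -X POST 'http://localhost:5601/api/saved_objects/index-pattern/learning' \
  -H 'kbn-xsrf: true' \
  -H 'Content-Type: application/json' \
  -d '{"attributes": {"title": "learning-*", "timeFieldName": "@timestamp"}}' \
  || echo "Kibana not reachable"
```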

4.2 Checking the Discover Page