Introduction

Architecture

Objectives

  • Build a distributed ELK log-management stack on CentOS 7;
  • Become familiar with Elasticsearch cluster configuration;
  • Become familiar with Logstash + FileBeat data-collection configuration;
  • Become familiar with Kibana visualization configuration;

Environment

  • VMware
  • CentOS7
  • Jdk 1.8
  • elasticsearch-7.13.3-linux-x86_64.tar.gz
  • kibana-7.13.3-linux-x86_64.tar.gz
  • logstash-7.13.3-linux-x86_64.tar.gz
  • filebeat-7.13.3-linux-x86_64.tar.gz

Node planning

  • Elasticsearch cluster build

    Node                Description
    192.168.157.120     ES cluster node 1
    192.168.157.121     ES cluster node 2
  • Logstash + FileBeat log capture node

    Node                Description
    192.168.157.130     Log node
    192.168.157.131     Log node
    192.168.157.132     Log node
  • Kibana Visual Management

    Node                   Description
    192.168.157.122:5601   Kibana web address

Conventions

  • Download directory: /mysoft
  • Installation order: Elasticsearch -> Kibana -> Logstash + FileBeat
  • Elasticsearch conventions

    Item                     Value
    Installation directory   /usr/local/elasticsearch
    Cluster port             9200
    Cluster name             ES
    Data directory           /usr/local/elasticsearch/data
    Log directory            /usr/local/elasticsearch/logs
    User (not root)          elastic
    Password                 elastic
  • Kibana conventions

    Item                     Value
    Installation directory   /usr/local/kibana
    Port                     5601
    Host address             192.168.157.122
    ES cluster nodes         ["http://192.168.157.120:9200", "http://192.168.157.121:9200"]
    User (not root)          kibana
    Password                 kibana
  • Logstash + FileBeat conventions

    Item                 Value
    Logstash directory   /usr/local/elk/logstash
    FileBeat directory   /usr/local/elk/fileBeat
    Target ES nodes      ["http://192.168.157.120:9200", "http://192.168.157.121:9200"]

Elasticsearch installation

  • Common operations
  1. Enter the directory

    cd /mysoft
  2. Unzip the package

    tar zxvf elasticsearch-7.13.3-linux-x86_64.tar.gz
  3. Move and change the name

    mv elasticsearch-7.13.3 /usr/local/elasticsearch
  4. JDK problem handling

    vi /usr/local/elasticsearch/bin/elasticsearch-env

  5. User and Authorization

    # create the user
    useradd elastic
    # set the user's password
    passwd elastic
    # grant the user ownership of the installation directory
    chown -R elastic:elastic /usr/local/elasticsearch/
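
A note on step 4: Elasticsearch 7.x will not run on JDK 1.8, but the 7.13.3 tarball bundles its own JDK. A common fix, assuming the default bundled-JDK location, is to point elasticsearch-env at it:

```shell
# Lines often added near the top of /usr/local/elasticsearch/bin/elasticsearch-env
# (assumption: the bundled JDK sits in the jdk/ subdirectory of the install):
export ES_JAVA_HOME=/usr/local/elasticsearch/jdk
export PATH=$ES_JAVA_HOME/bin:$PATH
```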
  • Configure ES node -1

    # /usr/local/elasticsearch/config/elasticsearch.yml
    # cluster name
    cluster.name: ES
    # node name
    node.name: node-1
    # elasticsearch data file directory
    path.data: /usr/local/elasticsearch/data
    # elasticsearch log file directory
    path.logs: /usr/local/elasticsearch/logs
    # listen on all IP addresses
    network.host: 0.0.0.0
    # HTTP port number
    http.port: 9200
    # ES cluster discovery addresses
    discovery.seed_hosts: ["192.168.157.120", "192.168.157.121"]
    # initial master-eligible cluster nodes
    cluster.initial_master_nodes: ["node-1", "node-2"]
    #action.destructive_requires_name: true
  • Configure ES node -2

    # /usr/local/elasticsearch/config/elasticsearch.yml (same as node-1 except node.name)
    # cluster name
    cluster.name: ES
    # node name
    node.name: node-2
    # elasticsearch data file directory
    path.data: /usr/local/elasticsearch/data
    # elasticsearch log file directory
    path.logs: /usr/local/elasticsearch/logs
    # listen on all IP addresses
    network.host: 0.0.0.0
    # HTTP port number
    http.port: 9200
    # ES cluster discovery addresses
    discovery.seed_hosts: ["192.168.157.120", "192.168.157.121"]
    # initial master-eligible cluster nodes
    cluster.initial_master_nodes: ["node-1", "node-2"]
    #action.destructive_requires_name: true
  • Start the ES cluster

    # start in the foreground as the elastic user (useful for debugging)
    su elastic -l -c "/usr/local/elasticsearch/bin/elasticsearch"
    # start in the background (daemon mode)
    su elastic -l -c "/usr/local/elasticsearch/bin/elasticsearch -d"
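
Because network.host is 0.0.0.0, Elasticsearch enforces its bootstrap checks on startup; the most common failure is vm.max_map_count being below the required minimum of 262144. A non-destructive check (it reports, but changes nothing):

```shell
# Read the current kernel setting; Elasticsearch requires >= 262144 when
# bound to a non-loopback address.
map_count=$(cat /proc/sys/vm/max_map_count)
if [ "$map_count" -ge 262144 ]; then
    echo "vm.max_map_count=$map_count - OK"
else
    echo "vm.max_map_count=$map_count - too low; run: sysctl -w vm.max_map_count=262144"
fi
```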
  • Check the status

    # check the listening ports (9200 should be present)
    netstat -nltp

  • Single node test

    # visit each cluster node
    http://192.168.157.120:9200/
    http://192.168.157.121:9200/

    Each node should return its JSON banner (node name, cluster name, version), indicating the single node was installed successfully
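
The banner is plain JSON, so it can also be checked from a script instead of a browser. A small sketch that pulls the cluster name out of a sample banner (the JSON here is a trimmed sample; a live node would be queried with curl):

```shell
# Trimmed sample of the JSON banner a node serves at http://<node>:9200/
response='{"name":"node-1","cluster_name":"ES","version":{"number":"7.13.3"}}'
# Extract the cluster_name field with sed (no jq needed)
cluster=$(echo "$response" | sed -n 's/.*"cluster_name":"\([^"]*\)".*/\1/p')
echo "$cluster"   # prints: ES
```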

  • Cluster Node Testing

    # check the cluster node status
    http://192.168.157.120:9200/_cat/nodes?v
    # check the cluster health
    http://192.168.157.120:9200/_cluster/health

    If both nodes are listed and the cluster health is green, the cluster was configured successfully
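
_cluster/health returns JSON as well; for this two-node layout you want status green and number_of_nodes 2. A sketch against a sample response (sample assumed; a live cluster would be queried with curl):

```shell
# Sample _cluster/health response for the two-node cluster
health='{"cluster_name":"ES","status":"green","number_of_nodes":2}'
# Pull out the status and node count with sed
status=$(echo "$health" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
nodes=$(echo "$health" | sed -n 's/.*"number_of_nodes":\([0-9]*\).*/\1/p')
echo "status=$status nodes=$nodes"   # prints: status=green nodes=2
```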


Take a break, go out…