Part Two: Getting Started with Installation

1. Elasticsearch installation and simple configuration

  1. It is very easy to set up an Elasticsearch environment on a PC
  2. Download the Elasticsearch installation package from elasticsearch.cn/download/
  3. Elasticsearch also has an official Docker image, so we can easily start it in Docker

2. Elasticsearch directory structure

| Directory | Configuration | Description |
| --- | --- | --- |
| bin | | Scripts, including those to start Elasticsearch, install plugins, view running statistics, etc. |
| config | elasticsearch.yml | Cluster configuration file; user- and role-based configuration |
| jdk | | Bundled Java runtime environment |
| data | path.data | Data files |
| lib | | Java class libraries |
| logs | path.logs | Log files |
| modules | | Contains all ES modules |
| plugins | | Contains all installed plugins |

2.1 JVM configuration

  1. Modify config/jvm.options; the 7.1 release defaults the heap to 1 GB
  2. Set Xmx and Xms to the same value
  3. Xmx should not exceed 50% of the machine's memory
  4. Do not exceed 30 GB – [www.elastic.co/blog/a-heap…]
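For example, following those rules on a machine with 16 GB of RAM, config/jvm.options would contain something like the following (the 8 GB figure is illustrative, not from the original text):

```
# Heap size: Xms and Xmx identical, at most 50% of RAM, never above 30 GB
-Xms8g
-Xmx8g
```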

3. Running Elasticsearch

3.1 Startup problems

```
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
```

  1. The maximum number of open files per process is too small. Run the following two commands to view the current limits:
```shell
ulimit -Hn
ulimit -Sn
```
  2. Modify /etc/security/limits.conf and add the configuration below; it takes effect after the user logs in again:
```
*               soft    nofile          65536
*               hard    nofile          65536
```

```
max number of threads [3818] for user [es] is too low, increase to at least [4096]
```

  1. The same kind of problem: the maximum number of threads is too low. Modify /etc/security/limits.conf (the same file as in problem 1) and add:
```
*               soft    nproc           4096
*               hard    nproc           4096
```
  2. You can view the current limits with:
```shell
ulimit -Hu
ulimit -Su
```

```
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
```

  1. Add vm.max_map_count=262144 to the /etc/sysctl.conf file:
```shell
vi /etc/sysctl.conf
```
  2. Run sysctl -p for the change to take effect:
```shell
sysctl -p
```

3.2 Startup

```shell
/opt/module/elasticsearch-7.1.0/bin/elasticsearch
```
  • Open your browser and visit http://192.168.37.130:9200/

4. Installing Elasticsearch plugins (plugins can be used, for example, to implement security policies that protect stored data)

4.1 Viewing the plugins currently installed in Elasticsearch

```shell
[root@hadoop101 elasticsearch-7.1.0]# bin/elasticsearch-plugin list
```

4.2 Installing analysis-icu (analysis-icu is an internationalized analysis plugin)

```shell
[root@hadoop101 elasticsearch-7.1.0]# bin/elasticsearch-plugin install analysis-icu
```

4.3 Viewing the installed plug-ins

http://192.168.37.130:9200/_cat/plugins

5. Running multiple Elasticsearch instances on a development machine

  • One of Elasticsearch's great features is that it runs in a distributed fashion: you can run multiple Elasticsearch instances across multiple machines, which together form a large cluster
  • We will now run it locally in multi-instance mode to see how this works

5.1 Modifying config/elasticsearch.yml

```yaml
bootstrap.memory_lock: false
discovery.seed_hosts: ["192.168.2.101:9301","192.168.2.101:9302","192.168.2.101:9303"]
cluster.initial_master_nodes: ["node1","node2","node3"]
http.cors.enabled: true        # allow cross-origin requests
http.cors.allow-origin: "*"
```

5.2 Modifying config/jvm.options

  • Because I am running in a virtual machine without much memory, turn the JVM heap down:
```
-Xms512M
-Xmx512M
```

5.3 Creating the data and log directories

```shell
cd data; mkdir data1 data2 data3
cd ../logs; mkdir logs1 logs2 logs3
```
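Equivalently, the directories can be created in one step from the Elasticsearch home directory with mkdir -p, which avoids depending on the current directory after the first cd (a small sketch, not from the original text):

```shell
# Create the per-node data and log directories relative to the ES home directory
mkdir -p data/data1 data/data2 data/data3
mkdir -p logs/logs1 logs/logs2 logs/logs3
ls data logs
```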

5.4 Startup

```shell
bin/elasticsearch -E node.name=node1 -E cluster.name=geektime -E path.data=data/data1 -E path.logs=logs/logs1 -E http.port=9201 -E transport.tcp.port=9301 -E node.master=true -E node.data=true
bin/elasticsearch -E node.name=node2 -E cluster.name=geektime -E path.data=data/data2 -E path.logs=logs/logs2 -E http.port=9202 -E transport.tcp.port=9302 -E node.master=true -E node.data=false
bin/elasticsearch -E node.name=node3 -E cluster.name=geektime -E path.data=data/data3 -E path.logs=logs/logs3 -E http.port=9203 -E transport.tcp.port=9303 -E node.master=true -E node.data=false
```
  • Visit http://192.168.37.130:9201/_cat/nodes to see the nodes in the cluster
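The three launch commands in 5.4 differ only in their node index, so they can be generated with a small loop. The sketch below only prints the commands instead of executing them, and omits the per-node node.master/node.data flags, which vary as shown in the text above:

```shell
# Sketch: generate the per-node launch commands (printed, not executed)
cmds=""
for i in 1 2 3; do
  cmds="$cmds bin/elasticsearch -E node.name=node$i -E cluster.name=geektime -E path.data=data/data$i -E path.logs=logs/logs$i -E http.port=920$i -E transport.tcp.port=930$i
"
done
printf '%s' "$cmds"
```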

6. Kibana installation and a quick look at the interface

6.1 Download (everything is on the domestic community site)

  • elasticsearch.cn/download/

6.2 Configuration

  • Modify config/kibana.yml
```yaml
server.host: "0.0.0.0"                               # allow access from any host
elasticsearch.hosts: ["http://192.168.37.130:9201"]  # es address
```

6.3 Startup

```shell
[els@hadoop101 kibana-7.1.0-linux-x86_64]$ bin/kibana
```

6.4 Setting the language to Chinese

  • Add a line of configuration to the kibana.yml configuration file: i18n.locale: "zh-CN"

6.5 Access and a brief look at the interface

  • Visit: http://192.168.37.130:5601/

7. Running Elasticsearch, Kibana, and Cerebro in Docker containers

7.1 Installing Docker and Docker Compose

Install them yourself; search online for instructions

7.2 Docker Compose Commands

  • docker-compose up — start the services
  • docker-compose down — stop and remove the containers
  • docker-compose down -v — stop and remove the containers along with their volumes
  • docker stop / docker rm containerID — stop or remove a single container

7.3 Running docker-compose to build a local development environment, get an intuitive feel for Elasticsearch's distributed nature, and integrate Cerebro to view the cluster status

7.3.1 The docker-compose.yaml file

```yaml
version: '2.2'
services:
  cerebro:
    image: lmenezes/cerebro:0.8.3
    container_name: cerebro
    ports:
      - "9000:9000"
    command:
      - -Dhosts.0.host=http://elasticsearch:9200
    networks:
      - es7net
  kibana:
    image: docker.elastic.co/kibana/kibana:7.1.0
    container_name: kibana7
    environment:
      - I18N_LOCALE=zh-CN
      - XPACK_GRAPH_ENABLED=true
      - TIMELION_ENABLED=true
      - XPACK_MONITORING_COLLECTION_ENABLED="true"
    ports:
      - "5601:5601"
    networks:
      - es7net
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    container_name: es7_01
    environment:
      - cluster.name=geektime
      - node.name=es7_01
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=es7_01,es7_02
      - cluster.initial_master_nodes=es7_01,es7_02
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es7data1:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    networks:
      - es7net
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    container_name: es7_02
    environment:
      - cluster.name=geektime
      - node.name=es7_02
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=es7_01,es7_02
      - cluster.initial_master_nodes=es7_01,es7_02
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es7data2:/usr/share/elasticsearch/data
    networks:
      - es7net

volumes:
  es7data1:
    driver: local
  es7data2:
    driver: local

networks:
  es7net:
    driver: bridge
```

7.3.2 Starting and Running

  • Go to the directory containing the docker-compose.yaml file and run the command below
  • My directory here is in ‘ ‘
```shell
docker-compose up
```

8. Logstash installation and data import

8.1 Download and installation

  • When downloading, make sure the Logstash version matches our Elasticsearch version
  • elasticsearch.cn/download/

8.2 Downloading the Logstash data file

  • D:\WorkSpace\rickying-geektime-ELK-master
  • Upload the data to /opt/module/logstash_data on the Linux machine

8.3 Modifying the logstash.conf configuration file

```
input {
  file {
    path => "/opt/module/logstash-7.1.0/bin/movies.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["id","content","genre"]
  }
  mutate {
    split => { "genre" => "|" }
    remove_field => ["path", "host","@timestamp","message"]
  }
  mutate {
    split => ["content", "("]
    add_field => { "title" => "%{[content][0]}" }
    add_field => { "year" => "%{[content][1]}" }
  }
  mutate {
    convert => { "year" => "integer" }
    strip => ["title"]
    remove_field => ["path", "host","@timestamp","message","content"]
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "movies"
    document_id => "%{id}"
  }
  stdout {}
}
```
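To see what the filter chain does to one record, here is a plain-shell sketch of the same transformation on a hypothetical movies.csv row (the sample data is illustrative; Logstash itself does not use this code):

```shell
# Hypothetical row in the movies.csv format: id,content,genre
line='1,Toy Story (1995),Adventure|Comedy'

# csv filter: split on "," into id, content, genre
id=${line%%,*}
rest=${line#*,}
content=${rest%,*}
genre=${rest##*,}

# mutate split on "(": content -> title part + year part
title=${content%%(*}
title=${title% }               # strip => ["title"]
year=${content#*(}
year=${year%)}                 # convert => year becomes the bare number

# mutate split: genre "Adventure|Comedy" -> separate values
genres=$(printf '%s' "$genre" | tr '|' ' ')

echo "id=$id title=$title year=$year genres=$genres"
```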

8.4 Running Logstash with the specified configuration file

```shell
./logstash -f logstash.conf
```
