This project uses ElasticSearch, so this series of articles will cover ElasticSearch from the basics and then go into more depth.

I previously wrote a series of articles about MySQL; now I am starting a series on ElasticSearch.

In this article we install ElasticSearch, Kibana, and Logstash, run ElasticSearch and Kibana as daemons, and use Logstash to import demo data into ElasticSearch.

1. Install ElasticSearch

Let's create an ElasticSearch environment from scratch, starting with installation.

The ElasticSearch official website has changed its layout repeatedly, so many people can't find the download page.

After entering the site, click the circled location; do not click the direct-download link on the left.

You will then see the historical releases; download the version you need. This article uses version 7.1.0.

If you are a Linux user, you can open developer mode in the browser, copy the download address, and fetch it directly with wget + address.

https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-linux-x86_64.tar.gz

As you can see, ElasticSearch has been downloaded and unzipped

Do not start ElasticSearch as root for security reasons.

Add user

Run the useradd es command to add the user es.

As root, grant the es user ownership of the ElasticSearch directory:

chown -R es elasticsearch-7.1.0

Start ElasticSearch

To start ElasticSearch, switch to the es user and run ./bin/elasticsearch

A keystore initialization problem occurred during startup:

Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: java.nio.file.AccessDeniedException: /usr/local/elasticsearch/elasticsearch-7.1.0/config/elasticsearch.keystore
Likely root cause: java.nio.file.AccessDeniedException: /usr/local/elasticsearch/elasticsearch-7.1.0/config/elasticsearch.keystore
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
	at java.nio.file.Files.newByteChannel(Files.java:361)
	at java.nio.file.Files.newByteChannel(Files.java:407)
	at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:77)
	at org.elasticsearch.common.settings.KeyStoreWrapper.load(KeyStoreWrapper.java:206)
	at org.elasticsearch.bootstrap.Bootstrap.loadSecureSettings(Bootstrap.java:224)
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:289)
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159)
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150)
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)
	at org.elasticsearch.cli.Command.main(Command.java:90)
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115)
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
Refer to the log for complete error details.

This version's security features require an elasticsearch.keystore file, so create it with the following command:

./bin/elasticsearch-keystore create

Check whether the startup is successful

If you can open http://127.0.0.1:9200 in a browser and see the following response, the installation succeeded

In this article, ElasticSearch runs inside a CentOS VM.
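You can also check from the shell. A minimal sketch, assuming curl is installed; it prints a message either way rather than failing:

```shell
# probe the ElasticSearch HTTP port; safe to run whether or not ES is up
if curl -s --max-time 2 http://127.0.0.1:9200 >/dev/null; then
  echo "elasticsearch is reachable"
else
  echo "elasticsearch is not reachable"
fi
```

A reachable node answers with a small JSON document containing the cluster name and version.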

2. Configure external network access

Configuring external access is simple; following the steps below takes about three minutes.

Problem 1

max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

Edit /etc/security/limits.conf and append the following:

es soft nofile 65536

es hard nofile 65536

Problem 2

max number of threads [3782] for user [es] is too low, increase to at least [4096]

The maximum number of threads available to the es user is too low.

Modify /etc/security/limits.conf

Add the following two lines at the end of the file:

es soft nproc 4096

es hard nproc 4096
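Once the new limits are in effect, you can verify them with ulimit; a quick sketch (the expected values assume the settings above took effect):

```shell
# per-process limits for the current user
ulimit -n   # max open file descriptors; should print 65536 after the change
ulimit -u   # max user processes; should print at least 4096
```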

Problem 3

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Add a line at the end of the /etc/sysctl.conf file

vm.max_map_count=262144
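This change can be applied without a reboot via standard sysctl usage; a sketch (the sysctl -p step needs root, so it is shown as a comment):

```shell
# as root, reload settings from /etc/sysctl.conf:
#   sysctl -p
# then read back the live value; it should print 262144 after the change
cat /proc/sys/vm/max_map_count
```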

Problem 4

the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

This means you must configure at least one of discovery.seed_hosts, discovery.seed_providers, or cluster.initial_master_nodes.

Modify the elasticsearch.yml configuration file in the config directory of ElasticSearch:

node.name: node-1
cluster.initial_master_nodes: ["node-1"]
network.host: 0.0.0.0

Problem 5

After the steps above, ElasticSearch may still be unreachable from outside; check whether the firewall is disabled.

# Stop the firewall
systemctl stop firewalld.service

# Keep it disabled after reboot
systemctl disable firewalld.service

Note that you need to restart the machine after making the changes for Problems 1 and 2. Remember!

Access ElasticSearch from the external network

The IP address of the VM is 192.168.253.129

Next, on the host, try to access the VM IP on port 9200 to see if it is reachable.

3. Install Kibana

Download Kibana in the same way as ElasticSearch earlier.

https://artifacts.elastic.co/downloads/kibana/kibana-7.1.0-linux-x86_64.tar.gz

The Kibana version needs to match ElasticSearch, so install Kibana 7.1.0.

Configure Kibana parameters

Append the following to the end of the kibana.yml configuration file:

i18n.locale: "zh-CN"

server.host: "0.0.0.0"

elasticsearch.hosts: ["http://localhost:9200"]

Start Kibana

Go to the Kibana directory and run ./bin/kibana

It failed to start with the following error:

[WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] master not discovered or elected yet, an election requires a node with id [rEq_ExihQ927BnwBy3Iz7A], have discovered [] which is not a quorum; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304] from hosts providers and [{node-1}{DtZPMDK4S3qaSQF6mRhRqw}{lBAhaMvDTKmGkysihkwAqA}{192.168.122.130}{192.168.122.130:9300}{ml.machine_memory=1907744768, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 412, last-accepted version 16 in term 1

Solution:

Check whether stale node data from a previous run is left in the data directory of ElasticSearch, and remove it if so.

How to stop a running Kibana instance

Find the process listening on port 5601 by running netstat -anp | grep 5601; the last column shows the process ID. Then run kill -9 followed by that process ID.

Then restart it.
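The find-and-kill pattern can be sketched with a harmless stand-in process (a sleep plays the role of Kibana here; in practice the pid comes from the netstat output above):

```shell
# start a throwaway background process and kill it by pid
sleep 30 &
pid=$!
kill -9 "$pid"
wait "$pid" 2>/dev/null || true
echo "killed process $pid"
```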

If you want Kibana's interface in Chinese

The parameter was already given above: i18n.locale: "zh-CN"

When you open Kibana, you'll see this page

Click through the toolbar to reach the ElasticSearch data, which is pretty nice.

4. A quick tour of the Kibana interface

After opening Kibana, you can load the sample data sets.

Three sample data sets are available: web logs, e-commerce orders, and flight data.

Then click Dashboards and you'll see the three sample data sets you just added.

Dev Tools is a very important tool in Kibana for working with ElasticSearch; we will use it a lot later.

5. Run ElasticSearch and Kibana as daemons

When you start ElasticSearch and Kibana as above, they run in the foreground of the current terminal; closing the terminal kills them.

The next step is to use nohup to run them in the background as daemons. There is plenty of material about nohup online, so don't be confused by it.

Install nohup

If nohup is not present, install it:

yum install -y coreutils

Normally it will be installed in

/usr/bin/nohup

Run which nohup to confirm the installation location.

To make nohup available globally, the simplest way is to add it to your PATH:

vi ~/.bash_profile

Add the following lines:

PATH=$PATH:$HOME/bin:/usr/bin
export PATH

Then save the file and reload it so the change takes effect:

source ~/.bash_profile

Finally, verify the installation; if version information appears, nohup is ready:

nohup --version

Start Kibana

Running nohup ./bin/kibana starts Kibana, but the following error output appears, and Kibana still closes when you press Ctrl+C.

The error output needs to be redirected into the Linux "void", /dev/null:

nohup ./bin/kibana > /dev/null 2>&1 &
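For reference on what the extra symbols do: > /dev/null discards standard output, 2>&1 sends standard error to the same place, and the trailing & runs the process in the background. A toy demo of the same pattern:

```shell
# run a short-lived command exactly the way Kibana is started above
nohup sh -c 'echo output; echo error >&2' > /dev/null 2>&1 &
pid=$!
echo "background pid: $pid"
wait "$pid" 2>/dev/null || true
echo "finished"
```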

The number printed is Kibana's process ID. If you forget it, you can run

ps -ef | grep node

and look for the Kibana process there, since Kibana runs on Node.js.

Start ElasticSearch

Start it the same way as Kibana:

nohup ./bin/elasticsearch > /dev/null 2>&1 &

6. Install Logstash and import the demo data into ElasticSearch

Download address

wget https://artifacts.elastic.co/downloads/logstash/logstash-7.1.0.zip

The demo data comes from MovieLens, a movie-recommendation data set, so you don't need to download anything special; you just need to get hold of the demo files.

That kind of data can’t be shared, so…

Unpack it:

unzip logstash-7.1.0.zip

Start Logstash

./bin/logstash -f logstash.conf

You'll need to grab the demo data and put the logstash.conf and movies.csv files in the top level of the Logstash directory.

The field circled in the image below is the path where movies.csv is stored. Change this path to your own.
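For reference, a minimal sketch of what such a logstash.conf can look like. The file path, index name, and column names are assumptions based on the standard MovieLens movies.csv layout; adjust them to your own files:

```conf
input {
  file {
    # change this to wherever your movies.csv lives
    path => "/usr/local/logstash-7.1.0/movies.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["id", "content", "genre"]
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "movies"
  }
  stdout {}
}
```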

When you execute the Logstash start command, you run into the first problem:

logstash could not find java; set JAVA_HOME or ensure java is in PATH

First you need to make sure you have Java installed

Verify the installation
java -version

The picture below shows a successful installation. If Java is not installed, don't worry; here is the detailed process.

Install Java 8

Download link:

https://download.oracle.com/otn/java/jdk/8u311-b11/4d5417147a92418ea8b615e228bb6935/jdk-8u311-linux-x64.tar.gz

If you are not registered with Oracle, you will not be able to download the java8 installation package, so you need an Oracle account first.

First download the archive to the host, then use the handy scp command to transfer the downloaded Java package to the server; it's quite fast.

scp jdk-8u311-linux-x64.tar.gz root@192.168.17.128:/

Create a Java directory under /usr/local

cd /usr/local

mkdir java

Move the Java archive to /usr/local/java

mv jdk-8u311-linux-x64.tar.gz /usr/local/java/

Unpack it:

tar -zxf jdk-8u311-linux-x64.tar.gz

The archive extracts to jdk1.8.0_311; rename it to match the JAVA_HOME configured below:

mv jdk1.8.0_311 jdk8

Configure environment variables

vim /etc/profile

Add the following at the end of the file (if your directory layout matches Kaka's, you can use it unchanged):

export JAVA_HOME=/usr/local/java/jdk8

export PATH=$PATH:$JAVA_HOME/bin

export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar

Apply the changes:

source /etc/profile

Finally, verify the installation:

java -version

Run ./bin/logstash -f logstash.conf again.

The same error still occurs, even though the Java environment variables have been set.

Open logstash-7.1.0/bin/logstash.lib.sh in vim, and you'll find the error is raised there: the JAVACMD variable has no value.

After searching through the file, I simply reassigned the value near the top; pay attention to the circled areas.
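As a sketch of the kind of assignment that fixes it (the JDK path below is the one configured earlier in this article; verify it matches your own system):

```shell
# added near the top of bin/logstash.lib.sh: point JAVACMD at the JDK explicitly
JAVACMD=/usr/local/java/jdk8/bin/java
```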

Save, exit, and execute again

./bin/logstash -f logstash.conf

Now you’re done importing your data into ElasticSearch

Check in Kibana whether a movies index now exists.

7. Install Cerebro

Download:

wget https://github.com/lmenezes/cerebro/releases/download/v0.9.4/cerebro-0.9.4.tgz

Unpack it:

tar -zxf cerebro-0.9.4.tgz

In the configuration file, you only need to uncomment a host entry and set the IP address to your own:

hosts = [
  #{
  #  host = "http://192.168.17.128:9100"
  #  name = "Localhost cluster"
  #  headers-whitelist = [ "x-proxy-user", "x-proxy-roles", "X-Forwarded-For" ]
  #}
  # Example of host with authentication
  {
    host = "http://192.168.17.128:9200"
  #  name = "Secured Cluster"
  #  auth = {
  #    username = "admin"
  #    password = "admin"
  #  }
  }
]

Start it:

cd cerebro-0.9.4

./bin/cerebro

Then open the server IP address on port 9000 in a browser.

8. Summary

ElasticSearch and Kibana have been installed and started successfully, and ElasticSearch has been configured to be accessible from the external network.

You now have a brief understanding of the Kibana interface; most of the later exercises will be done in Kibana.

We also set up ElasticSearch and Kibana to run as daemon processes.

MySQL series general directory

Persistence in learning, writing, and sharing is the belief Kaka has upheld since the start of his career. I hope these articles on the vast Internet can help you a little. I'm Kaka; see you next time.