Elasticsearch 7.13.1: deploying Elasticsearch in a single-node or cluster environment


Files to prepare

jdk-8u191-linux-x64.tar.gz –> JDK
elasticsearch-7.13.1-linux-x86_64.tar.gz –> Elasticsearch installation package
elasticsearch-analysis-ik-7.13.1.zip –> IK Chinese analyzer plugin
kibana-7.13.1-linux-x86_64.tar.gz –> Kibana

jdk-8u191-linux-x64.tar.gz can be downloaded from Oracle's official website: www.oracle.com/java/techno…

Elasticsearch IK plugin: github.com/medcl/elast… Elasticsearch installation package: www.elastic.co/cn/download… Kibana: www.elastic.co/cn/download…

Cluster Environment Setup

IP Hostname
192.168.56.101 esnode1
192.168.56.102 esnode2
192.168.56.103 esnode3

Unless otherwise noted, all of the following operations are performed as user testuser.

Install JDK (on all 3 machines, as root)

tar -xvf jdk-8u191-linux-x64.tar.gz -C /usr/local

cat >> /etc/profile << EOF
export JAVA_HOME=/usr/local/jdk1.8.0_191
export PATH=\$JAVA_HOME/bin:\$PATH
export CLASSPATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar
EOF
source /etc/profile
## Test the JDK installation
java -version

Configure Linux parameters (on all three machines, as root)

If these parameters are not configured, Elasticsearch fails to start.

####
cat >> /etc/security/limits.conf << EOF
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
EOF

####
## CentOS 7's /etc/security/limits.d/20-nproc.conf sets "* soft nproc 4096",
## which would override the limit above, so raise it as well
sed -i 's/4096/65536/' /etc/security/limits.d/20-nproc.conf

####
cat >> /etc/sysctl.conf << EOF
vm.max_map_count=655360
EOF
sysctl -p

Then close Xshell or SecureCRT and reconnect for the changes to take effect. Check with ulimit -n: if it prints 65536, the change succeeded. You can also simply restart the server.

#### If the limits still do not take effect after a restart, the last resort is:
vim /etc/systemd/system.conf
DefaultLimitNOFILE=65536
DefaultLimitNOPROC=65536

Change these two settings and restart the server again.
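Once reconnected, the limits can be double-checked with a short script. This is a sketch: the check_limit helper is hypothetical, 65536 matches the limits set above, and 262144 is the minimum vm.max_map_count Elasticsearch requires.

```shell
# check_limit: report whether a current value meets a required minimum (hypothetical helper)
check_limit() {
  name="$1"; current="$2"; required="$3"
  if [ "$current" -ge "$required" ]; then
    echo "$name OK ($current)"
  else
    echo "$name too low ($current < $required)"
  fi
}

# Compare the live values against what Elasticsearch needs
check_limit "nofile"           "$(ulimit -n)" 65536
check_limit "vm.max_map_count" "$(cat /proc/sys/vm/max_map_count 2>/dev/null || echo 0)" 262144
```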

Install Elasticsearch (on all 3 machines)

The following uses /home/testuser/ as an example

## Go to the /home/testuser/ directory and extract
tar -xvf /home/testuser/elasticsearch-7.13.1-linux-x86_64.tar.gz -C /home/testuser/

## As root, give user testuser ownership of the elasticsearch directory
cd /home/testuser/
chown -R testuser:testuser /home/testuser/elasticsearch-7.13.1/

## Create the data storage directory
cd /home/testuser/elasticsearch-7.13.1/
mkdir data

Modify the configuration

/home/testuser/elasticsearch-7.13.1/config/elasticsearch.yml

Configuration notes:

cluster.name: testuser-es —->>> Cluster name; nodes with the same cluster name form one cluster
node.name: node-1 —->>> Node name; must be unique within the cluster
node.master: true —->>> Whether this node can be elected master, yes: true, no: false
node.data: true —->>> Whether this node stores data, yes: true, no: false
path.data —->>> Location where index data is stored
path.logs —->>> Location where log files are stored
bootstrap.memory_lock —->>> Whether to lock physical memory, yes: true, no: false
network.host —->>> Listening address used to access ES
network.publish_host —->>> Can be set to the intranet IP address
http.port —->>> External HTTP port of ES, default 9200
discovery.seed_hosts —->>> New in ES 7.x; addresses of the candidate master nodes
cluster.initial_master_nodes —->>> New in ES 7.x; required when bootstrapping a new cluster to elect a master
http.cors.enabled —->>> Whether cross-origin requests are allowed, yes: true; required when using the HEAD plugin
http.cors.allow-origin —->>> "*" allows all domains

1. Modify the configuration [192.168.56.101].

cat > /home/testuser/elasticsearch-7.13.1/config/elasticsearch.yml << EOF
cluster.name: testuser-es
node.name: node-1
node.master: true
node.data: true
path.data: /home/testuser/elasticsearch-7.13.1/data
path.logs: /home/testuser/elasticsearch-7.13.1/logs
network.host: 192.168.56.101
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.seed_hosts: ["192.168.56.101","192.168.56.102","192.168.56.103"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
EOF

2. Modify the configuration [192.168.56.102].

cat > /home/testuser/elasticsearch-7.13.1/config/elasticsearch.yml << EOF
cluster.name: testuser-es
node.name: node-2
node.master: true
node.data: true
path.data: /home/testuser/elasticsearch-7.13.1/data
path.logs: /home/testuser/elasticsearch-7.13.1/logs
network.host: 192.168.56.102
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.seed_hosts: ["192.168.56.101","192.168.56.102","192.168.56.103"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
EOF

3. Modify the configuration [192.168.56.103].

cat > /home/testuser/elasticsearch-7.13.1/config/elasticsearch.yml << EOF
cluster.name: testuser-es
node.name: node-3
node.master: true
node.data: true
path.data: /home/testuser/elasticsearch-7.13.1/data
path.logs: /home/testuser/elasticsearch-7.13.1/logs
network.host: 192.168.56.103
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.seed_hosts: ["192.168.56.101","192.168.56.102","192.168.56.103"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
EOF
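The three configuration files differ only in node.name and network.host, so they can also be rendered from a single template. A sketch, where the write_es_config helper and its arguments are illustrative, not part of any official tooling:

```shell
# write_es_config: render elasticsearch.yml for one node (hypothetical helper)
# $1 = node name, $2 = node IP, $3 = output path
write_es_config() {
  NODE_NAME="$1"; NODE_IP="$2"; OUT="$3"
  cat > "$OUT" << EOF
cluster.name: testuser-es
node.name: ${NODE_NAME}
node.master: true
node.data: true
path.data: /home/testuser/elasticsearch-7.13.1/data
path.logs: /home/testuser/elasticsearch-7.13.1/logs
network.host: ${NODE_IP}
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.seed_hosts: ["192.168.56.101","192.168.56.102","192.168.56.103"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
EOF
}

# Example: render the config for node-1 into a scratch file
write_es_config node-1 192.168.56.101 /tmp/elasticsearch-node-1.yml
```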

Install the IK Chinese analyzer (on all three machines, optional)

Unzip elasticsearch-analysis-ik-7.13.1.zip into plugins/ik/ (manually, or with the commands below)

mkdir /home/testuser/elasticsearch-7.13.1/plugins/ik
unzip -o -d /home/testuser/elasticsearch-7.13.1/plugins/ik elasticsearch-analysis-ik-7.13.1.zip
## If you see "bash: unzip: command not found", install it with: yum install unzip -y
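After the cluster is started (see the steps below), the plugin can be verified with the _analyze API; ik_max_word and ik_smart are the two analyzers the IK plugin registers. A sketch, where the extract_tokens helper is hypothetical and the host should be one of your nodes:

```shell
# extract_tokens: pull the "token" values out of an _analyze JSON response (hypothetical helper)
extract_tokens() {
  grep -o '"token"[ ]*:[ ]*"[^"]*"' | sed 's/.*"token"[ ]*:[ ]*"\([^"]*\)"/\1/'
}

# Ask a live node to tokenize a Chinese sentence with the IK analyzer
curl -s -X POST "http://192.168.56.101:9200/_analyze" \
  -H 'Content-Type: application/json' \
  -d '{"analyzer": "ik_max_word", "text": "中华人民共和国"}' | extract_tokens
```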

Open firewall ports (on all three machines)

If the firewall is enabled, open ports 9200 and 9300 so the services are reachable; if it is disabled, skip this step.

firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --zone=public --add-port=9300/tcp --permanent
firewall-cmd --reload

So far, the ES cluster of three machines has been set up.

Start Elasticsearch (on all 3 machines, as user testuser)

cd /home/testuser/elasticsearch-7.13.1/bin/
./elasticsearch -d

Viewing Cluster Status

curl http://192.168.56.101:9200/_cat/nodes
curl http://192.168.56.102:9200/_cat/nodes
curl http://192.168.56.103:9200/_cat/nodes
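_cat/nodes lists the cluster members; the overall state can also be checked with the _cluster/health API. A sketch (the health_status helper is illustrative) that extracts the status field, which should read green once all three nodes have joined:

```shell
# health_status: pull the "status" value out of a _cluster/health JSON response (hypothetical helper)
health_status() {
  sed -n 's/.*"status"[ ]*:[ ]*"\([a-z]*\)".*/\1/p'
}

# Query any node; the answer is cluster-wide
curl -s http://192.168.56.101:9200/_cluster/health | health_status
```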

Install Kibana (optional)

You can install Kibana on any one of the servers, or on all three. The following uses 192.168.56.101 as an example.

cd /home/testuser
tar -zxvf kibana-7.13.1-linux-x86_64.tar.gz -C /home/testuser/
mv kibana-7.13.1-linux-x86_64 kibana

## Edit the configuration: vi kibana/config/kibana.yml
server.host: "192.168.56.101"
elasticsearch.hosts: ["http://192.168.56.101:9200"]

## Start Kibana
/home/testuser/kibana/bin/kibana &

## Open port 5601 if the firewall is enabled
firewall-cmd --zone=public --add-port=5601/tcp --permanent
firewall-cmd --reload

## Dev Tools console: http://192.168.56.101:5601/app/dev_tools#/console

Single-Node Environment Setup

For a single-node deployment, run the same steps as the cluster setup but on one machine only (192.168.56.101 here), and reduce discovery.seed_hosts and cluster.initial_master_nodes to a single entry each. All other steps are identical to the cluster deployment.

cat > /home/testuser/elasticsearch-7.13.1/config/elasticsearch.yml << EOF
cluster.name: testuser-es
node.name: node-1
node.master: true
node.data: true
path.data: /home/testuser/elasticsearch-7.13.1/data
path.logs: /home/testuser/elasticsearch-7.13.1/logs
network.host: 192.168.56.101
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.seed_hosts: ["192.168.56.101"]
cluster.initial_master_nodes: ["node-1"]
EOF

Create, Read, Update, Delete (CRUD) Tests

1. Create an index. Since ES 7.x the default is 1 shard and 1 replica; here we set 3 shards and 1 replica.

PUT /indextest1

{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
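The console requests in this section can also be issued with plain curl if Kibana is not installed. A sketch (adjust the node address to your own; note the Content-Type header is required):

```shell
# Create the index with 3 primary shards and 1 replica, same as the console request above
curl -s -X PUT "http://192.168.56.101:9200/indextest1" \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"number_of_shards": 3, "number_of_replicas": 1}}'

# Read the settings back
curl -s "http://192.168.56.101:9200/indextest1/_settings"
```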

2. Obtain the information of index just added

GET /indextest1/_settings

3. Add a document with PUT; the ID must be specified

PUT /indextest1/_doc/1

{
  "first_name": "Jane",
  "last_name": "Smith",
  "age": 32,
  "about": "i like to collect rock albums",
  "interests": ["music"]
}

4. Add a document with POST; if no ID is specified, ES generates one

POST /indextest1/_doc

{
  "first_name": "Douglas",
  "last_name": "Fir",
  "age": 23,
  "about": "i like to build cabinets",
  "interests": ["forestry"]
}
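The POST response carries the generated _id, which you need for the by-ID queries that follow. A small sketch of capturing it from the command line (the extract_id helper is hypothetical):

```shell
# extract_id: pull the "_id" value out of an index-API JSON response (hypothetical helper)
extract_id() {
  sed -n 's/.*"_id"[ ]*:[ ]*"\([^"]*\)".*/\1/p'
}

# Index a document and capture the ID Elasticsearch generated for it
DOC_ID=$(curl -s -X POST "http://192.168.56.101:9200/indextest1/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"first_name": "Douglas", "last_name": "Fir", "age": 23}' | extract_id)
echo "$DOC_ID"
```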

5. Query information by ID

GET /indextest1/_doc/xklYGHoB1iFjYcRaTFye

6. Return only specified fields by ID

GET /indextest1/_doc/1?_source=first_name,age

7. Update a document. PUT replaces the entire previous document with a new one

PUT /indextest1/_doc/1

{
  "first_name": "Jane",
  "last_name": "Smith",
  "age": 36,
  "about": "i like to collect rock albums",
  "interests": ["music"]
}

8. Partially update a document. Use the _update API with a "doc" body so only the listed fields change (a plain POST to /_doc/1 would, like PUT, replace the whole document)

POST /indextest1/_update/1

{
  "doc": {
    "age": 18,
    "last_name": "hua1"
  }
}

9. Delete a document

DELETE /indextest1/_doc/1

10. Delete the index

DELETE /indextest1