This article is translated from: www.elastic.co/guide/en/el…

This article is an introduction to Elasticsearch. You will learn how to install Elasticsearch in different environments.

Installation

At least Java 8 is required to run Elasticsearch; the Oracle JDK version 1.8.0_131 is officially recommended. You can check your Java version and JAVA_HOME as follows:

java -version
echo $JAVA_HOME

Once Java is set up, we can download and run Elasticsearch. The binaries are available from www.elastic.co/downloads, along with all past releases. For each release you can choose between a zip or tar archive, a DEB or RPM package, or a Windows MSI installation package.
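
For example, on Linux you may prefer your distribution's package manager over an archive. A minimal sketch, assuming you have already downloaded the 5.6.1 DEB or RPM package from the page above (the file names here are illustrative):

# Debian/Ubuntu (DEB package)
sudo dpkg -i elasticsearch-5.6.1.deb

# CentOS/RHEL (RPM package)
sudo rpm -i elasticsearch-5.6.1.rpm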

An example of installing with tar

For simplicity, we use tar files. Download Elasticsearch 5.6.1 tar as follows:

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.1.tar.gz

Then extract:

tar -xvf elasticsearch-5.6.1.tar.gz
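
If you want to see what is inside the archive before extracting it, tar can list its contents first. This is an optional step, not part of the original instructions:

tar -tzf elasticsearch-5.6.1.tar.gz | head    # list the first few entries without extracting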

Extracting the archive creates a number of files and folders in the current directory. Next, we go into the bin directory as follows:

cd elasticsearch-5.6.1/bin

Now we are ready to start our node and single cluster:

./elasticsearch
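
This runs Elasticsearch in the foreground. If you would rather run it in the background, the same binary can be started as a daemon; a minimal sketch (the pid file name here is just an example):

./elasticsearch -d -p pid    # start as a daemon and record the process ID in the file "pid"
kill `cat pid`               # stop the daemon again later
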
An example of installing with the Windows MSI installation package

For Windows users, the MSI installation package is officially recommended. The package includes a graphical user interface (GUI) that guides you through the installation process. First, download the Elasticsearch 5.6.1 MSI from artifacts.elastic.co/downloads/e… Then double-click the downloaded file to launch the GUI. On the first screen, select the deployment directory:

Then choose whether to install Elasticsearch as a service or to start Elasticsearch manually as needed. To align with the tar example, choose not to install it as a service:

For the configuration, just leave the defaults:

Again, as in the tar example, uncheck all plug-ins so that none are installed:

Click the Install button to install Elasticsearch:

By default, Elasticsearch will be installed in %PROGRAMFILES%\Elastic\Elasticsearch. Navigate to the bin directory there as follows:

With the Command Prompt:

cd %PROGRAMFILES%\Elastic\Elasticsearch\bin

With PowerShell:

cd $env:PROGRAMFILES\Elastic\Elasticsearch\bin

Now we are ready to start our node and single cluster:

.\elasticsearch.exe
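
As an aside, if you would rather not click through the GUI at all, the MSI package can also be installed unattended with the standard Windows Installer tool. A minimal sketch, assuming the 5.6.1 MSI downloaded above (package-specific properties are omitted; see the official MSI documentation for those):

REM quiet (unattended) install with default settings
msiexec.exe /i elasticsearch-5.6.1.msi /qn
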
Successfully running the node

If all goes well with the installation, you should see a bunch of messages like this:

[2016-09-16T14:17:51,251][INFO ][o.e.n.Node               ] [] initializing ...
[2016-09-16T14:17:51,329][INFO ][o.e.e.NodeEnvironment    ] [6-bjhwl] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [317.7gb], net total_space [453.6gb], spins? [no], types [ext4]
[2016-09-16T14:17:51,330][INFO ][o.e.e.NodeEnvironment    ] [6-bjhwl] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-09-16T14:17:51,333][INFO ][o.e.n.Node               ] [6-bjhwl] node name [6-bjhwl]; set [node.name] to override
[2016-09-16T14:17:51,334][INFO ][o.e.n.Node               ] [6-bjhwl] version[5.6.1], pid[21261], build[f5daa16/2016-09-16T09:...346Z], OS[Linux/4.4.0-36-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_60/25.60-b23]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [aggs-matrix-stats]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [ingest-common]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [lang-expression]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [lang-groovy]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [lang-mustache]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [lang-painless]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [percolator]
[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [reindex]
[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [transport-netty3]
[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [transport-netty4]
[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded plugin [...]
[2016-09-16T14:17:53,521][INFO ][o.e.n.Node               ] [6-bjhwl] initialized
[2016-09-16T14:17:53,521][INFO ][o.e.n.Node               ] [6-bjhwl] starting ...
[2016-09-16T14:17:53,671][INFO ][o.e.t.TransportService   ] [6-bjhwl] publish_address {192.168.8.112:9300}, bound_addresses {192.168.8.112:9300}
[2016-09-16T14:17:53,676][WARN ][o.e.b.BootstrapCheck     ] [6-bjhwl] max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
[2016-09-16T14:17:56,731][INFO ][o.e.h.HttpServer         ] [6-bjhwl] publish_address {192.168.8.112:9200}, bound_addresses {[::1]:9200}, {192.168.8.112:9200}
[2016-09-16T14:17:56,732][INFO ][o.e.g.GatewayService     ] [6-bjhwl] recovered [0] indices into cluster_state
[2016-09-16T14:17:56,748][INFO ][o.e.n.Node               ] [6-bjhwl] started

Without going into too much detail, we can see that our node named "6-bjhwl" (which in your case will be a different set of characters) has started and elected itself as master in a single cluster. Don't worry yet about what master means here. The important thing is that we have started one node within one cluster.

As mentioned earlier, we can override cluster or node names. This can be done from the command line when Elasticsearch is started, as follows:

./elasticsearch -Ecluster.name=my_cluster_name -Enode.name=my_node_name
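
Equivalently, these names can be set permanently in the node's configuration file instead of on the command line. A minimal sketch of the relevant lines in config/elasticsearch.yml, using the same example names:

cluster.name: my_cluster_name
node.name: my_node_name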

Also notice the HttpServer line, which contains information about the HTTP address (192.168.8.112) and port (9200) from which our node is reachable. By default, Elasticsearch uses port 9200 to provide access to its REST API. This port is configurable if required.
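
Also worth acting on is the WARN line about vm.max_map_count in the sample output above: Elasticsearch wants at least 262144 memory-mapped areas, while many Linux distributions default to less. A minimal sketch of raising the kernel setting on Linux (add the same setting to /etc/sysctl.conf to make it survive a reboot):

sudo sysctl -w vm.max_map_count=262144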

Explore your cluster

REST API

Now that our node (and cluster) is up and running, the next step is to learn how to communicate with it. Fortunately, Elasticsearch provides a very comprehensive and powerful REST API that you can use to interact with your cluster. A few of the things that can be done with the API are listed below, followed by a small example:

  • Check your cluster, node, and index health, status, and statistics
  • Administer your cluster, node, and index data and metadata
  • Perform CRUD (create, read, update, and delete) and search operations against your indexes
  • Execute advanced search operations such as paging, sorting, filtering, scripting, aggregations, and many more
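
For instance, a quick way to check cluster health from the command line is the _cat/health endpoint. A minimal sketch, assuming the node is reachable on localhost:9200 (substitute the address and port from your own startup log):

curl -XGET 'http://localhost:9200/_cat/health?v'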