Elastic Stack 8.0 has finally been released. In my previous article, “Elastic Stack 8.0 Installation – Securing your Elastic Stack is now easier than ever”, I described in detail how to deploy Elasticsearch and Kibana locally. The easiest way to set up Elasticsearch is to create a managed deployment with Elasticsearch Service on Elastic Cloud. If you prefer to manage your own test environment, you can install and run Elasticsearch using Docker. In today’s demo, I’ll use Docker to install Elastic Stack 8.0 and demonstrate its use.

Run Elasticsearch

We need to have Docker Desktop installed. Then we run the following commands:

docker network create elastic
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.0.0
docker run --name es-node01 --net elastic -p 9200:9200 -p 9300:9300 -it docker.elastic.co/elasticsearch/elasticsearch:8.0.0

When we start Elasticsearch for the first time, the following security configuration occurs automatically:

  • Certificates and keys are generated for the transport and HTTP layers.
  • Transport Layer Security (TLS) configuration settings are written to elasticsearch.yml.
  • A password is generated for the elastic user.
  • An enrollment token is generated for Kibana.
--------------------------------------------------------------------------------
-> Elasticsearch security features have been automatically configured!
-> Authentication is enabled and cluster connections are encrypted.

->  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  21=0VbI9nz+kjR69l1WT

->  HTTP CA certificate SHA-256 fingerprint:
  05661cff7bef5f59ae84442e25d9f1821662f82ed958b1b1da6147950943ecc3

->  Configure Kibana to use this cluster:
* Run Kibana and click the configuration link in the terminal when Kibana starts.
* Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjAuMCIsImFkciI6WyIxNzIuMjQuMC4yOjkyMDAiXSwiZmdyIjoiMDU2NjFjZmY3YmVmNWY1OWFlODQ0NDJlMjVkOWYxODIxNjYyZjgyZWQ5NThiMWIxZGE2MTQ3OTUwOTQzZWNjMyIsImtleSI6Ilc5TWktMzRCTVZaSFJZRHZXc3piOmk5b1RXWkdFUV95VVoxeUFtU0N0bFEifQ==

->  Configure other nodes to join this cluster:
* Copy the following enrollment token and start new Elasticsearch nodes with `bin/elasticsearch --enrollment-token <token>` (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjAuMCIsImFkciI6WyIxNzIuMjQuMC4yOjkyMDAiXSwiZmdyIjoiMDU2NjFjZmY3YmVmNWY1OWFlODQ0NDJlMjVkOWYxODIxNjYyZjgyZWQ5NThiMWIxZGE2MTQ3OTUwOTQzZWNjMyIsImtleSI6IlhkTWktMzRCTVZaSFJZRHZXc3pvOnRoQWxhcmxYU3Myb0ZHX2g5czA1Z1EifQ==

  If you're running in Docker, copy the enrollment token and run:
  `docker run -e "ENROLLMENT_TOKEN=<token>" docker.elastic.co/elasticsearch/elasticsearch:8.0.0`
--------------------------------------------------------------------------------

You may need to scroll up in the terminal to see the password and enrollment token.

Copy the generated password and enrollment token and save them in a secure location. These values are shown only the first time you start Elasticsearch. You will use them to enroll Kibana with your Elasticsearch cluster and to log in.

Note: To reset the password of the elastic user or of other built-in users, run the elasticsearch-reset-password tool. To generate a new enrollment token for Kibana or for an Elasticsearch node, run the elasticsearch-create-enrollment-token tool. These tools are available in the Elasticsearch bin directory.
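Since Elasticsearch is running inside a container in this demo, these tools can be invoked with docker exec. For example (a sketch; adjust the container name if yours differs):

docker exec -it es-node01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
docker exec -it es-node01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana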

For more details on how to access a secured Elasticsearch, please refer to my previous article “Elastic Stack 8.0 Installation – Securing your Elastic Stack is now easier than ever”.

Install and run Kibana

To use an intuitive UI to analyze, visualize, and manage Elasticsearch data, install Kibana.

In a new terminal session, run:

docker pull docker.elastic.co/kibana/kibana:8.0.0
docker run --name kib-01 --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.0.0

Note that we used --net to specify the network. It must be the same elastic network that we created for the Elasticsearch installation above.
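If you want to double-check that both containers are attached to the same network, you can inspect it:

docker network inspect elastic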

When the command above runs successfully, Kibana's startup log ends with a configuration link.

In my case it points to http://0.0.0.0:5601/?code=915472. Open that address in your browser:

Copy the Kibana enrollment token that was printed when Elasticsearch started into the input box and click Configure Elastic:

After the configuration is complete, enter the previously generated elastic superuser password on the login page:

Click Log in to access the Kibana interface:

By default, Kibana suggests adding integrations, which are used to ingest data into Elasticsearch. I’m not going to do that in today’s demo. Instead, we’ll click “Explore on my own”:

So we have successfully entered the Kibana interface.

So far we have successfully launched Elasticsearch and Kibana via Docker. You can run the following command to check:

docker ps
$ docker ps
CONTAINER ID   IMAGE                                                 COMMAND                  CREATED          STATUS          PORTS                                            NAMES
7ad78365a6a4   docker.elastic.co/kibana/kibana:8.0.0                 "/bin/tini -- /usr/l…"   13 minutes ago   Up 13 minutes   0.0.0.0:5601->5601/tcp                           kib-01
438986615af6   docker.elastic.co/elasticsearch/elasticsearch:8.0.0   "/bin/tini -- /usr/l…"   24 minutes ago   Up 24 minutes   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   es-node01

We can see that the two containers named kib-01 and es-node01 are already running successfully.

Send requests to Elasticsearch

You use the REST API to send data and other requests to Elasticsearch. This allows you to interact with Elasticsearch using any client that sends an HTTP request, such as curl. You can also send requests to Elasticsearch using the Kibana console.

For example, we could try a simple request in the Kibana console and see the result.
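For instance, the simplest possible request in Kibana’s Dev Tools console just returns basic cluster information (this is only a minimal example; any request will do):

GET /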

Many developers want to know how to use curl to retrieve data directly. In 8.0 we can no longer access Elasticsearch the way we used to:

curl -X GET http://localhost:9200/
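If you try it, the plain-HTTP request simply fails; with recent curl versions the result is usually something like:

curl: (52) Empty reply from server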

The reason is that Elasticsearch now has HTTPS enabled, so we must use its CA certificate to access it. Elasticsearch’s access address is https://localhost:9200, not http://localhost:9200. So how do we get this certificate?

To get the name of the Elasticsearch container, run the following command:

docker ps

We can log in to the container with the following command:

docker exec -it es-node01 /bin/bash

You can find the location of the Elasticsearch certificate inside the container as follows:
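For example, inside the container the auto-generated certificates live under the config/certs directory (the exact file names may vary slightly between versions):

ls /usr/share/elasticsearch/config/certs
http_ca.crt  http.p12  transport.p12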

Above, you can see the certificates generated by Elasticsearch. The one we need is called http_ca.crt. We can copy it out of the container as follows:

docker cp es-node01:/usr/share/elasticsearch/config/certs/http_ca.crt .
$ pwd
/Users/liuxg/data/elastic8
$ docker cp es-node01:/usr/share/elasticsearch/config/certs/http_ca.crt .
$ ls
http_ca.crt

With this certificate in hand, we can now access Elasticsearch using curl:

curl -X GET --cacert ./http_ca.crt -u elastic:21=0VbI9nz+kjR69l1WT https://localhost:9200/

In the command above, remember to replace the username and password passed to -u with your own. Also note that the address we visit is https://localhost:9200/, not http://localhost:9200.

The command output is as follows:

$ curl -X GET --cacert ./http_ca.crt -u elastic:21=0VbI9nz+kjR69l1WT https://localhost:9200/
{
  "name" : "438986615af6",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "STCnb4M6SMmvzEmFf6bwmw",
  "version" : {
    "number" : "8.0.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "1b6a7ece17463df5ff54a3e1302d825889aa1161",
    "build_date" : "2022-02-03T16:47:57.507843096Z",
    "build_snapshot" : false,
    "lucene_version" : "9.0.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
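To avoid typing the password on every request, you can keep it in an environment variable. A small convenience sketch (replace the value with your own generated password):

export ELASTIC_PASSWORD='21=0VbI9nz+kjR69l1WT'
curl -X GET --cacert ./http_ca.crt -u "elastic:${ELASTIC_PASSWORD}" https://localhost:9200/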

Add data

You add data to Elasticsearch as a JSON object called a document. Elasticsearch stores these documents in searchable indexes.

For time series data, such as logs and metrics, you typically add documents to a data stream, which consists of multiple automatically generated backing indices.

A data stream requires an index template that matches its name. Elasticsearch uses this template to configure the stream’s backing indices. Documents sent to a data stream must include an @timestamp field.
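For illustration only, a minimal index template that would match such a data stream could look like the sketch below. It is not needed in this demo, because the built-in logs template already matches the name we will use; the template name and priority here are made up:

PUT _index_template/my_app_logs_template
{
  "index_patterns": ["logs-my_app-*"],
  "data_stream": { },
  "priority": 500
}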

Adding a single document

Submit the following index request to add a single log entry to the logs-my_app-default data stream. Since logs-my_app-default does not exist, the request automatically creates it using the built-in logs index template.

POST logs-my_app-default/_doc
{
  "@timestamp": "2099-05-06T16:21:15.000Z",
  "event": {
    "original": "192.0.2.42 - - [06/May/2099:16:21:15 +0000] \"GET /images/bg.jpg HTTP/1.0\" 200 24736"
  }
}

The response includes metadata that Elasticsearch generates for the document (an example response is sketched after the list):

  • The _index of the backing index that contains the document. Elasticsearch automatically generates the names of backing indices.
  • The unique _id of the document in the index.
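A typical response looks roughly like this (the exact values will differ in your cluster):

{
  "_index" : ".ds-logs-my_app-default-2022.02.15-000001",
  "_id" : "bdNY-34BMVZHRYDv98zk",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}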

Add multiple documents

Use the _bulk endpoint to add multiple documents in one request. Bulk data must be newline-delimited JSON (NDJSON). Each line must end with a newline character (\n), including the last line.

{"create": {}} 3. {"@timestamp": "2099-05-07T16:24:32.000z ", "event": { "original": "192.0.2.242 - - [07/May/ 2020:16:24:32-0500] \"GET /images/ hm_ngg.jpg HTTP/1.0\" 304 0"}} 4. {"create": 5. {}} {" @ timestamp ":" the 2099-05-08 T16:25:42. 000 z ", "the event" : {" the original ": "192.0.2.255 - - [08/May/2099:16:25:42 +0000] \"GET /favicon.ico HTTP/1.0\" 200 3638"}}Copy the code

If we use the following command to view the current indices:

GET _cat/indices

We will find:

yellow open .ds-logs-my_app-default-2022.02.15-000001 a4I6Rzj_S6yPC0Dww6OzTg 1 1 3 0 8.3kb 8.3kb

We can see that there is no index actually named logs-my_app-default, as we might have expected. Instead, a data stream with a backing index was created, because our name matches the built-in logs-*-* index template:

For more on index templates, see my previous article, “Elastic: Data Stream for Index lifecycle Management.”

Search data

Indexed documents are available for search in near real time. The following search matches all log entries in logs-my_app-default and sorts them by @timestamp in descending order.



GET logs-my_app-default/_search
{
  "query": {
    "match_all": { }
  },
  "sort": [
    {
      "@timestamp": "desc"
    }
  ]
}

By default, the hits portion of the response contains at most the first 10 documents that match the search. Each hit’s _source contains the original JSON object submitted when indexing the document.

1. { 2. "took" : 2, 3. "timed_out" : false, 4. "_shards" : { 5. "total" : 1, 6. "successful" : 1, 7. "skipped" : 0, 8. "failed" : 0 9. }, 10. "hits" : { 11. "total" : { 12. "value" : 3, 13. "relation" : "eq" 14. }, 15. "max_score" : Null, 16. "hits" : [17. {18. "_index" : "ds - logs - my_app - default - 2022.02.15-000001", 19. "_id" : "ctNa-34BMVZHRYDvfszg", 20. "_score" : null, 21. "_source" : { 22. "@timestamp" : "2099-05-08T16:25:42.000z ", 23. "event" : {24. "original" : ""192.0.2.255 - - [08/May/2099:16:25:42 +0000] "GET /favicon.ico HTTP/1.0" 200 3638"" 25. [28.4081940742000 29.] 30.}, 31. {32. "_index" : ".ds-logs- my_APP-default-2022.02.15-000001 ", 33. "cdNa-34BMVZHRYDvfszg", 34. "_score" : null, 35. "_source" : { 36. "@timestamp" : "2099-05-07T16:24:32.000z ", 37. "event" : {38. "original" : "" "192.0.2.242 - [07 / May / 2020:16:24:32-0500] "GET/images/hm_nbg. HTTP / 1.0 JPG" 304 "is" "39. 40.}}, 41." sort ": [42.4081854272000 43.] 44.}, 45. {46. "_index" : ".ds-logs- my_APP-default-2022.02.15-000001 ", 47. "bdNY-34BMVZHRYDv98zk", 48. "_score" : null, 49. "_source" : { 50. "@timestamp" : "2099-05-06T16:21:15.000z ", 51. "event" : {52. "original" : "" "192.0.2.42 - [6 / May / 2099:16:21:15 + 0000] "GET/images/bg. HTTP / 1.0 JPG" 24736 "200" "53. 54}.}, 55," sort ": [56.4081767675000 57.] 58.} 59.] 60.Copy the code

Get a specific field

For large documents, parsing the entire _source is cumbersome. To exclude it from the response, set the _source parameter to false. Instead, use the fields parameter to retrieve the fields you want.



GET logs-my_app-default/_search
{
  "query": {
    "match_all": { }
  },
  "fields": [
    "@timestamp"
  ],
  "_source": false,
  "sort": [
    {
      "@timestamp": "desc"
    }
  ]
}

The response contains each hit’s field values as arrays.

1. { 2. "took" : 4, 3. "timed_out" : false, 4. "_shards" : { 5. "total" : 1, 6. "successful" : 1, 7. "skipped" : 0, 8. "failed" : 0 9. }, 10. "hits" : { 11. "total" : { 12. "value" : 3, 13. "relation" : "eq" 14. }, 15. "max_score" : Null, 16. "hits" : [17. {18. "_index" : "ds - logs - my_app - default - 2022.02.15-000001", 19. "_id" : "ctNa-34BMVZHRYDvfszg", 20. "_score" : null, 21. "fields" : { 22. "@timestamp" : [23. "2099-05-08T16:25:42.000z" 24.] 26. "sort" : [27.4081940742000 28Copy the code

Search a date range

To search within a specific time or IP range, use a range query.



GET logs-my_app-default/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2099-05-05",
        "lt": "2099-05-08"
      }
    }
  },
  "fields": [
    "@timestamp"
  ],
  "_source": false,
  "sort": [
    {
      "@timestamp": "desc"
    }
  ]
}

The command output is as follows:

1. { 2. "took" : 2, 3. "timed_out" : false, 4. "_shards" : { 5. "total" : 1, 6. "successful" : 1, 7. "skipped" : 0, 8. "failed" : 0 9. }, 10. "hits" : { 11. "total" : { 12. "value" : 2, 13. "relation" : "eq" 14. }, 15. "max_score" : Null, 16. "hits" : [17. {18. "_index" : "ds - logs - my_app - default - 2022.02.15-000001", 19. "_id" : "cdNa-34BMVZHRYDvfszg", 20. "_score" : null, 21. "fields" : { 22. "@timestamp" : [23. "2099-05-07T16:24:32.000z" 24.] 25. ". Ds-logs - my_APP-default-2022.02.15-000001 ", 32. "_id" : "bDNY-34BMVZHryDV98ZK ", 33. "_score" : null, 34. "fields" : {35. "@ timestamp" : [36 "act the T16:2099-05-06. 000 z" 37] 38.}, 39. "sort" : [40.4081767675000 41.] 42.} 43.] 44.Copy the code

You can use date math to define relative time ranges. The following query searches for data from the past day, which doesn’t match any of the log entries in logs-my_app-default.



GET logs-my_app-default/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-1d/d",
        "lt": "now/d"
      }
    }
  },
  "fields": [
    "@timestamp"
  ],
  "_source": false,
  "sort": [
    {
      "@timestamp": "desc"
    }
  ]
}

Extract fields from unstructured content

You can extract runtime fields from unstructured content, such as log messages, during a search.

The following search extracts the source.ip runtime field from event.original. To include it in the response, add source.ip to the fields parameter.

GET logs-my_app-default/_search
{
  "runtime_mappings": {
    "source.ip": {
      "type": "ip",
      "script": """
        String sourceip=grok('%{IPORHOST:sourceip} .*').extract(doc[ "event.original" ].value)?.sourceip;
        if (sourceip != null) emit(sourceip);
      """
    }
  },
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2099-05-05",
        "lt": "2099-05-08"
      }
    }
  },
  "fields": [
    "@timestamp",
    "source.ip"
  ],
  "_source": false,
  "sort": [
    {
      "@timestamp": "desc"
    }
  ]
}

The result of running the command above is:

1. { 2. "took" : 1, 3. "timed_out" : false, 4. "_shards" : { 5. "total" : 1, 6. "successful" : 1, 7. "skipped" : 0, 8. "failed" : 0 9. }, 10. "hits" : { 11. "total" : { 12. "value" : 2, 13. "relation" : "eq" 14. }, 15. "max_score" : Null, 16. "hits" : [17. {18. "_index" : "ds - logs - my_app - default - 2022.02.15-000001", 19. "_id" : "cdNa-34BMVZHRYDvfszg", 20. "_score" : null, 21. "fields" : { 22. "@timestamp" : [23. "the T16 2099-05-07: burn. 000 z" 24.], 25. ". The source IP ": [26." 192.0.2.242 "27.] 28}, 29." sort ": [30.4081854272000 31.] 32.}, 33. {34. "_index" : ".ds-logs- my_APP-default-2022.02.15-000001 ", 35. "bdNY-34BMVZHRYDv98zk", 36. "_score" : null, 37. "fields" : { 38. "@timestamp" : [39. "act the T16:2099-05-06. 000 z". 40], 41. ". The source IP ": [42." 192.0.2.42 "43.] 44)}, 45." sort ": [46.4081767675000 47.] 48.} 49.Copy the code

In the command above, a runtime field is used to extract the field we need at search time. For more information about runtime fields, see the runtime fields articles in “Elastic: A Hands-on Guide for Developers.”

Combine queries

You can combine multiple queries using bool queries. The following search combines two range queries: one on @timestamp and one on the source.ip runtime field.

GET logs-my_app-default/_search
{
  "runtime_mappings": {
    "source.ip": {
      "type": "ip",
      "script": """
        String sourceip=grok('%{IPORHOST:sourceip} .*').extract(doc[ "event.original" ].value)?.sourceip;
        if (sourceip != null) emit(sourceip);
      """
    }
  },
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "2099-05-05",
              "lt": "2099-05-08"
            }
          }
        },
        {
          "range": {
            "source.ip": {
              "gte": "192.0.2.0",
              "lte": "192.0.2.240"
            }
          }
        }
      ]
    }
  },
  "fields": [
    "@timestamp",
    "source.ip"
  ],
  "_source": false,
  "sort": [
    {
      "@timestamp": "desc"
    }
  ]
}

The command output is as follows:

1. { 2. "took" : 2, 3. "timed_out" : false, 4. "_shards" : { 5. "total" : 1, 6. "successful" : 1, 7. "skipped" : 0, 8. "failed" : 0 9. }, 10. "hits" : { 11. "total" : { 12. "value" : 1, 13. "relation" : "eq" 14. }, 15. "max_score" : Null, 16. "hits" : [17. {18. "_index" : "ds - logs - my_app - default - 2022.02.15-000001", 19. "_id" : "bdNY-34BMVZHRYDv98zk", 20. "_score" : null, 21. "fields" : { 22. "@timestamp" : [23. "act the T16:2099-05-06. 000 z" 24.], 25. ". The source IP ": [26." 192.0.2.42 "27.] 28}, 29." sort ": [30.4081767675000 31.] 32.} 33.Copy the code

Aggregate data

Use aggregations to summarize your data as metrics, statistics, or other analytics.

The following search uses an aggregation to calculate average_response_size based on the http.response.body.bytes runtime field. The aggregation runs only on documents that match the query.

GET logs-my_app-default/_search
{
  "runtime_mappings": {
    "http.response.body.bytes": {
      "type": "long",
      "script": """
        String bytes=grok('%{COMMONAPACHELOG}').extract(doc[ "event.original" ].value)?.bytes;
        if (bytes != null) emit(Integer.parseInt(bytes));
      """
    }
  },
  "aggs": {
    "average_response_size": {
      "avg": {
        "field": "http.response.body.bytes"
      }
    }
  },
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "2099-05-05",
              "lt": "2099-05-08"
            }
          }
        }
      ]
    }
  },
  "fields": [
    "@timestamp",
    "http.response.body.bytes"
  ],
  "_source": false,
  "sort": [
    {
      "@timestamp": "desc"
    }
  ]
}

The output of the command above is as follows:

1. { 2. "took" : 1, 3. "timed_out" : false, 4. "_shards" : { 5. "total" : 1, 6. "successful" : 1, 7. "skipped" : 0, 8. "failed" : 0 9. }, 10. "hits" : { 11. "total" : { 12. "value" : 2, 13. "relation" : "eq" 14. }, 15. "max_score" : Null, 16. "hits" : [17. {18. "_index" : "ds - logs - my_app - default - 2022.02.15-000001", 19. "_id" : "cdNa-34BMVZHRYDvfszg", 20. "_score" : null, 21. "fields" : { 22. "@timestamp" : [23. "2099-05-07T16:24:32.000z" 24.], 25. "http.response.body.bytes" : [26.0 27.] [30.4081854272000 31.] 32.}, 33. {34. "_index" : ".ds-logs- my_APP-default-2022.02.15-000001 ", 35. "bdNY-34BMVZHRYDv98zk", 36. "_score" : null, 37. "fields" : { 38. "@timestamp" : [39. "2099-05-06T16:21:15.000z" 40.], 41. "http.response.body.bytes" : [42.24736 43.] 44. [ 46. 4081767675000 47. ] 48. } 49. ] 50. }, 51. "aggregations" : { 52. "average_response_size" : { 53. "value" : 54.} 55.} 56.}Copy the code

Learn more about search options

To continue exploring, index more data into your data stream and check out common search options.

Clean up

When we have finished our exercise, we can use the following command to clean up:

DELETE _data_stream/logs-my_app-default

The above command deletes the test data stream and its backing indices.

Of course, we can even use the following commands to delete the test environment completely and save storage space:



docker stop es-node01
docker stop kib-01

The above two commands stop the Elasticsearch and Kibana containers. We can then use the following commands to remove the containers and the network completely:



docker network rm elastic
docker rm es-node01
docker rm kib-01