In this article, I will share my experience on how to use Elasticsearch for development in Golang. By the way, just in case you’ve never heard of Elasticsearch:

Elasticsearch is a highly scalable open-source full-text search and analytics engine. It allows you to store, search, and analyze large volumes of data quickly and in near real time. It is often used as the underlying engine/technology that powers applications with complex search features and requirements.

If you want to learn more about Elasticsearch, you can refer to my previous article “Introduction to Elasticsearch”.

For official Golang support for Elasticsearch, you can visit the Elastic repository on GitHub at github.com/elastic/go-elasticsearch.

 

Prerequisites

  1. You need to have Golang installed on your computer, with both $GOPATH and $GOROOT exported in your bash configuration file. You can use the go version and go env commands to verify that Golang is installed and that the paths are set correctly.
  2. You will need Docker 18.03.0-CE or above installed.

 

Create a Golang project

We create a directory on our computer as follows:

mkdir go-elasticsearch
cd go-elasticsearch

Then we create a file called main.go in this directory. You can use your favorite editor, for example:

vi main.go

Above we used the vi editor to create the main.go file.

The Golang driver for Elasticsearch (go-elasticsearch) must be installed in $GOPATH. Use Git to clone the repository to $GOPATH, as shown in the following example:

git clone --branch master https://github.com/elastic/go-elasticsearch.git $GOPATH/src/github.com/elastic/go-elasticsearch

When compiling the Go application, you may sometimes get an error message saying that a library cannot be downloaded from GitHub. In that case, enter the following commands in the terminal:

export GO111MODULE=on
export GOPROXY=https://goproxy.io

You can also install go-elasticsearch with Go modules. We need to create a file called go.mod in the go-elasticsearch directory, with the following content:

go.mod

require github.com/elastic/go-elasticsearch/v7 master

The major version of the client must match the major version of Elasticsearch: to connect to Elasticsearch 7.x, use client version 7.x; to connect to Elasticsearch 6.x, use client version 6.x. For example, to pin the client to a specific release in go.mod:

require github.com/elastic/go-elasticsearch/v7 7.0.0

Multiple versions of the client can be used in a single project:

// go.mod
github.com/elastic/go-elasticsearch/v6 6.x
github.com/elastic/go-elasticsearch/v7 7.x

// main.go
import (
  elasticsearch6 "github.com/elastic/go-elasticsearch/v6"
  elasticsearch7 "github.com/elastic/go-elasticsearch/v7"
)
// ...
es6, _ := elasticsearch6.NewDefaultClient()
es7, _ := elasticsearch7.NewDefaultClient()

 

Install Elasticsearch and Kibana

If you’ve never installed Elasticsearch or Kibana before, you can read my previous post “Elastic: A Beginner’s Guide” to install them. In this exercise we will use Docker to install Elasticsearch and Kibana. We’ll start by creating a file called docker-compose.yml:

docker-compose.yml

version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.0
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
volumes:
  esdata:
    driver: local

Above, we used the Elastic Stack 7.10.0 release for this exercise. In actual use, you can change it to whatever version you require.

We must start Docker first and then execute the following from the command line:

docker-compose up

The above command must be executed in the same directory as the docker-compose.yml file.

It will launch Elasticsearch at http://localhost:9200 and Kibana at http://localhost:5601. You can verify this by opening both links in your browser.

 

Test the elasticsearch package

The elasticsearch package ties together two separate packages for calling the Elasticsearch APIs and transferring data over HTTP: esapi and estransport.

Use the elasticsearch.NewDefaultClient() function to create a client with default settings.

main.go

package main

import (
	"log"

	// Import the Elasticsearch library packages
	"github.com/elastic/go-elasticsearch/v7"
)

func main() {
	es, err := elasticsearch.NewDefaultClient()
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	res, err := es.Info()
	if err != nil {
		log.Fatalf("Error getting response: %s", err)
	}
	defer res.Body.Close()

	log.Println(res)
}

We run it using the following command:

go run main.go

The command above shows the result:

$ go run main.go
go: finding github.com/elastic/go-elasticsearch latest
2020/12/24 10:56:23 [200 OK] {
  "name" : "es01",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "ZYQ9cGOdS06uZvxOvjug8A",
  "version" : {
    "number" : "7.10.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
    "build_date" : "2020-11-09T21:30:33.964949Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Note: It is important to consume and close the response body in order to reuse the persistent TCP connections of the default HTTP transport. If you are not interested in the response body, call io.Copy(ioutil.Discard, res.Body).
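A minimal, self-contained sketch of this pattern (assuming the same local cluster as before) could look like this:

package main

import (
	"io"
	"io/ioutil"
	"log"

	"github.com/elastic/go-elasticsearch/v7"
)

func main() {
	es, err := elasticsearch.NewDefaultClient()
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	res, err := es.Info()
	if err != nil {
		log.Fatalf("Error getting response: %s", err)
	}
	// Always close the body so the persistent TCP connection can be reused.
	defer res.Body.Close()

	// We do not care about the response body here, so just drain it.
	io.Copy(ioutil.Discard, res.Body)
}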

When you export the ELASTICSEARCH_URL environment variable, it will be used to set the cluster endpoint. Separate multiple addresses with commas.

To set the cluster endpoints programmatically, pass a configuration object to the elasticsearch.NewClient() function.

cfg := elasticsearch.Config{
  Addresses: []string{
    "http://localhost:9200",
    "http://localhost:9201",
  },
  // ...
}
es, err := elasticsearch.NewClient(cfg)

To set a user name and password, include them in the endpoint URL or use the appropriate configuration options.

cfg := elasticsearch.Config{
  // ...
  Username: "foo",
  Password: "bar",
}
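Alternatively, the credentials can be embedded directly in the endpoint URL. A minimal sketch (with the same placeholder credentials; adjust them to your own setup):

cfg := elasticsearch.Config{
  Addresses: []string{
    "http://foo:bar@localhost:9200",
  },
}
es, err := elasticsearch.NewClient(cfg)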

To set up a custom certificate authority for signing certificates on cluster nodes, use the CACert configuration option.

cert, _ := ioutil.ReadFile(*cacert)

cfg := elasticsearch.Config{
  // ...
  CACert: cert,
}

Insert the document into the index

In this section, I will provide step-by-step instructions on how to use the go-elasticsearch driver to import documents into Elasticsearch.

Create a Go script and import the package

Now that we have ensured that everything we need is properly installed and set up, we are ready to work on the Go script. Edit the previous main.go file and declare the main package at the top. Be sure to import all the necessary packages and libraries, as shown in the following example:

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"reflect"
	"strconv"
	"strings"

	// Import the Elasticsearch library packages
	"github.com/elastic/go-elasticsearch/v7"
	"github.com/elastic/go-elasticsearch/v7/esapi"
)

Above, we used v7, which corresponds to the Elastic Stack 7.x releases. In the deployment earlier, we used version 7.10.0.

 

Create a struct data type for the fields of the Elasticsearch documents

We will use a Golang struct to define the Elasticsearch documents to be indexed and their corresponding fields:

// Declare a struct for Elasticsearch fields
type ElasticDocs struct {
  SomeStr string
  SomeInt int
  SomeBool bool
}

Declare a function that converts the Elasticsearch struct data to a JSON string

Next, let’s look at a simple function that converts an instance of the Elasticsearch document struct into a JSON string. The code shown below may look complicated, but what it does is simple: it copies the struct fields into a new instance and passes it to Golang’s json.Marshal() function, returning the JSON encoding as a string:

// A function for marshaling structs to JSON string
func jsonStruct(doc ElasticDocs) string {

	// Create struct instance of the Elasticsearch fields struct object
	docStruct := &ElasticDocs{
		SomeStr:  doc.SomeStr,
		SomeInt:  doc.SomeInt,
		SomeBool: doc.SomeBool,
	}

	fmt.Println("\ndocStruct:", docStruct)
	fmt.Println("docStruct TYPE:", reflect.TypeOf(docStruct))

	// Marshal the struct to JSON and check for errors
	b, err := json.Marshal(docStruct)
	if err != nil {
		fmt.Println("json.Marshal ERROR:", err)
		return string(err.Error())
	}
	return string(b)
}

Declare the main() function and create a new Elasticsearch Golang client instance

In our Go script, all API method calls must be inside the main() function or from within another function. Let’s create a new context object for the API call and a map object for the Elasticsearch document:

func main() {

	// Allow for custom formatting of log output
	log.SetFlags(0)
	
	// Create a context object for the API calls
	ctx := context.Background()
	
	// Create a mapping for the Elasticsearch documents
	var (
		docMap map[string]interface{}
	)

	fmt.Println("docMap:", docMap)
	fmt.Println("docMap TYPE:", reflect.TypeOf(docMap))

Instantiate the Elasticsearch client configuration and the Golang client instance

In this step, we will instantiate a new Elasticsearch configuration object. Be sure to pass the correct host and port information to its Addresses property, and any username and password to the corresponding fields.

// Declare an Elasticsearch configuration
cfg := elasticsearch.Config{
	Addresses: []string{
		"http://localhost:9200",
	},
	Username: "user",
	Password: "pass",
}

// Instantiate a new Elasticsearch client object instance
client, err := elasticsearch.NewClient(cfg)
if err != nil {
	fmt.Println("Elasticsearch connection error:", err)
}

Check if the Golang client for Elasticsearch returns any errors when connecting to the cluster

Next, we’ll check to see if the connection to Elasticsearch was successful or if any errors were returned:

// Have the client instance return a response
res, err := client.Info()

// Deserialize the response into a map.
if err != nil {
	log.Fatalf("client.Info() ERROR: %s", err)
} else {
	log.Printf("client response: %s", res)
}

Create the Elasticsearch struct documents and place them in an array

We will declare an empty string array to store the Elasticsearch documents represented as JSON strings. The following code shows some example Elasticsearch documents that will be used for indexing. To set the values of their fields, all you need to do is modify the properties of the struct instances:
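	// Declare empty array for the document strings
	var docs []string

	// Declare documents to be indexed using struct
	doc1 := ElasticDocs{}
	doc1.SomeStr = "Some Value"
	doc1.SomeInt = 123456
	doc1.SomeBool = true

	doc2 := ElasticDocs{}
	doc2.SomeStr = "Another Value"
	doc2.SomeInt = 42
	doc2.SomeBool = false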

We’ll pass these document instances to the jsonStruct() function we declared earlier and have it return a JSON string for each document. We’ll then use Golang’s append() function to add the JSON strings to the string array:
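	// Marshal Elasticsearch document struct objects to JSON string
	docStr1 := jsonStruct(doc1)
	docStr2 := jsonStruct(doc2)

	// Append the doc strings to an array
	docs = append(docs, docStr1)
	docs = append(docs, docStr2)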

 

Iterate over the Elasticsearch document array and index each document with esapi.IndexRequest

Now that we’ve built an array of documents, we’ll iterate over it and make API requests to the Elasticsearch cluster as we go. These API calls index each document by instantiating the Golang driver’s esapi.IndexRequest object and executing it with its Do() method:

	// Iterate the array of string documents
	for i, bod := range docs {
		fmt.Println("\nDOC _id:", i+1)
		fmt.Println(bod)
		
		// Instantiate a request object
		req := esapi.IndexRequest {
			Index: "some_index",
			DocumentID: strconv.Itoa(i + 1),
			Body: strings.NewReader(bod),
			Refresh: "true",
		}

		fmt.Println(reflect.TypeOf(req))

It is important to note that above we set Refresh to "true". This is not recommended in production use, because it forces a refresh after every single write; when indexing large amounts of data, this kind of operation is very inefficient.

 

Check whether the IndexRequest() API method call returns any errors

The final step in iterating over the document array is to get the response from the API call and check for errors:

		// Return an API response object from request
		res, err := req.Do(ctx, client)
		if err != nil {
			log.Fatalf("IndexRequest ERROR: %s", err)
		}
		defer res.Body.Close()

In the code shown below, if no error is returned, we parse the result object returned by the API response:

		if res.IsError() {
			log.Printf("%s ERROR indexing document ID=%d", res.Status(), i+1)
		} else {
			// Deserialize the response into a map.
			var resMap map[string]interface{}
			if err := json.NewDecoder(res.Body).Decode(&resMap); err != nil {
				log.Printf("Error parsing the response body: %s", err)
			} else {
				log.Printf("\nIndexRequest() RESPONSE:")
				// Print the response status and indexed document version.
				fmt.Println("Status:", res.Status())
				fmt.Println("Result:", resMap["result"])
				fmt.Println("Version:", int(resMap["_version"].(float64)))
				fmt.Println("resMap:", resMap)
				fmt.Println("\n")
			}
		}
	}
}

Each document iteration should print a map[string]interface{} object as the response, as follows:

resMap: map[_id:1 _index:some_index _primary_term:1 _seq_no:32 _shards:map[failed:0 successful:1 total:2] _type:_doc _version:2 forced_refresh:true result:updated]

We have covered a lot of code above. Here is the complete main.go for your own practice:

main.go

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"reflect"
	"strconv"
	"strings"

	// Import the Elasticsearch library packages
	"github.com/elastic/go-elasticsearch/v7"
	"github.com/elastic/go-elasticsearch/v7/esapi"
)

// Declare a struct for Elasticsearch fields
type ElasticDocs struct {
	SomeStr  string
	SomeInt  int
	SomeBool bool
}

// A function for marshaling structs to JSON string
func jsonStruct(doc ElasticDocs) string {

	// Create struct instance of the Elasticsearch fields struct object
	docStruct := &ElasticDocs{
		SomeStr:  doc.SomeStr,
		SomeInt:  doc.SomeInt,
		SomeBool: doc.SomeBool,
	}

	fmt.Println("\ndocStruct:", docStruct)
	fmt.Println("docStruct TYPE:", reflect.TypeOf(docStruct))

	// Marshal the struct to JSON and check for errors
	b, err := json.Marshal(docStruct)
	if err != nil {
		fmt.Println("json.Marshal ERROR:", err)
		return string(err.Error())
	}
	return string(b)
}

func main() {

	// Allow for custom formatting of log output
	log.SetFlags(0)

	// Create a context object for the API calls
	ctx := context.Background()

	// Create a mapping for the Elasticsearch documents
	var (
		docMap map[string]interface{}
	)

	fmt.Println("docMap:", docMap)
	fmt.Println("docMap TYPE:", reflect.TypeOf(docMap))

	// Declare an Elasticsearch configuration
	cfg := elasticsearch.Config{
		Addresses: []string{
			"http://localhost:9200",
		},
		Username: "user",
		Password: "pass",
	}

	// Instantiate a new Elasticsearch client object instance
	client, err := elasticsearch.NewClient(cfg)
	if err != nil {
		fmt.Println("Elasticsearch connection error:", err)
	}

	// Have the client instance return a response
	res, err := client.Info()

	// Deserialize the response into a map.
	if err != nil {
		log.Fatalf("client.Info() ERROR: %s", err)
	} else {
		log.Printf("client response: %s", res)
	}

	// Declare empty array for the document strings
	var docs []string

	// Declare documents to be indexed using struct
	doc1 := ElasticDocs{}
	doc1.SomeStr = "Some Value"
	doc1.SomeInt = 123456
	doc1.SomeBool = true

	doc2 := ElasticDocs{}
	doc2.SomeStr = "Another Value"
	doc2.SomeInt = 42
	doc2.SomeBool = false

	// Marshal Elasticsearch document struct objects to JSON string
	docStr1 := jsonStruct(doc1)
	docStr2 := jsonStruct(doc2)

	// Append the doc strings to an array
	docs = append(docs, docStr1)
	docs = append(docs, docStr2)

	// Iterate the array of string documents
	for i, bod := range docs {
		fmt.Println("\nDOC _id:", i+1)
		fmt.Println(bod)

		// Instantiate a request object
		req := esapi.IndexRequest{
			Index:      "some_index",
			DocumentID: strconv.Itoa(i + 1),
			Body:       strings.NewReader(bod),
			Refresh:    "true",
		}

		fmt.Println(reflect.TypeOf(req))

		// Return an API response object from request
		res, err := req.Do(ctx, client)
		if err != nil {
			log.Fatalf("IndexRequest ERROR: %s", err)
		}
		defer res.Body.Close()

		if res.IsError() {
			log.Printf("%s ERROR indexing document ID=%d", res.Status(), i+1)
		} else {
			// Deserialize the response into a map.
			var resMap map[string]interface{}
			if err := json.NewDecoder(res.Body).Decode(&resMap); err != nil {
				log.Printf("Error parsing the response body: %s", err)
			} else {
				log.Printf("\nIndexRequest() RESPONSE:")
				// Print the response status and indexed document version.
				fmt.Println("Status:", res.Status())
				fmt.Println("Result:", resMap["result"])
				fmt.Println("Version:", int(resMap["_version"].(float64)))
				fmt.Println("resMap:", resMap)
				fmt.Println("\n")
			}
		}
	}
}

Run the code above and we should see the following output:

$ go run main.go
go: finding github.com/elastic/go-elasticsearch latest
docMap: map[]
docMap TYPE: map[string]interface {}
client response: [200 OK] {
  "name" : "es01",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "ZYQ9cGOdS06uZvxOvjug8A",
  "version" : {
    "number" : "7.10.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
    "build_date" : "2020-11-09T21:30:33.964949Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

docStruct: &{Some Value 123456 true}
docStruct TYPE: *main.ElasticDocs

docStruct: &{Another Value 42 false}
docStruct TYPE: *main.ElasticDocs

DOC _id: 1
{"SomeStr":"Some Value","SomeInt":123456,"SomeBool":true}
esapi.IndexRequest

IndexRequest() RESPONSE:
Status: 200 OK
Result: updated
Version: 4
resMap: map[_id:1 _index:some_index _primary_term:1 _seq_no:36 _shards:map[failed:0 successful:1 total:2] _type:_doc _version:4 forced_refresh:true result:updated]

DOC _id: 2
{"SomeStr":"Another Value","SomeInt":42,"SomeBool":false}
esapi.IndexRequest

IndexRequest() RESPONSE:
Status: 200 OK
Result: updated
Version: 18
resMap: map[_id:2 _index:some_index _primary_term:1 _seq_no:37 _shards:map[failed:0 successful:1 total:2] _type:_doc _version:18 forced_refresh:true result:updated]

We can use the following command in Kibana to view the imported document:

GET some_index/_search

 

Search for documents

Next, let’s search the documents we have just created; specifically, we will look for documents whose SomeStr field contains “Another”. Add the following code to main.go:

// Search for the indexed document
// Build the request body
var buf bytes.Buffer
query := map[string]interface{}{
	"query": map[string]interface{}{
		"match": map[string]interface{}{
			"SomeStr": "Another",
		},
	},
}
if err := json.NewEncoder(&buf).Encode(query); err != nil {
	log.Fatalf("Error encoding query: %s", err)
}

// Perform the search request.
res, err = client.Search(
	client.Search.WithContext(context.Background()),
	client.Search.WithIndex("some_index"),
	client.Search.WithBody(&buf),
	client.Search.WithTrackTotalHits(true),
	client.Search.WithPretty(),
)
if err != nil {
	log.Fatalf("Error getting response: %s", err)
}
defer res.Body.Close()

if res.IsError() {
	var e map[string]interface{}
	if err := json.NewDecoder(res.Body).Decode(&e); err != nil {
		log.Fatalf("Error parsing the response body: %s", err)
	} else {
		// Print the response status and error information.
		log.Fatalf("[%s] %s: %s",
			res.Status(),
			e["error"].(map[string]interface{})["type"],
			e["error"].(map[string]interface{})["reason"],
		)
	}
}

var r map[string]interface{}
if err := json.NewDecoder(res.Body).Decode(&r); err != nil {
	log.Fatalf("Error parsing the response body: %s", err)
}

// Print the response status, number of results, and request duration.
log.Printf(
	"[%s] %d hits; took: %dms",
	res.Status(),
	int(r["hits"].(map[string]interface{})["total"].(map[string]interface{})["value"].(float64)),
	int(r["took"].(float64)),
)

// Print the ID and document source for each hit.
for _, hit := range r["hits"].(map[string]interface{})["hits"].([]interface{}) {
	log.Printf(" * ID=%s, %s", hit.(map[string]interface{})["_id"], hit.(map[string]interface{})["_source"])
}

Also, since we are now using the bytes package, we need to add it to the import block at the top of the file:

import ( "context" "encoding/json" "fmt" "log" "reflect" "strconv" "strings" "bytes" // Import the Elasticsearch library  packages "github.com/elastic/go-elasticsearch/v7" "github.com/elastic/go-elasticsearch/v7/esapi" )Copy the code

Run the code again. We can see the following new lines in the output:

[200 OK] 1 hits; took: 1ms

* ID=2, map[SomeBool:%!s(bool=false) SomeInt:%!s(float64=42) SomeStr:Another Value]

 

Delete the document

Deleting a document is very easy. In the main.go file, we add the following code to delete the document with document ID 1:

// Set up the request object.
req := esapi.DeleteRequest{
	Index:      "some_index",
	DocumentID: strconv.Itoa(1),
}

res, err = req.Do(context.Background(), client)
if err != nil {
	log.Fatalf("Error getting response: %s", err)
}
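The snippet above does not inspect the response of the delete call. If you want to, a minimal sketch following the same pattern we used for the index requests could look like this:

defer res.Body.Close()

if res.IsError() {
	log.Printf("%s ERROR deleting document ID=1", res.Status())
} else {
	log.Printf("DeleteRequest() RESPONSE: %s", res.Status())
}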

Rerun the main.go application, then go to Kibana and run GET some_index/_search again to check. The query shows that only one document is left: the document with ID 1 was imported earlier, but has since been deleted.

For your convenience, I put the code on Github: github.com/liu-xiao-gu…