The Solr search engine: building, maintaining, and using it from scratch

1. Introduction to search engines and Solr

1.1 What is a search engine?

  • Professional perspective:
    • A search engine is a system that collects information from the Internet using specific programs and policies, organizes and processes that information, and provides search services that display relevant results to users. Search engines include full-text indexes, directory indexes, metasearch engines, vertical search engines, integrated search engines, portal search engines, free link lists, and so on.
    • A search engine retrieves information from the Internet and feeds it back to the user according to the user's needs, a given algorithm, and a specific strategy. It relies on a variety of technologies — web crawling, ranking, web-page processing, big-data processing, natural language processing, and so on — to provide fast, highly relevant information services. The core modules of a search engine generally include the crawler, the index, retrieval, and ranking; a series of auxiliary modules can be added on top to create a better experience for users.
    • A search engine consists of a crawler, an indexer, a retriever, and a user interface. The crawler roams the Internet to find and gather information. The indexer interprets what the crawler collects, extracts index terms from it, uses them to represent documents, and generates the index tables for the document store. The retriever quickly looks up documents in the index according to the user's query, evaluates the relevance between the documents and the query, ranks the output, and implements relevance-feedback mechanisms. The user interface accepts user queries, displays results, and provides the means for relevance feedback.
  • In layman's terms, a search engine is a tool that helps users find specified content. It offers excellent query performance and adaptability, and can quickly locate target data even at big-data scale.
  • Common examples: Baidu, Google, and so on. These are the finished products; the search engine inside is the core. Elasticsearch and Solr are two widely used open-source engines, and Solr, the more established of the two, is well worth learning.

1.2 What is Solr

  1. Solr is an open-source search platform for building search applications. It is built on top of Lucene, a full-text search engine library. Solr is enterprise-grade, fast, and highly scalable; applications built with Solr can be sophisticated and deliver high performance.
  2. Solr is an actively developed, evolving, and continuously optimized product. It is old but keeps up with the times: for example, it now ships with an embedded Jetty server, so you no longer need to install Tomcat to run it, and it supports clustering, parallel queries, and more.
  3. Solr exposes a RESTful full-text search interface over HTTP, so it can be used from any language. It is a standalone enterprise-class search application server developed in Java on top of Lucene, and it provides a Web-Service-like API: users can submit XML files in a specified format to the search server via HTTP requests to build indexes, and can issue lookup requests via HTTP GET and receive the results back in XML format.
  4. Solr (pronounced "solar") is the open-source enterprise search platform of the Apache Lucene project. Its main features include full-text search, hit highlighting, faceted search, dynamic clustering, database integration, and rich-document handling (e.g., Word, PDF). Solr is highly scalable and provides distributed search and index replication. It is one of the most popular enterprise search engines, and Solr 4 also added NoSQL support.
  5. Solr is a standalone full-text search server written in Java, running in a servlet container such as Apache Tomcat or Jetty. It uses the Lucene Java search library as the core of its full-text indexing and searching, and has REST-like HTTP/XML and JSON APIs. Solr's powerful external configuration allows it to be adapted to many types of applications without Java coding, and its plug-in architecture supports more advanced customization.
  6. Since the 2010 merger of the Apache Lucene and Apache Solr projects, both have been developed by the same Apache Software Foundation team, so "Lucene/Solr" and "Solr/Lucene" refer to the same technology and products.
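As a hedged illustration of the HTTP/XML interface described in point 3 (the field names here are invented for the example), a document posted to a core's /update handler looks roughly like this:

```xml
<!-- POST to http://localhost:8983/solr/<core>/update (id/companyName are example fields) -->
<add>
  <doc>
    <field name="id">1001</field>
    <field name="companyName">Example Trading Co.</field>
  </doc>
</add>
```

A matching lookup can then be issued as a plain HTTP GET, e.g. /select?q=id:1001&wt=xml, and the result comes back in XML (or JSON if wt=json).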

1.3 Comparison between Solr and other products

  1. Compared to databases:
    • Thanks to inverted indexes and up-front text analysis — and because JOINs and LIKEs are inherent performance killers in databases — search engines are faster to query in most scenarios.
    • Geolocation queries, highlighted search, advanced aggregation queries: it can accomplish things that databases do poorly or cannot do at all.
    • Its RESTful style gives it high compatibility and strong extensibility, making it arguably easier to consume than MySQL.
  2. Contrast with ES:
    • Solr can block on I/O under load while ES does not, so ES's query performance is higher than Solr's in those cases.
    • When data is being added dynamically, Solr's retrieval efficiency drops, whereas ES is largely unaffected.
    • Solr relies on ZooKeeper for distributed management, whereas ES has distributed management built in. Solr has traditionally been deployed in a web server such as Tomcat (recent versions ship with their own web server and can be started right after unpacking); when Tomcat is used, the association between Tomcat and Solr must be configured at startup. At heart, Solr is a dynamic web application.
    • Solr accepts data in more formats (XML, JSON, CSV, etc.), whereas ES only accepts the JSON format.
    • Solr is a powerful solution for traditional search applications, but ES is more suitable for emerging real-time search applications.

  • Solr is more efficient than ES for simple retrieval over existing data.
  • Solr provides more functionality out of the box, while ES focuses on core features and leaves advanced functionality to third-party plug-ins.
  • ES is better suited to dynamic data and massive data volumes; Solr is better suited to complex, multi-scenario processing because it offers more functionality natively.

Overall, ES has the edge over Solr, but Solr remains a good choice.

1.4 Solr’s characteristics, advantages and disadvantages, and application scenarios

  • Features:
    1. Standards-based open interfaces: the Solr search server supports querying and retrieving results via XML, JSON, and HTTP.
    2. Easy administration: Solr can be managed through HTML pages, and Solr configuration is done through XML.
    3. Scalability: indexes can be efficiently replicated to other Solr search servers.
    4. Flexible plug-in system: new functionality can easily be added to the Solr server in the form of plug-ins.
    5. Powerful data import: databases and other structured data sources can be imported, mapped, and transformed.
  • The advantages and disadvantages:
    • Advantages:
    1. The software is mature, with a broad user base, an active community, and a team that keeps updating and iterating on it.
    2. It can index documents in various formats, such as HTML, PDF, Microsoft Office, JSON, XML, and CSV.
    3. It is a powerful solution for traditional search applications, with rich features covering many business scenarios: GPS latitude/longitude queries (nearby data, data within a coordinate range, etc.), highlighted queries, all kinds of aggregation queries, and so on.
    • Disadvantages:
    1. While an index is being built, search efficiency decreases, and real-time indexing efficiency is not high.
    2. Compared with ES, it is slightly less competitive.
    3. Solr becomes less efficient as data volume grows, while Elasticsearch does not change significantly.
  • Usage Scenarios:
  1. A powerful assistant for traditional search applications, such as multi-criteria housing search on a rental platform (listings by neighborhood, by business district, by rent, keywords, floor plan, orientation, unit number, etc.)
  2. Scenarios where the data volume is not especially large and the update frequency is not especially high
  3. More generally, Solr's strengths above describe the scenarios it suits, and its weaknesses the scenarios it does not.

W3CSchool Solr documentation: www.w3cschool.cn/solr_doc/

2. Solr setup and maintenance

2.1 Setting up Solr on Windows

  1. Download a suitable (relatively recent) version from the official website. The following uses Solr 7.7.2 as an example.
  2. Unpack the archive, open a CMD window, change into the bin directory, and run the solr start command. If the output indicates the startup succeeded, Solr is listening on the default port 8983.
  3. Open your browser and type:
    http://localhost:8983/solr

    The installation is successful if the following information is displayed:

Note: a JDK must be installed on Windows first.

2.2 Setting up Solr on Linux

  1. Verify that the server has a JDK and the other required environment support.
  2. Download Solr:
    wget http://mirror.bit.edu.cn/apache/lucene/solr/7.6.0/solr-7.6.0.tgz
  3. Unpack the solr:
    tar -xzvf solr-7.6.0.tgz
  4. Enter the command to run Solr:
    bin/solr start -force
  5. Enter the command to stop running Solr:
    bin/solr stop -force

    You can also kill the process directly

  6. Enter the command to restart solr:
    bin/solr restart -force

2.3 Use of Solr management interface

2.3.1 Dashboard

  • It is a monitoring interface where you can see at a glance the JVM runtime parameters, JVM memory usage, the Solr version, system memory, and other information:

2.3.2 Logging

  • Displays Solr's log output (info, warnings, and errors), which is useful for troubleshooting.

2.3.3 Core Admin (Core Management)

  • This is where cores (index libraries) are managed. In it you can:
    • Add Core: add a new index library (also called a core)
    • Reload: reload the specified library
    • Unload: unload a library (use with caution)
    • Rename: rename a library
    • Optimize: optimize the index library

It also has some other functions and displays some other information:

  • When adding an index library, note the following fields:
    1. name: the name given to the core
    2. instanceDir: the same as the name of the core folder we created under SOLR_HOME when configuring Solr for Tomcat;
    3. dataDir: when you confirm Add Core, a folder named data is generated in the new core's directory
    4. config: the config file under the new core's conf directory (solrconfig.xml)
    5. schema: the schema file under the new core's conf directory (schema.xml)

When you confirm Add Core, the data folder is generated under the new core's directory, along with the core.properties file.
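For reference, the generated core.properties is tiny; on a typical install it looks something like this (the name and timestamp below are example values — yours will reflect the core you created):

```properties
#Written by CorePropertiesLocator
#Mon Nov 02 10:15:49 CST 2020
name=new_core
```

All other settings (dataDir, config, schema) fall back to defaults unless you add them here explicitly.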

2.3.5 Adding an index library (core)

  • On Windows:
  1. Open a DOS command window, switch to ${solr.home}\bin, type: solr create -c corename, and press Enter;
  2. Open the Solr installation directory: a new folder corename will appear under /server/solr.
  3. Open the browser and enter the Solr address http://localhost:8983/solr; you will see the new core.
  • On Linux:
  1. Create a folder named newCore under /server/solr as the new core.
  2. Find the conf folder under /server/solr/configsets/_default and copy it into the /server/solr/newCore directory.
  3. Then follow the figure below.
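The two Linux steps above can be sketched as shell commands. The paths are assumptions: the temp directory below stands in for your real Solr home (e.g. /server/solr), where configsets/_default/conf already exists in a real install.

```shell
# Stand-in for the real Solr home directory (adjust to your install, e.g. /server/solr)
SOLR_HOME=$(mktemp -d)
# In a real install this directory already exists; we create it here only for the sketch
mkdir -p "$SOLR_HOME/configsets/_default/conf"

# Step 1: create the new core's folder
mkdir -p "$SOLR_HOME/newCore"
# Step 2: copy the default config set into the new core
cp -r "$SOLR_HOME/configsets/_default/conf" "$SOLR_HOME/newCore/"

ls "$SOLR_HOME/newCore"
```

After this, the core still has to be registered through the admin UI (Add Core) or by restarting Solr, as the figure shows.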

2.3.6 Java Properties

  • As the name implies, it shows the Java system properties and related configuration:

2.3.7 Thread Dump

2.4 Core Selector

  • The Core Selector is where you manipulate and analyze the data in Solr. This chapter explains each menu of this module:

  • After clicking the specific Core, the following interface appears:

  • Select the core. As you can see, house_core is a core we defined ourselves. Each core in Solr represents an index library, which contains index data and its configuration information.

Solr can have multiple cores, that is, manage multiple index libraries at the same time, just as MySQL can have multiple databases. We can use Solr's syntax to access the data in a core; data in a Solr core can be added, deleted, queried, and modified. Examples will be given in the next section.

  • Overview: lets you view information about the running instance.

  • Analysis: lets us analyze how a statement is parsed, in real time. For example, with the ik analyzer, the text "I am Chinese" is segmented as shown in the figure below:
  • Dataimport (importing data from the database): if the related configuration is in place, you can use this page to import data from a database. Interested readers can look it up; we will not dwell on it here.
    • Command options: full_import: full import; delta_import: incremental import.

Delta-import imports rows that have been added or modified in the source (a database, possibly a file, etc.). The key mechanism is the dataimport.properties file generated under solr.home/conf on each import, which records information about the last import. The file looks like this:

    #Tue Jul 19 10:15:50 CST 2016
    advertiser.last_index_time=2016-07-19 10:15:49
    last_index_time=2016-07-19 10:15:49

last_index_time is the time of the last index run (full-import or delta-import); comparing it with the TIMESTAMP column of the database table shows which rows were changed or added afterwards. The other options are:

  • Verbose: verbose output.
  • Clean: whether to delete the existing index before the build starts. Default: true.
  • Commit: whether to commit the index when the build finishes. Default: true.
  • Optimize: whether to optimize the index after it is finished. Default: true.
  • Debug: whether to run in debug mode, suited to interactive development. Note that in debug mode there is no automatic commit by default, so add the parameter commit=true.
  • Entity: an entity is a tag under document in data-config.xml. This parameter optionally executes one or more entities; passing multiple Entity parameters runs multiple entities simultaneously. If unset, all entities run.
  • Start, Rows: paging.
  • Custom Parameters: import conditions.
  • Execute: perform the import.
  • Refresh Status: changes are only displayed after a refresh. If the count is still 0 after refreshing, the data has not been imported.
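To make the entity/deltaQuery discussion above concrete, here is a minimal data-config.xml sketch. The JDBC details, table, and column names are hypothetical; only the ${dataimporter.last_index_time} variable and the overall structure come from the Data Import Handler itself:

```xml
<dataConfig>
  <!-- Placeholder JDBC settings: adjust to your own database -->
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/house_db"
              user="root" password="****"/>
  <document>
    <!-- query: used by full-import; deltaQuery: finds rows changed since the last run -->
    <entity name="advertiser"
            query="SELECT id, company_name FROM advertiser"
            deltaQuery="SELECT id FROM advertiser
                        WHERE update_time &gt; '${dataimporter.last_index_time}'">
      <field column="id" name="id"/>
      <field column="company_name" name="companyName"/>
    </entity>
  </document>
</dataConfig>
```

A real setup also needs the DIH request handler registered in solrconfig.xml and usually a deltaImportQuery; consult the Solr reference guide for the full configuration.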

  • Documents: index operations, such as adding, modifying, and deleting documents. (companyName below is an example field.)
  1. Add the related field to the schema.xml under your core's conf directory (e.g., D:\solr_home\mycore1\conf):
    	<field name="companyName" type="text_ik" indexed="false" stored="true" multiValued="false" />

    Newer versions of Solr may not have a schema.xml; in that case look for the managed-schema file in the conf directory of the core you want to configure.

  2. Use Core Admin -> Reload to reload the core (or restart the Solr server) to refresh the schema:
  3. On the following page, select /update as the request handler, choose JSON as the document format, and submit; the index is updated. Updates and additions go through /update, and deletions go through /update with a delete command. After success, we can find the data we just added on the Query page.

> If advertiserId and advertiserType are not configured with fields, you can configure them according to later tutorials or remove them and try to add, modify, or delete them.
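As a concrete sketch of step 3 (the values are invented; companyName is the field added above, and advertiserId/advertiserType are the fields the note mentions), the JSON submitted on the Documents page for an add/update might be:

```json
{
  "id": "1001",
  "companyName": "Example Trading Co.",
  "advertiserId": 42,
  "advertiserType": 1
}
```

A delete is submitted through the same /update handler with a body such as {"delete": {"id": "1001"}}.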

  • File: displays the core's configuration files stored on the server.
  • The Documents page offers the following options:
    • Request-Handler (qt): the operation to perform (/update, etc.)
    • Document Type: the document format, such as JSON or XML
    • Document(s): the document content, entered by hand
    • Commit Within: commit the update within this many milliseconds
    • Overwrite: if true, a document with the same id overwrites the previous one; if false, a repeated id does not overwrite the previous value
    • Boost: an index-time weighting (boost) factor

  • Ping: network latency; click it to see whether the core is still alive and what its response time is.

    Clicking the Ping button sends a request to the corresponding core and gets a reply. The corresponding HTTP API is /admin/ping (http://localhost:8983/solr/<core>/admin/ping); the back-end processing logic lives in the PingRequestHandler class.

  • Plugins/Stats: Information and statistics on some of Solr’s built-in plug-ins and our installed plug-ins. The Plugins page displays statistics and status information for various plug-ins running under Core. For example, you can find Solr Cache performance statistics, Solr Searcher status, Request Handler and Search Component configuration information, and more.
  • Query: A simple structured Query tool, described in the next section
  • Replication: displays the replication status of the current core, with disable/enable controls. In a standalone environment you can configure replication between instances; replicating data between instances ensures data availability and improves the system's service capacity.

    SolrCloud provides its own pages for managing and viewing replicas and has replaced most of this page's functions, so the standalone version rarely needs it. (If you use SolrCloud, do not use the Disable Replication function on this page.)

  • Schema: displays the core's schema data. If a managed schema is used, you can modify, add, and delete schema fields on this page (a practical improvement over Solr 4).
  • Segments info: displays information about the underlying Lucene index segments, including each segment's size (in bytes and number of documents) and other basic metadata, most notably the number of deleted documents. Hover over a segment to see more details. This information helps administrators tune performance and optimize the merge settings for the data set's segments.

2.4 Query details

  • q: the query string (required). *:* matches everything; otherwise query by keyword.
  • fq: filter query. Filtering makes full use of the filter-query cache and improves retrieval performance. For example, q=mm&fq=date_time:[20081001 TO 20091031] finds the keyword mm with date_time between 20081001 and 20091031.
  • sort: sorting, in the form "field direction"; for example, advertiserId desc sorts the results by id in descending order.
  • start, rows: the offset and number of rows to display (paging).
  • fl: field list; which fields the query returns. Separate multiple fields with spaces or commas; if unspecified, all fields are returned by default.
  • df: the default query field.
  • Raw Query Parameters: extra raw parameters appended to the request.
  • wt: writer type, the format of the query output. json and xml are the most common; the output formats defined in solrconfig.xml are xml, json, python, ruby, php, phps, and custom.
  • indent: whether to indent the returned results. Off by default; enabled with indent=true|on. It is all but necessary when debugging json, php, phps, or ruby output.
  • debugQuery: whether to include debugging information in the returned result.
  • dismax / edismax: alternative query parsers.
  • hl: highlighting. hl=true turns highlighting on.
    • hl.fl: a space- or comma-separated list of fields to highlight. To highlight a field, make sure it is stored in the schema. If this parameter is not given, the default fields are highlighted: the standard handler uses the df parameter and dismax uses the qf parameter. You can use an asterisk to conveniently highlight all fields; if you use wildcards, consider enabling hl.requireFieldMatch.
    • hl.simple.pre / hl.simple.post: the markup placed before/after each highlighted term.
    • hl.requireFieldMatch: if set to true, a field is only highlighted when the query actually matched that field. The default is false, which means the query may match one field but highlight a different one. Enable it if hl.fl uses wildcards; however, if your query targets an "all" field (possibly built with the copyField directive), set it to false so that the search results still indicate in which field the query text was found.
    • hl.usePhraseHighlighter: if the query contains a phrase (enclosed in quotes), guarantee that the phrase must match exactly before it is highlighted.
    • hl.highlightMultiTerm: with wildcard and fuzzy searches, ensure that the terms matching the wildcard are highlighted. The default is false, and hl.usePhraseHighlighter should be true.
  • facet: faceting; group statistics computed on facet fields while searching for keywords.
    • facet.query: a more flexible facet using syntax similar to the filter query; it allows arbitrary filtering on any field.
    • facet.field: the field(s) for which group statistics are computed (may be repeated).
    • facet.prefix: a prefix restricting the facet field's values, e.g. facet.field=cpu&facet.prefix=Intel.
  • spatial / spellcheck: geospatial search and spelling suggestions.
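Putting several of these parameters together, a hand-built query URL might look like the following (the core and field names are this tutorial's examples — substitute your own):

```text
http://localhost:8983/solr/house_core/select
    ?q=companyName:mm
    &fq=advertiserId:[1 TO 100]
    &sort=advertiserId desc
    &start=0&rows=10
    &fl=id,companyName
    &wt=json&indent=true
    &hl=true&hl.fl=companyName
```

In practice the whole URL goes on one line; it is broken up here only for readability.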

2.5 Using Solr Configuration files

  • schema.xml: the basic purpose of this configuration file is to tell Solr how to build indexes. In older versions it was called schema.xml; in newer versions of Solr it is called managed-schema. Pay attention to the distinction.

2.6 Word segmentation Configuration of Solr
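As a hedged sketch (this section's body is missing here), configuring a Chinese analyzer such as ik typically means declaring a field type in schema.xml/managed-schema and using it on your fields. The IKTokenizerFactory class name below assumes the ik-analyzer plug-in jar has been placed on Solr's classpath:

```xml
<!-- Assumes the ik-analyzer jar has been copied into Solr's webapp WEB-INF/lib -->
<fieldType name="text_ik" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="false"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="true"/>
  </analyzer>
</fieldType>

<field name="companyName" type="text_ik" indexed="true" stored="true"/>
```

Once the core is reloaded, the Analysis page (section 2.4) can be used to verify how text is segmented by the new field type.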

2.7 Solr Positioning Configuration

3. Solr backend integration and basic use

3.1 Introducing Solr into a SpringBoot project

  1. Add the dependency to the project's pom.xml (Maven):
     <dependency>
         <groupId>org.apache.solr</groupId>
         <artifactId>solr-solrj</artifactId>
         <version>8.5.2</version>
     </dependency>

    Solr supports RESTful-style HTTP requests, so once deployed we can access it directly like any normal request. Introducing SolrJ lets us operate Solr in an object-oriented way through methods the library encapsulates. Note that different SolrJ versions may differ in their query methods.

  2. Add configuration to application.yml:

        solr:
          url: http://IP:8983/solr   # Solr connection address
          core_house: xxxx           # core name: pass whatever core you created
          initialSize: 5             # initial connections
          maxIdleSize: 5             # maximum idle connections
          maxActiveSize: 10          # maximum active connections
          connTimeOut: 60            # wait time
  3. Introducing configuration classes:
    1. Kotlin code:
      import gw.commonkotlin.exception.BaseException
      import org.apache.solr.client.solrj.impl.HttpSolrClient
      import org.slf4j.LoggerFactory
      import org.springframework.beans.factory.annotation.Value
      import org.springframework.stereotype.Service
      import java.util.Vector
      import javax.annotation.PostConstruct

      /**
       * A simple Solr connection pool.
       * @author zhanglei
       * @since 2020/11/2
       */
      @Service
      class SolrConfig {
          // Idle connections
          private val freeConnection: MutableList<HttpSolrClient> = Vector()

          // Active connections
          private val activeConnection: MutableList<HttpSolrClient> = Vector()

          // Solr connection address
          @Value("\${solr.url:}")
          private val url: String? = null

          // Initial number of connections
          @Value("\${solr.initialSize:1}")
          private val initialSize = 0

          // Maximum number of idle connections
          @Value("\${solr.maxIdleSize:1}")
          private val maxIdleSize = 0

          // Maximum number of active connections
          @Value("\${solr.maxActiveSize:1}")
          private val maxActiveSize = 0

          // Wait time (ms)
          @Value("\${solr.connTimeOut:30}")
          private val connTimeOut = 0

          // Core name
          @Value("\${solr.core_house:}")
          private val coreHouse: String? = null

          private val lock = Object()

          // Initialization: pre-create the initial connections
          @PostConstruct
          fun init() {
              try {
                  for (i in 0 until initialSize) {
                      freeConnection.add(newConnection())
                  }
              } catch (e: Exception) {
                  logger.error("Solr initialization failed", e)
                  throw BaseException("Failed to initialize Solr, please check configuration parameters!")
              }
          }

          // Create a new connection
          private fun newConnection(): HttpSolrClient {
              val client: HttpSolrClient = try {
                  HttpSolrClient.Builder("$url/$coreHouse").build()
              } catch (e: Exception) {
                  logger.error("Failed to create Solr client!", e)
                  throw BaseException("Failed to create Solr client!")
              }
              connCount++
              return client
          }

          // Obtain a usable connection
          fun connection(): HttpSolrClient {
              var connection: HttpSolrClient? = null
              try {
                  if (connCount < maxActiveSize) {
                      connection = if (freeConnection.size > 0) {
                          // Reuse an idle connection
                          freeConnection.removeAt(0)
                      } else {
                          // Create a new connection
                          newConnection()
                      }
                      if (availableFlag(connection)) {
                          activeConnection.add(connection)
                      } else {
                          connCount--
                          connection = connection()
                      }
                  } else {
                      // At the maximum active count: wait, then try again recursively
                      synchronized(lock) {
                          lock.wait(connTimeOut.toLong())
                      }
                      connection = connection()
                  }
              } catch (e: Exception) {
                  e.printStackTrace()
              }
              return connection!!
          }

          // Check whether a connection is usable
          fun availableFlag(connection: HttpSolrClient?): Boolean = connection != null

          // Return a connection to the pool
          fun close(connection: HttpSolrClient) {
              try {
                  if (availableFlag(connection)) {
                      // Check whether the number of idle connections exceeds the maximum
                      if (freeConnection.size < maxIdleSize) {
                          freeConnection.add(connection)
                      } else {
                          // The idle list is full; really close the connection
                          connection.close()
                          connCount--
                      }
                      activeConnection.remove(connection)
                      synchronized(lock) { lock.notifyAll() }
                  }
              } catch (e: Exception) {
                  logger.error("Error releasing connection", e)
                  throw BaseException("Error releasing connection!")
              }
          }

          companion object {
              private val logger = LoggerFactory.getLogger(SolrConfig::class.java)

              // Total number of connections created
              private var connCount = 0
          }
      }
    2. Java version code:
      import java.util.List;
      import java.util.Vector;
       
      import org.apache.solr.client.solrj.impl.HttpSolrClient;
       
      /** * solr connection pool **@author a_bo
       * @versionCreation time: Apr. 1, 2019 10:24:05 */
      public class SolrConnectionPool {
       
      	// Set of free connections
      	private List<HttpSolrClient> freeConnection = new Vector<HttpSolrClient>();
      	// Active join collection
      	private List<HttpSolrClient> activeConnection = new Vector<HttpSolrClient>();
      	// Record the total number of connections
      	private static int connCount;
      	// Solr connection address
      	private String url;
      	// Initialize the connection number
      	private int initialSize;
      	// Maximum number of idle connections
      	private int maxIdleSize;
      	// Maximum number of active connections
      	private int maxActiveSize;
      	// Wait time
      	private int connTimeOut;
       
      	public static int getConnCount(a) {
      		return connCount;
      	}
       
      	public static void setConnCount(int connCount) {
      		SolrConnectionPool.connCount = connCount;
      	}
       
      	public String getUrl(a) {
      		return url;
      	}
       
      	public void setUrl(String url) {
      		this.url = url;
      	}
       
      	public int getInitialSize(a) {
      		return initialSize;
      	}
       
      	public void setInitialSize(int initialSize) {
      		this.initialSize = initialSize;
      	}
       
      	public int getConnTimeOut(a) {
      		return connTimeOut;
      	}
       
      	public void setConnTimeOut(int connTimeOut) {
      		this.connTimeOut = connTimeOut;
      	}
       
      	public int getMaxIdleSize(a) {
      		return maxIdleSize;
      	}
       
      	public void setMaxIdleSize(int maxIdleSize) {
      		this.maxIdleSize = maxIdleSize;
      	}
       
      	public int getMaxActiveSize(a) {
      		return maxActiveSize;
      	}
       
      	public void setMaxActiveSize(int maxActiveSize) {
      		this.maxActiveSize = maxActiveSize;
      	}
       
      	/ / initialization
      	public void init(a) {
      		try {
      			for (int i = 0; i < initialSize; i++) {
      				HttpSolrClient newConnection = newConnection();
      				if(newConnection ! =null) {
      					// Add to free connection...freeConnection.add(newConnection); }}}catch (Exception e) {
      			e.getStackTrace();
      			throw new RuntimeException("Failed to initialize Solr, please check configuration parameters!"); }}// Create a new Connection
      	private HttpSolrClient newConnection(a) {
      		HttpSolrClient client = null;
      		try {
      			HttpSolrClient.Builder builder = new HttpSolrClient.Builder(url);
      			client = builder.build();
      		} catch (Exception e) {
      			e.getStackTrace();
      			throw new RuntimeException("Failed to create Solr client!");
      		}
      		connCount++;
      		return client;
      	}
       
      	public HttpSolrClient getConnection(a) {
      		HttpSolrClient connection = null;
      		try {
      			if (connCount < maxActiveSize) {
      				// Active connections can be used
      				if (freeConnection.size() > 0) {
      					connection = freeConnection.remove(0);
      				} else {
      					// Create new connections
      					connection = newConnection();
      				}
      				if (isAvailable(connection)) {
      					activeConnection.add(connection);
      				} else{ connCount--; connection = getConnection(); }}else {
      				synchronized (this) {
      					// If the value is greater than the maximum active connection, wait to get the connection again
      					wait(connTimeOut);
      				}
      				connection = getConnection();// Recursively call the getConnection method}}catch (Exception e) {
      			e.printStackTrace();
      		}
      		return connection;
      	}
       
	// Determine whether the connection is available
	public boolean isAvailable(HttpSolrClient connection) {
		return connection != null;
	}
       
      	// Close the connection
      	public void close(HttpSolrClient connection) {
      		try {
      			if (isAvailable(connection)) {
      				// Check whether the number of free connections exceeds the maximum number
      				if (freeConnection.size() < maxIdleSize) {
      					freeConnection.add(connection);
      				} else {
      					// The number of free connections is full
      					connection.close();
      					connCount--;
      				}
      				activeConnection.remove(connection);
				synchronized (this) {
					notifyAll();
				}
			}
		} catch (Exception e) {
			e.printStackTrace();
			throw new RuntimeException("Error closing connection!");
		}
	}
}
    > If you are using Java, use this connection pool. The two versions of the code may differ in a few places; those differences come from the language and can be ignored.
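For reference, the same borrow/return idea can be sketched with the JDK's `BlockingQueue`, which implements the timed-wait behaviour that the pool above codes by hand with `wait()`/`notifyAll()`. This is an illustrative alternative, not the article's pool; `Connection` is a stand-in for `HttpSolrClient` so the sketch stays self-contained.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class SimplePool {
    // Stand-in for HttpSolrClient so the sketch is self-contained
    static class Connection {}

    private final BlockingQueue<Connection> free;

    public SimplePool(int size) {
        free = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            free.add(new Connection());
        }
    }

    // Blocks up to timeoutMs, like wait(connTimeOut) in the pool above;
    // returns null when no connection becomes free in time
    public Connection borrow(long timeoutMs) {
        try {
            return free.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    // Return a connection to the pool, like close() above
    public void release(Connection c) {
        free.offer(c);
    }
}
```

`ArrayBlockingQueue` already serialises borrowers and wakes waiters when a connection is released, so no explicit synchronisation is needed.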

3.2 Basic query methods: Add, delete, search and modify

  • The test code is as follows:
    	
    import com.yzz.house.config.SolrConfig;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.beans.Field;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.client.solrj.response.UpdateResponse;
    import org.apache.solr.common.SolrDocument;
    import org.apache.solr.common.SolrDocumentList;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;
    import java.io.IOException;
    import java.io.Serializable;
    
    /**
     * @author CSDN-dark
     */
    public class Student implements Serializable {
    
        @Field("name")
        private String name;
    
        @Field("age")
        private Integer age;
    
        public String getName() {
            return name;
        }
    
        public void setName(String name) {
            this.name = name;
        }
    
        public Integer getAge() {
            return age;
        }
    
        public void setAge(Integer age) {
            this.age = age;
        }
    }

    @Service
    class Test {
    
        @Autowired
        private SolrConfig solrConfig;
    
        /**
         * Test add/modify.
         * If no document with this ID exists it is added; if one exists it is modified.
         *
         * @return Boolean success status
         * @throws IOException
         * @throws SolrServerException
         */
        public Boolean testInsert() throws IOException, SolrServerException {
            Student student = new Student();
            student.setName("Zhang");
            student.setAge(15);
            HttpSolrClient connection = solrConfig.connection();
            connection.addBean(student);
            connection.commit();
            solrConfig.close(connection);
            return true;
        }
    
        /**
         * Test delete.
         *
         * @return Boolean whether the deletion succeeded
         * @throws IOException
         * @throws SolrServerException
         */
        public Boolean testDelete() throws IOException, SolrServerException {
            HttpSolrClient connection = solrConfig.connection();
            UpdateResponse updateResponse = connection.deleteById("1");
            connection.commit();
            solrConfig.close(connection);
            return true;
        }
    
        /**
         * Test query by ID
         */
        public SolrDocument testGetById() throws IOException, SolrServerException {
            HttpSolrClient connection = solrConfig.connection();
            SolrDocument solrDocument = connection.getById("1");
            return solrDocument;
        }
    
        /**
         * Test conditional query
         * @return query results
         */
        public SolrDocumentList testQuery() throws IOException, SolrServerException {
            HttpSolrClient connection = solrConfig.connection();
            SolrQuery solrQuery = new SolrQuery();
            solrQuery.set("q", "*:*");
            QueryResponse query = connection.query(solrQuery);
            SolrDocumentList results = query.getResults();
            return results;
        }
    }

    The flow above: first, configure the index fields in the Solr schema file, then define a field object that matches them. Assign values to this object, save or update it to Solr, and finally release the connection. Our own SolrConfig can be set up with or without connection pooling. The code here is just a few examples; if you can follow it, you are almost ready to use Solr. In your projects you can use the demo as-is or wrap it in your own style. Examples of the blogger's own SolrJ wrapper code are shared below.

3.3 Precautions

  1. When adding or modifying objects with SolrJ, annotate each field of the transferred object with @Field; search online for detailed usage.
  2. Each @Field variable must correspond to a field defined in the Solr index, matching both the field name and the type.
  3. The primary key ID must be a string; SolrJ's getById() and deleteById() methods both operate by ID.
  4. Connection setup and available methods may differ between SolrJ versions; adapt to the version you actually use.
  5. SolrJ's addBean() method both adds and modifies: if no document with this ID exists it is added; if one exists it is modified.
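Point 5 can be pictured with plain `Map` semantics: `addBean()` behaves like an upsert keyed by the ID field. The sketch below only illustrates that behaviour, not Solr's internals.

```java
import java.util.HashMap;
import java.util.Map;

public class UpsertSketch {
    // Model the index as id -> document; put() is add-or-modify, like addBean()
    static final Map<String, String> index = new HashMap<>();

    public static void save(String id, String doc) {
        index.put(id, doc);
    }
}
```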

Solr Advanced Syntax

4.1 Word Segmentation

  • Solr ships with a built-in text tokenizer that splits a sentence into individual words. It works reasonably for English, but it is not suited to Chinese text, so we need a dedicated Chinese segmenter. Solr does bundle one, but the blogger recommends the IK analyzer: it segments Chinese better and supports extensibility features such as sensitive-word filtering and custom vocabularies.
  • Introduction to the IK analyzer
  • IK analyzer installation steps:
    1. Download the IK analyzer resources: github.com/magese/ik-a…
    2. Upload the ik-analyzer-solr7-7.x.jar package to $SOLR_INSTALL_HOME/server/solr-webapp/webapp/WEB-INF/lib
    3. Create the classes directory under $SOLR_INSTALL_HOME/server/solr-webapp/webapp/WEB-INF and upload the resource files from the IK analyzer to this directory, as shown in the figure below:

      • ext.dic: a custom word dictionary. For example, a piece of internet slang may not be a standard Chinese word, but adding it here makes IK segment it as one word.
      • ikanalyzer.cfg.xml: the IK configuration file; leave it unchanged.
      • Jar: the jar package to import in order to use IK segmentation.

    4. Edit managed-schema and register the IK field type. The useSmart switch controls segmentation granularity:
      • useSmart=false: fine-grained segmentation, which produces many terms per sentence; at index time we want broad term coverage, so the index analyzer typically uses false.
      • useSmart=true: smart (coarse) segmentation, which produces fewer terms; a user query does not need to be shredded into every possible term, so the query analyzer typically uses true.
    5. Restart Solr and test it on the admin interface, as shown in the figure:
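For reference, the IK field type registered in managed-schema generally looks like the snippet below (adapted from the ik-analyzer-solr project's documentation; the type name text_ik is conventional, not required). Note the two useSmart values, fine-grained for indexing and smart for querying:

```xml
<!-- IK analyzer: fine-grained segmentation for indexing, smart segmentation for querying -->
<fieldType name="text_ik" class="solr.TextField">
	<analyzer type="index">
		<tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="false"/>
	</analyzer>
	<analyzer type="query">
		<tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="true"/>
	</analyzer>
</fieldType>
```

Fields that should be segmented with IK then declare type="text_ik" in their field definition.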

  • Steps to use Solr's bundled Chinese tokenizer:
  1. Copy the tokenizer jar into the solr-webapp directory:
    cp $SOLR_INSTALL_HOME/contrib/analysis-extras/lucene-libs/lucene-analyzers-smartcn-7.7.2.jar $SOLR_INSTALL_HOME/server/solr-webapp/webapp/WEB-INF/lib/
  2. Edit the $SOLR_INSTALL_HOME/server/solr/my_core/conf/managed-schema file and add the following at the bottom of the file:
    <!-- Chinese tokenizer smartcn bundled with Solr -->
    <fieldType name="text_smartcn" class="solr.TextField" positionIncrementGap="0">
    	<analyzer type="index">
    		<tokenizer class="org.apache.lucene.analysis.cn.smart.HMMChineseTokenizerFactory"/>
     	</analyzer>
    	<analyzer type="query">
    		<tokenizer class="org.apache.lucene.analysis.cn.smart.HMMChineseTokenizerFactory"/>
    	</analyzer>
    </fieldType>
  3. Restart Solr and test it on the interface, as shown in the figure:

4.2 Highlighting

4.3 Grouping

4.4 Statistics

4.5 Other Syntax

Solr Optimization

5.1 Solr Query Connection Pool

  • Refer to section 3.1

5.2 Solr Query Encapsulation (Kotlin)

  • SolrApi :
import com.yzz.house.solr.service.SolrService
import com.yzz.house.solr.service.SolrWrapper
import gw.commonkotlin.respond.PageResult
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.stereotype.Controller

@Controller
class SolrApi {

    @Autowired
    private lateinit var solrService: SolrService

    /** * Query list (paged) */
    fun <T> list(solrWrapper: SolrWrapper, clazz: Class<T>): PageResult<T> {
        return solrService.list(solrWrapper, clazz)
    }

    /**
     * Query a single document
     */
    fun <T> getById(id: String, clazz: Class<T>): T? {
        return solrService.getById(id, clazz)
    }

    /** * Add or modify */
    fun saveOrUpdate(field: Any): Boolean {
        return solrService.saveOrUpdate(field)
    }

    /** * Batch add or modify */
    fun saveOrUpdateBatch(fields: List<Any>): Boolean {
        return solrService.saveOrUpdateBatch(fields)
    }

    /** * delete */
    fun removeById(id: String): Boolean {
        return solrService.removeByIds(listOf(id))
    }

    /** * Delete in batches */
    fun removeByIds(ids: List<String>): Boolean {
        return solrService.removeByIds(ids)
    }

    /** * Delete based on search criteria */
    fun remove(solrWrapper: SolrWrapper): Boolean {
        return solrService.remove(solrWrapper)
    }

    /** * Delete based on search criteria */
    fun remove(query: String): Boolean {
        return solrService.remove(query)
    }
}
  • SolrService :

import com.xxx.house.config.SolrConfig
import com.xxx.house.solr.SolrApi
import gw.commonkotlin.respond.PageResult
import gw.commonkotlin.utils.issEmpty
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.stereotype.Service

/**
 * @author Anyu-csdn
 * @date 2020/11/19
 */
@Service
class SolrService : AbstractSolrService() {

    private val logger: Logger = LoggerFactory.getLogger(SolrService::class.java)

    @Autowired
    private lateinit var solrConfig: SolrConfig

    fun <T> list(solrWrapper: SolrWrapper, clazz: Class<T>): PageResult<T> {
        // 1. Establish a connection using the previous connection pool
        val connection = solrConfig.connection()
        // 2. Create query request parameters and query
        val query = connection.query(solrWrapper.buildSolrQuery())
        // 3. Get the result
        val results = query.results
        // 4. Close or return the connection to the thread pool
        solrConfig.close(connection)
        return PageResult(query.getBeans(clazz), results.numFound)
    }

    fun <T> getById(id: String, clazz: Class<T>): T? {
        return try {
            // 1. Establish a connection
            val connection = solrConfig.connection()
            // 2. Query information
            val solrDocument = connection.getById(id)
            solrConfig.close(connection)
            // 3. Encapsulate
            getBean(solrDocument, clazz)
        } catch (e: Exception) {
            logger.error("Query failed", e)
            null
        }
    }

    fun saveOrUpdate(field: Any): Boolean {
        return try {
            // 0. Check parameters
            if (field.issEmpty()) return false
            // 1. Establish a connection
            val connection = solrConfig.connection()
            // 2. Add or modify the configuration
            connection.addBean(field)
            / / 3. Submit
            connection.commit()
            // 4. Close the connection or return it to the connection pool
            solrConfig.close(connection)
            true
        } catch (e: Exception) {
            logger.error("Solr added or modified failed", e)
            false
        }
    }

    fun saveOrUpdateBatch(fields: List<Any>): Boolean {
        return try {
            // 0. Check parameters
            if (fields.issEmpty()) return false
            // 1. Establish a connection
            val connection = solrConfig.connection()
            // 2. Add or modify the configuration
            connection.addBeans(fields)
            / / 3. Submit
            connection.commit()
            // 4. Close the connection or return it to the connection pool
            solrConfig.close(connection)
            true
        } catch (e: Exception) {
            logger.error("Solr added or modified failed", e)
            false
        }
    }

    fun removeByIds(ids: List<String>): Boolean {
        return try {
            // 0. Check parameters
            if (ids.issEmpty()) return false
            // 1. Establish a connection
            val connection = solrConfig.connection()
            // delete data
            connection.deleteById(ids)
            // 3. Confirm submission
            connection.commit()
            // 4. Close the connection or return it to the connection pool
            solrConfig.close(connection)
            true
        } catch (e: Exception) {
            logger.error("Delete failed", e)
            false
        }
    }

    /**
     * Delete based on search criteria
     */
    fun remove(solrWrapper: SolrWrapper): Boolean {
        return try {
            // 0. Check parameters
            if (solrWrapper.issEmpty()) return false
            // 1. Establish a connection
            val connection = solrConfig.connection()
            // delete data
            connection.deleteByQuery(solrWrapper.buildSolrQuery().query)
            // 3. Confirm submission
            connection.commit()
            // 4. Close the connection or return it to the connection pool
            solrConfig.close(connection)
            true
        } catch (e: Exception) {
            logger.error("Delete failed", e)
            false
        }
    }

    /**
     * Delete based on search criteria
     */
    fun remove(query: String): Boolean {
        return try {
            // 0. Check parameters
            if (query.issEmpty()) return false
            // 1. Establish a connection
            val connection = solrConfig.connection()
            // delete data
            connection.deleteByQuery(query)
            // 3. Confirm submission
            connection.commit()
            // 4. Close the connection or return it to the connection pool
            solrConfig.close(connection)
            true
        } catch (e: Exception) {
            logger.error("Delete failed", e)
            false
        }
    }
}
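One caveat worth noting in the service above: `solrConfig.close(connection)` sits inside the try block, so when the query or commit throws, the connection is never returned to the pool. Wrapping the work in try/finally (or Kotlin's `use`) guarantees the return. A minimal stdlib-only Java sketch of the pattern, with a counter standing in for the pool (all names here are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LeakFreeSketch {
    // Counts connections currently borrowed; stands in for the pool state
    static final AtomicInteger borrowed = new AtomicInteger();

    static String borrow() {
        borrowed.incrementAndGet();
        return "connection";
    }

    static void release(String conn) {
        borrowed.decrementAndGet();
    }

    public static boolean doWork(boolean fail) {
        String conn = borrow();
        try {
            if (fail) {
                throw new RuntimeException("simulated Solr error");
            }
            return true;
        } catch (RuntimeException e) {
            return false;
        } finally {
            release(conn); // always returned, even when the work throws
        }
    }
}
```

Whether the work succeeds or fails, the borrowed count returns to zero.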
  • SolrWrapper :

import com.yzz.house.dto.OrderBy
import gw.commonkotlin.utils.gtZero
import gw.commonkotlin.utils.issEmpty
import gw.commonkotlin.utils.issNotEmpty
import org.apache.commons.lang3.StringUtils
import org.apache.solr.client.solrj.SolrQuery
import org.slf4j.LoggerFactory.getLogger
import java.math.BigDecimal
import java.util.*
import java.util.stream.Collectors


/**
 * @author anyu-csdn
 * @date 2020/11/19
 * @desc Solr query builder, similar to a simplified MyBatis-Plus, built on top of SolrJ.
 */
class SolrWrapper : AbstractSolrService() {

    /**
     * Supported query features: AND, OR, sorting, range queries, and paging.
     */
    private val logger = getLogger(SolrWrapper::class.java)

    // Data set for eq (equals) conditions
    private var eqMap = IdentityHashMap<String, Any>()

    // Data set for range conditions
    private var rgMap = IdentityHashMap<String, String>()

    // Data set for or conditions
    private var orMap = IdentityHashMap<String, Any>()

    // Sort order
    private var orderMap = IdentityHashMap<String, SolrQuery.ORDER>()

    // Fields to return, for field-limited queries
    private var selectSet = hashSetOf<String>()

    // Geo query field (latitude and longitude)
    private var solrGeo: SolrGeo? = null

    private var pageIndex: Int = 0

    private var pageSize: Int = 10

    private var q: String = ""


    /** * default custom query q */
    fun q(query: String): SolrWrapper {
        q = query
        return this
    }

    /** * default custom query q */
    fun q(boolean: Boolean, query: String): SolrWrapper {
        if (boolean) {
            q = query
        }
        return this
    }

    /** * matches the query :equals */
    fun eq(key: String, value: Any = "*"): SolrWrapper {
        eqMap[key] = value
        return this
    }

    /** * or query */
    fun or(key: String, value: Any = "*"): SolrWrapper {
        orMap[key] = value
        return this
    }

    /** * or query: or */
    fun or(boolean: Boolean, key: String, value: Any = "*"): SolrWrapper {
        if (boolean) {
            orMap[key] = value
        }
        return this
    }

    fun orList(boolean: Boolean, key: String = "*", values: List<Any> = listOf("*")): SolrWrapper {
        if (boolean) {
            values.forEach {
                orMap[key] = it
            }
        }
        return this
    }


    /** * matches the query :equals */
    fun eq(boolean: Boolean, key: String, value: Any = "*"): SolrWrapper {
        if (boolean) {
            eqMap[key] = value
        }
        return this
    }

    /** * match query: pass in list */
    fun eqList(boolean: Boolean, key: String = "*", values: List<Any> = listOf("*")): SolrWrapper {
        if (boolean) {
            val stringBuffer = StringBuffer()
            values.forEach {
                val value = if (StringUtils.isEmpty(it.toString())) "*" else it.toString()
                if (stringBuffer.isEmpty()) stringBuffer.append(value) else stringBuffer.append(" OR $value")
            }
            if (StringUtils.isNotEmpty(stringBuffer.toString())) eqMap[key] = "($stringBuffer)"
        }
        return this
    }


    /** ** paging query: page */
    fun pg(boolean: Boolean, pageIndex: Int, pageSize: Int): SolrWrapper {
        if (boolean) {
            this.pageIndex = pageIndex
            this.pageSize = pageSize
        }
        return this
    }

    /** ** paging query: page */
    fun pg(pageIndex: Int, pageSize: Int): SolrWrapper {
        this.pageIndex = pageIndex
        this.pageSize = pageSize
        return this
    }

    /** * range query: range */
    fun rg(boolean: Boolean = true, key: String, start: Any = "*", end: Any = "*"): SolrWrapper {
        if (boolean) {
            rgMap[key] = "[$start TO $end]"
        }
        return this
    }

    /** * range query: range */
    fun rg(key: String, start: Any = "*", end: Any = "*"): SolrWrapper {
        rgMap[key] = "[$start TO $end]"
        return this
    }

    /** * range query: range */
    fun rgList(boolean: Boolean, key: String, values: List<String> = listOf("* TO *")): SolrWrapper {
        if (boolean) {
            val stringBuffer = StringBuffer()
            values.forEach {
                val value = if (StringUtils.isEmpty(it)) "[* TO *]" else it
                if (stringBuffer.isEmpty()) stringBuffer.append(value) else stringBuffer.append(" OR $value")
            }
            if (StringUtils.isNotEmpty(stringBuffer.toString())) rgMap[key] = "($stringBuffer)"
        }
        return this
    }


    /** * Sort ascending */
    fun orderByAsc(boolean: Boolean, key: String): SolrWrapper {
        if (boolean) {
            orderMap[key] = SolrQuery.ORDER.asc
        }
        return this
    }

    /** * Sort ascending */
    fun orderByAsc(key: String): SolrWrapper {
        orderMap[key] = SolrQuery.ORDER.asc
        return this
    }

    /** * Sort descending */
    fun orderByDesc(key: String): SolrWrapper {
        orderMap[key] = SolrQuery.ORDER.desc
        return this
    }

    /** * Sort descending */
    fun orderByDesc(boolean: Boolean, key: String): SolrWrapper {
        if (boolean) {
            orderMap[key] = SolrQuery.ORDER.desc
        }

        return this
    }

    /** * sort by object */
    fun orderByList(orderBy: List<OrderBy>): SolrWrapper {
        orderBy.forEach {
            if (StringUtils.isNotEmpty(it.field) && (StringUtils.equals(it.orderBy, "asc") || StringUtils.equals(it.orderBy, "desc"))) {
                orderMap[it.field] = if (SolrQuery.ORDER.asc.name == it.orderBy) SolrQuery.ORDER.asc else SolrQuery.ORDER.desc
            }
        }
        return this
    }

    /** * sets location search */
    fun geo(boolean: Boolean, key: String, latitude: BigDecimal, longtude: BigDecimal, range: Int): SolrWrapper {
        if (boolean) {
            // If the key is not empty, the latitude and longitude is greater than 0, and the range is greater than 0, the query condition is added
            if (StringUtils.isNotEmpty(key) && latitude > BigDecimal.ZERO && longtude > BigDecimal.ZERO && range > 0) {
                val sg = SolrGeo()
                sg.fieldName = key
                sg.latitude = latitude.toString()
                sg.longtude = longtude.toString()
                sg.range = range
                solrGeo = sg
            }
        }
        return this
    }

    /** * sets location search */
    fun geo(key: String, latitude: BigDecimal, longtude: BigDecimal, range: Int): SolrWrapper {
        // If the key is not empty, the latitude and longitude is greater than 0, and the range is greater than 0, the query condition is added
        if (StringUtils.isNotEmpty(key) && latitude > BigDecimal.ZERO && longtude > BigDecimal.ZERO && range > 0) {
            val sg = SolrGeo()
            sg.fieldName = key
            sg.latitude = latitude.toString()
            sg.longtude = longtude.toString()
            sg.range = range
            solrGeo = sg
        }
        return this
    }

    /**
     * Get the sort map
     */
    private fun orderMap(): Map<String, SolrQuery.ORDER> {
        return orderMap
    }

    /** * Return only the specified fields */
    fun select(vararg key: String): SolrWrapper {
        key.filter { StringUtils.isNotEmpty(it) }.forEach { selectSet.add(it) }
        return this
    }

    /** * Return only the specified fields */
    fun select(boolean: Boolean, list: List<String>): SolrWrapper {
        if (boolean) {
            selectSet.addAll(list.filter { StringUtils.isNotEmpty(it) })
        }
        return this
    }

    private fun rgMapSQL(): String {
        val rgBuffer = StringBuffer()
        // Assemble other conditional statements
        rgMap.forEach { (key, value) ->
            run {
                val range = "$key:$value"
                if (rgBuffer.isEmpty()) {
                    rgBuffer.append(range)
                } else {
                    rgBuffer.append(" AND $range")
                }
            }
        }
        return rgBuffer.toString()
    }

    private fun eqMapSQL(): String {
        val eqBuffer = StringBuffer()
        // Assemble other conditional statements
        eqMap.forEach { (key, value) ->
            run {
                if (eqBuffer.isEmpty()) {
                    eqBuffer.append("($key:$value)")
                } else {
                    eqBuffer.append(" AND ($key:$value)")
                }
            }
        }
        return eqBuffer.toString()
    }

    private fun orMapSQL(): String {
        val orBuffer = StringBuffer()
        // Assemble other conditional statements
        orMap.forEach { (key, value) ->
            run {
                if (orBuffer.isEmpty()) {
                    orBuffer.append("$key:$value")
                } else {
                    orBuffer.append(" OR $key:$value")
                }
            }
        }
        return orBuffer.toString()
    }


    private fun allSQL(): String {
        // Assemble the encapsulated query
        val mergeSolrSQL = mergeSolrSQL(eqMapSQL(), rgMapSQL(), orMapSQL())
        return if (StringUtils.isEmpty(q)) {
            // If the encapsulated query is empty, default to querying everything
            if (StringUtils.isEmpty(mergeSolrSQL)) "*:*" else mergeSolrSQL
        } else {
            // Otherwise combine the encapsulated conditions with the custom q
            if (StringUtils.isEmpty(mergeSolrSQL)) q else "($mergeSolrSQL) AND ($q)"
        }
    }

    fun buildSolrQuery(): SolrQuery {
        val solrQuery = SolrQuery()
        // 1. Splice query parameters
        val allSQL = this.allSQL()
        solrQuery.set("q", allSQL)
        // 2. Add latitude and longitude parameters
        if (solrGeo.issNotEmpty()) {
            solrQuery.set("d", solrGeo!!.range)
                    .set("pt", "${solrGeo!!.latitude},${solrGeo!!.longtude}")
                    .set("sort", "geodist() asc")
                    .set("sfield", "houseLocation")
                    .set("fq", "{!geofilt}")
                    .set("fl", "*,dist:geodist()")
        }
        // 3. Splice the paging parameters
        if (this.pageIndex.gtZero() && this.pageSize.gtZero()) {
            solrQuery.start = (this.pageIndex * this.pageSize) - (this.pageSize)
            solrQuery.rows = this.pageSize
        }
        // 4. Concatenate sort parameters
        if (orderMap.issNotEmpty()) {
            this.orderMap().forEach { (key, value) -> solrQuery.setSort(key, value) }
        }
        // Determine the return value
        if (selectSet.issNotEmpty()) {
            solrQuery.set("fl", selectSet.stream().collect(Collectors.joining(",")))
        }
        logger.info("\r\nSolr query: \r\n$allSQL\r\n")
        return solrQuery
    }

}
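To make the assembled q parameter concrete, here is a stdlib-only sketch of the string the wrapper above produces: eq terms as (field:value) joined with AND, range terms as field:[start TO end], falling back to the match-all query *:* when nothing is set. The class and method names are illustrative, not the real wrapper API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryStringSketch {
    public static String build(Map<String, String> eq, Map<String, String> rg) {
        StringBuilder sb = new StringBuilder();
        // eq conditions: (field:value) AND (field:value) ...
        for (Map.Entry<String, String> e : eq.entrySet()) {
            if (sb.length() > 0) sb.append(" AND ");
            sb.append("(").append(e.getKey()).append(":").append(e.getValue()).append(")");
        }
        // range conditions: field:[start TO end]
        for (Map.Entry<String, String> e : rg.entrySet()) {
            if (sb.length() > 0) sb.append(" AND ");
            sb.append(e.getKey()).append(":[").append(e.getValue()).append("]");
        }
        // Nothing set: default to the match-all query, as allSQL() does
        return sb.length() == 0 ? "*:*" : sb.toString();
    }
}
```

With one eq and one range condition this produces, for example, `(city:Shenzhen) AND price:[100 TO 200]`.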
  • AbstractSolrService:

import com.alibaba.fastjson.JSON
import com.alibaba.fastjson.JSONArray
import com.baomidou.mybatisplus.core.toolkit.ObjectUtils
import org.apache.commons.lang3.StringUtils

/**
 * @author anyu
 * @date 2020/11/19
 */
abstract class AbstractSolrService {

    /**
     * Convert an object to the specified type
     */
    fun <T> getBean(any: Any, clazz: Class<T>): T? {
        return if (!ObjectUtils.isEmpty(any)) {
            JSON.parseObject(JSON.toJSONString(any), clazz)
        } else {
            null
        }
    }

    /**
     * Convert a collection to a collection of the specified type
     */
    fun <T> getArrayBean(any: Any, clazz: Class<T>): List<T>? {
        return if (!ObjectUtils.isEmpty(any)) {
            JSONArray.parseArray(JSON.toJSONString(any), clazz)
        } else {
            null
        }
    }

    /**
     * Merge query fragments with AND, automatically skipping empty parts
     */
    fun mergeSolrSQL(vararg objs: String?): String {
        val mergeBuffer = StringBuffer()
        for (obj in objs) {
            if (StringUtils.isNotEmpty(obj)) {
                if (mergeBuffer.isEmpty()) {
                    mergeBuffer.append("($obj)")
                } else {
                    mergeBuffer.append(" AND ($obj)")
                }
            }
        }
        return mergeBuffer.toString()
    }
}
  • Other supporting classes: SolrGeo:

/**
 * @author anyu
 * @date 2020/11/19
 * @desc GPS query parameter object
 */
class SolrGeo {

    /** The name of the field to search */
    var fieldName: String = ""

    /** Latitude */
    var latitude: String = ""

    /** Longitude */
    var longtude: String = ""

    /** Kilometre range to query; an integer for convenience */
    var range: Int = 0

}
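The spatial branch of buildSolrQuery() maps a SolrGeo object onto Solr's standard geofilt parameters: sfield names the indexed location field, pt is the origin point "lat,lon", d is the radius in kilometres, and fq={!geofilt} applies the filter. A stdlib-only sketch of that parameter assembly (the helper itself is illustrative, not part of SolrJ):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GeoParamsSketch {
    // Assemble the {!geofilt} parameters the wrapper sets from a SolrGeo
    public static Map<String, String> geoParams(String field, String lat, String lon, int rangeKm) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("fq", "{!geofilt}");             // apply the spatial filter
        p.put("sfield", field);                // indexed location field
        p.put("pt", lat + "," + lon);          // origin point "lat,lon"
        p.put("d", String.valueOf(rangeKm));   // radius in kilometres
        p.put("fl", "*,dist:geodist()");       // also return the distance
        return p;
    }
}
```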

5.3 Other Solr optimizations

6. Extension