1. Install MongoDB on Linux

1.1 Decompressing the Package

tar -zxvf mongodb-linux-x86_64-4.1.3.tgz

1.2 Default Boot Mode

cd /usr/local/mongo/mongodb-linux-x86_64-4.1.3/bin
./mongod

Exception in initAndListen: NonExistentPath: Data directory /data/db not found

1.3 Creating a Configuration File

You need to create the configuration file yourself:

vi mongo.conf

dbpath=/data/mongo/               # database directory, default /data/db
port=27017                        # listening port, default 27017
bind_ip=0.0.0.0                   # listening IP address
fork=true                         # run as a background daemon
logpath=/data/mongo/mongodb.log   # log file path
logappend=true                    # append to the log instead of overwriting
auth=false                        # whether to require username/password login

Press Esc to leave insert mode, then save and quit with :wq.

1.4 Creating the dbpath Directory and Starting

mkdir -p creates the directory along with any missing parent directories:

mkdir -p /data/mongo

./bin/mongod -f mongo.conf

1.5 Viewing the Startup Status

ps -ef|grep mongo


2. MongoDB Commands

2.1 Basic MongoDB Operations

show dbs;                                  // list databases
use <database name>                        // switch to a database; it is created if it does not exist
db.createCollection("<collection name>")   // create a collection
show tables;  /  show collections;         // list collections
db.getCollection("xxx").find()             // query a collection
db.xxx.find()                              // equivalent shorthand
db.<collection name>.drop();               // drop a collection
db.dropDatabase();                         // drop the current database

2.2 MongoDB set data operation

2.2.1 Adding Data

(1) Insert a single document

db.user.insert({name:'zhangsan', birthday:new ISODate('1999-12-12'), salary:2500, city:'bj'})

(2) Insert multiple documents

db.user.insert([
    {name:'zhangsan', birthday:new ISODate('1999-12-12'), salary:2500, city:'bj'},
    {name:'lisi', birthday:new ISODate('1999-12-12'), salary:2500, city:'bj'},
    {name:'wangwu', birthday:new ISODate('1998-12-12'), salary:3500, city:'bj'},
    {name:'zhaoliu', birthday:new ISODate('1995-02-24'), salary:5500, city:'cq'}
])

2.2.2 Data Query

(1) Comparison condition query

operation                 condition format
equals                    {key:value}
greater than              {key:{$gt:value}}
less than                 {key:{$lt:value}}
greater than or equal     {key:{$gte:value}}
less than or equal        {key:{$lte:value}}
not equal                 {key:{$ne:value}}
  • Query documents whose name is zhangsan
db.user.find({name:'zhangsan'})
db.user.find({name:{$eq:'zhangsan'}})
  • Query documents whose salary is greater than 5000
db.user.find({salary:{$gt:5000}})

(2) Logical condition query

MongoDB's find() method can pass in multiple keys separated by commas, the AND condition DB of regular SQL. Find ({key1:value1, key2:value2}).pretty() or condition db. The collection name. The find ({$or: [{key1: value1}, {key2: value2}]}). Pretty db (a) not conditions. Set name. Find ({key:{$not:{$operator :value}}). Pretty () Pretty () prints the formatted data from the queryCopy the code
  • Query users whose salary is 2500 and whose city is bj
db.user.find({salary:2500,city:'bj'})
  • Query people whose salary is 2500 or 3500
db.user.find({ $or:[{salary:2500},{salary:3500}]})

(3) Paged queries

db.<collection name>.find({condition}).sort({field: direction}).skip(rows to skip).limit(rows per page)
  • Query the first page (two documents) of city bj in ascending salary order
db.user.find({city:'bj'}).sort({salary:1}).skip(0).limit(2)
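Conceptually, sort/skip/limit pagination is just filter, then sort, then slice. A minimal in-memory sketch in Python (the sample documents and the find_page helper are hypothetical illustrations, not MongoDB behavior):

```python
# Hypothetical sample documents standing in for the user collection.
users = [
    {"name": "a", "salary": 2500, "city": "bj"},
    {"name": "b", "salary": 3500, "city": "bj"},
    {"name": "c", "salary": 5500, "city": "cq"},
    {"name": "d", "salary": 4500, "city": "bj"},
]

def find_page(docs, condition, sort_field, ascending, skip, limit):
    """Emulate find(condition).sort(...).skip(n).limit(m) on plain dicts."""
    matched = [d for d in docs if all(d.get(k) == v for k, v in condition.items())]
    matched.sort(key=lambda d: d[sort_field], reverse=not ascending)
    return matched[skip:skip + limit]

# First page of city bj, two rows, ascending salary.
page1 = find_page(users, {"city": "bj"}, "salary", True, 0, 2)
```

Page two would be skip=2, limit=2; note that large skip values force the server to walk past every skipped document, which is why range-based pagination tends to scale better.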

(4) Define the return field

  • db.user.find({},{_id:0})  do not return the _id field
  • db.user.find({},{_id:1})  return only _id
  • db.user.find({},{name:0}) do not return name; all other fields are returned
  • db.user.find({},{name:1}) return only name and _id
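The four cases above can be sketched in a few lines of Python. The project helper below is a hypothetical approximation of single-document projection (real projections also handle nested fields and operators):

```python
def project(doc, spec):
    """Rough sketch of MongoDB field projection for one document.

    spec maps field names to 1 (include) or 0 (exclude). _id is kept by
    default and must be suppressed explicitly, as in the examples above.
    """
    inclusive = any(v == 1 for v in spec.values())
    if inclusive:
        out = {k: doc[k] for k, v in spec.items() if v == 1 and k in doc}
        if spec.get("_id", 1) != 0 and "_id" in doc:
            out["_id"] = doc["_id"]  # _id rides along unless excluded
        return out
    # Exclusive projection: drop every field marked 0.
    return {k: v for k, v in doc.items() if spec.get(k, 1) != 0}

doc = {"_id": 7, "name": "test", "salary": 2500}
```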

2.2.3 Data Update

db.user.update(query, update, options)

Update operators: $set sets a field's value; $unset removes the specified field; $inc increments a numeric field.

options (optional configuration):
upsert: <boolean>, whether to insert the document when no record matches the query; true inserts, default false does not.
multi: <boolean>, default false, only the first matching record is updated; if true, all matching records are updated.
writeConcern: <document>, specifies how mongod acknowledges the write operation.
  • Modify zhangsan's salary
db.user.update({name:'zhangsan'}, {$set:{salary:3000}})
  • Modify zhangsan2's information; if zhangsan2 does not exist, insert a new document
db.user.update({name:'zhangsan2'}, {$set:{salary:2000, city:'bj'}}, {upsert:true})
  • Set the salary of every record whose city is bj to 2000
db.user.update({city:'bj'}, {$set:{salary:2000}},{multi:true})
  • Increase zhangsan's salary by 1000
db.user.update({name:'zhangsan'}, {$inc:{salary:1000}})
  • Delete the birthday field for zhangsan
db.user.update({name:'zhangsan'}, {$unset:{birthday:''}})
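The interplay of $set/$inc/$unset with upsert and multi can be emulated in plain Python. This sketch over hypothetical in-memory documents is an approximation of the semantics, not the server's update logic:

```python
def update(docs, query, change, upsert=False, multi=False):
    """Sketch of db.collection.update semantics over a list of dicts."""
    def matches(d):
        return all(d.get(k) == v for k, v in query.items())

    def apply(d):
        for k, v in change.get("$set", {}).items():
            d[k] = v                      # $set: assign field values
        for k, v in change.get("$inc", {}).items():
            d[k] = d.get(k, 0) + v        # $inc: increment numeric fields
        for k in change.get("$unset", {}):
            d.pop(k, None)                # $unset: remove fields

    hits = [d for d in docs if matches(d)]
    if not hits:
        if upsert:                        # no match: build a doc from query + update
            new = dict(query)
            apply(new)
            docs.append(new)
            return 1
        return 0
    for d in (hits if multi else hits[:1]):
        apply(d)
    return len(hits) if multi else 1

users = [{"name": "zhangsan", "salary": 2500, "city": "bj"},
         {"name": "lisi", "salary": 3500, "city": "bj"}]
```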

2.2.4 Deleting Data

db.collection.remove(<query>, {justOne: <boolean>, writeConcern: <document>})

Parameter description:
query: (optional) condition selecting the documents to delete.
justOne: (optional) if set to true or 1, only one document is deleted; if omitted or left at the default false, all documents matching the condition are deleted.
writeConcern: (optional) specifies mongod's acknowledgement behavior for the delete.

2.3 MongoDB Aggregation

2.3.1 Single-purpose aggregation

db.user.find().count()

db.user.count()

db.user.distinct("city")

2.3.2 Aggregation pipeline

MongoDB's aggregation pipeline passes documents from one stage to the next once that stage finishes processing them. Pipeline stages can be repeated.

Common operations in an aggregation framework

operation	description
$group Group documents in a collection that can be used for statistical results.
$project Modify the structure of the input document. It can be used to rename, add, or delete domains, create computed results, and nest documents.
$match Used to filter data and output only documents that meet the criteria. $match uses MongoDB’s standard query operations.
$limit Used to limit the number of documents returned by the MongoDB aggregation pipeline.
$skip Skip the specified number of documents in the aggregation pipe and return the remaining documents.
$sort Sort the input documents and output them.
$geoNear Output an ordered document close to a geographic location.
expression	description
$sum Calculate the total
$avg Calculate the average
$min Gets the minimum value corresponding to all documents in the collection
$max Gets the maximum value corresponding to all documents in the collection
$push Inserts values into an array in the resulting document
$addToSet Inserts values into an array in the resulting document without duplicating the data
$first Get the first document data according to the ordering of the resource documents
$last Gets the last document data according to the ordering of the resource documents
  • Group by city and count the number of documents in each city
db.user.aggregate([{$group:{_id:"$city", city_count:{$sum:1}}}])
// or, equivalently
db.user.aggregate({$group:{_id:"$city", city_count:{$sum:1}}})
  • Group by city and calculate the average salary of each city
db.user.aggregate({$group:{_id:"$city",avg_salary:{$avg:"$salary"}}})
  • $project renamed
db.user.aggregate([
    {$group:{_id:"$city",avg_salary:{$avg:"$salary"}}},
    {$project : {city: "$city", saly : "$avg_salary"}}
])
  • $match The output of the filter data is greater than 2500
db.user.aggregate([
    {$group:{_id:"$city",avg_salary:{$avg:"$salary"}}},
    {$project : {city: "$city", saly : "$avg_salary"}},
    {$match:{saly:{$gt:2500}}}
])

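To make the stage-to-stage handoff concrete, the $group → $project → $match chain above can be emulated in plain Python over hypothetical documents (a sketch of the data flow, not MongoDB's engine):

```python
from collections import defaultdict

users = [{"city": "bj", "salary": 2500},
         {"city": "bj", "salary": 3500},
         {"city": "cq", "salary": 5500}]

# Stage 1, $group: bucket by city, then compute $avg per bucket.
buckets = defaultdict(list)
for d in users:
    buckets[d["city"]].append(d["salary"])
stage1 = [{"_id": k, "avg_salary": sum(v) / len(v)} for k, v in buckets.items()]

# Stage 2, $project: rename fields, consuming stage 1's output.
stage2 = [{"city": d["_id"], "saly": d["avg_salary"]} for d in stage1]

# Stage 3, $match: keep only documents that pass the filter.
stage3 = [d for d in stage2 if d["saly"] > 2500]
```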

2.3.3 MapReduce programming model

Pipeline queries are faster than MapReduce, but the power of MapReduce lies in its ability to execute complex aggregation logic in parallel across multiple servers. MongoDB does not allow a single aggregate operation of Pipeline to consume too much system memory. If an aggregate operation consumes more than 20% of the memory, MongoDB stops the operation and sends an error message to the client. MapReduce is a computing model. In simple terms, a large amount of work (data) is decomposed (MAP) and executed, and then the results are combined into a final result (REDUCE).

db.user.mapReduce(
    // map
    function(){ emit(this.city, this.salary) },
    // reduce
    function(key, values){ return Array.avg(values) },
    {
        out: 'cityAvgSal',
        query: { salary: { $gt: } }  // threshold value garbled in the original
    }
)
db.cityAvgSal.find()

To use MapReduce, implement two functions: a map function and a reduce function. The map function calls emit(key, value) as it traverses every record in the collection, passing the key/value pairs to the reduce function for processing.

Parameter Description:

  • map: a JavaScript function that converts each input document into zero or more documents, producing a sequence of key-value pairs passed as arguments to the reduce function
  • reduce: a JavaScript function that merges and simplifies the map output (turning key/values, i.e. the values array, into a single value)
  • out: the collection that stores the statistical results
  • query: a filter condition; the map function is called only for documents that satisfy it
  • sort: sorts documents before they are sent to the map function; combined with limit it can optimize the grouping mechanism
  • limit: an upper limit on the number of documents sent to the map function (without limit, sort alone is not useful)
  • finalize: can further modify the output of reduce
  • verbose: whether to include timing information in the result; the default is false
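The emit/group/reduce flow those parameters describe can be emulated in Python. One notable MongoDB behavior reproduced here is that reduce runs only for keys with more than one emitted value (the documents and functions below are hypothetical):

```python
from collections import defaultdict

def map_reduce(docs, map_fn, reduce_fn, query=None):
    """Sketch of the MapReduce flow: filter, emit, group by key, reduce."""
    grouped = defaultdict(list)
    for doc in docs:
        if query is None or query(doc):
            for key, value in map_fn(doc):   # map emits (key, value) pairs
                grouped[key].append(value)
    # reduce is only invoked for keys that received more than one value
    return {k: (reduce_fn(k, v) if len(v) > 1 else v[0])
            for k, v in grouped.items()}

users = [{"city": "bj", "salary": 2500},
         {"city": "bj", "salary": 3500},
         {"city": "cq", "salary": 5500}]

result = map_reduce(
    users,
    map_fn=lambda d: [(d["city"], d["salary"])],
    reduce_fn=lambda key, values: sum(values) / len(values),
)
```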

3. MongoDB Indexes

3.1 Index Type

3.1.1 Single-Key Index (Single Field)

db.user.createIndex(keys, options)

Create an index on a single field:

db.<collection>.createIndex({"<field>": <direction>})

db.user.createIndex({"name":1})
db.user.getIndexes()

Delete index:

db.users.dropIndex({name:1})
db.COLLECTION_NAME.dropIndex("INDEX-NAME")
db.COLLECTION_NAME.dropIndexes()
  • A special single-field index: the TTL (Time To Live) expiring index

The TTL index is a special index in MongoDB, which can automatically delete documents after a certain period of time. At present, the TTL index can only be established on a single field, and the field type must be the date type.

db.<collection>.createIndex({"<date field>": <direction>}, {expireAfterSeconds: <seconds>})

Documents whose birthday field is more than 20 seconds in the past are automatically deleted:

db.user.createIndex({"birthday":1}, {expireAfterSeconds:20})
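What the TTL monitor removes is any document whose indexed date value is older than expireAfterSeconds. A small Python sketch of that rule, with hypothetical documents and a fixed clock:

```python
from datetime import datetime, timedelta

def ttl_expired(docs, field, expire_after_seconds, now):
    """Documents a TTL index would delete: indexed date older than the cutoff.

    Documents missing the field (or holding a non-date value) never expire.
    """
    cutoff = now - timedelta(seconds=expire_after_seconds)
    return [d for d in docs
            if isinstance(d.get(field), datetime) and d[field] < cutoff]

now = datetime(2021, 7, 8, 12, 0, 0)
docs = [
    {"name": "old", "birthday": datetime(2021, 7, 8, 11, 59, 0)},
    {"name": "fresh", "birthday": datetime(2021, 7, 8, 11, 59, 55)},
    {"name": "no-date"},
]
victims = ttl_expired(docs, "birthday", 20, now)
```

In the real server the TTL monitor wakes roughly every 60 seconds, so deletion is not instantaneous.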

3.1.2 Compound Index

Important considerations when creating composite indexes include: field order and index direction.

db.<collection>.createIndex({"<field1>": <direction>, "<field2>": <direction>})

3.1.3 Multikey Indexes

For attributes that contain array data, MongoDB supports creating indexes for each element in an array. Multikey Indexes support strings, numbers, and nested documents

3.1.4 Geospatial Index

Create indexes for geospatial coordinate data.

  • 2dSPHERE index for storing and finding points on a sphere
  • A 2D index for storing and finding points on a plane

GeoJSON is a format for encoding various geographic data structures.

{type: "Point", coordinates: [106.320765, 29.594854]}Copy the code

The value of the type member is one of the following strings: “Point”, “MultiPoint”, “LineString”, “MultiLineString”, “Polygon”, “MultiPolygon”, or “GeometryCollection”.

A GeoJSON geometry must have a member named "coordinates". The value of the coordinates member is always an array; the structure of the elements in that array is determined by the geometry type.

(1) Insert data

db.company.insert([
    {loc:{type:"Point",coordinates:[106.320765,29.594854]}, name:'Xi City Blue Bay'},
    {loc:{type:"Point",coordinates:[106.322175,29.596256]}, name:'West Lake Baby Circle'},
    {loc:{type:"Point",coordinates:[106.240726,29.584199]}, name:'Bishan Stadium'},
    {loc:{type:"Point",coordinates:[106.48104,29.530853]}, name:'Shixin Road'},
    {loc:{type:"Point",coordinates:[106.319417,29.59191]}, name:'Coordinates'},
    {loc:{type:"Point",coordinates:[106.315384,29.587426]}, name:'City Xin Association'},
    {loc:{type:"Point",coordinates:[106.442809,29.507721]}, name:'Chongqing West Station'}
])

(2) Create an index

db.company.createIndex({loc:"2dsphere"})

(3) Query 5 kilometers around Xi City Blue Bay

db.company.find({loc:{$geoWithin:{$center:[[106.320765,29.594854], 0.05]}}})

(4) Query the nearest 3 points around Blue Bay in Xi City

db.company.aggregate([
    { $geoNear: { near: {type:"Point", coordinates:[106.320765,29.594854]}, key: "loc", distanceField: "dist.calculated" } },
    { $limit: 3 }
])
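The ordering $geoNear produces can be approximated offline with the haversine great-circle formula. The coordinates below are copied from the insert example; the helper itself is a rough sketch (MongoDB uses its own spherical geometry library):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in kilometers on a sphere of radius 6371 km."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# GeoJSON stores [longitude, latitude]; values taken from the insert above.
places = {
    "Xi City Blue Bay": (106.320765, 29.594854),
    "Bishan Stadium": (106.240726, 29.584199),
    "Chongqing West Station": (106.442809, 29.507721),
}
origin = places["Xi City Blue Bay"]
nearest = sorted(places, key=lambda n: haversine_km(*origin, *places[n]))
```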

3.1.5 Text Index

MongoDB provides text queries over string content. A text index supports queries on any field whose value is a string or an array of strings. Note: a collection supports at most one text index, and Chinese word segmentation is poorly supported.

db.test.insert({id:1,name:'test1',city:'bj',description:'one world one dream in bj'})
db.test.insert({id:2,name:'test2',city:'nj',description:'two world two dream in nj'})
db.test.insert({id:3,name:'test3',city:'dj',description:'three world three dream in dj'})

db.test.createIndex({description:'text'})

db.test.find({$text:{$search:'world'}})
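A crude Python sketch of what a $text search does on a single field: tokenize, lowercase, match whole words. Real text indexes also apply stemming and per-language rules; the documents below mirror the inserts above:

```python
def text_search(docs, term):
    """Whole-word, case-insensitive match on whitespace tokens."""
    term = term.lower()
    return [d for d in docs if term in d["description"].lower().split()]

docs = [
    {"id": 1, "description": "one world one dream in bj"},
    {"id": 2, "description": "two world two dream in nj"},
    {"id": 3, "description": "three worlds in dj"},
]
hits = text_search(docs, "world")  # "worlds" is not a token match here
```

A real text index would likely also match "worlds" via stemming, which is one reason this is only an approximation.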

3.1.6 Hashed Index

With a hashed index, MongoDB computes the hash value automatically; the application does not need to. Note: a hashed index supports only equality queries, not range queries.

db.<collection>.createIndex({"<field>": "hashed"})
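The equality-only restriction falls straight out of how a hashed index is organized. A Python sketch with a dict keyed by hash values (the documents are hypothetical):

```python
from collections import defaultdict

docs = [{"id": 1}, {"id": 2}, {"id": 3}]

# A hashed index maps hash(value) -> matching documents.
hashed_index = defaultdict(list)
for d in docs:
    hashed_index[hash(d["id"])].append(d)

# Equality lookup: a single bucket probe.
equal_hits = hashed_index[hash(2)]

# A range query such as {id: {$gt: 1}} cannot use this structure:
# hashing destroys ordering, so comparing hash(1) with hash(2) says
# nothing about whether 1 < 2, and every bucket would have to be scanned.
```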

3.2 Index and Explain analysis

3.2.1 the explain analysis

  • Insert 1,000,000 test documents (you are advised to run this on the Linux server)
for(var i=0; i<1000000; i++){
    db.lg_resume.insert({id:i, name:'test'+i, salary:(Math.random()*20000).toFixed(2)})
}

explain() accepts different verbosity parameters that control how detailed the returned query plan is:

  • queryPlanner: the default; returns the execution-plan information described in the table below.
  • executionStats: additionally returns execution statistics for the winning plan (in some versions equivalent to allPlansExecution).
  • allPlansExecution: returns execution details for all candidate plans; the results are essentially the same as with the parameter above.

db.lg_resume.find({name:"test11011"}).explain()

{"queryPlanner": {"plannerVersion": NumberInt("1"), // Query plan version "namespace": "Lagou.lg_resume ",// The collection to query (this value returns the table queried by the query) database. Set" indexFilterSet": false,// Whether there is an indexFilter "parsedQuery": {// query condition "name": {"$eq": "Test11011"}}, "queryHash": "01AEE5EC", "winningPlan": {// Selected execution plan "stage": "COLLSCAN", // Stage of the selected execution plan "filter": {"name": {"$eq": "test11011"}}, "direction": "Forward" // The order of this query}, "rejectedPlans": [] // The detailed return of rejected execution plans, with the same meaning as in winningPlan}, "serverInfo": {//MongoDB server information "host": "VM-8-15-centos", "port": NumberInt("27017"), "version": "4.1.3", "gitVersion": "7c13a75b928ace3f65c9553352689dc0a6d0ca83" }, "ok": 1 }Copy the code

winningPlan.stage

The stage (scan type) of the winning plan. Common values: COLLSCAN (full collection scan, analogous to a table/heap scan in MySQL), IXSCAN (index scan), FETCH (retrieve documents via the index), SHARD_MERGE (merge shard results), IDHACK (query against _id), and so on.

winningPlan.direction

The traversal order of this query; forward here. If .sort({field:-1}) is used, it shows backward.

db.lg_resume.find({name:"test11011"}).explain("executionStats")

// 1 { "queryPlanner": { "plannerVersion": NumberInt("1"), "namespace": "lagou.lg_resume", "indexFilterSet": false, "parsedQuery": { "name": { "$eq": "test11011" } }, "queryHash": "01AEE5EC", "winningPlan": { "stage": "COLLSCAN", "filter": { "name": { "$eq": "test11011" } }, "direction": "forward" }, "rejectedPlans": []}, "executionStats": {"executionSuccess": true, // Whether the execution succeeded "nReturned": "ExecutionTimeMillis ": NumberInt("355"), // Execution time "totalKeysExamined": "TotalDocsExamined ": NumberInt("0"), // Number of index scans "totalDocsExamined": NumberInt("1000000"), // Number of document scans "executionStages": { "COLLSCAN", / / scan mode "filter" : {" name ": {" $eq" : "test11011"}}, "nReturned" : NumberInt (" 1 "), / / number of query results "executionTimeMillisEstimate" : NumberInt (" 298 "), / / retrieve the document data "works" of time: NumberInt("1000002"),// Number of units of work, a query is decomposed into small units of work "advanced": NumberInt("1"), // Number of results returned first "needTime": NumberInt("1000000"), "needYield": NumberInt("0"), "saveState": NumberInt("7824"), "restoreState": NumberInt("7824"), "isEOF": NumberInt("1"), "direction": "forward", "docsExamined": NumberInt("1000000")// Number of documents checked, consistent with totalDocsExamined. }}, "serverInfo": {"host": "VM-8-15-centos", "port": NumberInt (" 27017 "), "version" : "4.1.3", "gitVersion" : "7 c13a75b928ace3f65c9553352689dc0a6d0ca83}", "ok" : 1}Copy the code

(3) ID field analysis

db.lg_resume.find({id:{$gt:222333}}).explain("executionStats")

{"queryPlanner": {"winningPlan": {"stage": "COLLSCAN",},}, "executionStats": {"nReturned": NumberInt("777666"), "executionTimeMillis": NumberInt("346"), "totalKeysExamined": NumberInt("0"), "totalDocsExamined": NumberInt("1000000"), "executionStages": { "stage": "COLLSCAN", "nReturned": NumberInt("777666"), "executionTimeMillisEstimate": NumberInt("209"), " } }, }Copy the code

db.lg_resume.createIndex({id:1})

// 1 {"queryPlanner": {"winningPlan": {"stage": "FETCH", "inputStage": { "stage": "IXSCAN", "keyPattern": { "id": 1 }, "indexName": "id_1", "isMultiKey": false, "multiKeyPaths": { "id": [ ] }, } }, }, "executionStats": { "nReturned": NumberInt("777666"), "executionTimeMillis": NumberInt("766"), "totalKeysExamined": NumberInt("777666"), "totalDocsExamined": NumberInt("777666"), "executionStages": { "stage": "FETCH", "nReturned": NumberInt("777666"), "executionTimeMillisEstimate": NumberInt("715"), } }, }Copy the code

3.2.2 Layer-by-layer analysis of the executionStats output

The first layer: executionTimeMillis

The most intuitive field in the explain output is executionTimeMillis, the execution time of the statement; the smaller, the better.

There are three executionTimeMillis values:

executionStats.executionTimeMillis: the overall execution time of the query.

executionStats.executionStages.executionTimeMillisEstimate: the time spent retrieving the documents.

executionStats.executionStages.inputStage.executionTimeMillisEstimate: the time spent scanning the index.

The second layer: scan counts versus returned count

Three fields matter here:

nReturned, totalKeysExamined, and totalDocsExamined: the number of documents returned, index entries scanned, and documents scanned, respectively. All three directly affect executionTimeMillis; the less we have to scan, the faster the query.

For a query, the ideal state is: nReturned = totalKeysExamined = totalDocsExamined

The third layer: stage analysis

The possible stage types are as follows:

COLLSCAN: full collection scan

IXSCAN: index scan

FETCH: Retrieves the specified document according to the index

SHARD_MERGE: Merge data returned by fragments

SORT: indicates that SORT is performed in memory

LIMIT: LIMIT the number of returns

SKIP: skip documents using skip()

IDHACK: query against _id

SHARDING_FILTER: query sharded data through mongos

COUNT: count using db.coll.explain().count()

TEXT: Stage return for a query using a full-text index

PROJECTION: the return of stage when it limits the return of fields

For a normal query, the stage combinations you want to see (using indexes whenever possible) are:

Fetch+IDHACK

Fetch+IXSCAN

Limit+(Fetch+IXSCAN)

PROJECTION+IXSCAN

SHARDING_FILTER+IXSCAN

Stages you do not want to see:

COLLSCAN (full collection scan)

SORT (sorting without an index)

COUNT (counting without an index)

3.2.3 Slow Query analysis

1. Enable the built-in profiler to record read/write performance

db.setProfilingLevel(n, m), where n can be 0, 1, or 2:

0: record nothing

1: record slow operations; when n is 1, m must be supplied in milliseconds as the slow-query threshold

2: record all read and write operations

2. Query the monitoring result

db.system.profile.find().sort({millis:-1}).limit(3)

3. Analyze slow queries

Poor application design, incorrect data model, hardware configuration issues, missing indexes, etc

4. Interpret explain results to determine if indexes are missing

4. MongoDB application

4.1 Java Accessing MongoDB

Maven dependency:

<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>3.10.1</version>
</dependency>
package com.lagou.test;

import com.mongodb.MongoClient;
import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import org.bson.Document;

public class DocumentTest {
    public static void main(String[] args) {
        MongoClient mongoClient = new MongoClient("123.207.31.211", 27017);
        // Get the database
        MongoDatabase database = mongoClient.getDatabase("lagou");
        // Get the collection
        MongoCollection<Document> collection = database.getCollection("user");
        // Insert
        Document user = Document.parse("{name:'test',birthday:new ISODate('2002-02-11'),salary:2000,city:'cq'}");
        collection.insertOne(user);
        // Query, sorted by salary descending
        Document sortDocument = new Document();
        sortDocument.append("salary", -1);
        // Equivalent string-filter form:
        // FindIterable<Document> findIterable = collection.find(Document.parse("{salary:{$gt:2000}}")).sort(sortDocument);
        // Filters form:
        FindIterable<Document> findIterable = collection.find(Filters.gt("salary", 2000)).sort(sortDocument);
        for (Document document : findIterable) {
            System.out.println(document);
        }
        mongoClient.close();
    }
}

4.2 Spring Accessing MongoDB

4.2.1 Create a Maven Project and Import Dependencies

This single dependency pulls in the Spring packages it needs; you do not have to declare the Spring dependencies separately.

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-mongodb</artifactId>
    <version>2.0.9.RELEASE</version>
</dependency>

4.2.2 applicationContext.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:mongo="http://www.springframework.org/schema/data/mongo"
       xsi:schemaLocation="
            http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd
            http://www.springframework.org/schema/context
            http://www.springframework.org/schema/context/spring-context.xsd
            http://www.springframework.org/schema/data/mongo
            http://www.springframework.org/schema/data/mongo/spring-mongo.xsd">

    <!-- Component scan -->
    <context:component-scan base-package="com.lagou"/>

    <!-- Build the MongoDbFactory object -->
    <mongo:db-factory id="mongoDbFactory" client-uri="mongodb://123.207.31.211:27017/lagou"/>

    <!-- Build the MongoTemplate object -->
    <bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
        <constructor-arg ref="mongoDbFactory"/>
    </bean>
</beans>

4.2.3 User.java

  • The id field maps to _id
public class User {
    private String id;
    private String name;
    private Date birthday;
    private Double salary;
    // getters and setters omitted
}

4.2.4 UserDao.java

package com.lagou.dao;

import com.lagou.bean.User;

public interface UserDao {
    // insert
    void insertUser(User user);
    // query by name
    User findByName(String name);
}

4.2.5 UserDaoImpl.java

package com.lagou.dao;

import com.lagou.bean.User;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.stereotype.Repository;

@Repository("userDao")
public class UserDaoImpl implements UserDao {

    @Autowired
    private MongoTemplate mongoTemplate;

    @Override
    public void insertUser(User user) {
        // mongoTemplate.insert(user);  // collection name derived from the class name
        // Insert into the "users" collection; it is created automatically if absent
        mongoTemplate.insert(user, "users");
    }

    @Override
    public User findByName(String name) {
        Query query = new Query();
        query.addCriteria(Criteria.where("name").is(name));
        User user = mongoTemplate.findOne(query, User.class, "users");
        return user;
    }
}

4.2.6 Test class

package com.lagou;

import com.lagou.bean.User;
import com.lagou.dao.UserDao;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class MongoTemplateMain {
    public static void main(String[] args) {
        ClassPathXmlApplicationContext context =
                new ClassPathXmlApplicationContext("classpath:applicationContext.xml");
        UserDao userDao = context.getBean("userDao", UserDao.class);

        User user = new User();
        user.setName("test");
        Date date = null;
        SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd hh:mm:ss");
        try {
            date = simpleDateFormat.parse("2001-12-12 11:21:11");
        } catch (ParseException e) {
            e.printStackTrace();
        }
        user.setBirthday(date);
        user.setSalary(4500d);
        userDao.insertUser(user);

        User test = userDao.findByName("test");
        System.out.println(test);
    }
}

4.4 SpringBoot Accessing MongoDB

4.4.1 MongoTemplate way

Step 1: Create a Springboot project based on Maven

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
    <version>2.2.2.RELEASE</version>
</dependency>

Step 2: Configure the file application.properties

spring.data.mongodb.host=123.207.31.211
spring.data.mongodb.port=27017
spring.data.mongodb.database=lagou
spring.data.cassandra.read-timeout=10000

The rest is the same as Spring

4.4.2 MongoRepository approach

Steps 1 and 2 are the same as above.

Step 3: Write the entity class and annotate it with @Document("collection name")

@Document("users")
public class User {
    private String id;
    private String name;
    private Date birthday;
    private Double salary;
}

Step 4: Write the Repository interface to extend MongoRepository

IO /spring-data… ethods.query-creation

If the built-in methods are not sufficient, define your own following the query-derivation naming rules, e.g. methods beginning with find | read | get.

package com.lagou.repository;

import com.lagou.bean.User;
import org.springframework.data.mongodb.repository.MongoRepository;

public interface UserRepositroy extends MongoRepository<User, String> {
    User findUsersByNameAndSalaryEquals(String name, Double salary);
    User findUsersByNameEquals(String name);
}

Step 5: Test by getting the Repository object from the Spring container

package com.lagou;

import com.lagou.bean.User;
import com.lagou.repository.UserRepositroy;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        ApplicationContext context = SpringApplication.run(Application.class, args);
        UserRepositroy userRepositroy = context.getBean(UserRepositroy.class);
        User user = userRepositroy.findUsersByNameAndSalaryEquals("test", 4600d);
        System.out.println(user);
    }
}