MongoDB: Query and Aggregation Operations

Query keywords

And query: $and

# Every condition must hold for a document to match
db.student.find({$and: [{name: "little spiral"}, {age: 30}]})

Or query: $or

# Any one condition holding is enough for a document to match
db.stu.find({$or: [{name: "green"}, {name: "little black"}]})

Subset query: $all

All of the elements listed after $all must exist in test_list (in any order) for a document to be returned.

> db.student.find({"test_list": {$all: [1, "five"]}})
{"_id": ObjectId("5d2eee1314ff51d814e40365"), "name": "small whirlpool", "age": 30, "test_list": [1, 2, 3, 4, "five", 1000], "hobby": ["permed hair"]}
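To make the $all semantics concrete, here is a rough pure-Python sketch of the same membership test (the helper name and data are made up for illustration; this is not MongoDB's implementation):

```python
def matches_all(doc_list, required):
    # $all: every element of `required` must appear in the document's array,
    # in any order; extra elements in the document are fine
    return all(item in doc_list for item in required)

doc = {"test_list": [1, 2, 3, 4, "five", 1000]}
print(matches_all(doc["test_list"], [1, "five"]))  # True
print(matches_all(doc["test_list"], [1, "six"]))   # False
```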

In query: $in

# Find all documents whose name appears in the given list
db.stu.find({name: {$in: ["green", "little black", "little red", "black"]}})

Sort / limit / skip

Sort: sort      db.stu.find().sort({age: 1})    # 1 ascending, -1 descending
Limit: limit    db.stu.find().limit(2)
Skip: skip      db.stu.find().skip(2)
Combined:       db.stu.find().skip(0).limit(2).sort({age: -1})
Execution priority: sort first, then skip, then limit.
Pagination example:
var page = 1
var num = 2
var sk = (page - 1) * num
db.stu.find().skip(sk).limit(num).sort({age: -1})
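The skip/limit arithmetic behind that pagination can be sketched in plain Python (the helper name is invented for illustration):

```python
def page_query_params(page, per_page):
    # Skip the documents of all previous pages, then take one page.
    # Mirrors: db.stu.find().skip(sk).limit(num)
    skip = (page - 1) * per_page
    return {"skip": skip, "limit": per_page}

print(page_query_params(1, 2))  # {'skip': 0, 'limit': 2}
print(page_query_params(3, 2))  # {'skip': 4, 'limit': 2}
```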

Common Mongo statements for querying, updating, and deleting

Note: collection in this article stands for the Mongo collection (table) name.

Commonly used operators: $gt (>), $lt (<), $gte (>=), $lte (<=), $ne (!=), $eq (=), $add (+), $subtract (-), $multiply (*), $divide (/)

Insert:

The _id field is added to the document by default as its primary key; every MongoDB document needs an _id.

db.users.insert({username:"smith"})

Query:

1. Basic condition query

db.collection.find({"type": "test"});

2. Interval query

db.collection.find({"type": "test", "addTime": {$gte: ISODate("2019-06-11T00:00:00.492+08:00"), $lte: ISODate("2019-06-11T23:59:00.492+08:00")}});

3. Array $in query

db.collection.find({"type": "test", "ids": {$in: [1, 2, 3]}});

4. Paging and sorting query; descending (-1), ascending (1)

db.collection.find({"type": "test"}).sort({"addTime": -1}).skip(0).limit(2);

5. Group query: sum the age for each type

db.collection.aggregate([{$group:{_id:"$type",total:{$sum:"$age"}}}]);

6. Group by type and sum age for documents whose name is not null. *$group must be used*

db.collection.aggregate([{$match:{"name":{$ne:null}}}, {$group:{_id:"$type",total:{$sum:"$age"}}}]);

7. Find order numbers that occur more than once, where type is test and remark is not "manually generated task". *$group must be used*

db.collection.aggregate([{$match: {"type": "test", "remark": {$ne: "manually generated task"}}}, {$group: {_id: "$orderNo", total: {$sum: 1}}}, {$match: {total: {$gt: 1}}}]);
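The three-stage pipeline above (match, group with a running count, match on the count) can be mirrored in plain Python with a Counter; the function name and sample data are made up for illustration:

```python
from collections import Counter

def duplicated_order_nos(docs):
    # Mirror of: $match -> $group by orderNo with {$sum: 1} -> $match total > 1
    matched = [d for d in docs
               if d.get("type") == "test" and d.get("remark") != "manually generated task"]
    counts = Counter(d["orderNo"] for d in matched)
    return {order_no: n for order_no, n in counts.items() if n > 1}

docs = [
    {"type": "test", "orderNo": "A1", "remark": None},
    {"type": "test", "orderNo": "A1", "remark": None},
    {"type": "test", "orderNo": "A2", "remark": None},
    {"type": "test", "orderNo": "A3", "remark": "manually generated task"},
]
print(duplicated_order_nos(docs))  # {'A1': 2}
```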

8. Compute the value of the (age1 / age2) expression for documents whose type is test. *$project must be used*

db.collection.aggregate([{$match: {"type": "test"}}, {$project: {_id: "$id", sub: {$divide: ["$age1", "$age2"]}}}]);

9. Compute (age1 + age2) * (year1 - year2) for documents whose type is test. *$project must be used*

db.collection.aggregate([{$match: {"type": "test"}}, {$project: {_id: "$id", total: {$multiply: [{$add: ["$age1", "$age2"]}, {$subtract: ["$year1", "$year2"]}]}}}]);
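The arithmetic that $project evaluates per document can be sketched in plain Python (the helper name and sample values are invented for illustration):

```python
def project_total(doc):
    # Mirror of the $project expression:
    # total = (age1 + age2) * (year1 - year2)
    return (doc["age1"] + doc["age2"]) * (doc["year1"] - doc["year2"])

print(project_total({"age1": 10, "age2": 20, "year1": 2020, "year2": 2018}))  # 60
```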

Update:

1. Update the specified field with $set. The third parameter (upsert) defaults to false; when true, a new document is inserted if none matches. The fourth parameter (multi) defaults to false; when true, all matching documents are updated.

db.collection.update({"_id": ObjectId('123456')}, {$set: {"type": "test"}}, false, true);
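The upsert/multi semantics can be sketched over an in-memory list of dicts (the function and data are made up for illustration; this is not MongoDB's implementation):

```python
def update_many(docs, query, changes, upsert=False):
    # Sketch of update(query, {$set: changes}, upsert, multi=true):
    # apply the changes to every matching document;
    # insert a new document when nothing matches and upsert is on.
    matched = 0
    for doc in docs:
        if all(doc.get(k) == v for k, v in query.items()):
            doc.update(changes)
            matched += 1
    if matched == 0 and upsert:
        docs.append({**query, **changes})
    return matched

store = [{"_id": 1, "type": "old"}, {"_id": 2, "type": "old"}]
update_many(store, {"type": "old"}, {"type": "test"})
print(store)  # both documents now have type "test"
```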

2. Replacement update: the document is replaced by one containing only the country field. The username field is removed because the first parameter is only used to match the document; the second parameter becomes the replacement.

db.users.update({username:"smith"},{country:"Canada"})

3. Delete a field with $unset

db.users.update({username:"smith"},{$unset:{country:1}})

4. Advanced update: add data to arrays with $push or $addToSet ($addToSet is unique and prevents duplicates). Example: make every user who likes Casablanca also like The Maltese Falcon.

db.users.update({"favorites.movies": "Casablanca"},
         {$addToSet: {"favorites.movies": "The Maltese Falcon"}},
         false, true)

The first parameter is a query that matches users whose movie list contains Casablanca. The second parameter uses $addToSet to add The Maltese Falcon to the list.

{
	username:"smith",
	favorites:{
		cities:["Chicago","Cheyenne"],
		movies:["Casablanca","For a Few Dollars More","The String","The Maltese Falcon"]
	}
}
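The $addToSet behavior (append only when absent) can be sketched in plain Python over a nested dict like the one above; the helper name and dotted-path handling are invented for illustration:

```python
def add_to_set(doc, field_path, value):
    # $addToSet: append only when the value is not already in the array
    target = doc
    *parents, leaf = field_path.split(".")
    for key in parents:
        target = target[key]
    if value not in target[leaf]:
        target[leaf].append(value)

user = {"favorites": {"movies": ["Casablanca"]}}
add_to_set(user, "favorites.movies", "The Maltese Falcon")
add_to_set(user, "favorites.movies", "The Maltese Falcon")  # no duplicate added
print(user["favorites"]["movies"])  # ['Casablanca', 'The Maltese Falcon']
```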

5. Create an index (1 for ascending, -1 for descending):

db.collection.createIndex({type:1})

Delete:

1. Drop the entire collection

db.collection.drop()

2. Delete data whose type is test

db.collection.remove({"type" : "test"})

Index:

explain() is a great tool for tuning and optimizing queries:

db.numbers.find({num:{"$gt":1995}}).explain("executionStats")

Typical explain("executionStats") output without an index:

{
    "queryPlanner": {
        "plannerVersion": NumberInt("1"),
        "namespace": "test.numbers",
        "indexFilterSet": false,
        "parsedQuery": {
            "num": {
                "$gt": 1995
            }
        },
        "winningPlan": {
            "stage": "COLLSCAN",
            "filter": {
                "num": {
                    "$gt": 1995
                }
            },
            "direction": "forward"
        },
        "rejectedPlans": [ ]
    },
    "executionStats": {
        "executionSuccess": true,
        "nReturned": NumberInt("4"),
        "executionTimeMillis": NumberInt("20"),
        "totalKeysExamined": NumberInt("0"),
        "totalDocsExamined": NumberInt("2000"),
        "executionStages": {
            "stage": "COLLSCAN",
            "filter": {
                "num": {
                    "$gt": 1995
                }
            },
            "nReturned": NumberInt("4"),
            "executionTimeMillisEstimate": NumberInt("11"),
            "works": NumberInt("2002"),
            "advanced": NumberInt("4"),
            "needTime": NumberInt("1997"),
            "needYield": NumberInt("0"),
            "saveState": NumberInt("16"),
            "restoreState": NumberInt("16"),
            "isEOF": NumberInt("1"),
            "direction": "forward",
            "docsExamined": NumberInt("2000")
        }
    },
    "serverInfo": {
        "host": "5007451YFBPC1",
        "port": NumberInt("27017"),
        "version": "4.2.4",
        "gitVersion": "b444815b69ab088a808162bdb4676af2ce00ff2c"
    },
    "ok": 1
}

The totalKeysExamined field shows the number of index keys examined during the scan; here its value is 0. totalDocsExamined shows the number of documents scanned (2000, a full collection scan).

Create an index on num with createIndex():

>db.numbers.createIndex({num:1})
{
    "createdCollectionAutomatically": false,
    "numIndexesBefore": NumberInt("1"),
    "numIndexesAfter": NumberInt("2"),
    "ok": 1
}

The getIndexes() method shows whether the index was created successfully:

>db.numbers.getIndexes()
[
    {
        "v": NumberInt("2"),
        "key": {
            "_id": NumberInt("1")
        },
        "name": "_id_",
        "ns": "test.numbers"
    },
    {
        "v": NumberInt("2"),
        "key": {
            "num": 1
        },
        "name": "num_1",
        "ns": "test.numbers"
    }
]

The first is the standard _id index, which is automatically created for each collection. The second is the num index we created ourselves. These indexes are named _id_ and num_1, respectively.

Running the query with explain() again shows a considerably different response time.

>db.numbers.find({num:{"$gt":1995}}).explain("executionStats")
{
    "queryPlanner": {
        "plannerVersion": NumberInt("1"),
        "namespace": "test.numbers",
        "indexFilterSet": false,
        "parsedQuery": {
            "num": {
                "$gt": 1995
            }
        },
        "winningPlan": {
            "stage": "FETCH",
            "inputStage": {
                "stage": "IXSCAN",
                "keyPattern": {
                    "num": 1
                },
                "indexName": "num_1",
                "isMultiKey": false,
                "multiKeyPaths": {
                    "num": [ ]
                },
                "isUnique": false,
                "isSparse": false,
                "isPartial": false,
                "indexVersion": NumberInt("2"),
                "direction": "forward",
                "indexBounds": {
                    "num": ["(1995.0, inf.0]"]
                }
            }
        },
        "rejectedPlans": [ ]
    },
    "executionStats": {
        "executionSuccess": true,
        "nReturned": NumberInt("4"),
        "executionTimeMillis": NumberInt("14"),
        "totalKeysExamined": NumberInt("4"),
        "totalDocsExamined": NumberInt("4"),
        "executionStages": {
            "stage": "FETCH",
            "nReturned": NumberInt("4"),
            "executionTimeMillisEstimate": NumberInt("2"),
            "works": NumberInt("5"),
            "advanced": NumberInt("4"),
            "needTime": NumberInt("0"),
            "needYield": NumberInt("0"),
            "saveState": NumberInt("0"),
            "restoreState": NumberInt("0"),
            "isEOF": NumberInt("1"),
            "docsExamined": NumberInt("4"),
            "alreadyHasObj": NumberInt("0"),
            "inputStage": {
                "stage": "IXSCAN",
                "nReturned": NumberInt("4"),
                "executionTimeMillisEstimate": NumberInt("2"),
                "works": NumberInt("5"),
                "advanced": NumberInt("4"),
                "needTime": NumberInt("0"),
                "needYield": NumberInt("0"),
                "saveState": NumberInt("0"),
                "restoreState": NumberInt("0"),
                "isEOF": NumberInt("1"),
                "keyPattern": {
                    "num": 1
                },
                "indexName": "num_1",
                "isMultiKey": false,
                "multiKeyPaths": {
                    "num": [ ]
                },
                "isUnique": false,
                "isSparse": false,
                "isPartial": false,
                "indexVersion": NumberInt("2"),
                "direction": "forward",
                "indexBounds": {
                    "num": ["(1995.0, inf.0]"]
                },
                "keysExamined": NumberInt("4"),
                "seeks": NumberInt("1"),
                "dupsTested": NumberInt("0"),
                "dupsDropped": NumberInt("0")
            }
        }
    },
    "serverInfo": {
        "host": "5007451YFBPC1",
        "port": NumberInt("27017"),
        "version": "4.2.4",
        "gitVersion": "b444815b69ab088a808162bdb4676af2ce00ff2c"
    },
    "ok": 1
}

After using the index, only the four documents relevant to the query are scanned. Note, however, that an index takes up space and increases insert cost.

Basic management

1. List all databases

show dbs

2. Display all collections in the current database

show collections

3. Execute stats() on the database object:

>db.stats()
{
    "db": "test",
    "collections": NumberInt("14"),
    "views": NumberInt("0"),
    "objects": NumberInt("7665046"),
    "avgObjSize": 469.136041192708,
    "dataSize": 3595949336,
    "storageSize": 796413952,
    "numExtents": NumberInt("0"),
    "indexes": NumberInt("16"),
    "indexSize": 161972224,
    "fsUsedSize": 24373190656,
    "fsTotalSize": 53660876800,
    "ok": 1
}

4. Execute stats() on a single collection:

>db.event_log.stats()
{
    "ns": "test.event_log",
    "size": NumberInt("268546353"),
    "count": NumberInt("326524"),
    "avgObjSize": NumberInt("822"),
    "storageSize": NumberInt("76910592"),
    "capped": false,
    "wiredTiger": {
        "metadata": {
            "formatVersion": NumberInt("1")
        },
        "creationString": "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
        "type": "file",
        "uri": "statistics:table:collection-543233-3575391902029516587",
        "LSM": {
            "bloom filter false positives": NumberInt("0"),
            "bloom filter hits": NumberInt("0"),
            "bloom filter misses": NumberInt("0"),
            "bloom filter pages evicted from cache": NumberInt("0"),
            "bloom filter pages read into cache": NumberInt("0"),
            "bloom filters in the LSM tree": NumberInt("0"),
            "chunks in the LSM tree": NumberInt("0"),
            "highest merge generation in the LSM tree": NumberInt("0"),
            "queries that could have benefited from a Bloom filter that did not exist": NumberInt("0"),
            "sleep for LSM checkpoint throttle": NumberInt("0"),
            "sleep for LSM merge throttle": NumberInt("0"),
            "total size of bloom filters": NumberInt("0")
        },
        "block-manager": {
            "allocations requiring file extension": NumberInt("69"),
            "blocks allocated": NumberInt("459"),
            "blocks freed": NumberInt("181"),
            "checkpoint size": NumberInt("76869632"),
            "file allocation unit size": NumberInt("4096"),
            "file bytes available for reuse": NumberInt("24576"),
            "file magic number": NumberInt("120897"),
            "file major version number": NumberInt("1"),
            "file size in bytes": NumberInt("76910592"),
            "minor version number": NumberInt("0")
        },
        "btree": {
            "btree checkpoint generation": NumberInt("10231"),
            "column-store fixed-size leaf pages": NumberInt("0"),
            "column-store internal pages": NumberInt("0"),
            "column-store variable-size RLE encoded values": NumberInt("0"),
            "column-store variable-size deleted values": NumberInt("0"),
            "column-store variable-size leaf pages": NumberInt("0"),
            "fixed-record size": NumberInt("0"),
            "maximum internal page key size": NumberInt("368"),
            "maximum internal page size": NumberInt("4096"),
            "maximum leaf page key size": NumberInt("2867"),
            "maximum leaf page size": NumberInt("32768"),
            "maximum leaf page value size": NumberInt("67108864"),
            "maximum tree depth": NumberInt("4"),
            "number of key/value pairs": NumberInt("0"),
            "overflow pages": NumberInt("0"),
            "pages rewritten by compaction": NumberInt("0"),
            "row-store internal pages": NumberInt("0"),
            "row-store leaf pages": NumberInt("0")
        },
        "cache": {
            "bytes currently in the cache": NumberInt("297695535"),
            "bytes dirty in the cache cumulative": NumberInt("23372697"),
            "bytes read into cache": 4177634496,
            "bytes written from cache": NumberInt("3092655"),
            "checkpoint blocked page eviction": NumberInt("0"),
            "data source pages selected for eviction unable to be evicted": NumberInt("0"),
            "eviction walk passes of a file": NumberInt("10173"),
            "eviction walk target pages histogram - 0-9": NumberInt("1350"),
            "eviction walk target pages histogram - 10-31": NumberInt("6604"),
            "eviction walk target pages histogram - 128 and higher": NumberInt("0"),
            "eviction walk target pages histogram - 32-63": NumberInt("2219"),
            "eviction walk target pages histogram - 64-128": NumberInt("0"),
            "eviction walks abandoned": NumberInt("630"),
            "eviction walks gave up because they restarted their walk twice": NumberInt("0"),
            "eviction walks gave up because they saw too many pages and found no candidates": NumberInt("2"),
            "eviction walks gave up because they saw too many pages and found too few candidates": NumberInt("1"),
            "eviction walks reached end of tree": NumberInt("101"),
            "eviction walks started from root of tree": NumberInt("633"),
            "eviction walks started from saved location in tree": NumberInt("9540"),
            "hazard pointer blocked page eviction": NumberInt("0"),
            "in-memory page passed criteria to be split": NumberInt("0"),
            "in-memory page splits": NumberInt("0"),
            "internal pages evicted": NumberInt("0"),
            "internal pages split during eviction": NumberInt("0"),
            "leaf pages split during eviction": NumberInt("3"),
            "modified pages evicted": NumberInt("3"),
            "overflow pages read into cache": NumberInt("0"),
            "page split during eviction deepened the tree": NumberInt("0"),
            "page written requiring cache overflow records": NumberInt("0"),
            "pages read into cache": NumberInt("148821"),
            "pages read into cache after truncate": NumberInt("0"),
            "pages read into cache after truncate in prepare state": NumberInt("0"),
            "pages read into cache requiring cache overflow entries": NumberInt("0"),
            "pages requested from the cache": NumberInt("152960052"),
            "pages seen by eviction walk": NumberInt("212384"),
            "pages written from cache": NumberInt("287"),
            "pages written requiring in-memory restoration": NumberInt("0"),
            "tracked dirty bytes in the cache": NumberInt("0"),
            "unmodified pages evicted": NumberInt("138859")
        },
        "cache_walk": {
            "Average difference between current eviction generation when the page was last considered": NumberInt("0"),
            "Average on-disk page image size seen": NumberInt("0"),
            "Average time in cache for pages that have been visited by the eviction server": NumberInt("0"),
            "Average time in cache for pages that have not been visited by the eviction server": NumberInt("0"),
            "Clean pages currently in cache": NumberInt("0"),
            "Current eviction generation": NumberInt("0"),
            "Dirty pages currently in cache": NumberInt("0"),
            "Entries in the root page": NumberInt("0"),
            "Internal pages currently in cache": NumberInt("0"),
            "Leaf pages currently in cache": NumberInt("0"),
            "Maximum difference between current eviction generation when the page was last considered": NumberInt("0"),
            "Maximum page size seen": NumberInt("0"),
            "Minimum on-disk page image size seen": NumberInt("0"),
            "Number of pages never visited by eviction server": NumberInt("0"),
            "On-disk page image sizes smaller than a single allocation unit": NumberInt("0"),
            "Pages created in memory and never written": NumberInt("0"),
            "Pages currently queued for eviction": NumberInt("0"),
            "Pages that could not be queued for eviction": NumberInt("0"),
            "Refs skipped during cache traversal": NumberInt("0"),
            "Size of the root page": NumberInt("0"),
            "Total number of pages currently in cache": NumberInt("0")
        },
        "compression": {
            "compressed pages read": NumberInt("148779"),
            "compressed pages written": NumberInt("115"),
            "page written failed to compress": NumberInt("0"),
            "page written was too small to compress": NumberInt("172")
        },
        "cursor": {
            "bulk-loaded cursor-insert calls": NumberInt("0"),
            "close calls that result in cache": NumberInt("0"),
            "create calls": NumberInt("374"),
            "cursor operation restarted": NumberInt("0"),
            "cursor-insert key and value bytes inserted": NumberInt("519499"),
            "cursor-remove key bytes removed": NumberInt("0"),
            "cursor-update value bytes updated": NumberInt("0"),
            "cursors reused from cache": NumberInt("10731"),
            "insert calls": NumberInt("425"),
            "modify calls": NumberInt("0"),
            "next calls": 3346962122,
            "open cursor count": NumberInt("1"),
            "prev calls": NumberInt("1"),
            "remove calls": NumberInt("0"),
            "reserve calls": NumberInt("0"),
            "reset calls": NumberInt("26448080"),
            "search calls": NumberInt("317775"),
            "search near calls": NumberInt("26423409"),
            "truncate calls": NumberInt("0"),
            "update calls": NumberInt("0")
        },
        ... (remaining output truncated)
}

Executing commands

Using the runCommand helper method can be even simpler. db.stats() is equivalent to:

db.runCommand({dbstats: 1})

Example

MongoDB database:

alarm_log collection:

{
    "_id": ObjectId("5f3bbe31ef227a0001c50740"),
    "product_key": "6af4d3af657",
    "device_name": "381b9964b9b3438",
    "create_time": NumberLong("1597750833151"),
    "log_content": {
        "id": NumberInt("373"),
        "params": {
            "outputData": {
                "PumpFaultNum": "0,2,10",
                "PumpFault": "0",
                "SlotID": "1",
                "HighestLevel": "High",
                "Content": "No external power",
                "WarningType": "distribution alarm",
                "Time": NumberLong("1597738273825"),
                "Level": "Low",
                "PumpDeviceSN": "Sn00005"
            },
            "identifier": "AlarmLog"
        },
        "version": "1.0.0",
        "timestamp": NumberLong("1597049274321")
    },
    "_class": "com.medcaptain.parsedata.entity.mongodb.AlarmLog"
}
{
    "_id": ObjectId("5f3f6dceef227a0001c9fa61"),
    "product_key": "6af4d3af657",
    "device_name": "381b9964b9b3438",
    "create_time": NumberLong("1597992398229"),
    "log_content": {
        "id": NumberInt("92372004"),
        "params": {
            "outputData": {
                "PumpFaultNum": "10,11,12",
                "PumpFault": "12",
                "SlotID": "1",
                "HighestLevel": "High",
                "Content": "Infusion blocked",
                "WarningType": "distribution alarm",
                "Time": NumberInt("3881"),
                "Level": "High",
                "PumpDeviceSN": "SN0001"
            },
            "identifier": "AlarmLog"
        },
        "version": "1.0.0",
        "timestamp": NumberLong("1597992372004")
    },
    "_class": "com.medcaptain.parsedata.entity.mongodb.AlarmLog"
}

event_log collection:

{
    "_id": ObjectId("5f3f7643ef227a000173b143"),
    "product_key": "6af4d3af657",
    "device_name": "381b9964b9b3438",
    "create_time": "Fri Aug 21 15:22:43 CST 2020",
    "log_content": {
        "id": NumberInt("94559007"),
        "params": {
            "outputData": {
                "PumpPressureUnit": "3",
                "PumpInfusionConcentration": NumberInt("34564355"),
                "PumpInfusionSpeed": "bcfd6f0aeddd41f296077415d0c1e339",
                "WorkstationDeviceSN": "20200812",
                "PumpInfusionDoseRateUnit": "35",
                "Department": "782ebf4544f846cf932390f3d27d5161",
                "PumpInfusionDrugName": "63d84690e71e4f46b04524e7e4bcb80b",
                "PumpInfusionRemainTime": "c307e9247bbb4324b6b3979776011e76",
                "PumpSyringSize": "ce2cabb68a914b139b728cb74d05e2e0",
                "PumpDeviceType": NumberInt("2"),
                "PumpStatus": "3",
                "Room": "a551d6ead01f47f2aff734ffaa364323",
                "WorkMode": "0",
                "BedNumber": "4636567568704b6fa24755aefab6a9c4",
                "PumpInfusionRemain": "4169.5",
                "PumpDeviceModel": NumberInt("2"),
                "PumpDeviceSN": "Sn0001",
                "PumpFaultNum": "10,11,12",
                "PatientWeight": "78ef5bd1aa62453fb6d48cef95fb10a3",
                "PumpDlotID": "1",
                "PumpInfusionConcentrationUnit": "11",
                "PumpInfusionTime": "9ba6557ade6e4ec78be40634e7db9675",
                "PumpInfusionSum": "4f6d902bb226489a81ec6eeb7bf29bf4",
                "PumpPressureVal": "255",
                "PumpInfusionDrugID": "d82ca7c01f624c31b525f1a896a4b066",
                "PumpInfusionDoseRate": "d9c0e510a90341a5b4e3879c6040d0bb",
                "PumpInfusionTotal": "4443.3"
            },
            "identifier": "Pump"
        },
        "version": "1.0.0",
        "timestamp": NumberLong("1597994559007")
    },
    "organization_id": "",
    "department_id": "",
    "_class": "com.medcaptain.parsedata.entity.mongodb.EventLog"
}

Join query: query the list of faulty pumps (including the faulty pump SN and Model) by product_key and device_name.

db.alarm_log.aggregate([
				{$match: {"product_key": "6af4d3af657", "device_name": "381b9964b9b3438", "log_content.params.outputData.WarningType": "distribution alarm"}},
				{$group: {_id: "$log_content.params.outputData.PumpDeviceSN", time: {$max: "$log_content.timestamp"}}},
				{$lookup:{
									from: "event_log",
									localField: "_id",
									foreignField: "log_content.params.outputData.PumpDeviceSN",
									as: "inventory_docs"
									}
				},
				{$unwind: "$inventory_docs"},
				{$match: {"inventory_docs.log_content.params.identifier": "Pump"}},
				{$project: {"_id": 1, "inventory_docs.log_content.params.outputData.PumpDeviceModel": 1, "time": 1}},
				{$group: {_id: "$_id", time: {$max: "$time"}, PumpDeviceModel: {$first: "$inventory_docs.log_content.params.outputData.PumpDeviceModel"}}},
				{$sort: {"time": -1}}])
Explanation:
  • $match: filter within the alarm_log collection
  • The $lookup command:
    • the foreign collection is event_log
    • matches documents whose local field _id equals the foreign field log_content.params.outputData.PumpDeviceSN (alarm_log._id == event_log.log_content.params.outputData.PumpDeviceSN)
    • joins the matches into the stage output as the inventory_docs field (inventory_docs is an array)
  • $match: matches documents within all stages (this line can be deleted)
  • $unwind: the inventory_docs field holds an array (here the match is unique, so there is exactly one element); unwind expands it. (Documents whose inventory_docs is empty vanish.)
  • $project: reshapes the field names of this query
    • format: showField: $originalField
    • showField: 1 means the field is displayed (without being renamed)
  • The $group command:
    • groups documents by the _id field (kept as _id in each group); the maximum of the time field across the group's documents is stored in time
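The group-then-join-then-sort logic above can be mirrored over in-memory lists of dicts; this is a rough pure-Python sketch with simplified, made-up field names (sn, time, model), not MongoDB's implementation:

```python
def faulty_pump_models(alarm_logs, event_logs):
    # $group: keep the latest alarm time per pump SN
    latest = {}
    for a in alarm_logs:
        latest[a["sn"]] = max(latest.get(a["sn"], 0), a["time"])
    # $lookup + $unwind: pull the model for each SN from event_logs
    models = {e["sn"]: e["model"] for e in event_logs}
    joined = [{"_id": sn, "time": t, "PumpDeviceModel": models[sn]}
              for sn, t in latest.items() if sn in models]
    # $sort: newest alarm first
    return sorted(joined, key=lambda d: d["time"], reverse=True)

alarms = [{"sn": "SN0001", "time": 100}, {"sn": "SN0001", "time": 250},
          {"sn": "sn00005", "time": 50}]
events = [{"sn": "SN0001", "model": 12}, {"sn": "sn00005", "model": "Hp-60-test"}]
print(faulty_pump_models(alarms, events))
```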

Results:

// 1
{
    "_id": "sn0006",
    "time": NumberLong("1597994563007"),
    "PumpDeviceModel": NumberInt("2")
}

// 2
{
    "_id": "sn0001",
    "time": NumberLong("1597994562006"),
    "PumpDeviceModel": NumberInt("12")
}

// 3
{
    "_id": "sn00005",
    "time": NumberLong("1597811155691"),
    "PumpDeviceModel": "Hp-60-test"
}

The corresponding Java code:

public AggregationResults<JSONObject> findPumpDeviceSNList(ParamEntity paramEntity) {
        Criteria criteria = CommonAlarmLog(paramEntity);
        Aggregation aggregation = Aggregation.newAggregation(
                Aggregation.match(criteria),  // First filter of the alarm_log information
                Aggregation.group("log_content.params.outputData.PumpDeviceSN").max("log_content.timestamp").as("time"),  // deduplicate
                Aggregation.lookup("event_log", "_id", "log_content.params.outputData.PumpDeviceSN", "inventory_docs"),  // join the event_log collection
                Aggregation.unwind("inventory_docs"),  // expand to a non-array structure
                Aggregation.match(Criteria.where("inventory_docs.log_content.params.identifier").is("Pump")),  // keep event_log entries whose identifier is Pump
                Aggregation.project("_id", "inventory_docs.log_content.params.outputData.PumpDeviceModel", "time"),  // show only SN and Model
                Aggregation.group("_id").first("log_content.params.outputData.PumpDeviceModel").as("PumpDeviceModel").max("time").as("time"),  // deduplicate again
                Aggregation.sort(Sort.Direction.DESC, "time"));
        AggregationResults<JSONObject> alarmLogEntity = mongoTemplate.aggregate(aggregation, "alarm_log", JSONObject.class);
        return alarmLogEntity;
    }
private Criteria CommonAlarmLog(ParamEntity paramEntity) {
        // identifier is required for the query
        if (paramEntity == null) {
            return null;
        }
        Criteria criteria;
        if (!StringUtils.isEmpty(paramEntity.getIdentifier())) {
            criteria = Criteria.where("log_content.params.identifier").is(paramEntity.getIdentifier());
        } else {
            return null;
        }
        if (!StringUtils.isEmpty(paramEntity.getProductKey())) {
            criteria = criteria.and("product_key").is(paramEntity.getProductKey());
        }
        if (!StringUtils.isEmpty(paramEntity.getDeviceName())) {
            criteria = criteria.and("device_name").is(paramEntity.getDeviceName());
        }
        if (paramEntity.getParamKV() != null) {
            for (Map.Entry<String, Object> param : paramEntity.getParamKV().entrySet()) {
                if (!StringUtils.isEmpty(param.getKey())) {
                    criteria = criteria.and("log_content.params.outputData." + param.getKey()).is(param.getValue());
                }
            }
        }
        // difference set: fields that must exist
        if (paramEntity.getParamNameAnd() != null) {
            for (String param : paramEntity.getParamNameAnd()) {
                if (!StringUtils.isEmpty(param)) {
                    criteria = criteria.and("log_content.params.outputData." + param).exists(true);
                }
            }
        }
        // query time-range restriction
        if (paramEntity.getStartTime() != null && paramEntity.getEndTime() != null) {
            criteria.andOperator(Criteria.where("log_content.timestamp").lt(paramEntity.getEndTime()).gt(paramEntity.getStartTime()));
        }
        return criteria;
    }