Basic introduction

Official documentation: docs.mongodb.com/

Chinese document: www.mongodb.org.cn/

Operation document: www.qikegu.com/docs/3283

Advantages over RDBMS (relational database)

  • Flexible structure (no fixed schema)
  • Data is stored as key-value pairs; documents are similar to JSON objects. Field values can contain other documents, arrays, and arrays of documents (see the example document after this list).
  • No complex table joins, and no need to maintain relationships between tables
  • Powerful query capabilities
  • Easy to optimize and scale
  • Application objects map naturally to database objects
  • Supports both in-memory and disk-based storage, and provides rich query operators and index support
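
For illustration, a minimal sketch of such a document in the mongo shell; the collection name and fields below are made up for the example:

db.books.insertOne({
    title: "Introduction to python",       // simple field
    price: 31.4,                           // numeric field
    tags: ["programming", "beginner"],     // array field
    publisher: {                           // embedded document
        name: "Example Press",
        city: "Beijing"
    }
})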

Data comparison

Term mapping

SQL               MongoDB             Notes
Database          Database
Table             Collection
Row/Record        Document            a document is a JSON-structured data record
Column/Field      Field (key)
Primary Key       ObjectId            _id: ObjectId("10c191e8608f19729507deea")
Index             Index               regular and unique indexes are both supported
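
To make the mapping concrete, here is a rough equivalence for a simple query, using a hypothetical users collection with an age field:

// SQL:     SELECT * FROM users WHERE age > 18;
// MongoDB:
db.users.find({ age: { $gt: 18 } })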

Install MongoDB Community Edition on macOS

xcode-select --install
brew tap mongodb/brew
brew install [email protected]
# Error:
# Error: No similarly named formulae found.
# Error: No available formula with the name "mongosh" (dependency of mongodb/brew/mongodb-community).

# Fix: switch to a Homebrew bottles mirror, then update Homebrew:
echo 'export HOMEBREW_BOTTLE_DOMAIN=https://mirrors.ustc.edu.cn/homebrew-bottles/' >> ~/.zshrc
source ~/.zshrc
brew update -v

brew install [email protected]
# Error:
# Error: Your Command Line Tools (CLT) does not support macOS 11.

# Fix: remove the outdated Command Line Tools, then retry the install:
sudo rm -rf /Library/Developer/CommandLineTools
# After that completes:
brew install [email protected]

# Start the service
brew services start mongodb/brew/mongodb-community
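
To confirm the installation, it can help to check the service status and the server version (standard Homebrew and MongoDB commands, not part of the original steps):

brew services list
mongod --version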

Up and running

mongo
# Output:
# MongoDB shell version v4.4.5
# connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
# Implicit session: session { "id" : UUID("9d61154c-8789-4a36-96de-494999c18bcf") }
# MongoDB server version: 4.4.5
#
# Welcome to the MongoDB shell.
# For interactive help, type "help".
# For more comprehensive documentation, see
#         https://docs.mongodb.com/
# Questions? Try the MongoDB Developer Community Forums
#         https://community.mongodb.com
# ---
# The server generated these startup warnings when booting:
#         2021-06-15T21:17:26.510+08:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
# ---
# ---
#         Enable MongoDB's free cloud-based monitoring service, which will then receive and display metrics about your deployment (disk utilization, CPU, operation statistics, etc).
#
#         The monitoring data will be available on a MongoDB website with a unique URL accessible to you and anyone you share the URL with. MongoDB may use this information to make product improvements and to suggest MongoDB products and deployment options to you.
#
#         To enable free monitoring, run the following command: db.enableFreeMonitoring()
#         To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
# ---

Note:

Warning: it is strongly recommended to use the WiredTiger storage engine together with the XFS file system. Note: Ubuntu uses ext4 by default, so switching to XFS is advised for better MongoDB performance.

mongo is a command-line tool for connecting to a specific mongod instance.

When run with no arguments, it connects to localhost on the default port 27017; an equivalent explicit invocation is shown below.
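
For reference, the same connection can be made explicitly; the host, port, and database name below are placeholders:

mongo --host 127.0.0.1 --port 27017
mongo mongodb://127.0.0.1:27017/test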

# Exit the interactive shell
exit

# Check the version
mongo --version

# Show the help documentation
help

# Current server status
db.serverStatus()

# Check the address of the instance the current db connection points to
db.getMongo()

# Check the logs
show logs
show log global

Database backup and recovery
# Database backup
mongodump -h dbhost -d dbname -o dbdirectory
# -h MongoDB host address (--host=host --port=port)
# -d name of the database to back up (--db=dbname)
# -o output directory; created automatically if it does not exist

# Data recovery
mongorestore -h dbhost -d dbname --dir dbdirectory
# -h MongoDB host address (--host=host --port=port)
# -d name of the database to restore into (--db=dbname)
# --dir directory holding the dumped data
# --drop drop the existing MongoDB data before restoring

# Data export
mongoexport -d dbname -c collectionname -o file --type json/csv -f field
# -d name of the database to export from (--db=dbname)
# -c name of the collection to export (--collection=collectionname)
# -o file to save the exported data to
# --type export format, json (default) or csv
# -f "field1,field2,..." fields to export

# Data import
mongoimport -d dbname -c collectionname --file filename --headerline --type json/csv -f field
# -d name of the database to import into (--db=dbname)
# -c name of the collection to import into (--collection=collectionname)
# --file file containing the data to import
# --type import file format, json (default) or csv
# -f "field1,field2,..." fields to import
# --headerline (csv only) use the first line of the file as the field names
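
As a concrete sketch of these four tools together, using a hypothetical mofang database and made-up paths:

# Back up the mofang database into /data/backup
mongodump --host 127.0.0.1 --port 27017 -d mofang -o /data/backup

# Restore it, dropping any existing data first
mongorestore --host 127.0.0.1 --port 27017 -d mofang --drop --dir /data/backup/mofang

# Export one collection to CSV, then import it back
mongoexport -d mofang -c users --type csv -f "name,age" -o users.csv
mongoimport -d mofang -c users --type csv --headerline --file users.csv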

Sample db.serverStatus() output (abridged; the full document also includes sections such as asserts, electionMetrics, flowControl, globalLock, locks, logicalSessionRecordCache, network, opLatencies, opReadConcernCounters, security, transactions, transportSecurity, twoPhaseCommitCoordinator, wiredTiger, and metrics):

{
	"host" : "meow.local",
	"version" : "4.4.5",
	"process" : "mongod",
	"pid" : NumberLong(5020),
	"uptime" : 906,
	"uptimeMillis" : NumberLong(906236),
	"uptimeEstimate" : NumberLong(906),
	"connections" : {
		"current" : 1,
		"available" : 51199,
		"totalCreated" : 1,
		"active" : 1
	},
	"storageEngine" : {
		"name" : "wiredTiger",
		"supportsCommittedReads" : true,
		"readOnly" : false,
		"persistent" : true
	},
	"opcounters" : {
		"insert" : NumberLong(0),
		"query" : NumberLong(4),
		"update" : NumberLong(1),
		"delete" : NumberLong(0),
		"getmore" : NumberLong(0),
		"command" : NumberLong(30)
	},
	"mem" : {
		"bits" : 64,
		"resident" : 41,
		"virtual" : 6941,
		"supported" : true
	},
	...
	"ok" : 1
}

User management

Creating an account administrator

# Enter/switch to the admin database
use admin
# Create an account administrator
db.createUser({
    user: "aa",
    pwd: "12345",
    roles: [
        { role: "userAdminAnyDatabase", db: "admin" }
    ]
});

# Result:
Successfully added user: {
	"user" : "aa",
	"roles" : [
		{
			"role" : "userAdminAnyDatabase",
			"db" : "admin"
		}
	]
}

Creating a Super Administrator

# Enter/switch to the admin database
use admin
# Create a super administrator account
db.createUser({
    user: "super",
    pwd: "123456",
    roles: [
        { role: "root", db: "admin" }
    ]
})

# A second example: a "python" account, also with the root role
db.createUser({
    user: "python",
    pwd: "123456",
    roles: [
        { role: "root", db: "admin" }
    ]
})

Built-in roles

  • Database user roles: read and readWrite
  • Database management roles: dbAdmin, dbOwner, and userAdmin
  • Cluster management roles: clusterAdmin, clusterManager, clusterMonitor, and hostManager
  • Backup and restore roles: backup and restore
  • All database roles: readAnyDatabase, readWriteAnyDatabase, userAdminAnyDatabase, and dbAdminAnyDatabase
  • Super user role: root

Built-in permissions

  • read: allows the user to read the specified database
  • readWrite: allows the user to read and write the specified database
  • dbAdmin: allows the user to perform administrative tasks in the specified database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
  • userAdmin: allows the user to write to the system.users collection and to create, delete, and manage users in the specified database
  • clusterAdmin: available only in the admin database; grants administrative rights over all sharding and replica-set related functions
  • readAnyDatabase: available only in the admin database; grants read permission on all databases
  • readWriteAnyDatabase: available only in the admin database; grants read and write permission on all databases
  • userAdminAnyDatabase: available only in the admin database; grants the userAdmin permission on all databases
  • dbAdminAnyDatabase: available only in the admin database; grants the dbAdmin permission on all databases
  • root: available only in the admin database; a superuser account with full privileges (see the example below)
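
A minimal sketch of assigning one of these roles in practice; the user name, password, and database name below are placeholders:

# A read-only account for the mofang database
use mofang
db.createUser({
    user: "report",
    pwd: "report123",
    roles: [
        { role: "read", db: "mofang" }
    ]
})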

Create a user for your own database

# Switching to a database that does not exist creates it automatically
use mofang
# Remove any existing user with this name first
db.system.users.remove({ user: "mofang" });
db.createUser({
    user: "mofang",
    pwd: "123",
    roles: [
        { role: "dbOwner", db: "mofang" }
    ]
})

User information

# View the users of the current database
use mofang
show users

# View all users in the system (switch to admin and authenticate as an account administrator first)
use admin
db.auth("root", "123456")
db.system.users.find()   # only available in the admin database


Delete user

db.system.users.remove({ user: "mofang" });

# Result:
WriteResult({ "nRemoved" : 1 })
# nRemoved greater than 0 means the user was removed; 0 means nothing was removed.
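
The shell also provides a dedicated helper for removing a user, which avoids touching system.users directly:

use mofang
db.dropUser("mofang")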

Change the password

# Switch to the corresponding database first
use mofang
# The database must already contain this user
db.changeUserPassword("mofang", "123456")

MongoDB account authentication mechanism

Setting up account authentication
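
The steps below assume authentication has already been turned on. One common way to do that, sketched here with Homebrew's default paths (which may differ on your machine), is to enable authorization in the mongod configuration and restart the service, or start mongod with --auth:

# 1) Add to the mongod config file (Homebrew default: /usr/local/etc/mongod.conf):
#      security:
#        authorization: enabled
#    then restart:
brew services restart mongodb/brew/mongodb-community

# 2) Or start mongod directly with authentication enabled:
mongod --auth --dbpath /usr/local/var/mongodb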

# After enabling the account authentication mechanism, enter mofang again
mongo
use mofang
show users                    # Uncaught exception: Error: Command usersInfo requires authentication
db.auth("mofang", "123")      # wrong password, error message:
# Error: Authentication failed.
# 0
db.auth("mofang", "123456")   # correct password, authentication succeeds:
# 1
show users                    # after authentication the command is no longer restricted
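
Credentials can also be supplied directly when starting the shell (same placeholder user and database as above):

mongo -u mofang -p 123456 --authenticationDatabase mofang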

Database management

  • List all databases (databases that contain no data are not shown; empty databases are reclaimed by MongoDB)
  • Switch to a database, creating it if it does not exist
  • View the current working database
  • Delete the current database; { "ok" : 1 } is returned even if the database does not exist
  • View the current database status

# List databases
show dbs
show databases

# Switch to / create a database
use <database>

# Check the current database
db
db.getName()

# View the current database status
db.stats()

# Delete the current database
db.dropDatabase()

Collection management

In MongoDB you normally do not need to create collections explicitly: insert a document and the collection is created automatically.

# Explicit creation: name is mandatory, options is optional.
# A capped (fixed-size) collection limits the amount of stored data and automatically overwrites the oldest documents when it reaches its maximum size.
# If capped is set to true, size must also be set.
db.createCollection("<name>", {
        capped: <boolean>,        # whether this is a capped collection
        size: <bytes_size>,       # maximum size of the capped collection, in bytes
        max: <collection_size>    # maximum number of documents in the capped collection
});

# Insert a document into a collection that does not exist yet; MongoDB creates the collection automatically.
db.collection.insert({ "name": "Introduction to python", "price": 31.4 })

# List collections
show collections    # or: show tables / db.getCollectionNames()

# Drop a collection
db.collection.drop()

# Get a collection object
db.getCollection("collection")    # equivalent to db.collection

# View collection creation information
db.printCollectionStats()
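
A concrete sketch of the capped-collection options described above; the collection name and sizes are arbitrary:

# A 1 MB capped collection holding at most 1000 documents
db.createCollection("log_demo", { capped: true, size: 1048576, max: 1000 })

# Verify
db.log_demo.isCapped()       # true
db.log_demo.stats().capped   # whether the collection is capped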