MongoDB is a high-performance NoSQL (Not Only SQL) database



5. MongoDB architecture

Part-1 Logical structure of MongoDB

MongoDB's architecture is similar to MySQL's: pluggable storage engines sit at the bottom to meet different user needs, and users can choose an engine based on the data characteristics of their application. Recent versions of MongoDB use WiredTiger as the default storage engine. WiredTiger provides concurrency control at different granularities and compression mechanisms, delivering the best performance and storage efficiency for different kinds of applications.

Above the storage engine sit MongoDB's data model and query language. Because MongoDB stores data differently from an RDBMS, it provides its own data model and query language.

Part-2 Data model of MongoDB

  1. Embedding: related data is stored within a single document structure. MongoDB documents allow a field, or an array of values, to be nested as a sub-document
  2. References: two different documents are associated by storing reference information that the application resolves to access the related data
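
To make the two approaches concrete, here is a minimal sketch in Python, with dicts standing in for BSON documents (the collection layout and field names are invented for illustration):

```python
# Embedding: related data is nested inside the parent document itself.
embedded_order = {
    "_id": 1,
    "customer": {"name": "Alice", "city": "Beijing"},          # nested sub-document
    "items": [{"sku": "A100", "qty": 2}, {"sku": "B200", "qty": 1}],  # nested array
}

# Referencing: related data lives in another collection and is linked
# by storing its _id; the application resolves the reference itself.
customers = {10: {"_id": 10, "name": "Alice", "city": "Beijing"}}
referenced_order = {"_id": 2, "customer_id": 10, "items": [{"sku": "A100", "qty": 2}]}

def resolve_customer(order, customer_coll):
    """The extra lookup an application performs to follow a reference."""
    return customer_coll[order["customer_id"]]

# With embedding, one read returns everything; with referencing, a second
# lookup is needed, but the customer data is stored only once.
print(resolve_customer(referenced_order, customers)["name"])  # Alice
```

With embedding, a single read returns the whole object; referencing avoids duplication at the cost of that extra lookup.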

Part-3 MongoDB storage engine

  • Before v3.2 => MMAPv1 (default)
  • v3.2 and later => WiredTiger (default)
  • InMemory storage engine: stores data only in memory, persisting just a small amount of metadata and diagnostic logs to disk files. Because the required data is served without disk I/O, the InMemory engine significantly reduces query latency
  • MongoDB 4.x no longer supports the MMAPv1 storage engine

[Advantages of WiredTiger]:

  • WiredTiger stores data in B-trees; MMAPv1 uses linear storage that requires padding
  • WiredTiger uses document-level locks; MMAPv1 uses table(collection)-level locks
  • WiredTiger supports Snappy (default) and zlib compression, saving several times the space of uncompressed MMAPv1
  • WiredTiger lets you specify how much memory it may use
  • WiredTiger uses its cache together with the file system cache to guarantee eventual consistency of data on disk; MMAPv1 has only journal logs

(Figure: files and functions contained in the WiredTiger engine)

[WiredTiger Principles]

[Checkpoint process]

  1. A checkpoint is performed on all tables, and each table's checkpoint metadata is updated in WiredTiger.wt
  2. A checkpoint is performed on WiredTiger.wt, and its checkpoint metadata is updated in the temporary file WiredTiger.turtle.set
  3. WiredTiger.turtle.set is renamed to WiredTiger.turtle
  4. If any of the preceding steps fails, WiredTiger restores the data to the latest snapshot state and then recovers it from the WAL, guaranteeing storage reliability
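
The rename in step 3 is the crux: a rename is atomic, so a reader sees either the complete old turtle file or the complete new one, never a half-written file. A minimal Python sketch of this write-temp-then-rename pattern, with invented file contents (the real files hold binary checkpoint metadata):

```python
import os
import tempfile

def atomic_update(path, data):
    # step 2: write the new checkpoint metadata to a temporary ".set" file
    directory = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".set")
    with os.fdopen(fd, "w") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the temp file's contents to disk first
    # step 3: atomically replace the old file; a crash before this line
    # simply leaves the previous checkpoint intact (step 4's recovery case)
    os.rename(tmp_path, path)

atomic_update("WiredTiger.turtle", "checkpoint metadata v1")
atomic_update("WiredTiger.turtle", "checkpoint metadata v2")
```

Whatever step the process dies at, the turtle file always names a complete, consistent checkpoint.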

6. MongoDB HA cluster

Part-1 Principles and defects of the MongoDB master-slave replication architecture

In the master-slave architecture, the master node reads and writes data, while the slave does not have the write permission.

In the master-slave structure, the master node's operation records are called the oplog (operation log). The oplog is stored in the oplog.$main collection of the local system database; each document in this collection represents one operation performed on the master node. The slave server periodically fetches oplog records from the master and replays them locally. The oplog is stored in a capped (fixed-size) collection, so once it fills up, new operations overwrite the oldest ones.
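
The fixed-collection behavior can be modeled with a bounded buffer: once the cap is reached, appending a new operation silently drops the oldest one. A toy sketch (the cap of 3 and the entries are illustrative only):

```python
from collections import deque

# a capped "oplog" that can hold at most 3 operation records
oplog = deque(maxlen=3)
for op in ["insert a", "insert b", "update a", "delete b"]:
    oplog.append(op)  # the 4th append evicts the oldest entry, "insert a"

# a slave that has not yet replayed "insert a" can no longer catch up
# incrementally and would need a full resync
print(list(oplog))  # ['insert b', 'update a', 'delete b']
```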

MongoDB 4.0 no longer supports master/slave replication (it had long been officially discouraged).

Part-2 Replica sets

What is a replica set?

A replica set is a cluster of mongod instances that maintain the same data set.

A replica set is a cluster of two or more servers whose members include a Primary node, Secondary nodes, and voting (arbiter) nodes.

Replication sets provide redundant backup of data and store data copies on multiple servers, improving data availability and ensuring data security.

“Why use replica Sets?”

  • High availability
    • Protects against device (server, network) failures
    • Provides automatic failover
    • The basis for guaranteeing high availability
  • Disaster recovery
    • When a failure occurs, data can be recovered from another node's copy
  • Function isolation
    • Reads can be performed on secondary nodes to reduce pressure on the primary

    For example: analytics, reports, data mining, system tasks, and so on.

“Principles of Replication Set Clustering Architecture”

Read and write operations can be performed on the Primary node in a replication set, while the Secondary node can only be used for read operations.

The Primary node records all operations that change the state of the database; these records are saved in the oplog, which is stored in the local database. Each Secondary node copies the oplog and applies the operations locally to keep its data consistent with the Primary's.

The oplog is idempotent: no matter how many times an entry is applied, the result is the same. This makes it easier to work with than MySQL's binary logs.

{
    "ts": Timestamp(1446011584, 2),
    "h": NumberLong("1687359108795812092"),
    "v": 2,
    "op": "i",
    "ns": "test.nosql",
    "o": {
        "_id": ObjectId("563062c0b085733f34ab4129"),
        "name": "mongodb",
        "score": "10"
    }
}
/*
ts: time of the operation (current timestamp + a counter; the counter resets every second)
h: globally unique identifier of the operation
v: oplog version
op: operation type -- i: insert, u: update, d: delete, c: run a command (such as createDatabase or dropDatabase), n: no-op (for special purposes)
ns: the collection the operation targets
o: the content of the operation
o2: the update query condition; only update operations contain this field
*/
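
The idempotency mentioned above comes from how update entries are written: the oplog records the resulting value (e.g. "set score to 15"), never a relative change ("add 5"), so replaying an entry any number of times converges to the same state. A simplified sketch, with an invented `apply` helper standing in for the real replay logic:

```python
def apply(doc, entry):
    """Replay a simplified oplog entry against a document."""
    if entry["op"] == "i":    # insert: the entry carries the full document
        return dict(entry["o"])
    if entry["op"] == "u":    # update: the entry carries absolute values
        updated = dict(doc)
        updated.update(entry["o"])
        return updated
    return doc

doc = apply({}, {"op": "i", "o": {"_id": 1, "score": 10}})
entry = {"op": "u", "o": {"score": 15}}  # "set to 15", not "add 5"
once = apply(doc, entry)
twice = apply(once, entry)
assert once == twice == {"_id": 1, "score": 15}  # replaying is harmless
```
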
  • Replica set data synchronization
    • Initial sync:

      Data is fully synchronized from the Primary node. If the Primary holds a large amount of data, the sync takes a long time
      • Trigger 1 ==> a Secondary joins the set for the first time
      • Trigger 2 ==> a Secondary falls behind by more than the size of the oplog; in this case it is also fully resynced
    • Keep-replication sync:

      After the initial sync, nodes stay in sync in real time, generally via incremental synchronization

MongoDB Primary node elections are based on heartbeat triggers.

Heartbeat detection: the cluster must maintain a certain amount of communication to know which nodes are alive and which are down. Each MongoDB node sends a ping packet to the other replica set members every 2 seconds; if a node does not respond within 10 seconds, it is marked as unreachable. Each node maintains a state map recording every member's current role, oplog timestamp, and other key information. If the primary finds that it cannot communicate with the majority of the nodes, it demotes itself to a read-only secondary.

Trigger timing of primary node election:

  • The replica set is initialized for the first time
  • A Secondary whose priority (weight) is higher than the Primary's triggers a replacement election
  • A Secondary initiates an election when it finds no Primary in the cluster
  • A Primary that cannot reach a majority of members actively steps down

Election process: (two-stage process + majority agreement)

  1. In the first stage, a node checks whether it is itself eligible for election; if so, it sends a FreshnessCheck to the other nodes, which arbitrate whether it really is eligible (peer arbitration)
  2. In the second stage, the initiator sends an Elect request to the surviving nodes in the cluster. Each receiving node performs a series of legitimacy checks; if the checks pass, the node gives the initiator one vote (a replica set can have up to 50 members, of which at most 7 can vote)
  • Majority agreement: if the initiator obtains more than half of the votes, the election passes and it becomes the Primary. Apart from ordinary network problems, the usual reason for falling short of a majority is that nodes of equal priority pass the first-stage peer arbitration and enter the second stage at the same time. When the votes are insufficient, each node therefore sleeps for a random interval in [0, 1] seconds and then retries the election.
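
The "more than half" rule in the second stage is just a majority count over the voting members, plus a random backoff on failure. A toy sketch of that arithmetic (the retry shape and vote counts are illustrative):

```python
import random
import time

def majority(voting_members):
    """Smallest number of votes that is more than half of the voters."""
    return voting_members // 2 + 1

def election_attempt(votes_received, voting_members):
    """Return True if the initiator wins this attempt; otherwise back off."""
    if votes_received >= majority(voting_members):
        return True  # the initiator becomes Primary
    # equal-priority peers may have split the vote: sleep a random
    # [0, 1]-second interval before the next attempt
    time.sleep(random.uniform(0, 1))
    return False

# a 3-member set needs 2 votes, a 5-member set needs 3, a 7-member set needs 4
print([majority(n) for n in (3, 5, 7)])  # [2, 3, 4]
```

This is also why odd member counts are preferred: adding a 4th member raises the majority threshold without adding fault tolerance.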

Replication Set Setup ☆

Due to limited server resources, the following setup runs multiple mongod instances on one machine, distinguished by port number, to simulate different nodes:

  • Primary node * 1: 37017
  • Secondary nodes * 3: 37018 + 37019 + 37020 (later converted into an arbiter node)

  • The configuration files are as follows (compared with before, a replSet attribute is added):
    • Configuration of primary node 37017:
    # make sure /data/mongo/data/server1/ exists
    dbpath=/data/mongo/data/server1
    port=37017
    bind_ip=0.0.0.0
    fork=true
    # make sure /data/mongo/logs/ exists
    logpath=/data/mongo/logs/server1.log
    replSet=lagouCluster
    • Configuration on node 37018:
    # make sure /data/mongo/data/server2/ exists
    dbpath=/data/mongo/data/server2
    port=37018
    bind_ip=0.0.0.0
    fork=true
    # make sure /data/mongo/logs/ exists
    logpath=/data/mongo/logs/server2.log
    replSet=lagouCluster
    • Configuration on node 37019:
    # make sure /data/mongo/data/server3/ exists
    dbpath=/data/mongo/data/server3
    port=37019
    bind_ip=0.0.0.0
    fork=true
    # make sure /data/mongo/logs/ exists
    logpath=/data/mongo/logs/server3.log
    replSet=lagouCluster
    • Configuration on node 37020:
    # make sure /data/mongo/data/server4/ exists
    dbpath=/data/mongo/data/server4
    port=37020
    bind_ip=0.0.0.0
    fork=true
    # make sure /data/mongo/logs/ exists
    logpath=/data/mongo/logs/server4.log
    replSet=lagouCluster

Use shell to enter any node and run the following commands in sequence:

var cfg = {
    "_id": "lagouCluster",
    "protocolVersion": 1,
    "members": [
        { "_id": 1, "host": "106.75.60.49:37017", "priority": 10 },
        { "_id": 2, "host": "106.75.60.49:37018" },
        { "_id": 3, "host": "106.75.60.49:37019" }
        // ... 37020 is added dynamically later
    ]
};

rs.initiate(cfg); 

You can use rs.status() to check the replica set status.

// add a node
rs.add("192.168.211.133:XXXXX")
// remove a slave node
rs.remove("192.168.211.133:XXXXX")

——- Replication set operation demonstration ——-

Configuration Parameters for Replication Set Members

  • _id: identifier of the member within the replica set (integer)
    • For example: _id: 0
  • host: node host (string)
    • For example: host: "host:port"
  • arbiterOnly: whether the member is an arbiter (boolean)
    • For example: arbiterOnly: true
  • priority (weight): default 1; whether the member is eligible to become primary. The value ranges from 0 to 1000; 0 means it can never become primary (integer)
    • For example: priority: 0|1
  • hidden: whether the member is hidden; priority must be 0 to set this (boolean)
    • For example: hidden: true|false, 0|1
  • votes: whether the member votes; 0 --> no vote, 1 --> votes (integer)
    • For example: votes: 0|1
  • slaveDelay: how many seconds the secondary is delayed (integer)
    • For example: slaveDelay: 3600
  • buildIndexes: whether indexes created on the primary are also built on this member; does not affect the _id index (boolean)
    • For example: buildIndexes: true|false, 0|1

rs.reconfig(cfg) // reloads the configuration and regenerates the cluster nodes

“Replication set Setup with quorum Nodes”

Remove the original 37020 node and run rs.addArb("IP:port") to add it back as an arbiter node.

Part-3 Sharded cluster

What is Sharding?

Sharding is a method used by MongoDB to split large sets horizontally among different servers (or replication sets). You don’t need a powerful mainframe computer to store more data and handle larger loads.

“Why Sharding?”

  1. The storage capacity exceeds the capacity of the single-node disk.
  2. The active data set exceeds the capacity of the stand-alone memory, causing many requests to read data from disk, affecting performance.
  3. IOPS exceeds the service capability of a single MongoDB node. As data increases, the bottleneck of single MongoDB instances becomes more and more obvious.
  4. Replica sets have a node limit.

Vertical scaling: add more CPUs and storage resources to expand capacity. Horizontal scaling: distribute the data set across multiple servers. Sharding is horizontal scaling.

“How Sharding works”

  • A sharded cluster consists of the following three services:
    • Shard Server: each shard consists of one or more mongod processes and stores the data
    • Router Server: all requests to the cluster are coordinated through the router (mongos), a request-distribution center that forwards application requests to the corresponding shard server, so no routing logic needs to be added to the application
    • Config Server: the configuration server; stores all database meta-information (routing, sharding) configuration
  • Shard key: to distribute a collection's documents, MongoDB splits the collection using the shard key
  • Chunk: within a shard server, MongoDB further divides the data into chunks; each chunk represents a portion of that shard server's data. Each chunk covers a left-closed, right-open interval of shard key values
  • Sharding strategy:
    • Range sharding (range-based sharding)



      Each chunk is assigned a range based on shard key values

      Range sharding is suitable for queries over a range, for example finding documents whose X value lies in [20, 30). The mongos router can locate the chunk on the specified shard directly from the metadata stored in the Config Server.
    • Hash sharding (hash-based sharding)



      Hash sharding computes the hash value of the shard key; each chunk is assigned a range of hash values.

      Hash sharding is complementary to range sharding: it distributes documents randomly across chunks, which maximizes write scalability and makes up for range sharding's weakness. Its drawback is that it cannot serve range queries efficiently.

Choosing a reasonable shard key means balancing two goals: queries should hit as few shards as possible, while writes should be distributed randomly across all shards. The key is how to balance performance and load.
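
The trade-off between the two strategies can be seen by simulating chunk routing. In this toy model the chunk boundaries, key space, and hash function are all simplified stand-ins for MongoDB's real implementation:

```python
import hashlib

# range sharding: chunks cover left-closed, right-open shard key intervals
range_chunks = [(0, 25), (25, 50), (50, 75), (75, 100)]

def route_range(key):
    for chunk_id, (lo, hi) in enumerate(range_chunks):
        if lo <= key < hi:
            return chunk_id

def route_hash(key, n_chunks=4):
    # hash sharding: route on a hash of the key, scattering adjacent keys
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % n_chunks

# a range query over [20, 30) touches only two adjacent range chunks...
assert {route_range(k) for k in range(20, 30)} == {0, 1}
# ...while under hash sharding the same keys land on chunks in no particular
# order: writes spread out, but a range query must consult many chunks
print(sorted({route_hash(k) for k in range(20, 30)}))
```
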

“The Building process of Sharding Cluster”

Next, we build a sharded cluster with the following layout:

  • Routing node: 27017
  • Shard node 1 (replica set): 37017 ~ 37019
  • Shard node 2 (replica set): 47017 ~ 47019
  • Config nodes (replica set): 17017 ~ 17019
1. Configure and start the Config node cluster

Node 1: config-17017.conf

# database file location
dbpath=data/mongo/data/config1
# log file location
logpath=data/mongo/data/logs/config1.log
# append to the log
logappend=true
# run as a daemon
fork=true
# port
port=17017
# bind host
bind_ip=0.0.0.0
# run as a config server
configsvr=true
# config server replica set name
replSet=configsvr

Node 2: config-17018.conf

# database file location
dbpath=data/mongo/data/config2
# log file location
logpath=data/mongo/data/logs/config2.log
# append to the log
logappend=true
# run as a daemon
fork=true
# port
port=17018
# bind host
bind_ip=0.0.0.0
# run as a config server
configsvr=true
# config server replica set name
replSet=configsvr

Node 3: config-17019.conf

# database file location
dbpath=data/mongo/data/config3
# log file location
logpath=data/mongo/data/logs/config3.log
# append to the log
logappend=true
# run as a daemon
fork=true
# port
port=17019
# bind host
bind_ip=0.0.0.0
# run as a config server
configsvr=true
# config server replica set name
replSet=configsvr

Start the config nodes

./bin/mongod -f config-17017.conf
./bin/mongod -f config-17018.conf
./bin/mongod -f config-17019.conf

Enter the mongo shell of any node and initialize the config node replica set. Note: switch to the admin database first.

./bin/mongo --port=17017

> use admin

> var cfg = {
    "_id": "configsvr",
    "members": [
        { "_id": 1, "host": "106.75.60.49:17017" },
        { "_id": 2, "host": "106.75.60.49:17018" },
        { "_id": 3, "host": "106.75.60.49:17019" }
    ]
};
> rs.initiate(cfg)

2. Configure shard cluster 1

Node 1: shard1-37017.conf

# database file location
dbpath=data/mongo/data/shard1/shard1-37017
# log file location
logpath=data/mongo/data/shard1/shard1-37017.log
# append to the log
logappend=true
# run as a daemon
fork=true
# port
port=37017
# bind host
bind_ip=0.0.0.0
# run as a shard server
shardsvr=true
# replica set name
replSet=shard1

Node 2: shard1-37018.conf

# database file location
dbpath=data/mongo/data/shard1/shard1-37018
# log file location
logpath=data/mongo/data/shard1/shard1-37018.log
# append to the log
logappend=true
# run as a daemon
fork=true
# port
port=37018
# bind host
bind_ip=0.0.0.0
# run as a shard server
shardsvr=true
# replica set name
replSet=shard1

Node 3: shard1-37019.conf

# database file location
dbpath=data/mongo/data/shard1/shard1-37019
# log file location
logpath=data/mongo/data/shard1/shard1-37019.log
# append to the log
logappend=true
# run as a daemon
fork=true
# port
port=37019
# bind host
bind_ip=0.0.0.0
# run as a shard server
shardsvr=true
# replica set name
replSet=shard1

Start the shard nodes

./bin/mongod -f shard1-37017.conf
./bin/mongod -f shard1-37018.conf
./bin/mongod -f shard1-37019.conf

Enter the mongo shell of any node and initialize the shard's replica set

./bin/mongo --port=37017

> var cfg = {
    "_id": "shard1",
    "protocolVersion": 1,
    "members": [
        { "_id": 1, "host": "106.75.60.49:37017" },
        { "_id": 2, "host": "106.75.60.49:37018" },
        { "_id": 3, "host": "106.75.60.49:37019" }
    ]
};
> rs.initiate(cfg)
3. Configure shard cluster 2

Node 1: shard2-47017.conf

# database file location
dbpath=data/mongo/data/shard2/shard2-47017
# log file location
logpath=data/mongo/data/shard2/shard2-47017.log
# append to the log
logappend=true
# run as a daemon
fork=true
# port
port=47017
# bind host
bind_ip=0.0.0.0
# run as a shard server
shardsvr=true
# replica set name
replSet=shard2

Node 2: shard2-47018.conf

# database file location
dbpath=data/mongo/data/shard2/shard2-47018
# log file location
logpath=data/mongo/data/shard2/shard2-47018.log
# append to the log
logappend=true
# run as a daemon
fork=true
# port
port=47018
# bind host
bind_ip=0.0.0.0
# run as a shard server
shardsvr=true
# replica set name
replSet=shard2

Node 3: shard2-47019.conf

# database file location
dbpath=data/mongo/data/shard2/shard2-47019
# log file location
logpath=data/mongo/data/shard2/shard2-47019.log
# append to the log
logappend=true
# run as a daemon
fork=true
# port
port=47019
# bind host
bind_ip=0.0.0.0
# run as a shard server
shardsvr=true
# replica set name
replSet=shard2

Start the shard nodes

./bin/mongod -f shard2-47017.conf
./bin/mongod -f shard2-47018.conf
./bin/mongod -f shard2-47019.conf

Enter the mongo shell of any node and initialize the shard's replica set

./bin/mongo --port=47017

> var cfg = {
    "_id": "shard2",
    "protocolVersion": 1,
    "members": [
        { "_id": 1, "host": "106.75.60.49:47017" },
        { "_id": 2, "host": "106.75.60.49:47018" },
        { "_id": 3, "host": "106.75.60.49:47019" }
    ]
};
> rs.initiate(cfg)
4. Configure and start the routing node
  1. route-27017.conf
# log file location
logpath=/data/mongo/data/route/logs/route.log
# config server replica set addresses
configdb=configsvr/106.75.60.49:17017,106.75.60.49:17018,106.75.60.49:17019
# append to the log
logappend=true
# run as a daemon
fork=true
# port
port=27017
# bind host
bind_ip=0.0.0.0
  2. Use mongos to start the routing node, not mongod as before
./bin/mongos -f route-27017.conf
5. Add the shard nodes in mongos (the router)
./bin/mongo --port=27017

sh.status(); // Check the status

sh.addShard("shard1/106.75.60.49:37017,106.75.60.49:37018,106.75.60.49:37019");
sh.addShard("shard2/106.75.60.49:47017,106.75.60.49:47018,106.75.60.49:47019");
6. Enable sharding for the database and collection (specify the shard key)
// Enable sharding for the database
sh.enableSharding("users")  /* if the users database does not exist, create */
// Enable sharding for the specified collection
sh.shardCollection("users.user_info", {"name":"hashed"}) /* Hash sharding */
7. Insert test data into the collection
use users;
for (var i = 0; i <= 1000; i++) {
    db.user_info.insert({
        "name": "User" + i,
        "salary": (Math.random() * 20000).toFixed(2)
    });
}
8. Verify the sharding effect

Connect to shard1 and shard2 respectively and verify how the data is distributed

7. MongoDB security authentication

Part-1 Overview of security authentication

MongoDB has no account by default and can be connected directly without authentication.

Real projects must enforce authentication; otherwise the consequences are unthinkable. Since 2016 there have been many incidents of hackers holding MongoDB data for ransom. Most MongoDB security problems expose shortcomings on the user side: users do not pay enough attention to database security, often have not formed the habit of regular backups, and enterprises may lack experienced security professionals.

Therefore, enabling security authentication in MongoDB is a must.

Part-2 User-related operations

“Switch to the Admin library to add users”

use admin;
// syntax: db.createUser(userDocument) -- creates a MongoDB login user and assigns permissions
db.createUser({
    user: "account",
    pwd: "password",
    roles: [
        { role: "role 1 to bind", db: "authentication database A" },
        { role: "role 2 to bind", db: "authentication database B" },
        ...
    ]
});
  • user: the user name to create, e.g. admin, root, lagou
  • pwd: the login password
  • roles: the roles assigned to the user; different roles carry different permissions. The parameter is an array
  • role: a role predefined by MongoDB; different roles have different permissions, explained in detail later
  • db: the database instance name; for example, MongoDB 4.0.2 provides the admin, local, config, and test databases by default

Change the password

db.changeUserPassword('root', 'rootNew');

Adding a Role

db.grantRolesToUser('username', [{ role: 'role name', db: 'database name' }])

Start mongod with auth enabled

./bin/mongod -f conf/mongo.conf --auth
// You can also add auth=true to mongo.conf

Verify the user

db.auth("account", "password")

Delete user

db.dropUser("username")

Part-3 Roles

“Database built-in Roles”

  • read: allows the user to read the specified database
  • readWrite: allows the user to read and write the specified database
  • dbAdmin: allows the user to perform administrative functions in the specified database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
  • userAdmin: allows the user to write to the system.users collection and to create, delete, and manage users in the specified database
  • clusterAdmin: available only in the admin database; grants administrative rights over all sharding and replica set functions
  • readAnyDatabase: available only in the admin database; grants the user read permission on all databases
  • readWriteAnyDatabase: available only in the admin database; grants read and write permission on all databases
  • userAdminAnyDatabase: available only in the admin database; grants the user userAdmin permission on all databases
  • dbAdminAnyDatabase: available only in the admin database; grants the user dbAdmin permission on all databases
  • root: available only in the admin database; the super account with super permissions
  • dbOwner: the database owner permission, a combination of the readWrite, dbAdmin, and userAdmin roles

Roles for Each Type of User

  • Database user roles: read, readWrite
  • Database management roles: dbAdmin, dbOwner, userAdmin
  • Cluster management roles: clusterAdmin, clusterManager, clusterMonitor, hostManager
  • Backup and restore roles: backup, restore
  • All-database roles: readAnyDatabase, readWriteAnyDatabase, userAdminAnyDatabase, dbAdminAnyDatabase
  • Superuser role: root

Several other roles directly or indirectly provide system superuser access (dbOwner, userAdmin, userAdminAnyDatabase)

Part-4 Process for implementing single-machine security authentication

Create An Administrator

use admin

db.createUser({
    "user": "root",
    "pwd": "123456",
    "roles": [
        { "role": "root", "db": "admin" }
    ]
});

“Create a Common User”

use mydb

// user with read/write permission -- Zhang San
db.createUser({
    "user": "zhangsan",
    "pwd": "123456",
    "roles": [
        { "role": "readWrite", "db": "mydb1" }
    ]
});

// read-only user -- Li Si
db.createUser({
    "user": "lisi",
    "pwd": "123456",
    "roles": [
        { "role": "read", "db": "mydb1" }
    ]
});

MongoDB Security Authentication

  1. mongod --dbpath=<database path> --port=<port> --auth
  2. Or add auth=true to the configuration file

Common User Login Authentication Permission

Run use <target_db> and then db.auth("account", "password") to authenticate. Only after authentication succeeds can you perform operations within your granted permissions.

use mydb1;

db.auth("zhangsan", "123456");

show dbs;    // only mydb1 is visible
show tables; // only collections in mydb1 can be operated on

Administrator Login Authentication Permission

use admin

db.auth("root", "123456");

show dbs;
show tables;
// root can perform any operation, including on admin

Part-5 Security authentication for a sharded cluster

  1. Before enabling security authentication, connect to the router and create the administrator and a common user
use admin

db.createUser({
    "user": "root",
    "pwd": "123456",
    "roles": [
        { "role": "root", "db": "admin" }
    ]
});
use users

// user with read/write permission -- Daliang
db.createUser({
    "user": "daliang",
    "pwd": "123456",
    "roles": [
        { "role": "readWrite", "db": "users" }
    ]
});
  2. Shut down all config nodes, shard nodes, and routing nodes

Because there are many processes, you can use the psmisc package to stop them in batches

yum install psmisc   # install psmisc
killall -2 mongod    # quickly stop multiple mongod processes with killall
  3. Generate the key file and change its permissions
# make sure the relative path data/mongodb/ exists
openssl rand -base64 756 > data/mongodb/testKeyFile.file
chmod 600 data/mongodb/testKeyFile.file
  4. In the config node cluster and both shard node clusters, enable security authentication and specify the key file
auth=true 
keyFile=data/mongodb/testKeyFile.file

This configuration must be added to shard cluster 1 (37017 to 37019), shard cluster 2 (47017 to 47019), and the config cluster (17017 to 17019)

  5. Set the key file in the routing node's configuration file
keyFile=data/mongodb/testKeyFile.file
  6. Start all config nodes, shard nodes, and the routing node; perform permission authentication through the router
./bin/mongod -f config/config-17017.conf 
./bin/mongod -f config/config-17018.conf 
./bin/mongod -f config/config-17019.conf 
./bin/mongod -f shard/shard1/shard1-37017.conf 
./bin/mongod -f shard/shard1/shard1-37018.conf 
./bin/mongod -f shard/shard1/shard1-37019.conf 
./bin/mongod -f shard/shard2/shard2-47017.conf 
./bin/mongod -f shard/shard2/shard2-47018.conf 
./bin/mongod -f shard/shard2/shard2-47019.conf 
./bin/mongos -f route/route-27017.conf

It is recommended to write a shell script for batch startup

  7. Spring Boot connects to the sharded cluster with security authentication
spring.data.mongodb.username=account
spring.data.mongodb.password=password
# spring.data.mongodb.uri=mongodb://account:password@ip:port/database