1. Sharding Mechanism

1.1 Core Components

In a single-node deployment, high-frequency queries place a heavy burden on the server's CPU and I/O. MongoDB therefore provides a sharding mechanism for distributing large data sets across multiple machines, which improves overall system throughput. A standard MongoDB sharded cluster consists of three components:

  • Shard: a mongod server that stores a subset of the sharded data. To ensure high availability, each shard should be deployed as a replica set.
  • Config Servers: the configuration servers, the core of the cluster, which store the cluster's metadata and configuration (for example, which chunks live on which shard and the data range of each chunk); a short sketch of how to inspect this metadata follows this list. Starting with MongoDB 3.4, config servers must be deployed as a replica set.
  • Mongos: the query router and the entry point through which clients access the cluster. Mongos obtains chunk information from the config servers and routes client requests to the appropriate shard.
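
The division of labor becomes concrete once the cluster is running: the metadata maintained by the config servers can be browsed from any mongos. A minimal sketch, assuming you are connected to a mongos as set up in section 2 below:

use config
db.shards.find()             // the registered shard replica sets
db.chunks.find().limit(5)    // chunk ranges and the shard that owns each
db.mongos.find()             // the mongos routers known to the cluster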

1.2 The Shard Key

To distribute the documents of a collection across different chunks, MongoDB requires the user to specify a shard key, and then splits the data into chunks according to the chosen sharding strategy. Each sharded collection can have only one shard key:

  • When sharding a non-empty collection, the collection must already have an index whose prefix is the shard key (that is, the index must start with the shard key fields); a short sketch follows this list.
  • For an empty collection, if no such index exists, MongoDB automatically creates one on the shard key.
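
As an illustration (the database, collection, and field names below are made up for the example), a compound index that starts with the intended shard key satisfies the prefix requirement. The sh.* helpers used here are equivalent to the runCommand form used in section 2.8:

// Enable sharding on the (hypothetical) database first
sh.enableSharding("shop")
// A compound index starting with customerId...
db.getSiblingDB("shop").orders.createIndex({ customerId: 1, orderDate: 1 })
// ...allows customerId to be used as the shard key, since it is the index prefix
sh.shardCollection("shop.orders", { customerId: 1 })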

1.3 Sharding Strategies

Currently, MongoDB 4.x supports two sharding strategies: hashed sharding and ranged sharding.

  • Hashed sharding: the shard key value is hashed, and the document is assigned to a chunk based on the hash.
  • Ranged sharding: documents are assigned to chunks based on ranges of the shard key value.

Hashed sharding distributes data more evenly, but performs poorly for range queries (for example, querying all orders within a certain ID range), because under this strategy adjacent shard key values do not end up in the same chunk, so such a query usually has to be broadcast to every shard in the cluster.

Ranged sharding, by contrast, performs better for range queries, but the data may be distributed unevenly. If the shard key is monotonically increasing, new writes keep landing in the last chunk for a long time. Ranged sharding is therefore better suited to shard keys whose values stay within a stable range over time, such as age.
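
For comparison, the strategy is chosen when the collection is sharded. A hedged sketch (the database, collection, and field names are illustrative, and sharding must already be enabled on the database):

// Ranged sharding on a field whose values stay within a stable range
sh.shardCollection("shop.customers", { age: 1 })
// Hashed sharding on a monotonically increasing field such as an order id
sh.shardCollection("shop.payments", { orderId: "hashed" })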

It is important to note that the shard key cannot be changed after the collection has been sharded, so weigh all of the above factors carefully when choosing one.

1.4 Chunk Splitting and Migration

Regardless of the sharding strategy, data ultimately lives in chunks, and the default chunk size is 64 MB. As data keeps being inserted, a chunk is split once it grows beyond this size. Chunk splitting is a lightweight operation: no data is actually moved; the config servers simply update the metadata of the affected chunks.

When one shard server holds too much data, chunk migration takes place to avoid the CPU and I/O pressure that would otherwise fall on a single server. Chunks are moved from one shard to another, and the config servers update the metadata of the migrated chunks. Migration is performed by a balancer that runs in the background and monitors the cluster continuously: if the difference in chunk counts between the largest and the smallest shard exceeds the migration threshold, the balancer starts moving chunks to keep the data evenly distributed.
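
The balancer can be observed and controlled from the mongos shell with the standard helpers:

sh.getBalancerState()     // is the balancer enabled?
sh.isBalancerRunning()    // is a balancing round in progress right now?
sh.stopBalancer()         // disable balancing, e.g. before maintenance
sh.startBalancer()        // re-enable balancing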

The chunk size can be changed manually (a short sketch follows this list), but the trade-offs need to be weighed:

  • Small chunks lead to frequent splitting and migration, which keeps the data evenly distributed but adds network overhead for the migrations and reduces routing performance.
  • Large chunks mean less splitting and migration and less network overhead, but carry the risk of uneven data distribution.
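
One documented way to change the chunk size (a sketch; the value is in megabytes) is to write to the settings collection of the config database via mongos:

use config
db.settings.save({ _id: "chunksize", value: 128 })   // default is 64 MB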

1.5 Data Query

Note that chunk migration does not affect queries. The query model of a MongoDB sharded cluster differs from that of a Redis Cluster: in Redis Cluster, the hashing rule that places the data is also the rule used to route queries. In a MongoDB sharded cluster, a query first goes to mongos, which fetches chunk locations and data ranges from the config servers, matches the query against that metadata, and only then routes it to the correct shard. The sharding strategy is therefore essentially decoupled from routing: if a document is placed in Chunk01 on Shard A by the sharding strategy and Chunk01 is later migrated to Shard B, the query is still routed correctly because the config servers have updated Chunk01's location.
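
A convenient way to observe this routing is explain(), which reports which shard(s) a query was sent to (the collection and field assume the test data created in section 2.8):

db.users.find({ uid: 1000 }).explain()          // filter contains the shard key: targeted to a single shard
db.users.find({ name: "user1000" }).explain()   // no shard key in the filter: scattered to all shards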

1.6 Non-sharded Collections

Everything above applies to sharded collections, but in practice not every collection needs to be sharded, and MongoDB allows sharded and non-sharded collections to coexist in the same database. Every database has a primary shard, on which all of its non-sharded collections are stored. Note that the primary shard has nothing to do with the primary node of a replica set. When a new database is created, MongoDB automatically picks the shard currently holding the least data as its primary shard. As shown in the figure below, Shard A is the primary shard, Collection1 is a sharded collection, and Collection2 is a non-sharded collection.
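
Each database's primary shard is recorded in the cluster metadata and is also shown by sh.status(); a quick way to check it directly:

use config
db.databases.find({}, { _id: 1, primary: 1 })   // database name and its primary shard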

2. Cluster Setup

Only three servers are available here. To ensure high availability, a mongod instance for each shard is deployed on all three servers (two mongod instances per server, on different ports), forming two shard replica sets. A config server is also deployed on each of the three servers, forming the config server replica set:

2.1 Configuring the Shard Replica Sets

The mongod-27018.conf configuration for the first shard replica set is as follows:

processManagement:
  fork: true
systemLog:
  destination: file
  path: "/home/mongodb/log-27018/mongod.log"
  logAppend: true
storage:
  dbPath: "/home/mongodb/data-27018"
net:
  port: 27018
  bindIp: 0.0.0.0
replication:
  # Name of the replica set
  replSetName: rs0
sharding:
  # The role of this node in the cluster
  clusterRole: shardsvr

The mongod-37018.conf configuration for the second shard replica set is as follows:

processManagement:
  fork: true
systemLog:
  destination: file
  path: "/home/mongodb/log-37018/mongod.log"
  logAppend: true
storage:
  dbPath: "/home/mongodb/data-37018"
net:
  port: 37018
  bindIp: 0.0.0.0
replication:
  # Name of the replica set
  replSetName: rs1
sharding:
  # The role of this node in the cluster
  clusterRole: shardsvr

2.2 Configuring the Config Server Replica Set

Create the config server configuration file mongo-config-server.conf with the following contents:

processManagement:
  fork: true
systemLog:
  destination: file
  path: "/home/mongodb/config-server-log/mongod.log"
  logAppend: true
storage:
  dbPath: "/home/mongodb/config-serve-data"
net:
  port: 27019
  bindIp: 0.0.0.0
replication:
  replSetName: configReplSet
sharding:
  clusterRole: configsvr

2.3 Configuring the Routing Service

Create the mongos routing service configuration file mongos.conf with the following contents:

processManagement:
  fork: true
systemLog:
  destination: file
  path: "/home/mongodb/mongos-log/mongod.log"
  logAppend: true
net:
  port: 27017
  bindIp: 0.0.0.0
sharding:
  # Address of the config server replica set
  configDB: configReplSet/hadoop001:27019,hadoop002:27019,hadoop003:27019

2.4 Distributing the Configuration

Distribute all of the above configuration files to the other two hosts:

scp mongod-27018.conf mongod-37018.conf mongo-config-server.conf mongos.conf root@hadoop002:/etc/
scp mongod-27018.conf mongod-37018.conf mongo-config-server.conf mongos.conf root@hadoop003:/etc/

All directories referenced in the configuration files must be created in advance, otherwise the services will not start. Run the following commands on all three hosts:

mkdir -p /home/mongodb/log-27018
mkdir -p /home/mongodb/data-27018

mkdir -p /home/mongodb/log-37018
mkdir -p /home/mongodb/data-37018

mkdir -p /home/mongodb/config-server-log
mkdir -p /home/mongodb/config-serve-data

mkdir -p /home/mongodb/mongos-log

2.5 Starting the Shard and Config Server Services

Run the following commands on all three hosts to start the shard services and the config server service:

mongod -f /etc/mongod-27018.conf
mongod -f /etc/mongod-37018.conf
mongod -f /etc/mongo-config-server.conf

2.6 Initializing All Replica Sets

Execute the following commands on any one of the hosts to initialize all replica sets.

Initialize the shard replica set rs0:

# Connect to the mongod instance on port 27018
mongo 127.0.0.1:27018
# Initialize the replica set rs0
rs.initiate( {
   _id : "rs0",
   members: [
      { _id: 0, host: "hadoop001:27018" },
      { _id: 1, host: "hadoop002:27018" },
      { _id: 2, host: "hadoop003:27018" }
   ]
})

#exit
exit

To initialize the shard replica set rs1, connect to port 37018 in the same way (mongo 127.0.0.1:37018) and run:

rs.initiate( {
   _id : "rs1",
   members: [
      { _id: 0, host: "hadoop001:37018" },
      { _id: 1, host: "hadoop002:37018" },
      { _id: 2, host: "hadoop003:37018" }
   ]
})

To initialize the config server replica set configReplSet, connect to port 27019 (mongo 127.0.0.1:27019) and run:

rs.initiate( {
   _id : "configReplSet",
   members: [
      { _id: 0, host: "hadoop001:27019" },
      { _id: 1, host: "hadoop002:27019" },
      { _id: 2, host: "hadoop003:27019" }
   ]
})

2.7 Starting the Routing Service

Start the routing service on one or more hosts. Mongos has no notion of a replica set, so to run multiple routing services you simply start one on each desired server with the following command:

mongos -f /etc/mongos.conf

The mongos.conf file already contains the sharding.configDB parameter, so the routing service knows about the config server replica set; all that remains is to tell mongos about the shard replica sets. The commands are as follows:

# Connect to the mongos service
mongo 127.0.0.1:27017
# Switch to the admin database
use admin

#Add a shard replica set
db.runCommand({ addshard : "rs0/hadoop001:27018,hadoop002:27018,hadoop003:27018",name:"shard0"})
db.runCommand({ addshard : "rs1/hadoop001:37018,hadoop002:37018,hadoop003:37018",name:"shard1"})

Note that addShard can only be run against the admin database, which is why you must switch to it first. After the shards are added successfully, you can run sh.status() to check the cluster status; it should show that both shard replica sets have been added.
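
As a cross-check, the registered shards can also be listed with the listShards command (run against mongos, on the admin database):

use admin
db.adminCommand({ listShards: 1 })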

2.8 Testing Sharding

1. Enable sharding

Connect to mongos, enable sharding for the testdb database, and set the shard key of the users collection to the user ID uid; the value 1 indicates ranged sharding. (For hashed sharding, use { uid: "hashed" } instead.)

db.runCommand({ enablesharding : "testdb" })
db.runCommand({ shardcollection : "testdb.users",key : {uid:1} })

2. Create indexes

Switch to testdb and create an index on the users collection as follows:

use testdb
db.users.createIndex({ uid:1 })

3. Insert test data

Use the following command to insert some data. The amount of data used for testing can be modified according to the performance of your server:

var arr = [];
for (var i = 0; i < 3000000; i++) {
    arr.push({ "uid": i, "name": "user" + i });
}
db.users.insertMany(arr);

4. View the shard distribution

After the data is inserted, run the sh.status() command again to view the sharding and chunk information; the output shows how the chunks of testdb.users are distributed across the two shards.
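
A complementary check is getShardDistribution(), which prints the per-shard document count and data size for a sharded collection:

use testdb
db.users.getShardDistribution()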

References

  • Official documentation: docs.mongodb.com/manual/shar…
  • Official configuration options: docs.mongodb.com/manual/refe…

For more articles, see the Full Stack Engineer Manual on GitHub: Github.com/heibaiying/…