8. Set up the management console

A management console is not included in the RocketMQ source code, but one is available in the RocketMQ community extension project at github.com/apache/rock…

Once downloaded, go to the rocketmq-console directory and compile it with Maven:

mvn clean package -Dmaven.test.skip=true

After compiling, take the JAR package from the target directory and run it directly. Note, however, that the nameserver address must be specified in the project’s application.properties; this property is empty by default.

Add an application.properties file in the same directory as the JAR to override the default property inside the JAR:

rocketmq.config.namesrvAddr=worker1:9876;worker2:9876;worker3:9876

Then execute:

java -jar rocketmq-console-ng-1.0.1.jar

Once it has started, you can visit http://192.168.232.128:8080 to see the management page.

You can select a language in the upper right corner of the admin page.

Set up a DLedger HA cluster

So far we have set up a RocketMQ cluster with a master-slave structure, but note that this master-slave structure only provides data backup, not failover: when a master node fails, its slave cannot be promoted to master to continue providing services. A DLedger cluster, by contrast, needs at least three nodes and keeps working as long as fewer than half of them fail.

If a slave node goes down, the impact on the cluster is small, because the slave only provides data backup. The slave is not idle, though: for example, when consumers pull a large backlog of data, RocketMQ has a mechanism that lets the master return only a small portion of the data and directs the consumers to pull the rest from the slave node.

Also note that DLedger has its own CommitLog mechanism, which means messages accumulated in a master-slave cluster cannot be carried over to a DLedger cluster.

To get true high-availability disaster recovery, use DLedger to set up a high-availability cluster. Note that DLedger is only supported from RocketMQ 4.5 onward; the 4.7.1 version we are using already integrates DLedger by default.

How to build the cluster

To set up a highly available broker cluster, we simply need to adjust the configuration files under conf/dledger.

This mode is based on the Raft protocol, an election protocol that plays a role similar to the Paxos-style protocol behind Zookeeper: the cluster elects one node as leader and the remaining nodes become followers. Its election process differs from Paxos, though: Raft relies on randomized timeouts (random sleep), so its elections are generally slower than under Paxos.

First, we again need to modify runserver.sh and runbroker.sh to customize the JVM memory.
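
For example, on a small test virtual machine the default heap settings are far too large. A minimal sketch of the lines to change (the 8g/4g defaults shown in the comments are what the 4.7.x scripts ship with; the 1g/512m replacement values are illustrative):

# bin/runbroker.sh -- default: JAVA_OPT="${JAVA_OPT} -server -Xms8g -Xmx8g -Xmn4g"
JAVA_OPT="${JAVA_OPT} -server -Xms1g -Xmx1g -Xmn512m"

# bin/runserver.sh -- default: JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g -Xmn2g"
JAVA_OPT="${JAVA_OPT} -server -Xms512m -Xmx512m -Xmn256m"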

Then we need to modify the configuration files under conf/dledger. The DLedger configuration items are as follows:

  • enableDLegerCommitLog: whether to enable DLedger. Example: true
  • dLegerGroup: name of the DLedger Raft group; recommended to keep it consistent with brokerName. Example: RaftNode00
  • dLegerPeers: port information of each node in the DLedger group; the configuration must be identical on every node in the same group. Example: n0-127.0.0.1:40911;n1-127.0.0.1:40912;n2-127.0.0.1:40913
  • dLegerSelfId: the node ID, which must be one of the IDs in dLegerPeers and unique within the group. Example: n0
  • sendMessageThreadPoolNums: number of message-sending threads; recommended to set it to the number of CPU cores. Example: 16
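
Putting these items together, a minimal broker-n0.conf sketch modeled on the sample shipped under conf/dledger (the cluster name, listen port, and store paths are illustrative, and namesrvAddr is adapted to the worker hosts used earlier):

brokerClusterName=RaftCluster
brokerName=RaftNode00
listenPort=30911
namesrvAddr=worker1:9876;worker2:9876;worker3:9876
storePathRootDir=/tmp/rmqstore/node00
storePathCommitLog=/tmp/rmqstore/node00/commitlog
enableDLegerCommitLog=true
dLegerGroup=RaftNode00
dLegerPeers=n0-127.0.0.1:40911;n1-127.0.0.1:40912;n2-127.0.0.1:40913
dLegerSelfId=n0
sendMessageThreadPoolNums=16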

After configuring, start each instance with nohup bin/mqbroker -c $conf_name &, where $conf_name is that instance’s configuration file.

The bin/dledger/fast-try.sh script starts three RocketMQ instances on the local machine to build a high-availability cluster; the three brokers read conf/dledger/broker-n0.conf, broker-n1.conf, and broker-n2.conf respectively. When using this script, also take care to customize the JVM memory: by default it gives each instance 1 GB of memory, which is definitely more than our virtual machine can spare for three instances.
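
If you would rather start the three local instances yourself instead of running fast-try.sh, the equivalent commands are simply (assuming the nameserver is already running and the shipped sample configurations are used):

nohup bin/mqbroker -c conf/dledger/broker-n0.conf &
nohup bin/mqbroker -c conf/dledger/broker-n1.conf &
nohup bin/mqbroker -c conf/dledger/broker-n2.conf &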

After the three-instance cluster is up, you can run bin/mqadmin clusterList -n worker1:9876 to check the cluster status.

When one node goes down, the cluster takes roughly 10 seconds to complete the master/slave switchover.

9. Adjust system parameters

At this point, our entire RocketMQ service is set up. But in practice, RocketMQ is said to deliver very high throughput and very high performance, and to actually reach that level you need to tune both RocketMQ itself and the server it runs on.

1. RocketMQ JVM memory size

As mentioned earlier, the nameserver memory size is customized in runserver.sh and the broker memory size in runbroker.sh. The default settings can be regarded as proven, optimized configurations, but in practice they should be adjusted to the actual server. Below is the key G1GC configuration in runbroker.sh:

JAVA_OPT="${JAVA_OPT} -XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:SoftRefLRUPolicyMSPerMB=0"
JAVA_OPT="${JAVA_OPT} -verbose:gc -Xloggc:${GC_LOG_DIR}/rmq_broker_gc_%p_%t.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintAdaptiveSizePolicy"
JAVA_OPT="${JAVA_OPT} -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m"

  • -XX:+UseG1GC: use the G1 garbage collector.
  • -XX:G1HeapRegionSize=16m: set the G1 region size to 16 MB.
  • -XX:G1ReservePercent=25: reserve 25% of the heap as free memory; the default is 10%, and RocketMQ turns this parameter up.
  • -XX:InitiatingHeapOccupancyPercent=30: start a concurrent garbage-collection cycle when heap occupancy reaches 30%; the default is 45%. RocketMQ lowers this value, which makes collections more frequent but avoids letting so much garbage accumulate that a single collection takes too long.
  • -XX:SoftRefLRUPolicyMSPerMB=0: clear soft references as soon as possible instead of keeping them for a grace period per MB of free heap.

The remaining options customize the GC log: where the log files are written, what gets printed, and rotation across five files of 30 MB each. These logs are an important reference for performance tuning.
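
The ${GC_LOG_DIR} variable referenced above comes from earlier in runbroker.sh; assuming that is where your version defines it, a sketch of redirecting the GC logs to a dedicated disk (the path is illustrative):

GC_LOG_DIR="/data/rocketmq/gclogs"    # must exist and be writable by the user running the broker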

2. Other core parameters of RocketMQ

For example, conf/dledger/broker-n0.conf contains the parameter sendMessageThreadPoolNums=16. It sets the size of the thread pool RocketMQ uses internally to send messages. You can adjust it to the number of CPU cores on your machine; for example, if your machine has more than 16 cores, this parameter can be turned up.
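
A small sketch of matching this parameter to the machine, assuming the key already exists in the file as it does in the shipped samples:

# set sendMessageThreadPoolNums to this machine's CPU core count
CORES=$(nproc)
sed -i "s/^sendMessageThreadPoolNums=.*/sendMessageThreadPoolNums=${CORES}/" conf/dledger/broker-n0.conf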

3. Customize Linux kernel parameters

When we deploy RocketMQ, we also need to customize some Linux kernel parameters, for example (a sketch of the corresponding commands follows this list):

  • ulimit: RocketMQ performs a lot of network communication and disk I/O, so the default resource limits need to be raised.
  • vm.extra_free_kbytes: tells the VM subsystem to keep extra free memory between the threshold at which background reclaim (kswapd) starts and the threshold at which direct reclaim (in the allocating process) starts. RocketMQ uses this parameter to avoid long latencies in memory allocation. (Only available on certain kernel versions.)
  • vm.min_free_kbytes: if set below 1024 KB, it will subtly break the system, and the system becomes prone to deadlock under high load.
  • vm.max_map_count: limits the maximum number of memory map areas a process may have. RocketMQ uses mmap to load the CommitLog and ConsumeQueue, so this parameter should be set to a large value.
  • vm.swappiness: defines how aggressively the kernel swaps memory pages. Higher values increase swapping, lower values decrease it. A value of 10 is recommended to avoid swap latency.
  • File descriptor limits: RocketMQ needs file descriptors for open files (CommitLog and ConsumeQueue) and for network connections. We recommend setting the file descriptor limit to 655350.
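
A sketch of applying these recommendations by hand is below. The exact values are illustrative (vm.extra_free_kbytes only exists on some kernels, so that line may simply fail); the os.sh script mentioned below carries RocketMQ’s own recommended values:

# run as root; put the vm.* settings into /etc/sysctl.conf to make them survive a reboot
sysctl -w vm.extra_free_kbytes=2000000
sysctl -w vm.max_map_count=655360
sysctl -w vm.swappiness=10
ulimit -n 655350    # current shell only; use /etc/security/limits.conf for a permanent change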

On CentOS 7, the vm.* parameters above can be viewed and modified under the /proc/sys/vm directory (or via sysctl).

In addition, the RocketMQ bin directory contains an os.sh script that applies RocketMQ’s recommended kernel parameter values; adjust it as required before running it.
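
A minimal usage sketch (review the script first, since it changes system-wide settings and needs root privileges):

sudo sh bin/os.sh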