How well does EMQ hold up when hosting millions of user connections? The official answer is that an 8-core, 32 GB machine can carry 1.6 million devices, but what does real-world performance look like? You only know if you try, so this section walks through tuning the system configuration and stress-testing EMQ to see for yourself.

Links:

Author's blog: w-blog.cn
EMQ official site: emqtt.com
EMQ Chinese documentation: emqtt.com/docs/v2/gui…

1. Tuning Linux and the Erlang VM

Optimization of Linux system parameters

Increase the number of files that can be opened by all processes system-wide:

sysctl -w fs.file-max=2097152
sysctl -w fs.nr_open=2097152

> vi /etc/sysctl.conf
fs.file-max = 2097152
fs.nr_open = 2097152

Set the default file-handle limit for systemd services:

vim /etc/systemd/system.conf 
DefaultLimitNOFILE=1048576

Set the number of file handles a user/process is allowed to open:

ulimit -n 1048576

> vim /etc/security/limits.conf
* soft nofile 1048576
* hard nofile 1048576

The '*' applies the limit to all users; soft or hard selects whether the soft or hard limit is modified; the final value (here 1048576) is the new maximum number of open files. Note that the soft limit must be less than or equal to the hard limit.
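After these changes (a fresh login session is needed for limits.conf to take effect), the limits can be verified. A minimal check, assuming a Linux host with the values configured above:

```shell
# Verify the system-wide and per-process file-handle limits.
# Run from a fresh login session so limits.conf has taken effect.
sysctl -n fs.file-max   # expect 2097152
sysctl -n fs.nr_open    # expect 2097152
ulimit -n               # expect 1048576 (soft limit)
ulimit -Hn              # expect 1048576 (hard limit)
```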

TCP stack network parameters

> vi /etc/sysctl.conf
## TCP connection backlogs:
net.core.somaxconn=32768
net.ipv4.tcp_max_syn_backlog=16384
net.core.netdev_max_backlog=16384
## Local port range:
net.ipv4.ip_local_port_range='1000 65535'
## TCP socket read/write buffer settings:
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.optmem_max=16777216
#sysctl -w net.ipv4.tcp_mem='16777216 16777216 16777216'
net.ipv4.tcp_rmem='1024 4096 16777216'
net.ipv4.tcp_wmem='1024 4096 16777216'
## TCP connection-tracking settings:
net.nf_conntrack_max=1000000
net.netfilter.nf_conntrack_max=1000000
net.netfilter.nf_conntrack_tcp_timeout_time_wait=30
## FIN-WAIT socket timeout:
net.ipv4.tcp_fin_timeout=15
## TIME-WAIT socket limit:
net.ipv4.tcp_max_tw_buckets=1048576
## Recycle and reuse TIME-WAIT sockets:
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1
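One consequence of the port-range setting is worth spelling out: a single client IP can open at most (65535 - 1000 + 1) outgoing connections to one broker address, which is why large stress tests need many small client machines. A quick calculation:

```shell
# Max outbound connections per (source IP, dest IP, dest port) tuple,
# given net.ipv4.ip_local_port_range = '1000 65535'
low=1000
high=65535
ports=$((high - low + 1))
echo "$ports"   # 64536
```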

Erlang VM parameters

> vim /usr/local/emqttd/etc/emq.conf
## Erlang process limit
node.process_limit = 2097152
## Maximum number of simultaneously existing ports for this system
node.max_ports = 1048576
## Size of the acceptor pool
listener.tcp.external.acceptors = 64
## Maximum number of connections EMQ allows
## (plan for roughly 50,000 concurrent clients per 1 GB of memory)
listener.tcp.external.max_clients = 1000000
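The max_clients value should be sized against available memory. Using the rough 50,000-connections-per-GB rule of thumb above, a quick sizing calculation:

```shell
# Rough memory sizing for listener.tcp.external.max_clients,
# using the ~50,000-connections-per-GB rule of thumb.
max_clients=1000000
conns_per_gb=50000
mem_gb=$((max_clients / conns_per_gb))
echo "$mem_gb"   # GB of RAM needed: 20
```

So a max_clients of 1,000,000 implies a broker with around 20 GB of memory.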

After EMQ is restarted, the new limits can be seen on the Dashboard.

2. Stress-testing EMQ

The stress-test tool requires Erlang R17 or later.

yum -y install ncurses-devel openssl-devel unixODBC-devel gcc-c++
mkdir -p /app/install
cd /app/install/
wget http://erlang.org/download/otp_src_19.0.tar.gz
tar -xvzf otp_src_19.0.tar.gz
cd otp_src_19.0
./configure --prefix=/usr/local/erlang --with-ssl --enable-threads --enable-smp-support --enable-kernel-poll --enable-hipe --without-javac
make && make install

Configure the erl environment variables:

vim /etc/profile

# erlang
export ERLPATH=/usr/local/erlang
export PATH=$ERLPATH/bin:$PATH

source /etc/profile

Install the stress-testing tool:

yum -y install git
cd /app/install
git clone https://github.com/emqtt/emqtt_benchmark.git
cd emqtt_benchmark
## Adjust system parameters and start the stress test
sysctl -w net.ipv4.ip_local_port_range="500 65535"
echo 1000000 > /proc/sys/fs/nr_open
ulimit -n 1000000
./emqtt_bench_sub -h 192.168.2.111 -c 32219 -i 1 -t bench/%i -q 2
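Because each load-generator IP is limited by its local port range, a large test is typically launched from many machines at once. A hypothetical wrapper (the broker address, host names, and per-host connection count are placeholders, not from the original test):

```shell
# Launch emqtt_bench_sub on several load-generator hosts over SSH.
# BROKER, the host list, and PER_HOST are placeholders -- adjust
# them for your own environment.
BROKER=192.168.2.111
PER_HOST=32219
for host in bench-01 bench-02 bench-03; do
  ssh "$host" "cd /app/install/emqtt_benchmark && \
    ./emqtt_bench_sub -h $BROKER -c $PER_HOST -i 1 -t bench/%i -q 2" &
done
wait
```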

Attached is the author's stress-test chart: 14 load-generator servers (1 core, 1 GB each) were used to stress an EMQ server with 2 cores and 8 GB, reaching a stable peak of 440,000 connections. That works out to roughly 1 GB of memory per 55,000 to 60,000 device connections, very close to the official figure of 1.6 million devices on 32 GB of memory.
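As a quick sanity check on that ratio, using the peak-connection and memory figures from the test just described:

```shell
# Sanity-check the connections-per-GB ratio from the stress test:
# 440,000 stable connections on an 8 GB broker.
peak_conns=440000
broker_mem_gb=8
per_gb=$((peak_conns / broker_mem_gb))
echo "$per_gb"   # 55000
```

That is 55,000 connections per GB, in the same ballpark as the official claim of 1.6 million devices on 32 GB (50,000 per GB).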

3. Summary

The stress test performed after system tuning produced numbers basically consistent with the official figures, which shows that the number of connections EMQ can carry really is remarkable; it has earned the label of a million-level message service. With this conclusion in hand, the next step is to explore the limits of EMQ clustering. See you in the next post!

Note: my abilities are limited, so if anything here is wrong I hope you will point it out. I also hope we can keep the conversation going!