I. Static proxy

Nginx is good at handling static files and makes a great image and file server. Putting all static resources on Nginx separates static content from dynamic content and improves performance.
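As a minimal sketch of this separation, the configuration below serves static assets straight from disk and proxies everything else to an application server; the /data/static path and the 127.0.0.1:8080 backend are assumptions for illustration, not values from the article.

```nginx
# Static/dynamic separation sketch.
server {
    listen 80;

    # Serve common static file types directly from disk.
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        root /data/static;          # assumed static asset directory
        access_log off;             # optional: skip logging for static hits
    }

    # Everything else goes to the application server.
    location / {
        proxy_pass http://127.0.0.1:8080;   # assumed application backend
    }
}
```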

II. Load balancing

Through reverse proxying, Nginx can load-balance a service: it avoids a single point of failure and forwards requests to different servers according to a chosen policy. Common load balancing policies include the following (a configuration sketch appears after the list):

![](https://pic4.zhimg.com/80/v2-96af3cc82bb7638133bc83d4b333a037_720w.jpg)

1. Round robin

Requests are distributed to the back-end servers in sequential rotation, treating every back-end server equally regardless of its actual number of connections or current system load.

2. Weighted round robin

Back-end servers may differ in hardware configuration and current load, so their capacity to handle requests also differs. Machines with a high configuration and low load are given a higher weight so they handle more requests, while low-configuration, high-load machines are given a lower weight to reduce their load; requests are then distributed to the back end in order, according to these weights.

3. ip_hash (source address hashing)

The client's IP address is run through a hash function, and the result is taken modulo the size of the server list; the remainder is the index of the server the client will access. With source address hashing, as long as the list of back-end servers stays the same, clients with the same IP address are always mapped to the same back-end server.

4. Random

A server is selected at random from the list of back-end servers using the system's random algorithm.

5. least_conn (least connections)

Because back-end servers differ in configuration, they process requests at different speeds. The least-connections method looks at each back-end server's current number of connections and dynamically sends each new request to the server with the fewest outstanding connections, using the back-end servers as efficiently as possible and distributing the load appropriately.
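The sketch below shows how these policies map onto an upstream block; the backend addresses and weights are assumptions for illustration.

```nginx
# Load balancing sketch: round robin is the default policy for an upstream group.
http {
    upstream backend {
        # ip_hash;        # uncomment for source address hashing
        # least_conn;     # uncomment for least connections
        server 10.0.0.1:8080 weight=3;   # weighted round robin: receives ~3x the requests
        server 10.0.0.2:8080 weight=1;
        server 10.0.0.3:8080;            # weight defaults to 1
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;   # requests are distributed across the group
        }
    }
}
```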

III. Traffic limiting

Nginx's rate-limiting module is based on the leaky bucket algorithm, which is very practical in high-concurrency scenarios.

![](https://pic3.zhimg.com/80/v2-74a8f3075d51b04db68b0436181e33a7_720w.jpg)

1. Set parameters

1) limit_req_zone is defined in the http block; $binary_remote_addr means the client IP address is stored in its binary form.

2) zone defines the shared memory area that stores per-IP state and access frequency. zone=name:size gives the zone a name, followed by a colon and its size. The state for about 16,000 IP addresses takes roughly 1 MB, so the zone in the example can store state for about 160,000 IP addresses.

3) rate defines the maximum request rate. In the example the rate is 100 requests per second, which cannot be exceeded.

2. Set the rate limit

burst sets the size of the burst queue; with nodelay, requests within the burst are processed immediately rather than being spaced out to match the configured rate. A configuration sketch combining these directives follows.
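This is a minimal sketch of the directives described above; the zone name, 10m size, 100r/s rate, burst value, and backend address are illustrative assumptions.

```nginx
http {
    # Track clients by binary IP; assumed 10 MB zone and 100 requests/second.
    limit_req_zone $binary_remote_addr zone=req_limit:10m rate=100r/s;

    server {
        listen 80;
        location / {
            # Allow short bursts of up to 10 extra requests, served without delay.
            limit_req zone=req_limit burst=10 nodelay;
            proxy_pass http://127.0.0.1:8080;   # assumed application backend
        }
    }
}
```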

IV. Caching

1. Browser cache: use the expires directive so the browser caches static resources.

![](https://picb.zhimg.com/80/v2-0afc53844559a9167455939b9963db11_720w.jpg)
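A minimal sketch of browser-side caching with expires; the 7-day lifetime and the static root are assumptions.

```nginx
# Ask browsers to cache static files instead of re-fetching them on every visit.
location ~* \.(jpg|jpeg|png|gif|css|js)$ {
    root /data/static;   # assumed static asset directory
    expires 7d;          # Expires/Cache-Control headers tell the browser to cache for 7 days
}
```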

2. Proxy-layer cache

![](https://pic2.zhimg.com/80/v2-b2192e42e65cfa4ab09ce898c153499b_720w.jpg)
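A minimal sketch of caching responses at the proxy layer; the cache path, zone name, sizes, and validity times are assumptions.

```nginx
http {
    # On-disk cache: 10 MB of keys in shared memory, up to 1 GB of cached responses.
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=app_cache:10m max_size=1g inactive=60m;

    server {
        listen 80;
        location / {
            proxy_cache app_cache;             # use the zone defined above
            proxy_cache_valid 200 302 10m;     # cache successful responses for 10 minutes
            proxy_cache_valid 404 1m;          # cache 404s briefly
            proxy_pass http://127.0.0.1:8080;  # assumed application backend
        }
    }
}
```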

V. Black and white lists

1. Whitelist exempt from rate limiting

![](https://pic3.zhimg.com/80/v2-b478604a9a41181fc419d7b2affbcb8f_720w.jpg)
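One common way to exempt whitelisted clients from rate limiting is the geo/map pattern sketched below; the 10.0.0.0/8 range, zone name, and rate are assumptions.

```nginx
# Inside the http block: whitelisted clients get an empty key and are not counted.
geo $limit {
    default      1;
    10.0.0.0/8   0;          # assumed whitelist range: not rate limited
}

map $limit $limit_key {
    0 "";                    # empty key => request is not accounted by limit_req_zone
    1 $binary_remote_addr;
}

limit_req_zone $limit_key zone=req_limit_wl:10m rate=100r/s;
```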

2. Blacklist

![](https://pic2.zhimg.com/80/v2-73f3782a7487e91daa7ce7d3541def0e_720w.jpg)
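A blacklist is typically built with allow/deny rules, as in the sketch below; the addresses and backend are placeholders.

```nginx
location / {
    deny  192.168.1.100;                 # block a single address
    deny  10.10.0.0/16;                  # block a whole range
    allow all;                           # everyone else is allowed through
    proxy_pass http://127.0.0.1:8080;    # assumed application backend
}
```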

That covers the main roles of Nginx: static/dynamic separation, load balancing, traffic limiting, caching, black and white lists, and more.

I hope the content above helps you. Many PHP developers run into problems and bottlenecks as they advance, and after writing too much business code they lose their sense of direction. I have put together some material covering, among other things: distributed architecture, high scalability, high performance, high concurrency, server performance tuning, TP6, Laravel, Redis, Swoft, Kafka, MySQL optimization, shell scripting, Docker, microservices, Nginx, and so on, and I'm happy to share it for free.