Nginx is a lightweight HTTP server built on an event-driven, asynchronous, non-blocking processing model, which gives it excellent I/O performance. It is widely used for server-side reverse proxying and load balancing.

Launched in 2004, Nginx has matured over the past 15 years and is now a standard component of large web sites. Reverse proxying and load balancing are its two best-known features and form most people's first impression of it. Let's look at both concepts.

Reverse proxy and load balancing

1. First, a look at forward proxies

A familiar example is getting around network blocks: we find a proxy server that can reach blocked foreign websites and send our request to it; the proxy visits the foreign site and relays the response back to us.

The defining feature of a forward proxy is that the client knows exactly which server it wants to reach, while the server only knows which proxy the request came from, not which specific client sent it. A forward proxy therefore masks, or hides, the real client's information.

A forward proxy acts on behalf of the client.
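As a rough sketch, a minimal forward-proxy server block might look like the following. This handles plain HTTP only (proxying HTTPS CONNECT requests needs a third-party module such as ngx_http_proxy_connect_module), and the listen port and resolver address are illustrative assumptions:

```nginx
server {
    listen 8080;
    # Resolve whatever host name the client asks for (address is illustrative)
    resolver 8.8.8.8;
    location / {
        # Forward the request to the host the client originally wanted
        proxy_pass http://$http_host$request_uri;
    }
}
```

The client points its HTTP proxy setting at this server, and the server fetches the target site on the client's behalf.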

2. Reverse proxy

Handling high concurrency has become a basic requirement for large websites. A single server cannot cope with the volume of requests, so distributed deployment emerged: requests are spread across multiple servers.

Requests sent by many clients are received by the Nginx server and then distributed, according to configured rules, to back-end business servers for processing. Here the client is explicit, but it has no idea which server actually handles its request; Nginx acts as a reverse proxy.

A reverse proxy acts on behalf of the server.
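A minimal reverse-proxy sketch, where the back-end address 127.0.0.1:3000 is an assumption for illustration:

```nginx
server {
    listen 80;
    server_name www.a.com;
    location / {
        # Clients only ever see www.a.com; the back end stays hidden
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```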

3. Load balancing

Load balancing is one of Nginx's most common features. It distributes incoming traffic across multiple back-end (proxied) servers according to configured rules, so that under high concurrency every server in the cluster carries a balanced share of the load; this optimizes resource utilization, maximizes throughput, reduces latency, and provides fault tolerance.

The load balancing modes are as follows:

  1. Round robin (default): requests are assigned to the back-end servers one by one, in order of arrival;

  2. ip_hash: each request is assigned according to the hash of the client IP address, so requests from the same IP always reach the same back-end server. This pins a client to one machine and thereby solves session-affinity problems.

  3. url_hash: requests are assigned according to the hash of the requested URL, so each URL is always directed to the same back-end server, which improves that server's cache hit rate.

  4. fair: a smarter algorithm than the ones above. It balances load according to page size and load time, that is, requests are assigned according to each back-end server's response time, with shorter response times taking priority. Nginx does not support fair natively; to use this scheduling algorithm you must install the third-party upstream_fair module.
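The first two built-in strategies above can be sketched with an upstream block (the server addresses are illustrative); uncommenting ip_hash switches the scheduling from round robin to IP hash:

```nginx
upstream backend {
    # ip_hash;                     # uncomment for ip_hash scheduling
    server 10.0.0.1:8080;
    server 10.0.0.2:8080 weight=2; # weighted round robin: gets twice the traffic
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```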

How does Nginx help the front end?

1. Cross-domain solution

The same-origin policy restricts how documents or scripts loaded from one origin can interact with resources from another origin. It is an important security mechanism for isolating potentially malicious files.

Because of the same-origin policy, a front end and back end served from different origins cannot communicate directly, so the cross-domain problem must be solved first. Possible solutions include CORS, JSONP, and Nginx.

Assume the front end is deployed through nginx at the domain www.a.com and the API lives at api.a.com. A direct front-end request to api.a.com/getList will fail because it is cross-origin. Nginx can be configured to proxy requests whose path begins with /api:

location /api {
    # Set the Host request header
    proxy_set_header Host $host;
    # Path rewrite: strip the /api prefix
    rewrite  /api/(.*)  /$1  break;
    # Proxy to the API server
    proxy_pass http://api.a.com;
}

The front end can now request www.a.com/api/getList; the request is forwarded to the API server, and the browser's cross-origin check never comes into play.
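As an alternative to path rewriting, the API server's own nginx can answer with CORS headers so the browser allows the cross-origin call directly. A sketch, assuming the front-end origin is http://www.a.com and an illustrative back-end address:

```nginx
location /getList {
    # Tell the browser this origin may read the response
    add_header Access-Control-Allow-Origin http://www.a.com;
    add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
    add_header Access-Control-Allow-Headers 'Content-Type';
    # Answer CORS preflight requests without hitting the back end
    if ($request_method = OPTIONS) {
        return 204;
    }
    proxy_pass http://127.0.0.1:3000;  # back-end address is illustrative
}
```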

2. Enable Gzip compression

During development it is hard to avoid mature frameworks and plug-ins, and these external dependencies can be large, slowing down page response. Besides minifying code with build tools (Webpack, Rollup), we can enable Nginx's Gzip compression. Note that Gzip must be supported on both sides: the server compresses, and the browser decompresses.

To check whether the browser supports Gzip, look for gzip in the Accept-Encoding request header in the developer tools.

Nginx configuration:

server {
    # Enable gzip compression
    gzip on;
    # Minimum HTTP protocol version required for gzip (HTTP/1.1, HTTP/1.0)
    gzip_http_version 1.1;
    # Compression level: the higher the level, the longer compression takes (1-9)
    gzip_comp_level 4;
    # Minimum number of bytes to compress, taken from the Content-Length header
    gzip_min_length 1000;
    # MIME types to compress (text/html is compressed by default)
    gzip_types text/plain application/javascript text/css;
}

To check whether gzip takes effect, look for Content-Encoding: gzip in the response headers.

3. Adapt to PC and mobile websites

Determine whether the request is sent from a PC or a mobile terminal and redirect it to the corresponding site:

location / {
    # Mobile and PC device adaptation
    if ($http_user_agent ~* '(Android|webOS|iPhone|iPod|BlackBerry)') {
        set $mobile_request '1';
    }
    if ($mobile_request = '1') {
        rewrite ^.+ http://mobile.a.com;
    }
}

The above configuration means that when the PC site detects a request from a mobile device, it redirects the request to the mobile site.

4. Merge resource requests

Take JS files as an example: instead of requesting 1.js, 2.js, and 3.js separately, the page can request the single URL http://www.a.com/static/js/??1.js,2.js,3.js. After receiving the request, nginx concatenates the files into one response, reducing the number of requests.

# The nginx-http-concat module has more parameters than those shown below; consult its documentation for the rest
location /static/js/ {
    concat on;  # Whether to enable resource merging
    concat_types application/javascript;  # MIME types allowed to be merged
    concat_unique off;  # Whether to allow merging resources of different types
    concat_max_files 5;  # Maximum number of resources allowed in one merge
}

5. The try_files directive

Usage Scenarios:

1. When using front-end routing (such as react-router), the requested URL has no corresponding file on the server, so we point such requests to the front-end project's index.html.

location / {
    index index.html;
    try_files $uri $uri/ /index.html;
}

2. When domain names are scarce, front-end projects from multiple business lines can share one domain, distinguished by subdirectory. For example, www.a.com/proj1/ hosts project 1 and www.a.com/proj2/ hosts project 2.

location /proj1 {
    try_files $uri $uri/ /proj1/index.html;
}
location /proj2 {
    try_files $uri $uri/ /proj2/index.html;
}

After the two projects are distinguished, each front-end route is matched in turn; for example, the path www.a.com/proj2/page1 matches the /proj2/page1 route under proj2.

To keep the query-string parameters through the try_files fallback, append ?$args to the fallback target, e.g. /proj2/index.html?$args.
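For example, to preserve the query string when falling back for project 2:

```nginx
location /proj2 {
    # ?$args re-attaches the original query string to the fallback
    try_files $uri $uri/ /proj2/index.html?$args;
}
```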

That's all for now.
