Hello, today I want to share Nginx with you. Take out your notebook and write it down!

1. Introduction to Nginx

1.1 Overview of Nginx

Nginx ("Engine X") is a high-performance HTTP and reverse proxy server, notable for its low memory footprint and high concurrency. In fact, Nginx performs better than other web servers of the same type, and it is used by large sites such as Baidu, JD.com, Sina, NetEase, Tencent, and Taobao.

1.2 Nginx as a Web server

Nginx can be used as a web server for static pages, and it also supports dynamic CGI languages such as Perl and PHP. It does not run Java directly, however; Java programs can only be served in conjunction with Tomcat. Nginx was developed specifically with performance optimization in mind: performance was the most important consideration, and the implementation is very efficient. Nginx can withstand high loads, and reports indicate that it can support up to 50,000 concurrent connections.

Official site: lnmp.org/nginx.html

1.3 Forward Proxy

Nginx can act not only as a reverse proxy to achieve load balancing, but also as a forward proxy for Internet access and other purposes. Forward proxy: if the Internet outside the LAN is regarded as a huge resource library, clients on the LAN must access the Internet through a proxy server; this kind of proxy service is called a forward proxy.
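As a minimal illustration (this sketch is not from the original post, and plain nginx only proxies HTTP this way; handling HTTPS CONNECT requests needs a third-party module), a forward-proxy server block might look like:

    server {
        listen 8888;                                 # hypothetical proxy port
        resolver 8.8.8.8;                            # DNS resolver nginx uses to look up targets
        location / {
            proxy_pass http://$host$request_uri;     # forward to whatever host the client asked for
        }
    }

Clients would then point their HTTP proxy setting at this server's address and port 8888.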

1.4 Reverse Proxy

A reverse proxy, by contrast, is imperceptible to the client: the client needs no configuration at all. We simply send requests to the reverse proxy server, which selects a target server, fetches the data, and returns it to the client. To the outside, the reverse proxy server and the target server appear as one server: the proxy server's address is exposed while the real server's IP address is hidden.
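Conceptually, the heart of a reverse proxy in nginx is a single directive (the address below is a hypothetical internal server, not from the original post):

    location / {
        proxy_pass http://192.168.1.100:8080;   # nginx fetches from this hidden server
    }

Complete, concrete examples follow in section 3.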

1.5 Load Balancing

The client sends multiple requests to the server, which processes the requests, some of which may interact with the database, and then returns the results to the client.

This architectural pattern is suitable for early systems with relatively few concurrent requests, and it is cheap. However, as the amount of information, the volume of visits and data, and the complexity of the system's business logic all keep growing, this architecture causes the server to respond to client requests ever more slowly, and when concurrency is particularly high, the server can simply crash. This is clearly a server performance bottleneck, so what can be done about it?

Our first thought may be to upgrade the server's configuration, such as raising the CPU frequency or adding memory, improving the machine's physical performance to solve the problem. But we know that Moore's Law is increasingly ineffective, and hardware performance can no longer keep up with growing demand. The most obvious example is the enormous instantaneous traffic for a hot product on Tmall's Singles' Day: an architecture like the one above, even with the machine upgraded to the highest physical configuration, cannot meet the demand. So what do we do?

The analysis above rules out increasing the server's physical configuration, that is, the vertical approach. So how about increasing the number of servers horizontally? This is where the concept of a cluster comes in: if a single server cannot cope, we increase the number of servers and distribute requests among them, spreading the load that used to be concentrated on one server across many. This is what we call load balancing.

1.6 Dynamic and Static Separation

To speed up website response, dynamic pages and static pages can be served by different servers. This accelerates resolution and reduces the load on any single server.

2. Common nginx commands and configuration files

2.1 Common nginx commands:

Start command

Run ./nginx in the /usr/local/nginx/sbin directory.

Shutdown command

Run ./nginx -s stop in /usr/local/nginx/sbin.

Reload command

Run ./nginx -s reload in /usr/local/nginx/sbin.
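Before reloading, it is also worth validating the configuration file; nginx has a built-in syntax check for this:

    ./nginx -t    # test the configuration file for syntax errors without (re)starting

If the test passes, ./nginx -s reload applies the new configuration without dropping existing connections.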

2.2 The nginx.conf configuration file

The default configuration files are stored in the conf directory of the nginx installation directory, and the main configuration file, nginx.conf, is there as well. Most subsequent work with nginx consists of modifying this configuration file.

The config file contains a lot of comments marked with #. With all of them removed, the condensed file looks like this:

    worker_processes  1;

    events {
        worker_connections  1024;
    }

    http {
        include       mime.types;
        default_type  application/octet-stream;
        sendfile      on;
        keepalive_timeout  65;

        server {
            listen       80;
            server_name  localhost;
            # ... (location blocks and closing braces omitted in the original excerpt)
        }
    }

The nginx.conf configuration file can be divided into three parts:

2.2.1 Part One: The global block

The content from the start of the configuration file up to the events block sets directives that affect the overall operation of the Nginx server: the user (and group) running the server, the number of worker processes allowed, the storage path of the process PID file, the log paths and types, and configuration file imports.

For example, in the first line above:

    worker_processes  1;

This is a key setting for Nginx's concurrent processing: the larger the worker_processes value, the more concurrency the server can support, although the number is constrained by the available hardware, software, and other resources.

2.2.2 Part Two: The events block

The directives in the events block mainly affect the network connections between the Nginx server and its users. Common settings include whether to serialize connection acceptance across multiple worker processes, whether a worker may accept multiple new connections at once, which event-driven model to use for processing connection requests, and the maximum number of connections each worker process can hold simultaneously.

    events {
        worker_connections  1024;
    }

The example above indicates that each worker process supports a maximum of 1024 connections.

This part of the configuration has a significant impact on Nginx's performance and should be tuned flexibly in practice.

2.2.3 Part Three: The http block

This is the most frequently modified part of the Nginx server configuration. Most features, such as proxying, caching, and log definitions, are configured here, as are third-party modules.

Note that the http block is itself composed of an http global block and server blocks.

The http global block:

The http global block's directives cover file imports, MIME type definitions, log customization, connection timeouts, the maximum number of requests per connection, and so on.

Server block:

This is closely related to virtual hosts. From the user's point of view, a virtual host is exactly the same as an independent hardware host; the technology exists to save on the hardware cost of Internet servers.

  • Each http block can contain multiple server blocks, and each server block is equivalent to a virtual host.

  • Each server block is in turn divided into a global server block and can contain multiple location blocks.

1. Global server block

The most common configuration here is the host's listening port and its name or IP.

2. location blocks

A server block can contain multiple location blocks.

The main purpose of this block is to match the request string received by the Nginx server (e.g. server_name/uri-string) against the part other than the virtual host name (e.g. /uri-string), and to process specific requests accordingly. Address redirection, data caching, and response control, as well as many third-party modules, are configured here.

3. Nginx configuration instance – Reverse proxy

3.1 Reverse Proxy Example 1

Implementation effect: using the nginx reverse proxy, visiting www.fanxiangdaili.com jumps to 127.0.0.1:8080.

Map www.fanxiangdaili.com to 127.0.0.1 by modifying the local hosts file (i.e. add the line 127.0.0.1 www.fanxiangdaili.com).

Once configured, we can access the initial Tomcat screen from step 1 through www.fanxiangdaili.com. So how can simply entering www.fanxiangdaili.com jump to the Tomcat welcome page? This is where Nginx's reverse proxy comes in.

Add the following configuration to the nginx.conf configuration file. Note which configuration file nginx was started with; if nginx is started with the default configuration file, that is the file you need to modify.
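The original post showed this configuration as a screenshot; a sketch of what it would contain looks roughly like the following (the server_name matches the hosts entry above):

    server {
        listen       80;
        server_name  www.fanxiangdaili.com;

        location / {
            proxy_pass http://127.0.0.1:8080;   # forward everything to the local Tomcat
        }
    }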

As configured above, we listen on port 80 and access the domain name www.fanxiangdaili.com (port 80 is the default when no port number is given). Access to this domain name is therefore forwarded to 127.0.0.1:8080. Entering www.fanxiangdaili.com in the browser produces the Tomcat welcome page.

3.2 Reverse Proxy Example 2

Implementation effect: use the nginx reverse proxy to jump to services on different ports according to the access path, with nginx listening on port 9001:

Visiting http://127.0.0.1:9001/edu/ jumps straight to 127.0.0.1:8081

Visiting http://127.0.0.1:9001/vod/ jumps straight to 127.0.0.1:8082

Implementation steps:

The first step is to prepare two Tomcats, one on port 8081 and one on port 8082, and prepare the test pages.

Step 2: Modify the nginx configuration file and add a server{} block to the http block.
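This server block was also shown as a screenshot in the original; a sketch of its likely shape (using the ports above) is:

    server {
        listen       9001;
        server_name  127.0.0.1;

        location ~ /edu/ {
            proxy_pass http://127.0.0.1:8081;   # paths containing /edu/ go to the first Tomcat
        }

        location ~ /vod/ {
            proxy_pass http://127.0.0.1:8082;   # paths containing /vod/ go to the second Tomcat
        }
    }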

Description of the location directive: this directive is used to match URLs. Its syntax supports the following modifiers (a combined example follows the list):

  1. = : for URIs without regular expressions, the request string must match the URI exactly. If the match succeeds, searching stops and the request is processed immediately.
  2. ~ : indicates that the URI contains a regular expression and is case sensitive.
  3. ~* : indicates that the URI contains a regular expression and is case insensitive.
  4. ^~ : for URIs without regular expressions, requires the Nginx server to find the location with the highest degree of prefix match to the request string and use it immediately, rather than matching the request string against the regular expressions in location blocks.

Note: If the URI contains a regular expression, it must be marked with ~ or ~*.
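As a hedged illustration of the four modifiers side by side (the paths and responses below are made up for this example, not taken from the original post):

    server {
        listen 8080;

        # exact match: only /status itself
        location = /status { return 200 "exact match\n"; }

        # prefix match that also suppresses regex matching for this prefix
        location ^~ /static/ { root /var/www; }

        # case-sensitive regex: matches .php but not .PHP
        location ~ \.php$ { return 403; }

        # case-insensitive regex: .jpg, .JPG, .png, ...
        location ~* \.(jpg|png)$ { root /var/www; expires 3d; }
    }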

4. Nginx configuration instance – Load balancing

Implementation steps:

1) First prepare two Tomcats running at the same time

2) Configure it in nginx.conf
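The configuration itself appeared as a screenshot in the original post; a sketch under assumed names and addresses (the upstream name myserver and the IPs are placeholders) looks like:

    http {
        upstream myserver {
            server 192.168.5.21:8080;
            server 192.168.5.22:8080;
        }

        server {
            listen       80;
            server_name  localhost;

            location / {
                proxy_pass http://myserver;   # requests are spread over the upstream servers
            }
        }
    }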

With the explosive growth of Internet information, load balancing is no longer an unfamiliar topic. As the name implies, load balancing distributes load across different service units, ensuring that the service stays available and responds quickly enough to give users a good experience. The rapid growth of visitors and data traffic has given rise to a variety of load-balancing products. Much professional load-balancing hardware works well but is expensive, which has made load-balancing software popular; Nginx is one such option. Nginx, LVS, Haproxy and others provide load-balancing services under Linux, and Nginx offers several allocation methods (strategies):

1. Round robin (default)

Each request is assigned to a different backend server one by one in chronological order. If a backend server goes down, it is automatically removed.

2. weight

weight specifies the polling probability, proportional to the access ratio; it is used when backend server performance is uneven. The default weight is 1, and the higher the weight, the more client requests the server is assigned. For example:

    upstream server_pool {
        server 192.168.5.21 weight=10;
        server 192.168.5.22 weight=10;
    }

3. ip_hash

Each request is assigned according to the hash of the client's IP address, so each visitor consistently reaches the same backend server, which can solve the session-sharing problem. For example:

    upstream server_pool {
        ip_hash;
        server 192.168.5.21:80;
        server 192.168.5.22:80;
    }

4. fair (third party)

Requests are assigned according to the response time of the backend servers, with shorter response times given priority. For example:

    upstream server_pool {
        server 192.168.5.21:80;
        server 192.168.5.22:80;
        fair;
    }

5. Nginx configuration instance – Dynamic and static separation

Nginx dynamic/static separation is, simply put, the separation of dynamic requests from static requests; it should not be understood as merely physically separating dynamic pages from static pages. Nginx handles the static pages, while Tomcat handles the dynamic pages. In terms of current implementations, separation can be roughly divided into two approaches. One is to purely separate static files onto a separate domain name on an independent server, which is the mainstream recommended scheme today. The other is to keep dynamic and static files mixed together and publish them separately through Nginx.

Different request forwarding is implemented by specifying different suffixes in location blocks. The expires parameter lets the browser cache a resource with an expiration date, reducing requests and traffic to the server. Specifically, expires sets an expiration time for a resource, so that within that window the browser can treat the resource as valid without asking the server to verify it, generating no additional traffic. This approach suits resources that rarely change (if a file is updated frequently, expires caching is not recommended). I set 3d here, meaning that when the URL is accessed within 3 days, a request is sent and compared against the file's last modification time on the server: if the file has not changed, status code 304 is returned and the file is not fetched again; if it has changed, the file is downloaded from the server anew and status code 200 is returned.

Go to the nginx installation directory and open the conf/nginx.conf configuration file.

The key point is to add location blocks:
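The configuration appeared as a screenshot in the original; a sketch under assumed paths and ports looks like the following (/data and port 8080 are placeholders):

    server {
        listen       80;
        server_name  localhost;

        # static resources are served by nginx directly
        location /static/ {
            root      /data;
            expires   3d;        # the 3-day browser cache discussed above
            autoindex on;        # list directory contents for easy verification
        }

        # everything else is treated as a dynamic request and goes to Tomcat
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }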

Finally, check that the Nginx configuration is correct, then test whether the static/dynamic separation works: delete a static file from the backend Tomcat server and check whether it is still accessible. If it is, the static resource is being returned directly by Nginx without touching the backend Tomcat server.

6. Nginx principles and parameter optimization

The master-workers mechanism:

Benefits of the master-workers mechanism

First of all, each worker is an independent process and needs no locking, so the overhead of locking is avoided; at the same time, programming and troubleshooting become much easier.

Secondly, independent processes do not affect one another: after one process exits, the others keep working and the service is not interrupted, and the master process quickly starts a new worker. Of course, an abnormal worker exit means there is a bug in the program; it causes all requests on that worker to fail, but it does not affect all requests overall, so the risk is reduced.

How many workers should be set?

Similar to Redis, Nginx uses an I/O multiplexing mechanism. Each worker is an independent process with only one main thread, handling requests in an asynchronous, non-blocking way, so even thousands of requests are no problem. Each worker thread can drive one CPU to its maximum, so the most appropriate number of workers equals the number of CPUs on the server. Setting it lower wastes CPU; setting it higher loses CPU time to frequent context switching.
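In practice this is expressed in the global block; recent nginx versions also accept auto, which matches the worker count to the detected core count:

    worker_processes  auto;   # one worker per CPU core (use an explicit number on older versions)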

The connection count: worker_connections

This value represents the maximum number of connections each worker process can hold, so the maximum number of connections an nginx server can hold is worker_connections * worker_processes. Browsers that support HTTP/1.1 use two connections per visit, so the maximum concurrency for ordinary static access is worker_connections * worker_processes / 2, whereas with HTTP used as a reverse proxy the maximum concurrency is worker_connections * worker_processes / 4: as a reverse proxy, each concurrent request establishes one connection to the client and one to the backend service, occupying two connections. For example, with 4 worker processes and worker_connections 1024, a reverse proxy supports at most 4 * 1024 / 4 = 1024 concurrent requests.

Well, that's all for today's article. I hope it helps those of you sitting confused in front of the screen!