Install and use Nginx on Windows

What is Nginx?

Baike.baidu.com/item/nginx/…

Nginx ("Engine X") is a high-performance HTTP and reverse proxy server, noted for its low memory footprint and high concurrency. In fact, Nginx performs better than many web servers of the same type, and it is used by Baidu, JD.com, Sina, NetEase, Tencent, Taobao, and others.

Nginx can be used as a web server for static pages and also supports dynamic CGI languages such as Perl and PHP. Java is not supported directly; Java programs are typically served in conjunction with Tomcat. Nginx was developed specifically with performance in mind: performance was the most important consideration, and the implementation is very efficient. Nginx can withstand high loads, and reports indicate that it can support up to 50,000 concurrent connections.

1. Download Nginx

Download: nginx.org/en/download…

After downloading, unpack the archive, as shown in the figure

2. Start Nginx

  • Double-click nginx.exe to start it
  • Or start it from CMD: start nginx.exe

3. Test

Enter http://localhost:80 in the browser address bar. If the page shown in the following figure is displayed, the startup was successful

4. Common commands

  • start nginx.exe ## start nginx
  • nginx.exe -s stop ## stop nginx immediately
  • nginx.exe -s quit ## stop nginx gracefully, after current requests finish
  • nginx.exe -s reload ## reload the configuration file without stopping nginx
  • nginx.exe -s reopen ## reopen the log files

Forward proxy and reverse proxy

If the Internet outside the LAN is considered a huge resource library, clients in the LAN need to access the Internet through a proxy server. This kind of proxy service is called a forward proxy.

With a reverse proxy, the proxy is transparent to the client: the client needs no configuration at all. It simply sends its request to the reverse proxy server, which chooses a target server, fetches the data from it, and returns the result to the client. To the outside, the reverse proxy server and the target server appear as one server; only the proxy server's address is exposed, while the real servers' IP addresses stay hidden.
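As a minimal sketch, a reverse-proxy configuration might look like the following. The backend address 127.0.0.1:8080 is an assumed example, not taken from this article:

```nginx
server {
    listen       80;
    server_name  localhost;

    location / {
        # Forward every request to the (assumed) backend server;
        # the client only ever sees the proxy's address.
        proxy_pass http://127.0.0.1:8080;

        # Pass the original host and client IP on to the backend.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```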

Load balancing

The client sends requests to the server; the server processes them, some of which may involve interacting with a database, and returns the results to the client. This architectural pattern suits early systems with relatively simple business logic, relatively few concurrent requests, and low cost. However, as the amount of information, the volume of traffic, and the complexity of the business all keep growing, this architecture makes the server respond to client requests more and more slowly, and when concurrency is especially high, the server can crash outright.

This is clearly a server performance bottleneck, so what can be done about it? Our first thought may be to upgrade the server's configuration, for example by increasing the CPU frequency or adding memory, improving the machine's physical performance. But we know that Moore's Law is increasingly ineffective, and hardware performance can no longer keep up with the growing demand. The most obvious example is Singles' Day on Tmall: the instantaneous traffic for a hot product is so large that an architecture like the one above, even with machines at the top physical configuration, cannot meet the demand.

Having ruled out increasing the server's physical configuration, that is, the vertical solution, what about increasing the number of servers horizontally? Instead of concentrating requests on a single server, we add more servers and distribute the requests among them, spreading the load across multiple machines.

Increasing the number of servers and distributing requests among them, rather than concentrating requests on a single server, spreads the load across multiple machines. This is what we call load balancing. As shown in the figure:
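A minimal load-balancing sketch, assuming two backend instances on ports 8081 and 8082 (these addresses and the upstream name are illustrative, not from the article):

```nginx
http {
    # The upstream group names the pool of backend servers;
    # by default requests are distributed round-robin.
    upstream myapp {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    server {
        listen 80;

        location / {
            # Requests to this server are spread across the pool.
            proxy_pass http://myapp;
        }
    }
}
```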

Dynamic and static separation

To speed up a site's response, dynamic pages and static pages can be served by different servers, which both accelerates resolution and reduces the load on any single server.
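A sketch of dynamic/static separation, assuming static assets live under an html/static directory and dynamic requests go to a backend on port 8080 (both paths and ports are assumptions for illustration):

```nginx
server {
    listen 80;

    # Static resources are served directly by Nginx.
    location /static/ {
        root    html;
        expires 7d;  # let browsers cache static files
    }

    # Everything else is proxied to the (assumed) dynamic backend.
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```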

Using a reverse proxy to solve cross-origin problems

Open the file shown in the figure for configuration

Put your own HTML files into this folder

The index.html file

Run the nginx.exe -s reload command to reload the configuration file

Enter http://localhost:80/index.html in the browser address bar
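As a sketch of the cross-origin idea: the page is served from localhost, and API requests to another origin are proxied through the same host, so the browser never makes a cross-origin call at all. The /api prefix and the backend address are assumptions for illustration:

```nginx
server {
    listen       80;
    server_name  localhost;

    # Serve the static page from the html folder.
    location / {
        root   html;
        index  index.html index.htm;
    }

    # Same-origin /api requests are forwarded to the other service,
    # so the browser sees no cross-origin request.
    location /api/ {
        proxy_pass http://127.0.0.1:8081/;
    }
}
```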

Nginx configuration file

The default configuration files are stored in the conf directory of the Nginx installation directory, and the main configuration file, nginx.conf, is there as well. Using Nginx afterwards mostly comes down to modifying this configuration file. Lines beginning with # are comments. The nginx.conf file is divided into three parts:

The first part is the global block

The content from the start of the configuration file to the events block sets directives that affect the overall operation of the Nginx server, including the user (group) that runs the server, the number of worker processes allowed, the PID file path, the log paths and levels, and configuration file imports.

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

worker_processes 1; is the key directive for Nginx's concurrent processing. The larger the worker_processes value, the more concurrency Nginx can support, though it is limited by hardware, software, and other constraints.

The second part is the events block

The directives in the events block mainly affect the network connections between the Nginx server and its users. Common settings include whether to serialize accepting network connections across multiple worker processes, whether to allow multiple network connections to be accepted at the same time, which event-driven model is used to process connection requests, and the maximum number of connections each worker process can support simultaneously.

events {
    worker_connections  1024;  # each worker process supports at most 1024 connections
}

The third part is the HTTP block

This is the most frequent part of the Nginx server configuration, where most functions such as proxies, caching, and logging definitions are configured, as well as third-party modules.

Note that the HTTP block can in turn contain an HTTP global block and server blocks.

http {
    # http global block
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    # server block
    server {
        listen       80;
        server_name  localhost;

        # location block
        location /test {
            root   html;
            index  index.html index.htm;
            proxy_pass https://www.baidu.com/;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

HTTP global block

The directives configured here include file imports, MIME type definitions, connection timeouts, and so on

The server block

Server blocks are closely related to virtual hosts. From the user's point of view, a virtual host is exactly the same as an independent hardware host; the technology was created to save the hardware cost of Internet servers.

Each HTTP block can contain multiple Server blocks, and each server block is equivalent to a virtual host.

Each server block can in turn be divided into a global server block and multiple location blocks.

1. Global server block

The most common configurations here are the host's listening port and the host's name or IP address.

2. Location block

A server block can be configured with multiple Location blocks.

The main purpose of this block is to process specific requests: based on the request string received by the Nginx server (e.g. server_name/uri-string), the part other than the virtual host name (or IP alias), i.e. the /uri-string, is matched against location blocks. Address redirection, data caching, and response control, as well as many third-party modules, are also configured here.
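As a sketch of how location matching works (the URIs below are invented examples): = requires an exact match, a plain prefix matches the beginning of the URI, and ~ or ~* introduces a regular-expression match.

```nginx
server {
    listen 80;

    # Exact match: only the URI "/" itself.
    location = / {
        root  html;
        index index.html;
    }

    # Prefix match: any URI beginning with /images/.
    location /images/ {
        root  html;
    }

    # Regex match (case-insensitive): URIs ending in .gif, .jpg, or .png.
    location ~* \.(gif|jpg|png)$ {
        root  html;
    }
}
```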