One, what is Nginx

Nginx is a high-performance HTTP and reverse proxy web server. Its core strengths are a small memory footprint and strong concurrency.

Two, application scenarios

  • HTTP server

    • High concurrency (around 50,000 concurrent connections), high load, low resource consumption
  • Reverse proxy server
  • Load balancing server
  • Static and dynamic separation

Three, characteristics

  • Cross-platform -> runs on most UNIX-like systems as well as Windows
  • Simple configuration
  • High concurrency and good performance
  • Good stability

Four, reverse proxy

1. Forward proxy

Request -> proxy server -> target server; the response returns along the same path. The proxy acts on behalf of the client, so the target server sees the proxy rather than the client.
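Nginx is mainly used as a reverse proxy, but for illustration a plain-HTTP forward proxy can be sketched roughly like this; the listen port and resolver address are placeholder assumptions, and HTTPS (CONNECT) tunnelling is not handled:

# rough sketch of a plain-HTTP forward proxy (port and resolver are placeholders)
server {
    listen   3128;                            # placeholder proxy port
    resolver 8.8.8.8;                         # needed because the target host is resolved at request time
    location / {
        proxy_pass http://$host$request_uri;  # forward to whatever host the client requested
    }
}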

2. Reverse proxy

Request -> Nginx (exposed IP and port) -> the service that actually handles the request (IP hidden)

Nginx forwards each request to a real backend server according to the load-balancing policy; the user only ever sees the IP and port of the Nginx reverse proxy.
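A minimal reverse proxy sketch, assuming a single backend on 127.0.0.1:8080 (the same placeholder address used in the configuration section below):

server {
    listen      80;            # the only address and port the user sees
    server_name localhost;

    location / {
        proxy_pass http://127.0.0.1:8080/;   # hidden backend that actually handles the request
    }
}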

Five, load balancing

Load balancing is the process of deciding which backend server handles each incoming request, spreading the load so that no single server is overwhelmed.

1. Strategy: round-robin (default)

# round-robin (default): requests are handed to the listed servers in turn
upstream Servers {
    server ip:port;
    server ip:port;
}

location /path/ {
    proxy_pass http://Servers/;   # forward to the upstream group defined above
}

2. Strategy: weight

# weighted round-robin: a server with a higher weight receives proportionally more requests
upstream Servers {
    server ip:port weight=1;
    server ip:port weight=2;
}

3. Strategy: IP hash

# Hash the client IP so that requests from the same client always go to the
# same backend server, which avoids the session-sharing problem.
upstream Servers {
    ip_hash;
    server ip:port;
    server ip:port;
}

Six, static and dynamic separation

  • Static resources: HTML, JS, icons, and other files are served directly by Nginx
  • Dynamic resources: servlets and other dynamic requests are forwarded to the backend Tomcat (see the sketch after the snippet below)

    # static resources are served directly from disk
    location /static/ {
        root /app/static/;   # static resource directory
    }
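Putting the two halves together, a rough sketch of static and dynamic separation; the paths, the /api/ prefix, and the Tomcat address are assumptions for illustration:

server {
    listen      80;
    server_name localhost;

    # static resources: served by Nginx directly from disk
    location /static/ {
        root /app/static/;
    }

    # dynamic resources: forwarded to the backend Tomcat (placeholder address)
    location /api/ {
        proxy_pass http://127.0.0.1:8080/;
    }
}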

Seven, commands

./nginx                # start
./nginx -s stop        # fast shutdown
./nginx -s quit        # graceful shutdown
./nginx -s reload      # reload the configuration
./nginx -s reopen      # reopen log files
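A common habit is to validate the configuration before reloading it, for example:

./nginx -t && ./nginx -s reload   # test the config file first, reload only if it is valid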

Eight, process model

  • The master process manages the worker processes
  • The worker processes do the actual request handling and are independent of each other (see the sketch below)
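A minimal sketch of how the process model is tuned in nginx.conf; the values are illustrative, not taken from the original:

worker_processes  auto;           # spawn roughly one worker per CPU core
events {
    worker_connections  1024;     # maximum simultaneous connections per worker
}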

Nine, configuration

# every directive ends with a semicolon
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream Server {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
    }

    server {
        listen       80;           # listening port
        server_name  localhost;    # listening host name / IP

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        # "/" is the default location; requests are reverse proxied
        location / {
            #root   html;
            #index  index.html index.htm;
            # forward to 127.0.0.1:8080
            proxy_pass http://127.0.0.1:8080/;
        }

        # forward according to the request path
        # syntax: location [ = | ~ | ~* | ^~ ] uri { ... }
        # https://segmentfault.com/a/1190000022315733
        location /path/ {
            #root   html;
            #index  index.html index.htm;
            # forward to 127.0.0.1:8081
            #proxy_pass http://127.0.0.1:8081/;
            # load balancing across the upstream group
            proxy_pass http://Server/;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}