
Static Resource service

Core configuration and file compression

  1. Configuration syntax: file read

    Configuration: sendfile on | off. Default: sendfile off. Scope: HTTP, server, location, if in location

    sendfile enables an efficient file-transfer mode. When set to on, Nginx transfers data directly from disk to the TCP socket. If this parameter is not enabled, Nginx must request a buffer in user space (the Nginx process space), read the data from disk into the kernel cache, copy it from the cache into the user-space buffer, and then write it from the user-space buffer back to a kernel buffer with the write function before it finally reaches the TCP socket. With sendfile enabled, the data never passes through a user-space buffer.

  2. Configuration syntax: tcp_nopush

    Configuration: tcp_nopush on | off

    Default: tcp_nopush off

    Scope: HTTP, server, location

    tcp_nopush improves the transmission efficiency of network packets when sendfile is enabled. It turns on the TCP_CORK socket option on Linux, which tells the TCP stack to accumulate data and send packets only when they are full, or when the application flushes explicitly by removing TCP_CORK. This results in an optimal number of full packets being sent, improving the efficiency of network transmission. In other words, with tcp_nopush on, packets are not sent out immediately; once full they are sent in one go, which helps avoid network congestion at the cost of a small delay.

  3. Configuration syntax: tcp_nodelay

    Configuration: tcp_nodelay on | off;

    Default: tcp_nodelay on;

    Scope: HTTP, server, location

    tcp_nodelay improves the real-time delivery of network packets on keepalive connections. It is the opposite of tcp_nopush: packets are sent out immediately, without waiting to be batched.
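The three directives above are commonly enabled together when serving static files. A minimal sketch of an http block (the values are illustrative, not tuned recommendations):

```nginx
http {
    sendfile    on;    # zero-copy: disk data goes straight to the TCP socket
    tcp_nopush  on;    # with sendfile on: accumulate data and send full packets
    tcp_nodelay on;    # on keepalive connections: send small packets immediately
}
```

Nginx manages the interplay between tcp_nopush and tcp_nodelay itself, so enabling both at once is common in practice.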

  4. Configuration syntax: gzip compression

    Compressed transmission improves transfer efficiency: enabling compression speeds up resource responses and saves network bandwidth

    Gzip configuration syntax:

    Configuration: gzip on | off;

    Default: gzip off;

    Scope: HTTP, server, location, if in location

    Compression level configuration (the higher the compression level, the more server resources are consumed):

    Configuration: gzip_comp_level level;

    Default: gzip_comp_level 1;

    Scope: HTTP, server, location

    Gzip Protocol version configuration:

    Configuration: gzip_http_version 1.0 | 1.1;

    Default: gzip_http_version 1.1;

    Scope: HTTP, server, location;

    Compression extension module: ngx_http_gzip_static_module serves pre-compressed (.gz) files

    Configuration: gzip_static on | off | always;

    Default: gzip_static off;

    Scope: HTTP, server, location

    Gunzip decompression module: ngx_http_gunzip_module

    Configuration: gunzip on | off;

    Default: gunzip off;

    Scope: HTTP, server, location

    Configuration: gunzip_buffers number size;

    Default: gunzip_buffers 32 4k | 16 8k; (platform-dependent)

    Scope: HTTP, server, location;
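gunzip pairs naturally with gzip_static: you can keep only pre-compressed .gz files on disk and let Nginx decompress them on the fly for clients that do not send Accept-Encoding: gzip. A minimal sketch (the path and file names are illustrative):

```nginx
location /download {
    gzip_static always;   # always serve the .gz file when it exists
    gunzip      on;       # decompress for clients without gzip support
    gunzip_buffers 16 8k;
    root /opt/work/code;  # hypothetical root containing e.g. file.txt.gz
}
```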

Sample configuration

Images are served from /opt/work/code/images. Enable gzip and request an image again; you can see that the image is compressed to a smaller size.

location ~ .*\.(jpg|gif|png)$ {
            gzip on;
            gzip_http_version 1.1;
            gzip_comp_level 2;
            gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
            root  /opt/work/code/images;
}

For txt, xml, and other text files, the configuration is similar:

location ~ .*\.(txt|xml)$ {
            gzip on;
            gzip_http_version 1.1;
            gzip_comp_level 1;
            gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
            root  /opt/work/code/doc;
}

Configure access to compressed static resources

location ~ ^/download {
            gzip_static on;
            tcp_nopush on;
            root /opt/work/code;
}

Browser cache

Expiration inspection mechanism

  1. Expiry verification: Expires, Cache-Control (max-age)
  2. ETag header verification: ETag
  3. Last-Modified header verification: Last-Modified

Expires configuration

Configuration: expires time;

Default: expires off;

Scope: HTTP, server, location, if in location

Sample configuration

location ~ .*\.(txt|xml)$ {
	expires 24h;
	root /opt/work/html;
}

This sets the expiration time for txt and xml files to 24 hours. After expires is configured, request a matching file and inspect it with the browser's developer tools: the first request returns status 200 and the second returns 304, indicating that the cache was used. The response headers should include the following:

Cache-Control: max-age=86400
ETag: "5ee9b0ef-12d"
Expires: Sat, 20 Jun 2020 08:11:21 GMT
Last-Modified: Wed, 17 Jun 2020 05:58:07 GMT

Cross-domain access processing

add_header lets you add the Access-Control-Allow-Origin header to the response to allow cross-origin access.

The configuration syntax

Configuration: add_header name value;

Scope: HTTP, server, location, if in location;

Sample configuration

location / {
	#All domain names can be accessed cross-origin. You can also specify a domain name, such as http://www.baidu.com
	add_header Access-Control-Allow-Origin *;
	#Allow these methods cross-origin access
	add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;
	root /opt/work/html;
	index index.html index.htm;
}

Cross-origin access can also be implemented in server-side code, in Java, Golang, and other server-side languages.

Preventing hotlinking

Hotlink protection prevents a website's resources from being leeched by other sites. Nginx can implement simple anti-hotlinking.

Anti-hotlink configuration based on the HTTP Referer header (valid_referers):

location ~ .*\.(txt|xml)$ {
	valid_referers none blocked *.maishuren.top maishuren.top server_names ~\.google\. ~\.baidu\.;
    if ($invalid_referer) {
    	return 403;
    }
	gzip on;
	gzip_http_version 1.1;
	gzip_comp_level 1;
	gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
	root  html;
}

The proxy service

In real life there is the phenomenon of buying through a purchasing agent: we have someone buy things on our behalf. An Nginx-based proxy service works the same way: requests and responses between the client and the server side are completed through the Nginx proxy. As shown in the figure

Nginx can proxy many services, and proxying comes in two forms: forward proxy and reverse proxy

Forward proxy

A forward proxy follows the direction of the request: it is a proxy server that you configure to serve you, the client, when requesting the address of the destination server.
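Nginx has no dedicated forward-proxy module, but a simple HTTP-only forward proxy can be sketched as follows (a minimal illustration, not production-ready; the port and resolver are assumptions):

```nginx
server {
    listen 3128;           # hypothetical proxy port
    resolver 8.8.8.8;      # needed to resolve arbitrary destination hostnames
    location / {
        # forward the request to whichever host the client asked for
        proxy_pass http://$host$request_uri;
        proxy_set_header Host $host;
    }
}
```

Clients would then point their HTTP proxy setting at this server. Proxying HTTPS this way requires a third-party CONNECT-method module.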

The reverse proxy

The reverse proxy is the opposite of the forward proxy: the proxy server serves the target server, although the overall request-return route is the same: from client to proxy to server.

Using the configuration:

server {
    listen       80;
    server_name  www.xxx.com;
    location ~ /test.html$ {
        proxy_pass http://127.0.0.1:8080; # proxy forwarding
    }
}
#Internally accessible service
server {
    listen       8080;
    server_name  127.0.0.1;
    location / {
        root   /opt/work/html;
        index  index.html;
    }
}

Buffer, redirection, header information, timeout configuration

server {
    listen 80;
    server_name www.xxx.com;
    location ~ /test.html$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect default;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_connect_timeout 30;
        proxy_send_timeout 60;
        proxy_read_timeout 60;
        proxy_buffer_size 32k;
        proxy_buffering on;
        proxy_buffers 4 128k;
        proxy_busy_buffers_size 256k;
        proxy_max_temp_file_size 256k;
    }
}

proxy_redirect default;

  • default rewrites the Location header of backend redirects (such as a 301) according to the proxy_pass setting; specify an explicit replacement address instead when the default mapping is not what you want

proxy_set_header Host $http_host;

proxy_set_header X-Real-IP $remote_addr;

  • The Settings field is redefined or appended to the request header passed to the proxy server

proxy_connect_timeout 30;

  • Set the proxy connection timeout period

proxy_read_timeout 60;

  • Sets the timeout for reading the response from the proxy server

proxy_send_timeout 60;

  • Set the timeout period for sending requests to the proxy server

proxy_buffering on;

  • Set to enable or disable response buffering from the proxy server

proxy_buffer_size 32k;

  • Sets the size of the buffer used to read the first part of the response received from the proxy server

proxy_buffers 4 128k;

  • Sets the number and size of buffers used to read responses from proxy servers for a single connection.

proxy_busy_buffers_size 256k;

  • Sets a limit on the total size of buffers that may be busy sending responses to responding clients before the response is fully read when response buffering from a proxy server is enabled.

proxy_max_temp_file_size 256k;

  • When response buffering from the proxy server is enabled and the entire response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, part of the response can be saved to a temporary file. This directive sets the maximum size of those temporary files. The amount of data written to a temporary file at a time is set by the proxy_temp_file_write_size directive.

Load balancing

  Nginx load balancing

  1. GSLB (Global Server Load Balancing)

  • Scheduling center node: a global scheduling node;
  • Scheduling node: a local scheduling node;
  • Application service center node: a global application service scheduling node.
  • Application service: a local application service node;
  • The dispatching center node manages the dispatching node.
  • The application service center node manages application services.
  2. SLB load balancing

Scheduling nodes and service nodes sit in the same logical unit, which benefits the real-time performance of services. Nginx performs SLB load balancing

  3. Layer-4 load balancing

    According to the OSI network model, the fourth layer is the transport layer. The transport layer carries TCP/IP, so forwarding TCP/IP packets is enough to achieve load balancing. Performance is very good: processing happens at a low level, with no complex application logic, just packet forwarding.

  4. Layer-7 load balancing

As in the OSI model above, layer-7 load balancing works at the application layer, so it can handle many application-layer protocol requests. HTTP load balancing, for example, can rewrite HTTP messages and header information and apply rule-based control. Nginx is a typical layer-7 SLB.
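The layer-4 load balancing described above is something Nginx itself can do through the stream module (built with --with-stream). A minimal sketch with illustrative addresses, forwarding raw TCP connections:

```nginx
# this goes at the top level of nginx.conf, parallel to the http block
stream {
    upstream backend_tcp {
        server 192.168.74.129:3306;
        server 192.168.74.130:3306;
    }
    server {
        listen 3306;            # e.g. proxy MySQL connections
        proxy_pass backend_tcp;
    }
}
```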

Nginx load balancing configuration

Create three server configuration files in the conf.d folder: server1.conf, server2.conf, and server3.conf

#server1.conf
server {
    listen 8080;
    server_name localhost;
    location / {
        root /opt/work/html;
        index server1.html server1.htm;
    }
}

#server2.conf
server {
    listen 8081;
    server_name localhost;
    location / {
        root /opt/work/html;
        index server2.html server2.htm;
    }
}

#server3.conf
server {
    listen 8082;
    server_name localhost;
    location / {
        root /opt/work/html;
        index server3.html server3.htm;
    }
}

It is configured in the main configuration file nginx.conf and uses the round-robin policy by default

upstream test {
    server 192.168.74.129:8080;
    server 192.168.74.129:8081;
    server 192.168.74.129:8082;
}
include /etc/nginx/conf.d/*.conf;
server {
    listen 80;
    server_name www.maishuren.com;
    resolver 8.8.8.8;
    location / {
        proxy_pass http://test;
        proxy_redirect default;
    }
}

Status of back-end servers in load balancing scheduling

down: the current server temporarily does not participate in load balancing

backup: reserved backup server, used only when the other servers are unavailable

max_fails: the number of failed requests allowed

fail_timeout: how long the server is suspended after max_fails failures

max_conns: limits the maximum number of connections accepted

Example:

upstream test {
    server 192.168.74.129:8080 down;
    server 192.168.74.129:8081 backup;
    server 192.168.74.129:8082 max_fails=1 fail_timeout=30s;
}
include /etc/nginx/conf.d/*.conf;
server {
    listen 80;
    server_name www.maishuren.com;
    resolver 8.8.8.8;
    location / {
        proxy_pass http://test;
        proxy_redirect default;
    }
}

Nginx load balancing algorithm

Round-robin: requests are distributed to the different backend servers in turn (the default)

Weighted round-robin: the greater a server's weight value, the higher the probability it is assigned requests

ip_hash: each request is assigned according to the hash of the client IP address, so requests from the same IP always reach the same backend server

url_hash: requests are assigned according to the hash of the requested URL, so each URL is directed to the same backend server

hash key value: hash on a user-defined key

Example:

upstream test {
    #ip_hash;
    #hash $request_uri;
    server 192.168.74.129:8080;
    server 192.168.74.129:8081 weight=2;
    server 192.168.74.129:8082 weight=5;
}
include /etc/nginx/conf.d/*.conf;
server {
    listen 80;
    server_name www.maishuren.com;
    resolver 8.8.8.8;
    location / {
        proxy_pass http://test;
        proxy_redirect default;
    }
}

The dynamic cache

Nginx cache service

As shown in the figure above, when Nginx has caching enabled, it can cache backend server data to improve response efficiency

The configuration syntax

Proxy_cache_path Configuration syntax

Configuration: proxy_cache_path path [levels=levels] [use_temp_path=on|off] keys_zone=name:size [inactive=time] [max_size=size] [manager_files=number] [manager_sleep=time] [manager_threshold=time] [loader_files=number] [loader_sleep=time] [loader_threshold=time] [purger=on|off] [purger_files=number] [purger_sleep=time] [purger_threshold=time];

Default: –

Scope: HTTP

Proxy_cache configuration syntax

Configuration: proxy_cache zone | off;

Default: proxy_cache off;

Scope: HTTP, server, location

Proxy_cache_valid Configuration syntax

Syntax: proxy_cache_valid [code …] time;

Default: –

Context: http, server, location

Proxy_cache_key Configuration syntax

Configuration: proxy_cache_key string;

    Default: proxy_cache_key $scheme$proxy_host$request_uri;

Scope: HTTP, server, location

Sample configuration

upstream test {
    server 192.168.74.129:8080;
    server 192.168.74.129:8081;
    server 192.168.74.129:8082;
}
#You need to configure the cache directory first: two levels of subdirectories, a 10m
#shared-memory zone for cache keys, a maximum cache size of 10g (beyond which Nginx's
#own eviction rules kick in), and entries cleared if not accessed within 60 minutes;
#use_temp_path=off writes temporary files directly into the cache directory
proxy_cache_path /opt/app/cache levels=1:2 keys_zone=nginx_cache:10m max_size=10g inactive=60m use_temp_path=off;
server {
    listen 80;
    server_name localhost www.maishuren.com;
    #If the URI contains the following path prefixes, set $cookie_nocache to 1
    #-----1. Make some access paths skip the cache
    if ($request_uri ~ ^/(url3|login|register|password\/reset)) {
        set $cookie_nocache 1;
    }
    location / {
        proxy_cache nginx_cache;                    #use the keys_zone configured above
        proxy_pass http://test;
        proxy_cache_valid 200 304 12h;              #200 and 304 responses expire after 12 hours
        proxy_cache_valid any 10m;                  #everything else expires after 10 minutes
        proxy_cache_key $host$uri$is_args$args;
        add_header Nginx-Cache "$upstream_cache_status";
        #Skip the cache when $cookie_nocache (set above) is not 0 or empty
        #-----2. Make some access paths skip the cache
        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
        proxy_no_cache $http_pragma $http_authorization;
        #If one server reports an error, request the next one
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        include proxy_params;
    }
}

Dynamic and static separation

  1. Start tomcat and write an index.jsp
<%@ page language="java" import="java.util.*" pageEncoding="utf-8"%>  
<HTML>
    <HEAD>
        <TITLE>JSP Test Page</TITLE>
    </HEAD>
    <BODY>
        <%
            Random rand = new Random();
            out.println("<h1>Random number:</h1>");
            out.println(rand.nextInt(99) +100);
        %>
    </BODY>
</HTML>
  2. Write the Nginx HTML page
<html lang="en">  
<head>  
<meta charset="UTF-8" />  
<title>Test static and static separation</title>  
<script src="http://libs.baidu.com/jquery/2.1.4/jquery.min.js"></script>  
</head>  
<script type="text/javascript">  
$(document).ready(function(){  
    $.ajax({  
        type: "GET",
        url: "http://www.maishuren.com/java_test.jsp",
        success: function(data) {
            $("#get_data").html(data);
        },
        error: function() {  
            alert("fail!!! Please refresh and try again!");
        }
    });
});
</script>  
<body>  
    <h1>Test static and static separation</h1>
    <img src="http://www.maishuren.com/img/nginx.png"/>
    <div id="get_data"></div>
</body>
</html> 
  3. Configure the nginx configuration file
upstream java_api {
    server 127.0.0.1:8080;
}
server {
    listen 80;
    server_name localhost www.maishuren.com;
    #charset koi8-r;
    access_log /var/log/nginx/log/host.access.log main;
    root /opt/app/code;
    location ~ \.jsp$ {
        #Dynamic requests go to the Java backend
        proxy_pass http://java_api;
        index index.html index.htm;
    }
    location ~ \.(jpg|png|gif)$ {
        expires 1h;
        gzip on;
    }
    location / {
        index index.html index.htm;
    }
    error_page 500 502 503 504 404 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Dynamic-static separation is configured to reduce the pressure on the server side: when only static resources are requested, they can be fetched directly from the middleware, without going through the application framework and its logic.