Welcome to glmapper_2018

The introduction

Let’s take a look at nginx’s trends in web server rankings:

So why use Nginx? We’ll see what Nginx can do for us.

First of all, Nginx can act as a reverse proxy. For example, suppose I want to access www.taobao.com locally through the domain name www.glmapper1.com; Nginx can do exactly that.

Furthermore, Nginx provides load balancing. What is load balancing? Applications are deployed on several servers but accessed through a single domain name; Nginx distributes incoming requests across those servers, which effectively reduces the pressure on any single server.

In both cases the Nginx server acts only as a dispatcher: the real content is hosted on other servers, so Nginx also serves as a layer of isolation and security in front of them.

Resolving cross-domain problems

Same origin: a URL consists of a protocol, domain name, port, and path. If two URLs share the same protocol, domain name, and port, they have the same origin.
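As a quick illustration (a minimal sketch; the URLs are made up), the origin of a URL can be compared as the (protocol, domain, port) triple:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url: str):
    """Return the (protocol, domain, port) triple that defines a URL's origin."""
    parts = urlsplit(url)
    # Fall back to the scheme's default port when none is given explicitly.
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    # The path is deliberately ignored: only protocol, domain, and port count.
    return origin(a) == origin(b)

print(same_origin("http://glmapper.net/a", "http://glmapper.net:80/b"))   # True: 80 is http's default
print(same_origin("http://glmapper.net/a", "http://glmapper.net:8080/b")) # False: ports differ
```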

Browser same-origin policy: the browser restricts a document or script from one origin from reading or setting certain properties of a document from another origin. A script loaded from one domain is not allowed to access document properties in another domain.

Because Nginx and Tomcat cannot listen on the same port, the two URLs share the same domain name but differ in port, which triggers cross-domain problems.
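One common workaround is to have Nginx attach CORS headers while proxying to the backend. A minimal sketch (the `/api/` path and the Tomcat address are assumptions, not from the original test):

```nginx
server {
    listen 80;
    server_name www.glmapper1.com;

    location /api/ {
        # Allow pages from other origins to read these responses.
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
        # Forward the request to the Tomcat instance (assumed on port 8080).
        proxy_pass http://127.0.0.1:8080;
    }
}
```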

PS: this is only pointed out here; it is not covered by the tests in this article, so explore it on your own!

Configuration file parsing

The configuration file consists of the following blocks:

  • main (global settings)
  • http (controls all the core HTTP-processing features of Nginx)
  • server (virtual host configuration, nested inside http)
  • location (settings for URLs matching a specific path, nested inside server)
  • upstream (load-balancing server pool)

The following uses the default configuration file to describe the specific configuration file attributes:

# User and group that the Nginx worker processes run as
#user nobody;

# number of processes started by Nginx
worker_processes  1;

# global error log location and level: [ debug | info | notice | warn | crit ]
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

# specify the location where the process ID is stored
#pid logs/nginx.pid;


# Event configuration
events {
    
    
    #use [ kqueue | rtsig | epoll | /dev/poll | select | poll ];
    # The epoll model is a high-performance network I/O model in the Linux kernel; use the kqueue model on macOS.
    use kqueue;
    
    # Maximum number of connections per worker process.
    # Max clients = worker_processes * worker_connections;
    # theoretical upper bound: worker_rlimit_nofile / worker_processes
    worker_connections  1024;
}

# HTTP parameters
http {
    # File extension to MIME type mapping table
    include       mime.types;
    # The default MIME type
    default_type  application/octet-stream;
    
    # Log related definitions
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    # '$status $body_bytes_sent "$http_referer" '
    # '"$http_user_agent" "$http_x_forwarded_for"';
    
    # The log format to use ('main', defined above) is specified last.
    #access_log logs/access.log main;

    # Enable efficient file transfer (sendfile) mode
    sendfile        on;
    
    # Helps prevent network congestion (send response headers in one packet)
    #tcp_nopush on;

    # Client keep-alive connection timeout, in seconds
    #keepalive_timeout 0;
    keepalive_timeout  65;

    # Enable gzip compressed output
    #gzip on;

    # Virtual host basic settings
    server {
        # Listen to the port number
        listen       80;
        # Access domain name
        server_name  localhost;
        
        # If the page charset differs from this setting, it is transcoded automatically
        #charset koi8-r;

        # Virtual host access log definition
        #access_log logs/host.access.log main;
        
        # match the URL
        location / {
            # Access path (document root), relative or absolute
            root   html;
            # Home file, matching order according to the configuration order
            index  index.html index.htm;
        }
        
        # Error page returned for the given status code
        #error_page 404 /404.html;
        
        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
        
        # Requests for URLs ending in .php are proxied to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        # proxy_pass http://127.0.0.1;
        #}
        
        # Pass all PHP script requests to the FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        # root html;
        # fastcgi_pass 127.0.0.1:9000;
        # fastcgi_index index.php;
        # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        # include fastcgi_params;
        #}

        # Deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        # deny all;
        #}
    }

    # Second virtual host: a mix of IP-, name-, and port-based configuration
    #
    #server {
    # listen 8000;
    # listen somename:8080;
    # server_name somename alias another.alias;

    # location / {
    # root html;
    # index index.html index.htm;
    #}
    #}


    # HTTPS virtual host definition
    #
    #server {
    # listen 443 ssl;
    # server_name localhost;

    # ssl_certificate cert.pem;
    # ssl_certificate_key cert.key;

    # ssl_session_cache shared:SSL:1m;
    # ssl_session_timeout 5m;

    # ssl_ciphers HIGH:!aNULL:!MD5;
    # ssl_prefer_server_ciphers on;

    # location / {
    # root html;
    # index index.html index.htm;
    #}
    #}
    include servers/*;
}


Reverse proxy instance

Suppose I now want to access www.baidu.com locally; the configuration is as follows:

server {
    # listen on port 80
    listen 80;
    server_name localhost;
     # individual nginx logs for this web vhost
    access_log /tmp/access.log;
    error_log  /tmp/error.log ;

    location / {
        proxy_pass http://www.baidu.com;
    }
}

Verification results:

As you can see, I used localhost in my browser to open baidu’s home page…

Load Balancing instance

The following focuses on verifying the three most commonly used load strategies. Virtual host configuration:

server {
    # listen on port 80
    listen 80;
    server_name localhost;
    
    # individual nginx logs for this web vhost
    access_log /tmp/access.log;
    error_log  /tmp/error.log ;

    location / {
        # Load balancing
        # polling
        #proxy_pass http://polling_strategy;
        # weighted
        #proxy_pass http://weight_strategy;
        #ip_hash
        # proxy_pass http://ip_hash_strategy;
        #fair
        # proxy_pass http://fair_strategy;
        #url_hash
        # proxy_pass http://url_hash_strategy;
        # redirect
        #rewrite ^ http://localhost:8080;
    }
}

Polling strategy

# 1. Round-robin (default)
# Each request is assigned to a different backend server in order;
# if a backend server goes down, it is removed automatically.
upstream polling_strategy { 
    server glmapper.net:8080; # Application server 1
    server glmapper.net:8081; # Application server 2
} 

Test result (distinguishing accesses by port number):

8081: hello 8080: hello 8081: hello 8080: hello

Weight strategy

# 2. Weighted
# Specify the polling probability; weight is proportional to the access ratio.
# Use this when backend server performance is uneven.
upstream  weight_strategy { 
    server glmapper.net:8080 weight=1; # Application server 1
    server glmapper.net:8081 weight=9; # Application server 2
}

Test results: out of 15 requests in total, the two servers were hit in a ratio of 2:13 under the weight configuration above, close to the configured 1:9 split. Meets expectations!
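The observed split can be sanity-checked with simple arithmetic (plain proportional expectation; Nginx's actual smooth weighted round-robin is more involved):

```python
weights = {"glmapper.net:8080": 1, "glmapper.net:8081": 9}
total_weight = sum(weights.values())  # 10
requests = 15

# Expected number of requests per server under a 1:9 weighting.
expected = {srv: requests * w / total_weight for srv, w in weights.items()}
print(expected)  # {'glmapper.net:8080': 1.5, 'glmapper.net:8081': 13.5}
```

So an observed 2:13 split over 15 requests is about as close to the ideal 1.5:13.5 as whole requests allow.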

IP hash strategy

# 3. IP binding: ip_hash
# Assign each request according to the hash of the client IP address, so that
# each visitor consistently reaches the same backend server.
# This addresses session stickiness without introducing distributed sessions;
# a native HttpSession is valid only within the current servlet container.
upstream ip_hash_strategy { 
    ip_hash; 
    server glmapper.net:8080; # Application server 1
    server glmapper.net:8081; # Application server 2
} 

The ip_hash algorithm: an IPv4 address is in dotted-decimal form, and only the first three octets of the address are fed into the hash function. This ensures that clients whose addresses share the same first three octets are assigned to the same backend server after hashing. This is deliberate: addresses with the same first three octets usually belong to the same LAN or a nearby network, so routing them to the same backend gives Nginx a degree of consistency.
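A rough sketch of that behaviour (illustrative only; Nginx's real ip_hash uses its own hash function, so crc32 here is an assumption):

```python
import zlib

def pick_backend(client_ip: str, backends: list[str]) -> str:
    """Hash only the first three octets of an IPv4 address, so that an
    entire /24 network maps to the same backend server."""
    key = ".".join(client_ip.split(".")[:3])  # e.g. "192.168.3"
    return backends[zlib.crc32(key.encode()) % len(backends)]

backends = ["glmapper.net:8080", "glmapper.net:8081"]
# Five machines on the same LAN (192.168.3.x) all land on one backend:
targets = {pick_backend(f"192.168.3.{i}", backends) for i in range(1, 6)}
print(len(targets))  # 1
```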

Why do I emphasize the ip_hash algorithm? Because I fell into this pit: when testing this strategy, five machines were used, all on the same LAN [192.168.3.x]. During the test it turned out that all five machines were routed to the same server every time. At first this looked like a configuration problem, but that possibility was ruled out after investigation. Finally, suspecting that IP addresses in the same subnet might be treated specially, I verified the guess, and it was confirmed.

Other load balancing policies

Because these strategies require third-party modules and time was limited, they were not verified here; the configurations below are shown for reference.

# 4. fair (third party)
# Assign requests according to the backend server's response time;
# servers with shorter response times are preferred.
upstream fair_strategy { 
    server glmapper.net:8080; # Application server 1
    server glmapper.net:8081; # Application server 2
    fair; 
} 

# 5. url_hash (third party)
# Assign requests according to the hash of the requested URL, so that each URL
# always goes to the same backend server; most effective when backends cache.
upstream url_hash_strategy { 
    server glmapper.net:8080; # Application server 1
    server glmapper.net:8081; # Application server 2
    hash $request_uri; 
    hash_method crc32; 
} 

Redirect (rewrite)

location / {
    # redirect
    rewrite ^ http://localhost:8080;
}

Nginx listens locally on localhost:80. If the rewrite does not take effect, the browser stays on the current path at localhost:80 and the address in the address bar does not change; if it does take effect, the address bar changes to localhost:8080.
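For reference, a rewrite flag controls the redirect type (per Nginx's rewrite module; a sketch):

```nginx
location / {
    # With a replacement starting with http://, Nginx stops processing and
    # returns a 302 redirect; the browser's address bar changes to :8080.
    rewrite ^ http://localhost:8080 redirect;

    # Use 'permanent' instead to return a 301:
    # rewrite ^ http://localhost:8080 permanent;
}
```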

Verified; results meet expectations!

Conclusion

This article first gave a brief description of Nginx's role and its basic configuration, then used a load-balancing example to test the behaviour of several balancing strategies, in order to build a deeper understanding of Nginx server configuration details. Thanks to Liu Mi for the helloWorld program [a Spring Boot scaffold; you can contact him to get it. Liu Mi is a good guy… 😜]

References

  • http://nginx.org/
  • https://www.nginx.com/
  • http://www.sohu.com/a/161411719_324809