Nginx is like the proverbial sparrow: small, but with all the vital organs. It is really powerful.

Nginx can be used not only as a powerful web server but also as a reverse proxy server. It can separate dynamic and static pages according to configured rules, load-balance back-end servers by round-robin, IP hash, URL hash, weight, and other methods, and it also supports health checks on the back-end servers.

If the site runs on a single server and that server dies, it is a disaster for the site. This is where load balancing comes into its own, automatically weeding out failed servers.

Here is a brief introduction to my experience using Nginx as a load balancer.

Nginx download and installation will not be covered here; they were covered in the previous article.

The load-balancing configuration below works the same way for Nginx on Windows and Linux.

Nginx load balancing

upstream currently supports several allocation methods:

1) Round-robin (default): each request is assigned to a different back-end server in turn; if a server goes down, it is automatically removed.

2) weight: specifies the polling probability; weight is proportional to the access ratio, used when back-end server performance is uneven.

3) ip_hash: each request is assigned according to the hash of the client IP, so each visitor always reaches the same back-end server, which solves the session problem.

4) fair (third party): assigns requests by back-end response time, giving priority to servers with short response times.

5) url_hash (third party): assigns requests by the hash of the requested URL, so the same URL always reaches the same back end, which is useful when the back end is a cache.

Configuration:

```nginx
upstream myServer {
    server 127.0.0.1:9090 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;
}
```

Add `proxy_pass http://myServer;` under the server (or location) node that should use this load-balancing group.

Status parameters for each device in the upstream:

down: the server does not participate in the current load.

weight: the default value is 1; the larger the weight, the larger the share of the load.

max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.

backup: requests reach the backup machine only when all the other non-backup machines are down or busy, so this machine carries the least pressure.

Nginx also supports multi-group load balancing, where multiple upstreams can be configured to serve different servers.
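A sketch of multi-group balancing (server names, ports, and paths are illustrative, not from the original article): two upstreams serving different location blocks.

```nginx
http {
    # group 1: page-serving application servers
    upstream app_servers {
        server 127.0.0.1:8080 weight=2;
        server 127.0.0.1:8081;
    }
    # group 2: API servers, with a hot spare
    upstream api_servers {
        server 127.0.0.1:9090;
        server 127.0.0.1:9091 backup;
    }
    server {
        listen 80;
        location /api/ {
            proxy_pass http://api_servers;
        }
        location / {
            proxy_pass http://app_servers;
        }
    }
}
```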

Configuring load balancing is relatively simple; the most critical problem is how to share sessions among multiple servers.

There are several ways to do this (the following is collected from the web; I have not tried the fourth method in practice).

  1. Instead of using sessions, use cookies

If you can change sessions into cookies, you avoid some of the drawbacks of sessions. A J2EE book I read earlier also pointed out that sessions should not be used in a clustered system, otherwise things get difficult. If the system is not complex, removing sessions is the first choice; if that is too much trouble, use one of the methods below.

  2. The application server implements sharing by itself

ASP.NET, for example, can store sessions in a database or in memcached, building a session cluster at the application level. Sessions then stay stable and are not lost even when a node fails. This suits scenarios where reliability requirements are strict but performance demands are not, because its efficiency is not high. This approach has nothing to do with nginx. The following methods use nginx:

  3. ip_hash

Nginx's ip_hash technology directs requests from one IP address to the same back end, establishing a firm session between a client and a back-end server. ip_hash is defined in the upstream configuration:

```nginx
upstream backend {
    ip_hash;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}
```

ip_hash is easy to understand, but because it can only use the IP as the factor for assigning a back end, it is flawed and cannot be used in some cases:

1/ nginx is not the front-most server. ip_hash requires nginx to be the front-most server; otherwise, if nginx cannot obtain the real client IP, it hashes on the wrong address. For example, if squid sits in front, nginx only sees the squid server's IP, and using that address is definitely wrong.

2/ there is another load balancer behind nginx. If another load balancer on the nginx back end distributes requests in a different way, a given client's requests cannot be guaranteed to reach the same session application server. The nginx back end can only point directly to the application servers, or to another squid that then points to them. The best workaround is to split traffic with location: route the part of the requests that needs sessions through ip_hash, and send the rest to other back ends.
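The location-based split mentioned above can be sketched as follows (upstream names, ports, and the `/app/` path are illustrative assumptions): session-bound traffic goes through an ip_hash group, everything else through a freely balanced group.

```nginx
# only requests that need a session stick to one backend
upstream session_backend {
    ip_hash;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}
# stateless traffic can be balanced freely
upstream other_backend {
    server 127.0.0.1:6060;
    server 127.0.0.1:7070;
}
server {
    listen 80;
    location /app/ {
        proxy_pass http://session_backend;
    }
    location / {
        proxy_pass http://other_backend;
    }
}
```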

  4. upstream_hash

To solve some of ip_hash's problems, you can use the third-party upstream_hash module. This module is mostly used for url_hash, but nothing prevents it from being used for session stickiness: if the front end is squid, squid adds the client IP to the x_forwarded_for HTTP header, and upstream_hash can use $http_x_forwarded_for as the hash factor to route a client's requests to a fixed back end (see the documentation at www.sudone.com/nginx/nginx…). You can also hash on the session cookie instead: hash $cookie_jsessionid;. If PHP sessions are configured to work without cookies, nginx can plant a cookie itself with its userid module (Wiki.nginx.org/NginxHttpUs…). Also available is the upstream_jvm_route module written by Weibin Yao: Code.google.com/p/nginx-ups…
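Note that in modern nginx (1.7.2 and later) the stock ngx_http_upstream_module provides a generic `hash` directive, so hashing on the JSESSIONID cookie no longer requires the third-party module. A minimal sketch (ports are illustrative):

```nginx
upstream backend {
    # consistent (ketama) hashing on the session cookie
    hash $cookie_jsessionid consistent;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}
```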

=======================================================

Blog.csdn.net/chao_1990/a…

Session sharing is needed in distributed application scenarios. The principle of sessions is as follows (taking Tomcat as an example).

Tomcat has its own session management mechanism. On each connection, it uses the JSESSIONID sent by the client to look up the corresponding session in its session pool. If matching session information exists, it is fetched directly from the pool; if not, Tomcat decides, based on the settings in the program, whether to create a new session and save it into the pool.
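The lookup logic described above can be sketched as follows (a simplified model for illustration, not Tomcat's actual code; names are hypothetical):

```python
import uuid

# A simplified stand-in for a servlet container's session pool.
session_pool = {}

def get_session(jsessionid, create=True):
    """Return (id, session) for the client's JSESSIONID.
    If none exists and the application asks for one, create it."""
    if jsessionid in session_pool:
        return jsessionid, session_pool[jsessionid]  # hit: reuse existing
    if not create:
        return None, None                            # no session needed
    new_id = uuid.uuid4().hex                        # miss: create new session
    session_pool[new_id] = {}
    return new_id, session_pool[new_id]

# First request carries no valid id: a new session is created.
sid, sess = get_session(None)
sess["user"] = "alice"
# A later request with the same id gets the same session back.
sid2, sess2 = get_session(sid)
```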

The usual ways to share sessions behind nginx are ip_hash (native to nginx), Redis-based session sharing, cookie-based sharing, and DB-based sharing (all kinds of relational and non-relational databases).

1. Based on cookies

Simple and convenient: each request determines the user's login state from the user-state information carried in the cookie. However, keeping user information on the client is a security risk unless fairly strong encryption measures are in place, and heavy encryption in turn increases the operating cost.
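One common mitigation, sketched here with an HMAC signature rather than full encryption (the secret key and cookie format are illustrative assumptions), is to sign the state stored in the cookie so the client cannot forge it:

```python
import hmac
import hashlib

SECRET = b"replace-with-a-real-secret"  # illustrative key, keep server-side

def make_cookie(user_state: str) -> str:
    """Pack the state plus an HMAC tag into one cookie value."""
    tag = hmac.new(SECRET, user_state.encode(), hashlib.sha256).hexdigest()
    return f"{user_state}|{tag}"

def read_cookie(cookie: str):
    """Return the state if the tag verifies, else None (tampered)."""
    state, _, tag = cookie.rpartition("|")
    expected = hmac.new(SECRET, state.encode(), hashlib.sha256).hexdigest()
    return state if hmac.compare_digest(tag, expected) else None

cookie = make_cookie("uid=42;role=user")
state = read_cookie(cookie)  # valid state comes back unchanged
```

This does not hide the data (it is signed, not encrypted), but it prevents a client from rewriting, say, its role.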

2. Based on DB (all kinds of relational and non-relational databases)

Login information is stored in the DB. Although this is safe, verifying the user's login state requires DB I/O on every request, and high concurrency inevitably increases the database load.

3. Share sessions based on Redis + cookie

In this approach, user login information is stored in Redis; because reads are memory-based, efficiency will not become the response bottleneck. The cookie stores only the key (for example the JSESSIONID), which needs no encryption or special processing: as long as the client holds the key under which Redis stores the login information, the server can accurately look up the user's state. This way cookie storage incurs no encryption cost, and Redis keeps the information in cache with high access efficiency.

4. Based on ip_hash

This configuration method is the simplest, but it has limitations. The principle is that nginx computes an IP hash over every request from the same IP address and uses the result to pick a fixed back-end server; as long as a user's IP does not change, each request actually lands on the same back end. The premise is that the outermost proxy is nginx itself, so that the source IP is not altered along the way. If the outermost layer is not just the nginx service but some other request-distributing server, ip_hash loses its meaning: one user's requests can land on different servers.

5. Based on the Tomcat container session

This approach implements session sharing at the container level: through Tomcat's cluster management configuration, the sessions of one Tomcat are replicated into the session pools of the other Tomcat servers, achieving true session sharing. It requires compatible Tomcat configuration on every node and scales poorly; it is too dependent on the container.
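For reference, the minimal Tomcat configuration enabling all-to-all session replication is a single element in server.xml (this turns on the default cluster setup; the application must also be marked `<distributable/>` in its web.xml):

```xml
<!-- server.xml, inside the <Engine> or <Host> element -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
```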

Sharing modes implemented in practice:

Configure three web containers: tomcat1 (3301), tomcat2 (3302), tomcat3 (3303).

1. ip_hash

The general nginx configuration is not described here; only the ip_hash part matters. Simply add to the upstream node:

```nginx
upstream backend {
    ip_hash;
    server localhost:3301 max_fails=2 fail_timeout=30s;
    server localhost:3302 max_fails=2 fail_timeout=30s;
    server localhost:3303 max_fails=2 fail_timeout=30s;
}
```

The upstream block above goes into (or is included by) the nginx configuration file nginx.conf.
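For completeness, a sketch of the server block in nginx.conf that routes traffic through that upstream (the listen port is an illustrative assumption):

```nginx
server {
    listen 8080;
    location / {
        proxy_pass http://backend;
    }
}
```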

Validation:

Log in to a web application, for example http://localhost:8080/login.jsp, which stores session information after login; every subsequent visit to the page must validate the session state. If the page shows tomcat1, requests from this IP are being forwarded to the tomcat1 server, and no matter how often the page is refreshed, it keeps showing tomcat1. Stop tomcat1 and refresh again: the request goes to another node, but the session is not shared there. The limitation is obvious.

2. Redis + cookie shared session

Implementation principle:

1) On successful login, the Redis key is saved in the cookie, and the logged-in user's information is saved in Redis;

2) Configure an interceptor to intercept requests that require login-state verification: obtain the Redis key from the cookie, fetch the user information corresponding to that key from Redis, and verify the login state.
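The two steps can be sketched as follows (a minimal model with a plain dict standing in for a real Redis connection; all names are hypothetical):

```python
import uuid

redis_store = {}  # stand-in for a real Redis connection

def login(username):
    """Step 1: on successful login, save user info under a key in
    'Redis' and return the key to be set in the client's cookie."""
    key = "session:" + uuid.uuid4().hex
    redis_store[key] = {"user": username}  # SET key -> user info
    return key                             # goes into Set-Cookie

def intercept(cookies):
    """Step 2: the interceptor reads the key from the cookie and
    checks 'Redis' for a live login state."""
    key = cookies.get("session_key")
    return redis_store.get(key)            # None means not logged in

key = login("alice")
user = intercept({"session_key": key})     # logged-in user's info
anon = intercept({})                       # no cookie: not logged in
```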

Validation:

Remove the ip_hash line from the nginx configuration and log in to the web application, for example http://localhost:8080/login.jsp, which stores session information after login. Suppose the home page first shows tomcat1. Without shutting down any Tomcat, refresh the home page repeatedly and you will see it switch between tomcat1, tomcat2, and tomcat3 while staying logged in, indicating that the session is shared normally. Personally I strongly recommend this approach.

My view:

Functionally, the core purpose is to share the session. The reason this works is that the client and the server need a unique value to establish their association, and that value is the session id. Inspect the cookies of almost any website with a browser tool and you will find a JSESSIONID-like value; it stays basically unchanged after the server establishes the connection (unless the back end actively creates a new session). Session management based on a session pool likewise depends on the session id, and reading the source of Tomcat's session management, it is not hard to see that the session id is also obtained from the cookie. How is the jsessionid obtained if the client forcibly disables cookies? From material I have read (not personally confirmed): when the browser disables cookies, the requested address is redirected and the jsessionid is spliced into the redirected URL, so the container can still obtain the session id and verify the session state. This scenario remains to be confirmed.
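The URL-rewriting fallback mentioned above appends the id to the path in servlet style, e.g. /login.jsp;jsessionid=ABC123. A small sketch of how such a path could be parsed (illustrative only, not container code):

```python
def extract_jsessionid(path: str):
    """Split a servlet-style rewritten path such as
    /login.jsp;jsessionid=ABC123 into (path, session id)."""
    base, sep, rest = path.partition(";jsessionid=")
    # a further ";" may start other path parameters; keep only the id
    return base, (rest.split(";")[0] if sep else None)

base, sid = extract_jsessionid("/login.jsp;jsessionid=ABC123")
```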

Related articles

Nginx tutorial series (1): basic introduction and installation; (2): setting up a static resource web server; (3): caching static files on the server; Nginx: ensuring high availability with Keepalived; Nginx: how to configure SSL security certificates; Nginx: how to configure session consistency.