This site is not updated. Please visit blog.coocap.com

 

Background knowledge you can search for yourself; here we get straight to the practical part.

Resources used:

One nginx primary server and one nginx standby server, with keepalived handling failover when the primary goes down.

Two Tomcat servers. Nginx performs reverse proxying and load balancing across them; you could scale this out into a server cluster.

One Redis server, used for centralized session sharing.

Nginx primary server: 192.168.50.133

Nginx secondary server: 192.168.50.135

Tomcat project server 1: 192.168.50.137

Tomcat project server 2: 192.168.50.139

Redis server: 192.168.50.140

Note: you need to configure firewall rules, or disable the firewall.

 

I. Common installation:

Five servers in total need to be simulated, using VMware and CentOS 6.5 64-bit. The JDK is installed on all five servers; I used JDK 1.8.

1. Install VMware, install the Linux OS (CentOS 6.5 64-bit), and install a Linux terminal and file-upload tool; I used SecureCRT and SecureFX. I won't repeat the installation tutorials, there are plenty online.

If you have any questions about this step, search harder: www.baidu.com

 

 

2. Install JDK on Linux:

Install the JDK: uninstall the bundled version, upload and decompress the JDK, and configure the environment variables. Reference: jingyan.baidu.com/article/ab0…

 

Nginx reverse proxy and load Balancing

Architecture diagram:

 

Three servers are needed now: one nginx server and two servers where the project is actually deployed. I chose 192.168.50.133 as the primary nginx, and 192.168.50.137 and 192.168.50.139 as the two Tomcat servers.

Start by installing Tomcat on both servers; this part is easy too.

Run startup.sh in the bin directory to start Tomcat, and shutdown.sh to stop it.

Configure firewall ports: edit with vim /etc/sysconfig/iptables and open port 8080, port 80, and other common ports. Some more ports will need to be opened later. Do not stop the firewall.

Run service iptables restart to reload the firewall configuration.
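For reference, a sketch of the rules to add in /etc/sysconfig/iptables (placed above the final REJECT rule; the exact port list depends on your services):

```
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
```

After saving, service iptables restart makes the rules take effect.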

 

If configuring all this for your own tests feels like a hassle, just turn the firewall off: service iptables stop. The firewall comes back after a reboot; to keep it disabled across reboots, run chkconfig iptables off.

Start Tomcat access: 192.168.50.137:8080, 192.168.50.139:8080.

Then write a test project and deploy it on both Tomcat servers. In Eclipse, create a new web project named testproject, create a JSP page named index.jsp under webapp, and add some identifying content to it.

In the project's web.xml, move <welcome-file>index.jsp</welcome-file> to the top of the welcome-file-list so it is served first.
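For reference, the welcome-file-list in web.xml would then look roughly like this (a sketch; the other entries are whatever the project already had):

```xml
<welcome-file-list>
    <!-- index.jsp comes first so it is served as the default page -->
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>index.html</welcome-file>
</welcome-file-list>
```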

 

Then right-click to export the war package, testProject.war, and upload it to the Tomcat webapps directory on both servers.

 

Then modify server.xml in each Tomcat's conf directory. You can use the Notepad++ NppFTP plugin to open and edit files directly on Linux; remember to save in UTF-8 without BOM (search for the details).

Add a jvmRoute attribute to the Engine tag to identify which Tomcat server nginx has reached: the 137 server gets jvmRoute="137Server1" and the 139 server gets jvmRoute="139Server2".

In both Tomcat server.xml files, add <Context path="" docBase="testProject"/> inside the Host tag. path is the access path and docBase is the project name.
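Putting the two changes together, the relevant part of server.xml on the 137 server would look roughly like this (a sketch; the 139 server uses jvmRoute="139Server2", and the other attributes are Tomcat's defaults):

```xml
<Engine name="Catalina" defaultHost="localhost" jvmRoute="137Server1">
    <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
        <!-- path="" serves the app at the root URL; docBase names the project -->
        <Context path="" docBase="testProject"/>
    </Host>
</Engine>
```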

Now restart Tomcat, access 192.168.50.137:8080 and 192.168.50.139:8080, and the index.jsp content is displayed.

 

The two Tomcat servers are set up.

 

Install nginx on host 192.168.50.133:

Install the dependencies gcc, pcre, zlib, and openssl:

yum install -y gcc
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl openssl-devel

Create the nginx-src directory in /usr/local/, place nginx-1.8.0.tar.gz in this directory, and decompress it

tar -zxvf nginx-1.8.0.tar.gz

A decompressed directory appears; go into it.

Execute the commands in sequence:

./configure
make
make install

The nginx installation directory is /usr/local/nginx, and nginx uses port 80 by default.

The sbin directory holds the nginx command, and conf/nginx.conf is the default configuration file.

Nginx start:

./sbin/nginx

Stop nginx:

./sbin/nginx -s stop

Nginx welcome page: 192.168.50.133:80

 

At this point, nginx is installed.

 

 

3. Configure reverse proxy and load balancing

There are two project servers, 192.168.50.137 and 192.168.50.139, each running Tomcat on port 8080, and nginx on 192.168.50.133. After nginx is configured, it listens on 192.168.50.133:80. When a request arrives there, nginx proxies it to either 192.168.50.137:8080 or 192.168.50.139:8080, picked at random: that is the reverse proxy function. At the same time, funneling everything through one entry point and spreading requests over two servers reduces the load on each. With large request volumes you can add many servers and let nginx on the entry proxy server do the forwarding: that is the load balancing function.

 

Configure nginx.conf in the conf directory of the nginx installation directory as follows:

#user niumd niumd;
# number of worker processes (usually equal to, or twice, the number of CPUs)
worker_processes  2;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
error_log  logs/error.log  info;

pid  logs/nginx.pid;

events {
    # use epoll on Linux (kqueue on BSD)
    use epoll;
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] $request '
    #                '"$status" $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log off;
    access_log logs/access.log;

    client_header_timeout 3m;
    client_body_timeout 3m;
    send_timeout 3m;

    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    #fastcgi_intercept_errors on;
    error_page 404 /404.html;
    #keepalive_timeout 75 20;

    gzip on;
    gzip_min_length 1000;
    gzip_types text/plain text/css application/x-javascript;

    # configure the proxied (upstream) servers
    upstream blank {
        #ip_hash;
        server 192.168.50.137:8080;
        server 192.168.50.139:8080;
    }

    server {
        # nginx listens on port 80 and forwards requests to the real targets
        listen       80;
        server_name  localhost;

        location / {
            # the name "blank" must match the upstream defined above
            proxy_pass http://blank;

            #proxy_redirect off;
            # pass the client information received by the proxy to the real servers
            proxy_set_header        Host $host;
            proxy_set_header        X-Real-IP $remote_addr;
            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size    10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout   300;
            proxy_send_timeout      300;
            proxy_read_timeout      300;
            proxy_buffer_size       4k;
            proxy_buffers           4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            add_header Access-Control-Allow-Origin *;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/local/nginx/html;
        }
    }
}

 

Start two Tomcat servers and restart nginx:

Accessing 192.168.50.133:80 now randomly hits either 192.168.50.137:8080 or 192.168.50.139:8080. (Problem: the sessionID shown on the nginx address changes on every refresh, so the session is not shared.)

 

 

Nginx polling strategy:

When nginx balances load across multiple servers, the default policy is round-robin polling:

Common strategies:

1. Round-robin (the default)

Each request is assigned to a different backend server in turn, in order of arrival. If a backend server goes down, it is removed automatically.

2. weight

Specifies the polling probability. weight is proportional to the access ratio and is used when backend server performance is uneven.

upstream {
    server 192.168.0.14 weight=2;
    server 192.168.0.15 weight=1;
}

3. ip_hash

Each request is assigned according to the hash of the client IP, so each visitor consistently reaches the same backend server, which can solve the session problem.

upstream {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}

Nginx has many other configurable features, such as static resource caching and redirection; if you want to go further, study them yourself.

Reference: detailed nginx configuration blog.csdn.net/tjcyjd/arti…

 

A practical problem: solved, though not fully understood; recording it here.

192.168.50.133:80 is mapped from the external network: 55.125.55.55:5555 on the external network maps to 192.168.50.133:80 internally. When accessing through 55.125.55.55:5555, the request reaches 192.168.50.133:80 and is forwarded to 192.168.50.137:8080 or 192.168.50.139:8080, but static files such as images, JS, and CSS cannot be accessed.

<1>. Map a non-80 port

Have 55.125.55.55:5555 map to a non-80 port on 192.168.50.133, for example 192.168.50.133:5555. Then configure the following in the nginx configuration file:

...
upstream blank {
    #ip_hash;
    server 192.168.50.137:8080;
    server 192.168.50.139:8080;
}

server {
    # nginx listens on port 5555 and forwards requests to the real destination
    listen       5555;
    # configure the access address
    server_name  192.168.11.133;

    location / {
        # the name "blank" must match the upstream defined above
        proxy_pass http://blank;

        #proxy_redirect off;
        # pass the user information received by the proxy server to the real server
        proxy_set_header        Host $host:$server_port;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size    10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout   300;
        proxy_send_timeout      300;
        proxy_read_timeout      300;
        proxy_buffer_size       4k;
        proxy_buffers           4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        add_header Access-Control-Allow-Origin *;
    }
...

Now access 55.125.55.55:5555; it maps to 192.168.50.133:5555 and is forwarded to 192.168.50.137:8080 or 192.168.50.139:8080. Static files can now be accessed.

 

<2>. Use a domain name and forward with nginx on the extranet server

Suppose 55.125.55.55 is bound to the domain name test.baidu.com; then use nginx on the 55.125.55.55 server:

...
location / {
    # if the domain name matches, forward to 192.168.50.133:80
    if ($host = "test.baidubaidu.com") {
        proxy_pass http://192.168.50.133:80;
    }

    #proxy_redirect off;
    # port 80 is in use; pass the user information received by the proxy
    # server to the real server (this part I do not fully understand)
    proxy_set_header        Host $host:$server_port;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size    10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout   300;
    proxy_send_timeout      300;
    proxy_read_timeout      300;
    proxy_buffer_size       4k;
    proxy_buffers           4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    add_header Access-Control-Allow-Origin *;
}
...

That is the introduction to nginx reverse proxying and load balancing. After this exercise I found that nginx really is extensive and profound; the configuration file alone is more than enough to keep me busy…

 

II. The session sharing problem

Nginx assigns requests at random. Suppose a user logging in to the site is assigned to 192.168.50.137:8080; that server then holds the user's session. After login the user is redirected to the home page or personal center, and if that request is assigned to 192.168.50.139:8080, that server has no session for the user, so the user appears not logged in. Nginx load balancing therefore brings a session sharing problem.

Solutions:

1. Nginx provides the ip_hash policy: by hashing the user's IP it assigns each client to a fixed server, so the same client always reaches the same backend. This is one way to deal with session sharing, often called sticky sessions. But if that Tomcat server goes down, its sessions are lost; a better solution is to move sessions out of Tomcat.

2. Store sessions in memcached or Redis. The session is extracted and kept in an in-memory store shared by all servers, which solves the sharing problem, and reads are very fast.

 

In this case:

 

 

Redis addresses session sharing:

Set up Redis on the redis server 192.168.50.140. The default redis port is 6379.

Redis build:

Redis is compiled with gcc, so install it first:

yum install -y gcc-c++

Upload the Redis source package to /usr/local/redis-src/ and decompress it.

Go to the decompressed directory redis-3.2.1 and run the make command to compile the file

Install to the /usr/local/redis directory

Perform:

make PREFIX=/usr/local/redis install

After the installation is complete, copy the redis configuration file to the installation directory. Redis. conf is the redis configuration file.

Execute command:

cp /usr/local/redis-src/redis-3.2.1/redis.conf /usr/local/redis/

Start and close Redis in the Redis installation directory:

Activation:

./bin/redis-server ./redis.conf

This is called foreground startup: you must keep the current window open, and Ctrl+C exits redis. Not recommended.

So use background startup instead:

Edit redis.conf and change daemonize no to daemonize yes so redis runs as a daemon. You can also change the default redis port 6379 to a different value in this configuration file.
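The relevant redis.conf lines after the edit look like this:

```
# run redis in the background as a daemon instead of holding the terminal
daemonize yes

# default port; change it here if you need another one
port 6379
```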

Stop redis:

./bin/redis-cli shutdown

 

At this point, the Redis server is set up.

 

Integration of Tomcat and Redis to achieve session sharing:

Tomcat 7 + JDK 1.6

In the Tomcat directory of all servers that need to share sessions:

Add the following jar packages to the lib directory. Note that the versions should match the ones shown; otherwise errors may occur.

Add the redis session manager to context.xml in the conf directory:

<Valve className="com.radiadesign.catalina.session.RedisSessionHandlerValve"/>
<Manager className="com.radiadesign.catalina.session.RedisSessionManager"
         host="192.168.50.140"
         port="6379"
         database="0"
         maxInactiveInterval="60"/>

 

Tomcat 7 + JDK 1.7 or 1.8

In the Tomcat directory of all servers that need to share sessions:

Add the following jar packages to the lib directory (these passed my testing):

Add the redis session manager to context.xml in the conf directory:

<Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve"/>
<Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
         host="192.168.50.140"
         port="6379"
         database="0"
         maxInactiveInterval="60"/>

 

In my test, with JDK 1.8 + Tomcat 7, add the jar packages to the 137 and 139 Tomcats and make the configuration above:

Upload the jar package

Modify context.xml

Start the Redis service, restart all Tomcats, start nginx, and refresh the nginx page. You can see that the sessionID value is the same on both Tomcat pages; close one Tomcat and the sessionID on the nginx page stays unchanged, which shows the session is shared.

Problems:

In redis.conf you can find bind 127.0.0.1; you can change this IP to the address of the machine that needs access.

If there are multiple visitors, you can instead comment out bind 127.0.0.1, then find protected-mode in the configuration file and change protected-mode yes to protected-mode no to turn off redis protected mode.
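In redis.conf this amounts to the following (note it removes the network restriction, so only do this on a trusted internal network):

```
# bind 127.0.0.1    <- commented out so any host may connect
protected-mode no
```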

Details you can refer to this: www.cnblogs.com/liusxg/p/57…

 

After guidance from an expert, two points to note:

1. With the configuration above, using Redis as the session store, objects added to the session must implement the java.io.Serializable interface (memcached does not require Serializable).

session.setAttribute(String key, Object value) stores Object types.

For an object put into Redis to be retrievable later, it can only be stored serialized and then deserialized on retrieval.

So any object we store in the session has to implement the serialization interface.
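A minimal sketch of what that requirement means in practice (the User class here is hypothetical, not from the project): the session manager serializes the object to bytes when writing to Redis and deserializes on read, which fails unless the class implements Serializable:

```java
import java.io.*;

public class SessionDemo {
    // Hypothetical object stored in the session; it must implement
    // java.io.Serializable or the Redis session manager cannot store it.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        User(String name) { this.name = name; }
    }

    // Round-trip: serialize to bytes (what happens on write to Redis),
    // then deserialize (what happens on read).
    static User roundTrip(User u) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(u);
        oos.flush();
        ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        return (User) ois.readObject();
    }

    public static void main(String[] args) throws Exception {
        User copy = roundTrip(new User("coocap"));
        System.out.println(copy.name);
    }
}
```

If you remove implements Serializable, writeObject throws java.io.NotSerializableException, which is exactly the kind of error you get when a non-serializable object is put into a Redis-backed session.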

2. When Redis is used as the session store, the unit of the web application's session timeout is seconds, not minutes.

The reason: Tomcat uses a utility class to put sessions into Redis, and the Tomcat container's timeout is converted when storing.

In Redis, expiration times are set in seconds (the expire command sets a TTL on a redis key), so the session expiration time in the context.xml configuration must be given in seconds: the default is 60 seconds, and 30 minutes is 1800.

 

Please note!

context.xml configuration description:

<Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve"/>
<!-- host: the redis server address; port: the redis port (6379 by default);
     database: the redis database id;
     maxInactiveInterval: session expiry in SECONDS. The default 60 is only
     one minute, so set 1800 for the usual 30-minute session timeout. -->
<Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
         host="192.168.50.140"
         port="6379"
         database="0"
         maxInactiveInterval="1800"/>

 

Keepalived high availability:

Architecture diagram:

The asymmetry in the diagram above is ugly; live with it.

The standby nginx at 192.168.50.135 proxies the same two Tomcat servers, 192.168.50.137 and 192.168.50.139. Having done it once already, the second install is easy.

Install nginx on 192.168.50.135; the configuration is the same as above:

...
upstream blank {
    #ip_hash;
    server 192.168.50.137:8080;
    server 192.168.50.139:8080;
}

server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  logs/host.access.log  main;

    location / {
        proxy_pass http://blank;
        root   html;
        index  index.html index.htm;
    }
...

 

So now there are two nginx instances with identical proxy configuration. Why two?

Suppose there were only one nginx and that server went down. Then what?

That is why a backup nginx is needed.

Normally the primary nginx acts as the reverse proxy server. If it goes down, the backup takes over immediately so users can still get through; once the ops staff repair the primary, service automatically switches back to it. Keepalived monitors the two servers. In the normal state, the primary nginx IP (192.168.50.133) is bound to a virtual IP defined by keepalived (I set it to 192.168.50.88), and the standby (192.168.50.135) does nothing. At a short interval (set to 1 second) the primary's keepalived tells the standby "don't worry, I'm still alive". If the primary suddenly dies, the standby goes more than a second without hearing from it and immediately takes over: keepalived binds the virtual IP to the standby, and the site keeps serving.

When the host comes back (after the ops staff fix the fault), the standby receives the host's "I'm alive again" message, hands management back, and the virtual IP is bound to the host again. That, as I understand it, is roughly the process.

 

Install Keepalived on both nginx servers:

Download: RPM packages are used here; they come in 32-bit and 64-bit versions, so don't grab the wrong one.

keepalived-1.2.7-3.el6.x86_64.rpm

openssl-1.0.1e-30.el6_6.4.x86_64.rpm

Run rpm -q openssl to check whether openssl is already installed and at which version; mine already had 1.0.1e-48, so I did not install it.

Upload the two RPM packages to the two nginx servers, go to the directory where you uploaded them, and run the following command to install

If you need to install OpenSSL

rpm -Uvh --nodeps ./openssl-1.0.1e-30.el6_6.4.x86_64.rpm

Install keepalived:

rpm -Uvh --nodeps ./keepalived-1.2.7-3.el6.x86_64.rpm

keepalived.conf is the core configuration file of the keepalived service:

Key point: for the keepalived configuration, set up the upper part of the configuration file as below; the rest of the file can be ignored (I haven't studied the other parts).

Configure keepalived on 192.168.50.133 as follows:

! Configuration File for keepalived

# global configuration
global_defs {
   # recipients to e-mail when a switchover occurs, one per line
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   # SMTP server address
   #smtp_server 192.168.200.1
   # SMTP connection timeout
   #smtp_connect_timeout 30
   # identifier of the machine running keepalived
   router_id LVS_DEVEL
}

# master/backup configuration
vrrp_instance VI_1 {
    # the backup machine uses state BACKUP instead
    state MASTER
    # the network interface this keepalived instance binds to (check with ifconfig)
    interface eth0
    # must be identical on master and backup within the same instance
    virtual_router_id 51
    # MASTER priority is 100; BACKUP must be lower, at most 99
    priority 100
    # interval between MASTER/BACKUP health checks, in seconds: 1 second
    advert_int 1
    # authentication
    authentication {
        # master/backup authentication mode; PASS means plain-text password
        auth_type PASS
        # the password
        auth_pass 1111
    }
    # the virtual IP, on the same network segment as the master and backup
    virtual_ipaddress {
        192.168.50.88
    }
}

 

Keepalived configuration for the standby 192.168.50.135:

Note: state must be changed to BACKUP, priority must be lower than the MASTER's, and virtual_router_id must be the same as the MASTER's.

! Configuration File for keepalived

# global configuration
global_defs {
   # recipients to e-mail when a switchover occurs, one per line
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   # SMTP server address
   #smtp_server 192.168.200.1
   # SMTP connection timeout
   #smtp_connect_timeout 30
   # identifier of the machine running keepalived
   router_id LVS_DEVEL
}

# master/backup configuration
vrrp_instance VI_1 {
    # the standby server is BACKUP
    state BACKUP
    # network interface, eth0 as on the host
    interface eth0
    # virtual_router_id must be the same as on the host
    virtual_router_id 51
    # the standby's priority must be lower than the host's
    priority 99
    # interval between MASTER/BACKUP health checks, in seconds: 1 second
    advert_int 1
    # authentication, same as on the host
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # the bound virtual IP, the same as on the host
    virtual_ipaddress {
        192.168.50.88
    }
}

 

Keepalived is configured.

Keepalived start and stop commands:

service keepalived start
service keepalived stop

 

Start nginx on both machines, then start keepalived on the host and on the standby.

At this point the nginx host is serving and the standby is idle; the virtual IP is 192.168.50.88. Run the following command on both the host and the standby:

ip addr

It can be found:

Host: 192.168.50.133 has the virtual IP 192.168.50.88 bound. Entering 192.168.50.88 in the browser reaches the main nginx at 192.168.50.133, which then forwards to the Tomcat servers.

 

When the browser accesses the virtual IP address 192.168.50.88, the result is as follows

 

 

 

Standby nginx: running ip addr shows that no virtual IP is bound to the standby nginx.

 

This is the initial state, the normal serving situation.

 

Simulate an nginx host failure: if keepalived dies on the host, the standby stops receiving "I'm alive" messages. After more than 1 second without a message, the standby immediately takes over the virtual IP. At the same time, an e-mail is sent to the addresses configured in the configuration file on the switchover, so the ops team knows the host is down and goes to troubleshoot it.

Stop the keepalived service on the host with service keepalived stop, then check the virtual IP bindings.

Host down: you can see the virtual IP is no longer bound to the host.

Standby: the virtual IP is now bound to the standby. Even though the host hung, the switchover happened within at most 1 second (the gap between the failure and its detection), the virtual IP moved to the standby, and accessing the virtual IP now reaches the standby nginx, which forwards requests to the web servers. That is high availability.

 

 

After receiving the e-mail, the ops staff troubleshoot the host. Once it is fixed, the host tells the standby "I'm alive again", and the standby hands management back (service switches back to the host nginx):

After the host's keepalived service is started again, you can see the virtual IP automatically re-binds to the host.

 

 

On the standby: management has been handed back and the virtual IP is no longer bound to the standby. Note that the switch back does not seem to happen immediately after the keepalived service starts.

 

This is keepalived’s highly available simulation.

 

Things to watch out for:

If the virtual IP switches back to the host but nginx is not running there, the host cannot forward requests. So add nginx to the boot sequence so it starts automatically.

 

4. Start the nginx service automatically at boot:

 

To create an nginx file in /etc/init.d/ on Linux, use the following command:

 

vi /etc/init.d/nginx

 

 

 

In the script, the nginxd value is the path of the nginx binary used to start nginx, nginx_config is the path of the nginx.conf configuration file, and nginx_pid is the path of nginx.pid; installed as above, it sits under logs in the nginx installation directory.

#!/bin/bash
# nginx Startup script for the Nginx HTTP Server
# it is v.0.0.2 version.
# chkconfig: - 85 15
# description: Nginx is a high-performance web and proxy server.
#              It has a lot of features, but it's not for everyone.
# processname: nginx
# pidfile: /usr/local/nginx/logs/nginx.pid
# config: /usr/local/nginx/conf/nginx.conf
nginxd=/usr/local/nginx/sbin/nginx
nginx_config=/usr/local/nginx/conf/nginx.conf
nginx_pid=/usr/local/nginx/logs/nginx.pid
RETVAL=0
prog="nginx"
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 0
[ -x $nginxd ] || exit 0
# Start nginx daemons functions.
start() {
    if [ -e $nginx_pid ]; then
        echo "nginx already running...."
        exit 1
    fi
    echo -n $"Starting $prog: "
    daemon $nginxd -c ${nginx_config}
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch /var/lock/subsys/nginx
    return $RETVAL
}
# Stop nginx daemons functions.
stop() {
    echo -n $"Stopping $prog: "
    killproc $nginxd
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f /var/lock/subsys/nginx /var/run/nginx.pid
}
# reload nginx service functions.
reload() {
    echo -n $"Reloading $prog: "
    #kill -HUP `cat ${nginx_pid}`
    killproc $nginxd -HUP
    RETVAL=$?
    echo
}
# See how we were called.
case "$1" in
start)
    start
    ;;
stop)
    stop
    ;;
reload)
    reload
    ;;
restart)
    stop
    start
    ;;
status)
    status $prog
    RETVAL=$?
    ;;
*)
    echo $"Usage: $prog {start|stop|restart|reload|status|help}"
    exit 1
esac
exit $RETVAL

 

Then set permissions on the file so all users can execute it:

chmod a+x /etc/init.d/nginx

 

Finally, register nginx in the rc.local file so that it starts by default at boot:

vi /etc/rc.local

 

Add:

/etc/init.d/nginx start

 

 

 

Save and exit. It takes effect from the next restart: nginx starts automatically at boot. Tested fine.

 

 

Making the nginx process and keepalived live and die together:

 

Keepalived decides whether the server is down by checking for the keepalived process. If keepalived is still running but the nginx process has died, keepalived will not switch to the standby, yet with nginx down the host can no longer act as a proxy.

So keep checking whether nginx is still there; if not, stop keepalived as well, so they live and die together.

Note: this only applies on the host; there is no need to check nginx on the standby, because the host serves almost all of the time.

Solution: write a script that monitors whether the nginx process exists, and if not, kills the keepalived process.

Note: keepalived should not be started at boot. If it started automatically and came up faster than nginx, the script check would stop keepalived again, which is pointless; only nginx needs to start at boot, and after a host reboot you start the keepalived service manually.

 

Write the nginx process check script (check_nginx_dead.sh) on the main nginx, creating it in the keepalived configuration directory:

 

vi /etc/keepalived/check_nginx_dead.sh

 

Add the following contents to the script file:

 

#!/bin/bash
# if there is no nginx process, kill the keepalived process
A=`ps -C nginx --no-header | wc -l`   ## count nginx processes, assign to A
if [ $A -eq 0 ]; then                 ## zero means no nginx process
    service keepalived stop           ## stop the keepalived process
fi

 

Give it execute permission (otherwise you will be stuck here for half an hour, like I was):

chmod a+x /etc/keepalived/check_nginx_dead.sh

 

 

Let’s test the script:

With nginx stopped but keepalived still running, no switchover happens, and the virtual IP cannot reach the web servers.

Then execute the script:

The host script detects that nginx is gone and stops keepalived; the output shows the virtual IP is no longer bound to the host.

Standby: the virtual IP is bound successfully.

 

 

So we just need to keep the script running, that is, keep checking whether the nginx process exists; if it does not, stop the host's keepalived directly and switch to the standby, guaranteeing access to the web servers.

Modify the keepalived configuration file keepalived.conf as follows, adding the script check definition:

Just add the vrrp_script and track_script parts in the right places: the script runs every two seconds, and as soon as nginx is found missing on the host, keepalived stops and service switches to the standby.

! Configuration File for keepalived

# global configuration
global_defs {
   # recipients to e-mail when a switchover occurs, one per line
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   # SMTP server address
   #smtp_server 192.168.200.1
   # SMTP connection timeout
   #smtp_connect_timeout 30
   # identifier of the machine running keepalived
   router_id LVS_DEVEL
}

vrrp_script check_nginx_dead {
    ## path of the monitoring script
    script "/etc/keepalived/check_nginx_dead.sh"
    ## run interval, 2 seconds
    interval 2
    ## weight
    weight 2
}

# master/backup configuration
vrrp_instance VI_1 {
    # the backup machine uses state BACKUP instead
    state MASTER
    # the network interface this keepalived instance binds to (check with ifconfig)
    interface eth0
    # must be identical on master and backup within the same instance
    virtual_router_id 51
    # MASTER priority is 100; BACKUP must be lower, at most 99
    priority 100
    # interval between MASTER/BACKUP health checks, in seconds: 1 second
    advert_int 1
    # authentication
    authentication {
        # master/backup authentication mode; PASS means plain-text password
        auth_type PASS
        # the password
        auth_pass 1111
    }

    track_script {
        # the monitoring script defined above
        check_nginx_dead
    }

    # the virtual IP, on the same network segment as the master and backup
    virtual_ipaddress {
        192.168.50.88
    }
}

 

After saving, restart the keepalived service on the host.

 

Testing:

Return to the initial high-availability state, with keepalived and nginx all running on both the master and the standby.

To stop the main nginx service:

On the host, checking for the keepalived process finds none: the script has stopped it, and the virtual IP is no longer bound to the host.

 

Standby node: Bind the virtual IP address and the switchover succeeds.

 

 

If the host's nginx dies, keepalived dies with it, and service switches to the standby.

 

All of the processes above were tested as written, so barring a few other factors (such as luck), they should work.

I haven't written a blog post in a long time, and this one took ten hours from start to finish. I got a lot out of it, though it stays at the level of usage without going deeper.

If you reprint this, please credit the author, source, and link.

 

 
