1. Configure multiple Tomcat servers to run simultaneously

1.1. Modify environment variables and make two copies of Tomcat

# copy and rename the tomcat folder 'apache-tomcat_2', 'apache-tomcat_3'
cp -r apache-tomcat apache-tomcat_2
cp -r apache-tomcat apache-tomcat_3

Modify the environment variables to include the following:

vi /etc/profile
######### tomcat 1 ###########
CATALINA_BASE=/usr/local/tomcat/apache-tomcat
CATALINA_HOME=/usr/local/tomcat/apache-tomcat
TOMCAT_HOME=/usr/local/tomcat/apache-tomcat
export CATALINA_BASE CATALINA_HOME TOMCAT_HOME

########## tomcat 2 ##########
CATALINA_BASE_2=/usr/local/tomcat/apache-tomcat_2
CATALINA_HOME_2=/usr/local/tomcat/apache-tomcat_2
TOMCAT_HOME_2=/usr/local/tomcat/apache-tomcat_2
export CATALINA_BASE_2 CATALINA_HOME_2 TOMCAT_HOME_2

########## tomcat 3 ###########
CATALINA_BASE_3=/usr/local/tomcat/apache-tomcat_3
CATALINA_HOME_3=/usr/local/tomcat/apache-tomcat_3
TOMCAT_HOME_3=/usr/local/tomcat/apache-tomcat_3
export CATALINA_BASE_3 CATALINA_HOME_3 TOMCAT_HOME_3


Save and exit, then source the file so that the configuration takes effect immediately.

source /etc/profile

1.2. Modify catalina.sh in Tomcat and add environment variables

Taking the second Tomcat as an example (the third is modified the same way, using the _3 variables):

vi catalina.sh
# override these with the instance-specific variables defined in /etc/profile
export CATALINA_BASE=$CATALINA_BASE_2
export CATALINA_HOME=$CATALINA_HOME_2

1.3. Modify the configuration file server.xml in conf

Change the three ports in each copy's conf/server.xml (the shutdown port on the <Server> element, the HTTP <Connector> port, and the AJP <Connector> port) so they do not clash; the multiple Tomcat instances can then run at the same time.
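For reference, a sketch of the entries usually changed in each copy's conf/server.xml; the port values below are examples for the second instance (8089 is chosen so that it matches the upstream list used in section 2):

    <!-- conf/server.xml of apache-tomcat_2, example port values -->
    <Server port="8006" shutdown="SHUTDOWN">
      <!-- ... -->
      <Service name="Catalina">
        <!-- HTTP connector port -->
        <Connector port="8089" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />
        <!-- AJP connector port -->
        <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />
        <!-- ... -->
      </Service>
    </Server>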

2. Use Nginx to build a Tomcat cluster

This is done mainly in the Nginx configuration file; a simple configuration is enough to get load balancing.

    # upstream defines the group of proxied servers (the load-balancing pool);
    # the default strategy is round-robin (polling), the other common one is ip_hash
    upstream mynginx {
        # list of the server addresses you want to load-balance
        server 192.0.0.179:8088;
        server 192.0.0.179:8089;
        server 192.0.0.179:8090;
    }

    server {
        listen      8888;
        server_name 192.0.0.179;

        location / {
            root  html;
            index index.html index.htm;
            add_header backendIP $upstream_addr;
            # use the same name as the upstream block defined above
            proxy_pass http://mynginx;
        }
    }

Accessing 192.0.0.179:8888 directly now returns a response from one of the servers in the list, chosen by round-robin (polling).
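A quick way to watch the rotation (relying on the add_header line above) is to repeat a request and check the backendIP response header, which should cycle through the three backends:

    # each request should report a different backend in the backendIP header
    curl -sI http://192.0.0.179:8888/ | grep -i backendip
    curl -sI http://192.0.0.179:8888/ | grep -i backendip
    curl -sI http://192.0.0.179:8888/ | grep -i backendip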

However, there is a problem: successive requests from the same client can be sent to different servers. If the session is not synchronized between them, many things break, most obviously login state.

3. Solutions

There are plenty of solutions floating around online; here is a brief rundown.

3.1. ip_hash: ip_hash directs all requests from a given IP address to the same backend, so a client at that address keeps a stable session with a single backend.

ip_hash is declared inside the upstream block:

    upstream mynginx {
        ip_hash;
        # list of the server addresses you want to load-balance
        server 192.0.0.179:8088;
        server 192.0.0.179:8089;
        server 192.0.0.179:8090;
    }

ip_hash is easy to understand, but because the client IP is the only factor used to pick the backend, it is a crude solution and loses much of the benefit of clustering (for example, many clients behind the same NAT all land on one backend).

3.2. tomcat-redis-session-manager

Using tomcat-redis-session-manager to achieve session sharing in Tomcat cluster deployment is feasible, but it requires compiling the source code and modifying the Tomcat configuration file, which is cumbersome, so it is not recommended.

3.3. Spring Session

Spring Session provides clustered session support without being tied to a particular web container, and offers transparent integration for:

  • HttpSession: allows the web container's HttpSession to be replaced in a container-neutral way
  • WebSocket: keeps the session alive when communication happens over WebSocket
  • WebSession: allows Spring WebFlux's WebSession to be replaced in an application-neutral way

3.4. How to implement it in Spring Boot:

Spring Boot integration

1. Add Maven dependencies:

<!-- Redis support for Spring Boot -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<!-- Spring Session backed by Redis; a reachable Redis is needed for the basic
     environment configuration, otherwise Spring Boot fails to start -->
<dependency>
    <groupId>org.springframework.session</groupId>
    <artifactId>spring-session-data-redis</artifactId>
</dependency>

2. Add @EnableRedisHttpSession to turn on Spring Session support, as follows:

@EnableRedisHttpSession
public class RedisSessionConfig {
}

3. Since a Redis cluster was already set up earlier, it can simply be reused here; quite a few pits were dug while getting that cluster running, though, and they were not easy to fill……

If the project simply used Spring Session, that would be the end of it, but……. session management here has been handed over to Shiro, so we have to backtrack a little and make Shiro work with the Redis cluster.

The Shiro + Redis cache plugin was bumped to version ‘3.1.0’; shiro-redis 3.1.0 supports Redis cluster configuration.

<dependency>
    <groupId>org.crazycake</groupId>
    <artifactId>shiro-redis</artifactId>
    <version>3.1.0</version>
</dependency>

Here is the relevant configuration file:

# (these keys live under the top-level spring: entry)
redis:
  # which Redis database (shard) to use; the default is 0
  database: 0
  # host: 127.0.0.1
  # port: 6379
  password:
  # redis cluster nodes
  cluster:
    nodes: 192.0.0.179:7001,192.0.0.179:7002,192.0.0.179:7003,192.0.0.179:7004,192.0.0.179:7005,192.0.0.179:7006

  lettuce:
    pool:
      # maximum number of connections
      max-active: 1000
      # maximum blocking wait time (a negative value means no limit)
      max-wait: -1
      # maximum number of idle connections
      max-idle: 1000
      # minimum number of idle connections
      min-idle: 100
  # connection timeout
  timeout: 1000

# if Shiro were not integrated, the session store type would be set here:
#session:
#  store-type: redis

Modify ShiroConfig to replace RedisManager with RedisClusterManager; wherever the RedisManager was used before, swap in the RedisClusterManager directly.

    // inject the POJO that reads the cluster configuration (defined below)
    @Autowired
    private RedisConf2 properties2;

    // ---------------------------------

    @Bean
    public RedisClusterManager redisClusterManager() {
        RedisClusterManager redisClusterManager = new RedisClusterManager();
        redisClusterManager.setHost(properties2.getNodes());
        return redisClusterManager;
    }
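For context, a sketch of how the rest of ShiroConfig might hand the cluster manager to the shiro-redis cache manager and session DAO (these surrounding beans are assumed, not shown in the original):

    // wherever a RedisManager used to be set, pass in the RedisClusterManager instead
    @Bean
    public RedisCacheManager redisCacheManager() {
        RedisCacheManager redisCacheManager = new RedisCacheManager();
        redisCacheManager.setRedisManager(redisClusterManager());
        return redisCacheManager;
    }

    @Bean
    public RedisSessionDAO redisSessionDAO() {
        RedisSessionDAO redisSessionDAO = new RedisSessionDAO();
        redisSessionDAO.setRedisManager(redisClusterManager());
        return redisSessionDAO;
    }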

Here the host is taken straight from the Redis node list in the configuration file, which is more convenient and flexible.

// configuration file reader for the cluster node list
@Configuration
public class RedisConf2 {

    @Value("${spring.redis.cluster.nodes}")
    private String nodes;

    public String getNodes() {
        return nodes;
    }

    public void setNodes(String nodes) {
        this.nodes = nodes;
    }
}

There are several ways to read configuration values; which one to use depends on your situation. Here I pulled them out into a separate POJO and inject it with ‘@Autowired’ wherever the parameters are read.

One thing to watch out for: started this way, the application failed immediately with an error saying the configuration values could not be read. After a long search online, a fairly simple fix turned up: make Shiro's getLifecycleBeanPostProcessor bean method static. (A @Bean method that returns a BeanPostProcessor should be static; otherwise the configuration class is instantiated too early, before its @Value fields can be resolved.)

    @Bean
    public static LifecycleBeanPostProcessor getLifecycleBeanPostProcessor() {
        return new LifecycleBeanPostProcessor();
    }

That’s pretty much it. Package the project up for testing

4. Test. For convenience, the instances are not put behind Nginx; the packaged application is simply started locally on different ports.

Empty Redis first:

192.0.0.179:7002> flushall
OK
192.0.0.179:7002> keys *
(empty list or set)
192.0.0.179:7002>

The first one is responsible for storing a value in the session

The second one reads the session back
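The test endpoints themselves are not shown here; a minimal sketch of what they might look like (class, path, and attribute names are made up for illustration):

    import javax.servlet.http.HttpSession;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class SessionTestController {

        // hit on the first instance: writes a value into the (shared) session
        @GetMapping("/session/set")
        public String set(HttpSession session) {
            session.setAttribute("user", "test-user");
            return "stored in session " + session.getId();
        }

        // hit on the second instance: reads back what the first instance wrote
        @GetMapping("/session/get")
        public String get(HttpSession session) {
            return session.getId() + " -> " + session.getAttribute("user");
        }
    }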

If you look at Redis, the session information is already stored

The session ID is the key of a session. When a browser accesses the server for the first time, a session is created on the server side; the session ID generated by Tomcat is called JSESSIONID. The session is created when the application calls HttpServletRequest.getSession(true) on the Tomcat server. Tomcat's ManagerBase class provides the method that generates the session ID: a random number + the time + the JVM ID. Tomcat's StandardManager class stores sessions in memory and can persist them to a file, a database, memcached, Redis, and so on. The client only keeps the session ID in a cookie and does not hold the session itself. A session is destroyed only by invalidate or by timeout; closing the browser does not close the session.
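To make the getSession behaviour concrete, a small generic servlet example (not taken from the project, names made up) showing where the session and its ID come from and how it ends:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class SessionDemoServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            // getSession(true) creates a session if none exists yet;
            // getSession(false) would return null instead of creating one
            HttpSession session = request.getSession(true);
            response.getWriter().println("JSESSIONID sent to the client: " + session.getId());

            // a session ends only through invalidate() or timeout;
            // closing the browser does not end it
            session.setMaxInactiveInterval(30 * 60); // timeout in seconds
            // session.invalidate();                 // would end it immediately
        }
    }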