Table of contents

  • High concurrency
    • 1. Metrics for high concurrency
    • 2. High concurrency solutions
  • Multithreading
  • The relationship and difference between high concurrency and multithreading
    • 1. High concurrency scenarios
    • 2. Multi-threaded scenarios

High concurrency

High concurrency refers to the situation in which a web system has to respond to a very large number of requests within a short period of time, for example the 12306 ticketing rush or the Tmall Double 11 event.

When this happens, the system has to perform a large number of operations during that period, such as handling resource requests and database operations.

1. Metrics for high concurrency

Commonly used metrics for high concurrency are response time, throughput, queries per second (QPS), and the number of concurrent users.

  • Response time (Response Time)

    The time the system takes to respond to a request. For example, if the system takes 200 ms to process an HTTP request, that 200 ms is the system's response time.
  • Throughput

    The number of requests processed per unit of time.
  • Queries per second (QPS, Query Per Second)

    The number of requests handled per second. In the Internet domain, the distinction between this metric and throughput is not very sharp.
  • Number of concurrent users

    The number of users using the system at the same time. For example, in an instant messaging system, the number of simultaneously online users represents, to some extent, the number of concurrent users of the system.
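
As a rough illustration of how these metrics relate (a back-of-the-envelope estimate, not a benchmark; the numbers are made up), QPS can be approximated from the number of concurrent processing threads and the average response time:

```java
// Illustrative estimate only: QPS ≈ concurrent threads / average response time.
public class QpsEstimate {
    public static void main(String[] args) {
        int concurrentThreads = 100;      // threads handling requests in parallel (assumed)
        double avgResponseTimeSec = 0.2;  // 200 ms average response time (assumed)

        // Each thread finishes 1 / 0.2 = 5 requests per second,
        // so 100 threads handle roughly 500 requests per second.
        double qps = concurrentThreads / avgResponseTimeSec;
        System.out.println("Estimated QPS: " + qps); // prints 500.0
    }
}
```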

2. High concurrency solutions

  • Serve static resources (e.g., image files) through a CDN.
  • Distributed cache: Redis, memcached, etc.
  • Message queue middleware, such as ActiveMQ, to process large volumes of messages asynchronously.
  • Application splitting: split one project into multiple deployable services and use Dubbo to handle communication between them.
  • Vertical and horizontal database splitting (sharding by database and by table).
  • Database read/write separation to handle queries over large volumes of data.
  • Use NoSQL databases such as MongoDB alongside MySQL.
  • Establish service degradation and rate-limiting mechanisms for heavy traffic (see the sketch after this list).
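
To make the last bullet concrete, here is a minimal sketch of a rate-limiting and degradation mechanism built on a plain `java.util.concurrent.Semaphore`; the class name and the limit of 100 are assumptions for illustration, not something prescribed by the article:

```java
import java.util.concurrent.Semaphore;

// Minimal sketch: cap how many requests are processed at once and
// return a degraded response for the rest. Names and limits are assumed.
public class RequestLimiter {
    // Allow at most 100 requests to be handled concurrently.
    private final Semaphore permits = new Semaphore(100);

    public String handle(Runnable businessLogic) {
        if (!permits.tryAcquire()) {
            // Degrade instead of queueing when the system is saturated.
            return "System busy, please try again later";
        }
        try {
            businessLogic.run();
            return "OK";
        } finally {
            permits.release();
        }
    }
}
```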

Multithreading

Multithreading is a feature of Java. Modern CPUs are multi-core and multi-threaded and can execute several tasks at the same time, so Java provides a multithreading mechanism to improve the efficiency of programs running on the JVM and speed up data processing. Multithreading relates to the CPU, while high concurrency relates to access requests. You can process all access requests with a single thread, or process them with multiple threads at the same time, as in the sketch below.
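
A minimal sketch of that last point (the request count and pool size are made-up values): the same handler can be driven by a single thread, one request after another, or by a thread pool that handles several requests at the same time.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: the same requests handled by one thread vs. a thread pool.
public class SingleVsMultiThread {
    static void handleRequest(int id) {
        System.out.println("Request " + id + " handled by " + Thread.currentThread().getName());
    }

    public static void main(String[] args) throws InterruptedException {
        // Option 1: a single thread processes all requests one after another.
        for (int i = 0; i < 5; i++) {
            handleRequest(i);
        }

        // Option 2: a pool of threads processes requests concurrently.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.submit(() -> handleRequest(id));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```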

The relationship and difference between high concurrency and multithreading

"High concurrency" and "multithreading" are often mentioned together, giving the impression that they are the same thing, but in fact high concurrency does not equal multithreading.

1. High concurrency scenarios

Take a Java web project: after it goes live, a large number of users log in to the system at the same time, which is high concurrency. The system receives a large number of requests, each request is handled by its own thread, and the threads are independent of one another (Tomcat supports roughly 200-300 worker threads). Although the requests run the same code, they are executed in different threads that do not interfere with each other.
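
A small sketch of the "one request, one thread" behaviour, using the JDK's built-in `com.sun.net.httpserver.HttpServer` as a stand-in for Tomcat's worker pool (the port and pool size are assumptions): each incoming request runs the same handler, but on whichever pool thread picked it up.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

// Sketch of "one request per thread"; HttpServer stands in for Tomcat here.
public class OneThreadPerRequest {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            // Same handler code, but executed on whichever worker thread
            // happened to pick up this particular request.
            String body = "Handled by " + Thread.currentThread().getName() + "\n";
            exchange.sendResponseHeaders(200, body.length());
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body.getBytes());
            }
        });
        server.setExecutor(Executors.newFixedThreadPool(200)); // roughly like Tomcat's maxThreads
        server.start();
    }
}
```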

2. Multi-threaded scenarios

Multithreading is used either for asynchronous work or for running subtasks. Each user request is handled on its own main thread, and asynchronous work is handed off to another thread. Subtasks come into play when a data set is too large to process quickly on a single thread: the work is split into multiple subtasks that run on multiple threads, which finishes much faster.
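
A minimal sketch of the subtask case (the data size, the split into four parts, and the pool size are all assumptions): summing a large range on a single thread versus splitting it into subtasks that run on a thread pool.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

// Sketch: one big computation done on a single thread vs. split into subtasks.
public class SubTaskExample {
    public static void main(String[] args) throws Exception {
        long n = 100_000_000L;

        // Single thread: one pass over the whole range.
        long single = LongStream.rangeClosed(1, n).sum();

        // Multiple threads: split the range into 4 subtasks and sum them in a pool.
        int parts = 4;
        ExecutorService pool = Executors.newFixedThreadPool(parts);
        List<Callable<Long>> tasks = LongStream.range(0, parts)
                .mapToObj(i -> (Callable<Long>) () ->
                        LongStream.rangeClosed(i * n / parts + 1, (i + 1) * n / parts).sum())
                .collect(Collectors.toList());

        long parallel = 0;
        for (Future<Long> result : pool.invokeAll(tasks)) {
            parallel += result.get();
        }
        pool.shutdown();

        System.out.println(single == parallel); // true: same sum, computed in parallel
    }
}
```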