Race Condition and Critical Section in Java

A race condition, or simply a race, is a special state that may occur in a critical section. A critical section is a section of code that may be executed concurrently by multiple threads, and the order in which those threads execute it directly affects the result of the whole concurrent program.

Because the execution order of the concurrent threads directly affects the result of this part of the code, this part of the code is called a critical section, and when the code in a critical section is executed concurrently by multiple threads, the threads are competing with each other, which is exactly the race condition.

Critical section

In fact, code executed concurrently by multiple threads does not necessarily cause problems; a race condition arises when multiple threads compete for access to a shared resource at the same time. Resources here include memory resources (variables, arrays, objects, etc.), system resources (databases, web services, etc.), files, and so on.

More precisely, problems can only occur when at least one thread writes to the shared resource. If all threads only read the shared resource, nothing goes wrong.

The following code is an example of a critical section that causes problems when executed concurrently by multiple threads:

public class Counter {
  protected long count = 0;

  public void add(long value) {
    this.count = this.count + value;
  }
}

Suppose two different threads, A and B, concurrently execute the add method on the same Counter object. It is uncertain when the operating system switches threads. The single line of code inside the add method is not an atomic operation when it is compiled into bytecode and executed inside the JVM; instead it is broken down into steps like the following:

  1. Read the value of this.count from main memory into a CPU register;
  2. Add value to the value in the register;
  3. Write the value in the register back to main memory.

If this process is concurrently executed by two threads A and B, it may be executed in the following order:

this.count = 0;
A: reads the value of this.count (0) from main memory into its register
B: reads the value of this.count (0) from main memory into its register
B: adds the value 2 to its register
B: writes the register value (2) back to main memory; this.count is now 2
A: adds the value 3 to its register
A: writes the register value (3) back to main memory; this.count is now 3

Threads A and B wanted to add 3 and 2 to count respectively, so the expected result is 5. However, because the two threads execute concurrently and their steps interleave, the value 3 that thread A finally writes back to main memory overwrites the value 2 that thread B had already written, and the result does not meet expectations. Of course, the interleaving shown here is only one possible scenario; the result could also be 2 or 5. But as long as there is any possibility of such a race, this code segment is a critical section, and the race is something we should try to eliminate.
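To see the race in practice, here is a minimal, hypothetical demo (the class name and thread counts are illustrative, not from the original article) that runs several threads against the Counter class above; on most runs the final count is smaller than expected:

// Hypothetical demo: many threads call Counter.add(1) concurrently.
// Because add() is not atomic, updates can be lost and the final count
// is usually smaller than expected. Assumes this class is in the same
// package as Counter, so the protected count field is readable here.
public class CounterRaceDemo {
  public static void main(String[] args) throws InterruptedException {
    Counter counter = new Counter();
    int threads = 10;
    int incrementsPerThread = 10000;

    Thread[] workers = new Thread[threads];
    for (int i = 0; i < threads; i++) {
      workers[i] = new Thread(() -> {
        for (int j = 0; j < incrementsPerThread; j++) {
          counter.add(1);
        }
      });
      workers[i].start();
    }
    for (Thread worker : workers) {
      worker.join();  // wait for every worker to finish
    }

    // Expected result is 100000; a smaller value means updates were lost.
    System.out.println("count = " + counter.count);
  }
}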

How to avoid race conditions

So how can we avoid race conditions? The answer is atomicity.

We need to turn the critical section that has the potential to race into an atomic operation, meaning that while one thread is executing that part of the code, no other thread can start executing it until the first thread has finished.

Specifically, we can do this through thread synchronization, for example with one of the following (a short sketch follows the list):

  1. synchronized blocks;
  2. Locks;
  3. Atomic variables such as java.util.concurrent.atomic.AtomicInteger.
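As an illustration, here is a minimal sketch of the first and third options applied to the Counter class above: one version makes add atomic with synchronized, the other replaces the long field with java.util.concurrent.atomic.AtomicLong. The class names here are just illustrative, not from the original article.

import java.util.concurrent.atomic.AtomicLong;

// Option 1: protect the critical section with synchronized so that only
// one thread at a time can execute add().
public class SynchronizedCounter {
  protected long count = 0;

  public synchronized void add(long value) {
    this.count = this.count + value;
  }
}

// Option 3: use an atomic variable; the read-modify-write happens
// atomically without an explicit lock.
class AtomicCounter {
  private final AtomicLong count = new AtomicLong(0);

  public void add(long value) {
    count.addAndGet(value);
  }
}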

Throughput of critical sections

For critical sections with simple logic, avoiding races with a synchronized block is fine, but for critical sections with complex logic and a lot of code, wrapping everything in one block can reduce overall system throughput. In that case we can try to break the critical section down into multiple smaller, independent critical sections.

Here’s an example:

public class TwoSums {
  private int sum1 = 0;
  private int sum2 = 0;

  public void add(int val1, int val2) {
    synchronized(this) {
      this.sum1 += val1;
      this.sum2 += val2;
    }
  }
}

Here we wrap the code in a synchronized block to avoid races. When multiple threads execute the method concurrently, they can only take turns running the block. However, if we look more closely, the code can actually be split into two independent sub-blocks that do not affect each other:

public class TwoSums {
  private int sum1 = 0;
  private int sum2 = 0;

  // Dedicated lock objects, one per field, so that updates to sum1 and
  // sum2 no longer block each other.
  private final Object sum1Lock = new Object();
  private final Object sum2Lock = new Object();

  public void add(int val1, int val2) {
    synchronized(this.sum1Lock) {
      this.sum1 += val1;
    }
    synchronized(this.sum2Lock) {
      this.sum2 += val2;
    }
  }
}

This way, if two threads execute the add method concurrently, one thread can execute the first sub-block while the other executes the second, so they spend less time waiting for each other and throughput increases.
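As a hypothetical usage sketch (assuming TwoSums also exposes simple getSum1() and getSum2() getters, which are not shown above), two threads can call add on the same instance concurrently and still end up with the correct totals:

// Hypothetical demo: two threads update the same TwoSums instance concurrently.
public class TwoSumsDemo {
  public static void main(String[] args) throws InterruptedException {
    TwoSums sums = new TwoSums();

    Runnable task = () -> {
      for (int i = 0; i < 10000; i++) {
        sums.add(1, 2);
      }
    };

    Thread t1 = new Thread(task);
    Thread t2 = new Thread(task);
    t1.start();
    t2.start();
    t1.join();
    t2.join();

    // 20000 calls in total: sum1 should be 20000 and sum2 should be 40000.
    // getSum1()/getSum2() are assumed getters, not part of the original class.
    System.out.println(sums.getSum1() + " " + sums.getSum2());
  }
}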

Of course, this example is very simple and only meant to illustrate the principle; a real project may require more careful analysis to decide how to split a critical section.
