A couple of concepts

Critical resource: When multiple threads access the same object, this object is called a critical resource

Atomic operations: operations on a critical resource that cannot be divided (they either happen completely or not at all) are called atomic operations

Thread unsafety: when multiple threads access the same object at the same time and break an operation that should be indivisible, the data can become inconsistent

The dog-eat-dog world of threads

Hello everyone, my name is Wang Dachi, and my goal is to become the CEO… Uh, sorry, wrong script. Hello everyone, my name is 0x7575. I am a thread, and my dream is to always be the fastest to grab the CPU.

Let me introduce you to the thread world first. It is a dog-eat-dog place: resources are always scarce and everyone fights over everything. A few nanoseconds ago I was lucky enough to grab the CPU to perform a plus-one operation on int a = 20. I read a from memory, added 1, and then lost the CPU. After a rest, just as I was about to write the result back to memory, I was surprised to find that the a in memory had already become 22.

Some other thread must have modified the data while I was away, which put me in a dilemma. Many threads advised me not to write, but I had no choice: I wrote my 21 to memory and overwrote the 22 that didn't fit my logic.

That was just a minor incident; things like that happen all the time in the thread world. Every one of us is simply doing its own job, yet from the human point of view we are the ones making data unsafe.

What an injustice. The thread world has always been a competitive one, especially when there are shared variables and shared resources (critical resources) that multiple threads compete to use at the same time. Unless the shared resource is eliminated, which is not possible, problems are bound to happen.
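
To make the story concrete, here is a minimal, hypothetical sketch of the same lost-update race in Java: two threads each add 1 to the shared variable in a read-add-write sequence, so some increments overwrite each other and the final value usually comes up short.

public class LostUpdateDemo {
    // a shared "critical resource"; the increment below is NOT atomic
    private static int a = 20;

    public static void main(String[] args) throws InterruptedException {
        Runnable plusOne = () -> {
            for (int i = 0; i < 10_000; i++) {
                a = a + 1; // read a, add 1, write back: three steps, not one
            }
        };
        Thread t1 = new Thread(plusOne);
        Thread t2 = new Thread(plusOne);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // expected 20 + 20000 = 20020, but usually less: some writes overwrote others
        System.out.println("a = " + a);
    }
}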

A lock has appeared in the thread world

Fortunately, someone smart came up with a good way to solve the problem. I don’t know who came up with the idea, but it does solve part of the problem. The solution is to lock.

Want to run a locked piece of code? First grab the lock. If you get it, you can do whatever you like inside the locked code; if you can't get it, you have to wait at the door of the code block. Because there are often many threads waiting, this has become a common social phenomenon (a thread state) named blocking.

This sounds simple, but there are many detailed rules for locking. The government published the “Rules for synchronized Use” and later the “Rules for Lock Use.”

Threads share memory with each other, and when multiple threads operate on shared memory, two problems are unavoidable: race conditions and memory visibility.

**Race conditions:** when multiple threads access and manipulate the same object, the end result depends on the execution timing of the threads. Correctness cannot be controlled artificially; the result may or may not be correct (as in the example above).

The lock mentioned above exists to solve exactly this problem. Common solutions are listed below, with a sketch of the atomic-variable option right after the list:

  • Use the synchronized keyword
  • Using explicit locks
  • Using atomic variables

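The synchronized keyword and explicit locks are covered in the sections that follow. For the third option, here is a minimal, hypothetical sketch: replacing the plain int with an AtomicInteger from java.util.concurrent.atomic makes each +1 a single indivisible operation, so the lost update from the story above cannot happen.

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    // AtomicInteger turns the increment into one atomic read-modify-write
    private static final AtomicInteger a = new AtomicInteger(20);

    public static void main(String[] args) throws InterruptedException {
        Runnable plusOne = () -> {
            for (int i = 0; i < 10_000; i++) {
                a.incrementAndGet(); // atomic +1, no lost updates
            }
        };
        Thread t1 = new Thread(plusOne);
        Thread t2 = new Thread(plusOne);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("a = " + a.get()); // always 20020
    }
}
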
**Memory visibility:** memory visibility comes from how memory and the CPU work together. Main memory is hardware that is hundreds of times slower than the CPU, so when the CPU performs operations it does not talk to memory for every single computation. Instead it first pulls the data into its own storage (registers and the various levels of cache), works there, and writes the result back to memory when it is done. This process is extremely fast, and with a single thread there is no problem.

However, with multiple threads there is a problem. One thread modifies a piece of data, but the change is not written back to memory in time (it sits temporarily in the cache). When another thread reads the same data, it gets the stale, unmodified value still in memory. In other words, a change made by one thread to a shared variable may not be seen by another thread immediately, or even at all.

This is a memory visibility problem.
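
A minimal, hypothetical sketch of the problem: the reader thread below may spin forever, because nothing forces the writer's update to the plain (non-volatile) running flag to become visible to it.

public class VisibilityDemo {
    // plain field: a change made by one thread may stay invisible to another
    private static boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // spin; may never observe running = false
            }
            System.out.println("reader stopped");
        });
        reader.start();

        Thread.sleep(1000);
        running = false; // may sit in a cache and never be seen by the reader
        System.out.println("writer set running = false");
    }
}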

Common solutions to this problem are:

  • Use the volatile keyword
  • Use the synchronized keyword or explicit lock

Thread synchronization

Traditional lock: synchronized

Synchronized code block

Every Java object carries a mutex mark (monitor) that can be handed to one thread at a time. Only the thread that obtains o's mark can enter a synchronized (o) {} block that locks on o.
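
A minimal sketch, with a hypothetical lockObj as the lock object, guarding the same kind of counter as in the story above:

public class SyncBlockDemo {
    private static final Object lockObj = new Object(); // any object can serve as the lock
    private static int a = 20;

    static void plusOne() {
        synchronized (lockObj) { // only the thread holding lockObj's monitor gets in
            a = a + 1;           // the read-add-write now runs as one indivisible unit
        }
    }
}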

Synchronized methods

A method modified with synchronized is called a synchronized method; it is equivalent to a synchronized block that locks this, with the whole method body as the block (a static synchronized method locks the Class object instead).
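
A minimal sketch of the equivalence, using a hypothetical plusOne() method:

public class SyncMethodDemo {
    private int a = 20;

    // synchronized method: locks `this` for the whole method body
    public synchronized void plusOne() {
        a = a + 1;
    }

    // the equivalent form, written as a synchronized block
    public void plusOneEquivalent() {
        synchronized (this) {
            a = a + 1;
        }
    }
}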

The Lock interface (JDK 1.5)

ReentrantLock

ReentrantLock has similar effects to synchronized, but is more flexible and powerful.

It is a reentrant lock, which means the thread that already holds it can acquire it again, for example by entering the same locked function recursively.

Consider what would happen if a lock could only be entered once. When a locked recursive function calls itself, the thread would have to wait for a lock that it itself is holding, with no way back. A lock that cannot be entered repeatedly like this creates a new kind of deadlock.

A reentrant lock solves this problem, and the way reentrancy is implemented is very simple: the lock keeps a counter. After a thread acquires the lock, the counter goes up by 1 for each acquisition and down by 1 for each release; only when it reaches 0 is the lock actually released.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Create a lock object
Lock lock = new ReentrantLock();

// Acquire the lock (enter the synchronized code section)
lock.lock();
try {
    // ... operate on the shared (critical) resource here ...
} finally {
    // Release the lock (leave the synchronized code section); always unlock in finally
    lock.unlock();
}

// Try to acquire the lock without blocking; returns false immediately if another thread holds it
boolean acquired = lock.tryLock();
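
And a minimal, hypothetical sketch of the reentrancy described above: the same thread locks again inside a recursive call, and ReentrantLock's hold count goes up and down with each lock/unlock pair.

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    // the thread that already holds the lock may lock() again; the hold count tracks the depth
    static void countDown(int n) {
        lock.lock();
        try {
            System.out.println("n = " + n + ", hold count = " + lock.getHoldCount());
            if (n > 0) {
                countDown(n - 1); // re-enters the same lock without deadlocking
            }
        } finally {
            lock.unlock(); // each unlock decreases the count; the lock is freed at 0
        }
    }

    public static void main(String[] args) {
        countDown(3);
    }
}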

ReadWriteLock

A read/write lock separates reading from writing: it exposes a readLock and a writeLock. The read lock is a shared lock that can be held by multiple readers at once, while the write lock is an ordinary exclusive lock that only one thread can hold at a time. A call to writeLock blocks while any read lock is held, and vice versa.
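
A minimal sketch using ReentrantReadWriteLock, the standard implementation of this interface: any number of threads may hold the read lock at once, while the write lock is exclusive.

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value = 0;

    // many readers can hold the read lock at the same time
    public int read() {
        rwLock.readLock().lock();
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // the write lock is exclusive: it waits for all readers and other writers
    public void write(int newValue) {
        rwLock.writeLock().lock();
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}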

The difference between synchronized and ReentrantLock
  1. Both are mutexes: only the one thread holding the lock can access the locked shared resource at a time, and all other threads must block
  2. Both are reentrant locks, implemented with counters
  3. ReentrantLock has unique features (sketched after this list)
    1. ReentrantLock can be created as a fair or an unfair lock, while synchronized can only be an unfair lock. A fair lock means the thread that has waited longest gets the lock first
    2. ReentrantLock provides Condition objects that can wake threads up in groups, whereas synchronized can only wake either one arbitrary thread or all of them
    3. ReentrantLock offers a way to interrupt a thread that is waiting for the lock, via lock.lockInterruptibly()
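
A minimal, hypothetical sketch of those three ReentrantLock-only features:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockFeatures {
    // 1. fairness: pass true so the longest-waiting thread gets the lock first
    private final ReentrantLock lock = new ReentrantLock(true);

    // 2. conditions: each Condition is a separate wait/notify group
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();

    public void consumeWhenReady() throws InterruptedException {
        // 3. interruptible acquisition: a thread blocked here can be interrupted
        lock.lockInterruptibly();
        try {
            while (isEmpty()) {
                notEmpty.await();   // wait only in the "not empty" group
            }
            // ... take an element ...
            notFull.signal();       // wake only threads waiting in the "not full" group
        } finally {
            lock.unlock();
        }
    }

    private boolean isEmpty() { return true; } // placeholder for the real check
}
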
The volatile keyword

The volatile modifier is used to ensure visibility

When a shared variable is declared volatile, every modification to it is flushed to main memory immediately, and when another thread needs the variable it reads the fresh value from main memory.

Note that while volatile guarantees memory visibility, it does not guarantee atomicity. b++ is not a single-step operation but several steps: read the variable, load the constant 1, add them, and write the result back to memory. Even though each read sees the latest value, b++ as a whole is not atomic, so volatile alone cannot make it thread-safe.
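
A minimal, hypothetical sketch tying both points together: volatile keeps the running flag visible across threads, but a volatile b still loses updates under b++, so the counter additionally needs synchronized, an explicit lock, or an atomic variable.

public class VolatileDemo {
    private static volatile boolean running = true; // visibility of this flag IS guaranteed
    private static volatile int b = 0;              // volatile, but b++ is still not atomic

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                b++; // read, add 1, write back: another thread can slip in between the steps
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // usually prints less than 200000: volatile did not make the increment atomic
        System.out.println("b = " + b);

        Thread reader = new Thread(() -> {
            while (running) { /* spin until the flag change becomes visible */ }
            System.out.println("reader stopped");
        });
        reader.start();
        Thread.sleep(100);
        running = false; // guaranteed to be seen by the reader, so its loop ends
        reader.join();
    }
}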