Processes and threads

A process is a program in execution; it is an independent unit of resource allocation and scheduling in the system.

A thread is an entity of a process, and a process generally has multiple threads.

The difference between threads and processes

  • A process is the smallest unit of resource allocation in the operating system; a thread is the smallest unit of scheduling.
  • Processes have independent address spaces and do not affect each other; threads are just different execution paths within a process
  • Threads have no independent address space, so multi-process programs are more robust than multithreaded programs
  • Process switching is more expensive than thread switching, so a thread context switch is much faster than a process context switch.

Why is a thread context switch faster than a process context switch?

  • A process switch involves saving the CPU environment of the current process and setting up the CPU environment of the newly scheduled process
  • A thread switch only needs to save and set a small number of register contents; no memory-management operations are involved

Threads do not have a separate address space, so do they have resources of their own?

Yes, ThreadLocal stores thread-specific objects, that is, resources belonging to the current thread.

How processes communicate

There are pipes (both anonymous and named), message queues, semaphores, shared memory, sockets, streams, and so on

Common communication modes between processes are as follows:

  • Communication between processes on different machines is implemented with sockets
  • Communicate by mapping a segment of shared memory that can be accessed by multiple processes
  • Pipes allow communication between a writing process and a reading process

How threads communicate

  • volatile
  • Use the wait() and notify() methods of the Object class
  • CountDownLatch
  • join() combined with synchronized/ReentrantLock

volatile

Volatile has two characteristics: visibility, which allows threads to communicate, and order, which prevents instruction reordering.

Volatile semantics guarantee visibility between threads through two rules

  • All volatile variables must be flushed to main memory as soon as they are changed by a thread
  • All volatile variables must be re-read from main memory before they can be used

How volatile ensures visibility (schematic)

When working memory 1 flushes the new value of A back to main memory, it must go through the bus. Working memory 2 can snoop on the bus and learn that the value has been changed, and the other threads then actively re-read the value from main memory to update their own copies.

public class VolatileDemo {

    private static volatile boolean flag = true;

    public static void main(String[] args) {

        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true){
                    if (flag){
                        System.out.println("turn on");
                        flag = false;
                    }
                }
            }
        }).start();

        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true){
                    if (!flag){
                        System.out.println("turn off");
                        flag = true;
                    }
                }
            }
        }).start();
    }
}

As shown in the code above, the output alternates between "turn on" and "turn off", because the threads communicate through the bus and perceive the change of the flag value. If volatile is removed, after a certain number of thread switches the change of the flag value may no longer be observed.

Wait and notify methods

The wait and notify methods implement the wait/notify mechanism. A thread calls the wait method on the lock object it holds, enters that lock's wait queue, and stays there until another thread notifies (wakes) it, thereby completing communication between the threads.
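
A minimal sketch of the wait/notify mechanism; the class and field names below are illustrative, not taken from the text above:

public class WaitNotifyDemo {

    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) {
        // Waiting thread: releases the lock inside wait() and sleeps until notified
        new Thread(new Runnable() {
            @Override
            public void run() {
                synchronized (lock) {
                    while (!ready) {
                        try {
                            lock.wait();
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                    System.out.println("got the notification, ready = " + ready);
                }
            }
        }).start();

        // Notifying thread: changes the shared state and wakes up the waiter
        new Thread(new Runnable() {
            @Override
            public void run() {
                synchronized (lock) {
                    ready = true;
                    lock.notify();
                }
            }
        }).start();
    }
}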

CountDownLatch

Thread 1 adds 10 elements to the container. Thread 2 monitors the number of elements; when the number reaches 5, thread 2 prints a message and terminates.

import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class WithoutVolatile1 {

    private volatile List<Object> list = Collections.synchronizedList(new LinkedList<>());

    public void add(Object i) {
        list.add(i);
    }

    public int size() {
        return list.size();
    }

    public static void main(String[] args) {
        WithoutVolatile1 w = new WithoutVolatile1();
        // Why create two latches?
        // With a single CountDownLatch there could be a CPU context switch while
        // thread 2 is running, resulting in inconsistent data.
        CountDownLatch countDownLatch = new CountDownLatch(1);
        CountDownLatch latch = new CountDownLatch(1);

        Thread thread = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    if (w.size() == 5) {
                        countDownLatch.countDown(); // tell thread 2 that the size has reached 5
                        latch.await();              // pause until thread 2 has reacted
                    }
                    w.add(i);
                    System.out.println("add " + i);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });

        Thread thread1 = new Thread(() -> {
            try {
                if (w.size() != 5) {
                    countDownLatch.await();         // wait until the size reaches 5
                }
                System.out.println("listen, end...");
                latch.countDown();                  // let thread 1 continue
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });

        thread.start();
        thread1.start();
    }
}

What are the states of threads?

Thread states include the new state, the runnable/running state, the blocked/waiting states, and the terminated (dead) state. The blocked/waiting states are further divided into BLOCKED, WAITING, and TIMED_WAITING.

new -> ready -> running -> blocked -> waiting -> timed_waiting -> terminated

  • When a thread is created it is in the new state; after start() is called it becomes ready, and a ready thread enters the running state once it obtains a CPU time slice
  • After calling wait(), a thread enters the waiting state; it must be notified by another thread before it can return to the running state
  • The timed_waiting state is the waiting state with a timeout added; when the timeout expires, the thread returns to the running state
  • When a thread calls a synchronized method, it enters the blocked state if it fails to acquire the lock
  • The thread enters the terminated state after the run() method finishes

Comparison of common functions in multithreaded programming

What are the differences between sleep and wait?

  • Sleep method: a static method of the Thread class. The current thread sleeps for n milliseconds and enters the blocked state. When the sleep time is up, the block is lifted and the thread becomes ready to be scheduled on the CPU. sleep does not release any lock it holds.
  • Wait method: a method of the Object class that must be used together with synchronized. The thread blocks; when notify or notifyAll is invoked, the block is lifted, but the thread does not become runnable until it re-acquires the mutex. While waiting, the mutex is released.

Join method: when the current thread calls another thread's join(), the current thread blocks and waits for that thread to finish executing before it continues.
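
A short illustrative sketch of join(); the thread and message names are hypothetical:

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("worker finished");
            }
        });
        worker.start();
        // The main thread blocks here until worker has finished executing
        worker.join();
        System.out.println("main continues after worker");
    }
}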

Yield method: This method causes the thread to give up its current allocation of CPU time. However, the thread is not blocked, that is, the thread is still in an executable state, and can be allocated CPU time again at any time.

Deadlock

Deadlock: Multiple threads are blocked at the same time, one or all of them waiting for a resource to be released. Because the thread is blocked indefinitely, the program cannot terminate normally.

The cause of deadlocks can be summed up in three sentences:

  • The current thread has resources that other threads need
  • The current thread waits for resources already owned by another thread
  • Neither thread gives up the resources it already holds

The following four conditions must be met for a deadlock to occur:

  • Mutual exclusion: a resource can only be used by one thread at a time
  • Hold and wait: a thread holds on to the resources it has already acquired while blocking on a request for new ones
  • No preemption: a resource cannot be forcibly taken away from a thread before the thread has finished using it
  • Circular wait: several threads form a circular chain in which each waits for a resource held by the next

Ways to avoid deadlocks

  • Fixed lock order (for lock order deadlocks)
  • Open invocation (for deadlocks caused by collaboration between objects)
  • Use a timed lock: tryLock()
    • If acquiring the lock times out, throw an exception instead of waiting forever!

Fix the lock ordering

Let’s start with an example

// Transfer money
public static void transferMoney(Account fromAccount, Account toAccount, DollarAmount amount)
        throws InsufficientFundsException {
    // Lock the account the money comes from
    synchronized (fromAccount) {
        // Lock the account the money goes to
        synchronized (toAccount) {
            // Check that the balance is sufficient
            if (fromAccount.getBalance().compareTo(amount) < 0) {
                throw new InsufficientFundsException();
            } else {
                // Debit one account and credit the other
                fromAccount.debit(amount);
                toAccount.credit(amount);
            }
        }
    }
}

Deadlocks are possible:

  • If two threads simultaneously call transferMoney()
  • Thread A transfers money from account X to account Y
  • Thread B transfers money from account Y to account X
  • Then a deadlock occurs.

A:transferMoney(myAccount,yourAccount,10);

B:transferMoney(yourAccount,myAccount,20);

Problem: The lock order is inconsistent

Solution: If all threads acquire locks in a fixed order, there will be no lock order deadlock in the program!

public class InduceLockOrder {

    // Tie-breaking lock, used only when the two hash codes are equal
    private static final Object tieLock = new Object();

    public void transferMoney(final Account fromAcct, final Account toAcct, final DollarAmount amount)
            throws InsufficientFundsException {

        class Helper {
            public void transfer() throws InsufficientFundsException {
                if (fromAcct.getBalance().compareTo(amount) < 0)
                    throw new InsufficientFundsException();
                else {
                    fromAcct.debit(amount);
                    toAcct.credit(amount);
                }
            }
        }

        // Use the identity hash codes to impose a global lock ordering
        int fromHash = System.identityHashCode(fromAcct);
        int toHash = System.identityHashCode(toAcct);

        if (fromHash < toHash) {
            synchronized (fromAcct) {
                synchronized (toAcct) {
                    new Helper().transfer();
                }
            }
        } else if (fromHash > toHash) {
            synchronized (toAcct) {
                synchronized (fromAcct) {
                    new Helper().transfer();
                }
            }
        } else {
            // Hash collision: acquire the tie-breaking lock first
            synchronized (tieLock) {
                synchronized (fromAcct) {
                    synchronized (toAcct) {
                        new Helper().transfer();
                    }
                }
            }
        }
    }
}

Using the hash values to fix the lock order ensures that no deadlock can occur.

Supplementary knowledge:

Lock ordering: specify the order in which locks are acquired. For example, a thread must acquire both lock A and lock B before it can operate on a resource.

Deadlocks can be avoided by specifying the order in which locks are acquired, such as specifying that only the thread that acquired lock A is eligible to acquire lock B. This is often considered a good way to resolve deadlocks.

Open call

Deadlocks between collaborating objects mainly happen because a method is called while a lock is held, and inside that method other lock-protected methods are called!

If you don’t need to hold a lock when calling a method, the call is called an open call!

Note: Synchronous code blocks are best used only to protect operations that involve shared state!

class CooperatingNoDeadlock {

    @ThreadSafe
    class Taxi {
        @GuardedBy("this") private Point location, destination;
        private final Dispatcher dispatcher;

        public Taxi(Dispatcher dispatcher) {
            this.dispatcher = dispatcher;
        }

        public synchronized Point getLocation() {
            return location;
        }

        public void setLocation(Point location) {
            boolean reachedDestination;
            // Only the update of shared state is guarded by the Taxi lock
            synchronized (this) {
                this.location = location;
                reachedDestination = location.equals(destination);
            }
            // The Taxi lock has already been released here, so acquiring the
            // Dispatcher's intrinsic lock inside notifyAvailable() is an open call
            if (reachedDestination)
                dispatcher.notifyAvailable(this);
        }

        public synchronized Point getDestination() {
            return destination;
        }

        public synchronized void setDestination(Point destination) {
            this.destination = destination;
        }
    }

    @ThreadSafe
    class Dispatcher {
        @GuardedBy("this") private final Set<Taxi> taxis;
        @GuardedBy("this") private final Set<Taxi> availableTaxis;

        public Dispatcher() {
            taxis = new HashSet<Taxi>();
            availableTaxis = new HashSet<Taxi>();
        }

        public synchronized void notifyAvailable(Taxi taxi) {
            availableTaxis.add(taxi);
        }

        public Image getImage() {
            Set<Taxi> copy;
            // Copy the set while holding the Dispatcher lock, then release it before drawing
            synchronized (this) {
                copy = new HashSet<Taxi>(taxis);
            }
            Image image = new Image();
            for (Taxi t : copy)
                image.drawMarker(t.getLocation());
            return image;
        }
    }

    class Image {
        public void drawMarker(Point p) {
        }
    }
}

Using a timing lock

Use an explicit Lock, and use the tryLock() method when acquiring it. Unlike an ordinary lock acquisition, tryLock() does not wait forever: when the time limit is exceeded it returns a failure result instead.

Use tryLock() to avoid deadlock problems.
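
A hedged sketch of avoiding deadlock with a timed tryLock(); the Account class and the one-second timeout below are illustrative assumptions, not part of the original text:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockTransfer {

    static class Account {
        final ReentrantLock lock = new ReentrantLock();
        int balance;
    }

    // Returns true if the transfer succeeded, false if a lock could not be acquired in time
    public static boolean transfer(Account from, Account to, int amount) throws InterruptedException {
        if (from.lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                if (to.lock.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        from.balance -= amount;
                        to.balance += amount;
                        return true;
                    } finally {
                        to.lock.unlock();
                    }
                }
            } finally {
                from.lock.unlock();
            }
        }
        // Timed out: give up and let the caller retry, so the threads never wait forever
        return false;
    }
}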

Deadlock detection

The JDK provides two ways to do this:

  • JConsole: the graphical tool that ships with the JDK
  • jstack: the JDK's command-line tool for thread dump analysis

Atomicity, visibility and order

Background overview

  • Atomicity: an operation either executes completely or does not execute at all; a mutex or lock is used to guarantee atomicity.
  • Visibility: changes made by one thread to a shared variable can be seen immediately by other threads. (Concretely, synchronizing variable changes back to main memory can be done in two ways: with volatile, whose variables are flushed to main memory immediately after being modified, or with synchronized/Lock, which flush the changes to main memory before the variable is unlocked.)
  • Orderliness: the program appears to execute in the order the code was written. (More specifically, in the Java memory model the compiler and processor are allowed to reorder instructions; reordering does not affect the result of single-threaded execution, but it can affect the correctness of multithreaded execution. There are two main ways to guarantee orderliness: volatile and synchronized. volatile disables instruction reordering by inserting memory barriers, which means later instructions cannot be moved before the barrier; synchronized guarantees that only one thread at a time executes the synchronized code, which is similar to executing that code serially.)

Atomicity:

Definition: an operation that accesses a shared variable is atomic if, from the perspective of any thread other than the one executing it, the operation is indivisible; that is, other threads never "see" an intermediate result of a partially performed operation.

For example, during a bank transfer, account A decreases by 100 yuan and account B increases by 100 yuan. These two actions together form one atomic operation: we never see the intermediate state in which A has decreased by 100 yuan but B is unchanged.

Atomicity is implemented by:

  • The exclusivity of locks ensures that only one thread is operating on a shared variable at a time
  • Compare And Swap (CAS) is used for guarantee
  • The Java language specification guarantees that writes to any variable other than long and double are atomic operations
  • The Java language specification also states that volatile variables are written atomically

What you should notice about atomicity:

  • Atomicity is for variables shared by multiple threads, so there is no sharing problem for local variables, and it doesn’t matter if the operation is atomic
  • It makes no sense to discuss atomic operations in a single-threaded environment
  • The volatile keyword guarantees atomicity only for writes to a variable, not for compound read-modify-write operations

Visibility:

Definition: Visibility is the issue of whether a thread’s update to a shared variable is visible to subsequent threads accessing it.

To illustrate visibility issues, let’s start with a brief introduction to the concept of processor caching.

The processing speed of modern processors is much faster than that of main memory, so registers, caches, write buffers and invalidation queues are added between main memory and processor to speed up read and write operations. That is, our processor can interact with these parts, called processor caches, for reading and writing operations.

When the processor reads or writes memory, it only interacts with its own processor cache. The contents of one processor's cache cannot be read directly by another processor; instead, a cache coherence protocol is used to read data from the other processors' caches and synchronize it into the local cache. This is what makes one processor's updates to a variable visible to the other processors.

Why do we have visibility issues in a uniprocessor?

On a single processor, concurrent multithreaded programs still undergo thread context switches. A thread's updates to a variable are saved as part of its context (such as register contents), so other threads may not see those updates. Therefore visibility problems can also occur with multithreaded programming on a single processor.

How is visibility guaranteed?

  • Before reading a shared variable, the current processor needs to refresh its processor cache so that updates made by other processors can be synchronized into the current processor's cache
  • After updating a shared variable, the current processor needs to flush its processor cache so that the update is written out of the cache and becomes visible to other processors

Order:

Definition: orderliness refers to whether the memory accesses performed by a thread running on one processor appear to be in order when observed by threads running on another processor.

Reorder:

To improve performance, the Java compiler (and the processor) may adjust the instruction order to some extent without affecting the correctness of single-threaded programs, so the order in which the program runs may differ from the source-code order.

Reordering is an optimization of memory read and write operations. It does not cause program correctness problems in single-threaded environments, but may affect program correctness in multi-threaded environments.

Reordering example: what actually happens when Instance instance = new Instance() is executed?

The specific steps are as follows:

  • Allocate memory space for the object on the heap
  • Initialize the object in that memory
  • Point the instance reference to the newly allocated memory address

The second and third steps may reorder, resulting in a reference variable pointing to a non-null but incomplete object. (In multithreaded singletons, we must use volatile to disable instruction reordering.)
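
This is why the double-checked-locking singleton declares the instance field volatile; a minimal sketch (the class name is illustrative):

public class Singleton {

    // volatile forbids reordering steps 2 and 3, so other threads never see a half-constructed object
    private static volatile Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {                   // first check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {           // second check, with the lock held
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}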

Summary:

  • Atomicity is a set of operations that either completely occur or do not occur, and the rest of the threads do not see the presence of the intermediate process. Note that atomic operations + atomic operations are not necessarily atomic operations.
  • Visibility is the question of whether one thread’s updates to a shared variable are visible to another thread.
  • Orderliness refers to the issue of the order in which updates to shared variables by one thread appear to be executed by the rest of the threads.
  • You can think of it as atomicity + visibility -> order

The synchronized keyword

Synchronized is a Java keyword and acts as an internal (intrinsic) lock. It can be applied to methods and to code blocks, giving synchronized methods and synchronized code blocks. In a multithreaded environment, only one thread at a time may execute a synchronized method or block while the remaining threads wait to acquire the lock; in other words, execution is partially serialized within the overall concurrency.

Internal lock underlying implementation:

  • On entry to the synchronized region, monitorenter is executed and the lock counter is incremented by 1; on exit, monitorexit is executed and the counter is decremented by 1
  • When a thread finds the counter to be 0, the lock is currently free and can be acquired; otherwise the thread enters a waiting state

Guarantee of atomicity by synchronized internal lock:

After the first thread has acquired the lock, no other thread can acquire it and operate on the shared data until that thread finishes executing. This is how synchronized guarantees atomicity: only the single thread that holds the lock can enter the synchronized code block.
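
A minimal sketch of this guarantee; the class and field names are illustrative. The read-modify-write of count becomes effectively atomic because only the lock holder can run the method:

public class SynchronizedCounter {

    private int count = 0;

    // Only one thread at a time can execute this method on a given instance
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}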

Guaranteed visibility by synchronized internal locks:

Synchronized internal locks ensure visibility through the combination of the writer thread flushing the processor cache and the reader thread refreshing the processor cache.

  • Acquiring the lock requires refreshing the processor cache, so that updates made by the previous writer thread become visible to the current thread.
  • Releasing the lock requires flushing the processor cache, so that the changes the current thread made to the shared data are pushed out and visible to the next thread that acquires the lock.

Guarantee of order by synchronized internal lock:

Because atomicity and visibility are guaranteed, the sequence of operations performed by the writer thread in the critical section appears to the reader thread, when it later enters the critical section, to have been executed exactly in source-code order; that is, orderliness is guaranteed.


Note: Internal locks can be used on methods and code blocks. The area decorated by internal locks is also called a critical area


Fair locks and unfair locks

  • Fair scheduling mode:

Grant exclusive rights to resources in order of request.

  • Unfair scheduling mode:

In this strategy, when the thread holding a resource releases it, one of the threads in the waiting queue is woken up, but it may take some time before that thread actually resumes execution. During that time, a newly arrived (active) thread can be granted exclusive access to the resource first.

If the new thread does not hold the resource for long, it may well release it before the awakened thread resumes execution, without affecting the awakened thread's request for the resource.

Advantages and disadvantages analysis:

Unfair scheduling strategy:

  • Advantages: High throughput, can allocate resources for more applicants per unit of time
  • Disadvantages: the time applicants wait for the resource can vary greatly, and thread starvation may occur

Fair scheduling strategy:

  • Advantages: the variance in how long threads wait for the resource is small; thread starvation does not occur. Suitable when the resource is held for a relatively long time, when the average interval between resource requests is relatively long, or when a small variance in waiting time is required.
  • Disadvantages: Low throughput

JVM scheduling of synchronized internal locks:

The JVM schedules internal locks in an unfair way. Each internal lock has an entry set that keeps track of the threads waiting to acquire that lock. When the holding thread releases the lock, an arbitrary thread in the lock's entry set is woken up and gets a chance to apply for the lock again; however, other new active threads running at that moment may preempt the released lock from the awakened thread.

The volatile keyword

The volatile keyword is a lightweight lock that guarantees visibility and order, but not atomicity.

  • Volatile ensures that reads and writes of the variable interact directly with main memory, which guarantees visibility
  • Volatile guarantees atomicity only for variable writes, not for reads and writes.
  • Volatile can prohibit instruction reordering (by inserting a memory barrier), typically in singleton mode.

Cost of volatile variables:

Volatile does not cause a thread context switch, but reading a volatile variable is relatively expensive because the value must be read from the cache or main memory each time rather than directly from a register.

When can volatile replace locking?

Volatile is a lightweight lock suited to the case where multiple threads share a single state variable, whereas a lock suits the case where multiple threads share a group of state variables. Instead of locking, you can combine the group of shared state variables into one object and reference it through a volatile variable.

The difference between ReentrantLock and synchronized

  • ReentrantLock is an explicit lock that provides some features internal locks do not have, but it is not a replacement for internal locks. Explicit locks support both fair and unfair scheduling; unfair scheduling is used by default.
  • Synchronized internal locks are simple but not flexible. An explicit lock can be acquired in one method and released in another. Explicit locks define a tryLock() method that attempts to acquire the lock and returns true on success; on failure the calling thread is not suspended, but false is returned, which helps avoid deadlocks. (A usage sketch follows this list.)
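
As referenced above, a sketch of basic explicit-lock usage; the fairness flag, field names, and methods here are illustrative assumptions:

import java.util.concurrent.locks.ReentrantLock;

public class ExplicitLockDemo {

    // Passing true requests fair scheduling; the no-argument constructor is unfair by default
    private final ReentrantLock lock = new ReentrantLock(true);
    private int value;

    public void update(int delta) {
        lock.lock();
        try {
            value += delta;
        } finally {
            // Always release in finally, otherwise the lock may never be freed
            lock.unlock();
        }
    }

    public boolean tryUpdate(int delta) {
        // tryLock() returns immediately instead of suspending the thread
        if (lock.tryLock()) {
            try {
                value += delta;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }
}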

The thread pool

A thread pool creates a number of executable threads and places them in a pool (container). When a task needs to be processed, it is submitted to the thread pool's task queue; after processing the task, the thread is not destroyed but stays in the pool waiting for the next task.

ThreadPoolExecutor takes seven parameters, namely

  • corePoolSize: the number of core threads kept resident in the pool
  • maximumPoolSize: the maximum number of threads that can run concurrently in the pool; must be greater than or equal to 1
  • keepAliveTime: the survival time of extra idle threads. When the number of threads exceeds corePoolSize and a thread stays idle for keepAliveTime, the extra threads are destroyed until only corePoolSize threads remain
  • unit: the time unit of keepAliveTime
  • workQueue: the queue holding tasks that have been submitted for execution
  • threadFactory: the factory that creates the worker threads in the pool; usually the default
  • handler: the rejection policy, which decides how to reject a runnable when the queue is full and the number of worker threads is greater than or equal to maximumPoolSize

Four Rejection strategies

  • AbortPolicy (default): rejects the task and throws RejectedExecutionException
  • CallerRunsPolicy: neither throws an exception nor discards the task; the task is pushed back to the caller and run in the calling thread
  • DiscardOldestPolicy: discards the task that has waited longest in the queue, then tries to submit the current task again
  • DiscardPolicy: silently discards the task that cannot be handled, without throwing an exception or doing anything

Setting the maximum number of threads maximumPoolSize

maximumPoolSize is usually set to the number of CPU cores on the machine, which can be obtained with Runtime.getRuntime().availableProcessors().
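
A hedged sketch of constructing a pool with the seven parameters above; the pool sizes, queue capacity, and task are illustrative choices rather than recommendations from the text:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                     // corePoolSize
                cores,                                 // maximumPoolSize
                60L,                                   // keepAliveTime
                TimeUnit.SECONDS,                      // unit
                new ArrayBlockingQueue<Runnable>(100), // workQueue
                Executors.defaultThreadFactory(),      // threadFactory
                new ThreadPoolExecutor.AbortPolicy()   // handler (rejection policy)
        );

        pool.execute(new Runnable() {
            @Override
            public void run() {
                System.out.println("task running in " + Thread.currentThread().getName());
            }
        });
        pool.shutdown();
    }
}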

Common thread pool types:

newCachedThreadPool( )

  • The core pool size is 0 and the maximum pool size is effectively unbounded; a new thread is created whenever no idle thread is available
  • This is ideal for executing a large number of short, frequently submitted tasks

newFixedThreadPool( )

  • Fixed size thread pool
  • Once the pool has grown to the core pool size, the number of worker threads in this fixed-size pool neither increases nor decreases

newSingleThreadExecutor( )

  • Convenient for implementing the single- (or multi-) producer, single-consumer model

See how thread pools work from the perspective of queuing strategies

Queuing strategy

When we submit tasks to the thread pool, we need to follow certain queuing policies, which are as follows:

  • If fewer threads are running than corePoolSize, Executor will always prefer to add new threads rather than queue them
  • If the number of running threads is equal to or greater than corePoolSize, Executor will always prefer to queue requests rather than add new threads
  • If the request cannot be queued (the queue is full), a new thread is created, unless doing so would exceed maximumPoolSize, in which case the task is rejected according to the rejection policy.

How thread pools work

What happens when the blocking queue is full and the thread pool has reached its maximum size? The new task is handed to the rejection policy.

Common blocking queues

ArrayBlockingQueue:

  • Internally uses an array as its storage space, which is pre-allocated
  • Advantage: put and take operations do not add to the GC burden (because the space is pre-allocated)
  • Disadvantage: put and take share the same lock, which may lead to lock contention and extra context switches.
  • ArrayBlockingQueue is suitable when concurrency between the producer and consumer threads is low.

LinkedBlockingQueue:

  • An unbounded queue (the default queue length is Integer.MAX_VALUE)
  • Internally stores elements in a linked list; the space for list nodes is allocated dynamically
  • Advantage: put and take use two separate locks (putLock and takeLock), reducing contention
  • Disadvantage: increases the GC burden, because space is allocated dynamically.
  • LinkedBlockingQueue is suitable when concurrency between the producer and consumer threads is high.

SynchronousQueue:

  • SynchronousQueue can be considered a special kind of bounded queue. After producing a product, the producer thread will wait for the consumer thread to pick up the product before producing the next product. This is suitable for situations where there is no significant difference in processing power between the producer thread and the consumer thread.

newCachedThreadPool creates a new thread whenever no idle thread is available, because its internal queue is a SynchronousQueue, so tasks are never queued.

Points to note

  • Using the shortcut factory methods provided by the JDK, such as newCachedThreadPool, can cause memory overflow problems, because the queue (or the number of threads) can grow with a large number of tasks. So, in most cases, we should build a custom thread pool.
  • Thread pools provide monitoring apis that make it easy to monitor the number of current and queued tasks as well as the number of completed tasks in the current thread pool.

CountDownLatch and CyclicBarrier

The two keywords are often compared and examined together

CountDownLatch is a countdown coordinator that allows one or more threads to wait for the rest of the threads to complete a specific set of operations before continuing.

CountDownLatch’s internal implementation is as follows:

  • CountDownLatch maintains an internal counter; each call to CountDownLatch.countDown() decreases the counter by 1.
  • While the counter is not zero, calling CountDownLatch.await() suspends the calling thread; such threads are called the waiting threads of that CountDownLatch.
  • CountDownLatch.countDown() acts as the notification method: when the counter reaches zero, all waiting threads are woken up. There is also CountDownLatch.await(long, TimeUnit), which specifies a maximum waiting time. (A minimal example follows this list.)
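
As mentioned above, a minimal illustrative sketch of one thread waiting for several workers; the thread count and messages are hypothetical:

import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        final int workers = 3;
        final CountDownLatch latch = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(new Runnable() {
                @Override
                public void run() {
                    System.out.println("worker " + id + " done");
                    latch.countDown();   // decrement the counter
                }
            }).start();
        }

        latch.await();                   // the main thread waits until the counter reaches 0
        System.out.println("all workers finished");
    }
}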

CyclicBarrier: a cyclic barrier that allows multiple threads to wait for one another; once they have all reached the specified point, they continue executing together. In practice it can be used to simulate high-concurrency request testing.

It can be thought that when we climb a mountain and reach a flat area, the group in front of us will take a rest and wait for the group behind us to follow. When the last climbing partner reaches the rest area, all of us will start to climb the mountain at the same time.

The internal implementation of CyclicBarrier is as follows:

  • A thread that waits on a CyclicBarrier is called a party; a party waits simply by calling CyclicBarrier.await(). The barrier maintains an explicit lock internally so it can identify the last party: when the last party calls await(), all the waiting parties are woken up, and the last party itself is never suspended.
  • CyclicBarrier maintains a counter variable count equal to the number of parties; each call to await() decrements count by 1. When the last party is detected, signalAll is called to wake up all waiting threads. (An example follows this list.)
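
As referenced above, a small sketch of the climbing scenario with CyclicBarrier; the party count and messages are illustrative:

import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        // The barrier action runs once, after all 3 parties have called await()
        final CyclicBarrier barrier = new CyclicBarrier(3, new Runnable() {
            @Override
            public void run() {
                System.out.println("everyone has arrived, start climbing again");
            }
        });

        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        System.out.println("climber " + id + " reached the rest area");
                        barrier.await();   // wait for the remaining parties
                    } catch (Exception e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }).start();
        }
    }
}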

ThreadLocal

When ThreadLocal is used to maintain variables, it provides a separate copy of the variable for each thread that uses the variable, so each thread can independently change its own copy without affecting the corresponding copy of other threads.

Internal implementation of ThreadLocal:

  • Each thread maintains a HashMap-like object called ThreadLocalMap, which holds a number of entries (key-value pairs); the thread is called the owner thread of those entries
  • The key of an entry is a ThreadLocal instance and the value is a thread-specific object; the entry establishes a mapping, for its owner thread, from the ThreadLocal instance to that thread's specific object
  • An entry's reference to its key is a weak reference; its reference to the value is a strong reference. (An example follows this list.)
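
As referenced above, an illustrative sketch of each thread keeping its own copy through ThreadLocal; the names are hypothetical:

public class ThreadLocalDemo {

    // Each thread sees its own counter, initialized to 0
    private static final ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                counter.set(counter.get() + 1);
            }
            System.out.println(Thread.currentThread().getName() + " counter = " + counter.get());
            counter.remove();   // clean up to avoid leaks when threads are reused
        };

        new Thread(task, "t1").start();
        new Thread(task, "t2").start();   // both threads print 3: the copies are independent
    }
}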

Atomic

Introduction

Classic question: is i++ thread-safe?

The i++ operation is not thread-safe, it is a compound operation consisting of three steps:

  • Copy the value of i to a temporary variable
  • Increment the temporary variable
  • Write the result back to the original variable i

This is a compound operation and atomicity is not guaranteed, so this is not a thread-safe operation.

So how do you do things like atomic increment?

The JDK provides atomic classes such as AtomicInteger in the java.util.concurrent.atomic package. AtomicInteger offers atomic increment and decrement operations such as getAndIncrement and incrementAndGet. Internally, these atomic classes use CAS to guarantee atomicity.
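
A minimal sketch of an atomic increment with AtomicInteger; the loop counts are illustrative:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {

    private static final AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                count.incrementAndGet();   // atomic i++ implemented with CAS
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(count.get());   // always prints 2000
    }
}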

Subsequent updates

  • What is the happens-before principle?
  • What optimizations does the JVM apply to internal locks?
  • How to do lock-free programming?
  • CAS and how to solve the ABA problem?
  • The principle and implementation of AQS (AbstractQueuedSynchronizer).