Preface

Concurrent programming techniques are important in Java. How much do you know about the following?

Processes, threads, and coroutines: an overview

Process: essentially an independent execution of a program. A process is the operating system's basic unit of resource allocation and scheduling, and it runs independently of other processes.

Thread: the smallest unit that the operating system can schedule. A thread is contained within a process and is the actual unit of execution inside it. A process can run multiple threads concurrently, each performing a different task; switching between them is controlled by the system.

Coroutine: also called a micro-thread, a coroutine is a lightweight user-mode thread. Unlike threads and processes, which require context switches in the system kernel, coroutine context switching is decided by the user program itself, and each coroutine keeps its own context. For this reason coroutines are also known as user-level threads, and a single thread can host many coroutines. Threads and processes are synchronous mechanisms, whereas coroutines are asynchronous. Coroutines are not part of Java's native syntax; they are currently supported by languages such as Python, Lua, and Go.

Relationship: a process can have more than one thread, which allows the computer to run two or more programs simultaneously. A thread is the smallest execution unit of a process, and the CPU scheduler switches between processes and threads; if there are too many processes and threads, scheduling itself consumes a lot of CPU. What actually runs on the CPU is a thread, and one thread can in turn host multiple coroutines.

Advantages and disadvantages of coroutines for multithreading

Advantages:

  • Very fast context switches: no system-kernel context switch is needed, which reduces overhead
  • A single thread can achieve high concurrency; a single-core CPU can support tens of thousands of coroutines
  • Since there is only one thread, there are no conflicting writes to shared variables, so no locks are needed to control shared resources inside coroutines

Disadvantages:

  • Coroutines cannot utilize multi-core resources; they are essentially single-threaded
  • Coroutines need to work together with processes to run on multiple CPUs
  • There are no mature third-party coroutine libraries in Java, so there are risks
  • Debugging is difficult, which is not conducive to finding problems

The difference between concurrency and parallelism

Concurrency: one processor handles multiple tasks in the same period of time by alternating between them. A program can have two or more threads at the same time, but if the system has only one CPU it cannot truly run more than one thread simultaneously; it can only divide CPU time into slices and allocate those time slices to individual threads in turn.

Parallelism: multiple CPUs handle multiple tasks at the same instant. One CPU executes one process while another CPU executes a different process; the two processes do not compete for CPU resources and really do run at the same time.

In short, concurrency is handling multiple tasks over the same period of time at a macro level, while parallelism is multiple tasks actually running at the same instant.

Several ways to create threads in Java

1. Extend the Thread class

Extend Thread, override the run method, create an instance, and call its start method.

Advantages: the simplest and most direct way to write the code. Disadvantages: no return value, and since the class already extends Thread it cannot extend any other class, so extensibility is poor.

public class ThreadDemo1 extends Thread {
    @Override
    public void run() {
        System.out.println("ThreadDemo1 extends Thread, running in " + Thread.currentThread().getName());
    }
}

public static void main(String[] args) {
    ThreadDemo1 threadDemo1 = new ThreadDemo1();
    threadDemo1.setName("demo1");
    threadDemo1.start();
    System.out.println("main thread: " + Thread.currentThread().getName());
}
2. Implement the Runnable interface

The custom class implements the Runnable interface and its run method; then create a Thread, pass the Runnable implementation object to the Thread constructor, and call start. Advantages: the class can implement multiple interfaces and can still extend another class. Disadvantages: no return value, and it cannot be started directly; a Thread instance must be constructed and the Runnable passed into it.

public class ThreadDemo2 implements Runnable {
    @Override
    public void run() {
        System.out.println("ThreadDemo2 implements Runnable, running in " + Thread.currentThread().getName());
    }
}

public static void main(String[] args) {
    ThreadDemo2 threadDemo2 = new ThreadDemo2();
    Thread thread = new Thread(threadDemo2);
    thread.setName("demo2");
    thread.start();
    System.out.println("main thread: " + Thread.currentThread().getName());
}

With JDK 8 and later, a lambda expression can be used instead:

public static void main(String[] args) {
    Thread thread = new Thread(() -> {
        System.out.println("Runnable as a lambda, running in " + Thread.currentThread().getName());
    });
    thread.setName("demo2");
    thread.start();
    System.out.println("main thread: " + Thread.currentThread().getName());
}
3. Use Callable and FutureTask

Create a class that implements the Callable interface and its call method, wrap the Callable object in a FutureTask, and run it on a Thread to achieve multithreading with a return value.

Disadvantages: only supported from JDK 5 onward; you need to override the call method and combine several classes such as FutureTask and Thread.

public class MyTask implements Callable<Object> {
    @Override
    public Object call() throws Exception {
        System.out.println("MyTask implements Callable, running in " + Thread.currentThread().getName());
        return "this is the return value";
    }
}

public static void main(String[] args) {
    MyTask myTask = new MyTask();
    FutureTask<Object> futureTask = new FutureTask<>(myTask);
    Thread thread = new Thread(futureTask);
    thread.setName("demo3");
    thread.start();
    System.out.println("main thread: " + Thread.currentThread().getName());
    try {
        System.out.println(futureTask.get());
    } catch (InterruptedException e) {
        e.printStackTrace();
    } catch (ExecutionException e) {
        // handle the exception thrown by call()
        e.printStackTrace();
    }
}

The lambda expression version:

public static void main(String[] args) {
    FutureTask<Object> futureTask = new FutureTask<>(() -> {
        System.out.println("Callable as a lambda, running in " + Thread.currentThread().getName());
        return "this is the return value";
    });
    Thread thread = new Thread(futureTask);
    thread.setName("demo3");
    thread.start();
    System.out.println("main thread: " + Thread.currentThread().getName());
    try {
        System.out.println(futureTask.get());
    } catch (InterruptedException e) {
        e.printStackTrace();
    } catch (ExecutionException e) {
        // handle the exception thrown by call()
        e.printStackTrace();
    }
}
4. Create threads from a thread pool

Implement the Runnable interface and its run method, create a thread pool, and submit the task object by calling execute.

Advantages: safe, high performance, and threads are reused. Disadvantages: only supported since JDK 5, and it needs to be used together with Runnable.

public class ThreadDemo4 implements Runnable {
    @Override
    public void run() {
        System.out.println("ThreadDemo4 implements Runnable, running in " + Thread.currentThread().getName());
    }
}

public static void main(String[] args) {
    ExecutorService executorService = Executors.newFixedThreadPool(3);
    for (int i = 0; i < 10; i++) {
        executorService.execute(new ThreadDemo4());
    }
    System.out.println("main thread: " + Thread.currentThread().getName());
    executorService.shutdown();
}
  • Of these, implementing Runnable and the fourth approach (thread pool + Runnable) are the most common in practice: they are easy to extend and perform well (the pooling idea)

Common base states of Java threads

There are six thread states in the JDK and nine in the JVM.

There are five common states

Create (NEW): a thread object has been created, but start() has not been called yet. Ready (Runnable): when start() is called on the thread object, the thread enters the ready state, but the thread scheduler has not yet made it the current thread, that is, it has not been given the CPU. A thread that comes back from running, waiting, or sleeping also re-enters the ready state.

Running: the scheduler selects a ready thread as the current thread and gives it the CPU; the thread starts executing the logic in run. Blocked/waiting: a thread in this state must wait for another thread to take some action (a notification or an interrupt). In this state the CPU is not allocated to it; the thread needs to be woken up and may wait indefinitely. For example, calling wait() puts it into WAITING, sleep() puts it into TIMED_WAITING, and join or an I/O request also causes waiting; after the blocking ends the thread returns to the ready state.

Synchronized blocking: when a thread fails to acquire a synchronized lock, that is, the lock is held by another thread, it enters the synchronized-blocked (BLOCKED) state.

Note: the waiting state is further subdivided. WAITING: a thread in this state is waiting for another thread to perform a specific action (a notification or an interrupt). TIMED_WAITING: unlike WAITING, this state returns automatically after the specified time.

Death (TERMINATED): a thread whose run method has finished is dead and cannot return to the ready state.
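As a rough illustration of these states, here is a small sketch (thread names, wait times, and printed messages are my own choices) that prints the JDK-level Thread.State values as a thread moves through its lifecycle:

public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait(500);   // waits on the monitor with a timeout
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        System.out.println(t.getState()); // NEW: created but start() not yet called
        t.start();
        System.out.println(t.getState()); // usually RUNNABLE right after start()
        Thread.sleep(100);
        System.out.println(t.getState()); // typically TIMED_WAITING: inside lock.wait(500)
        t.join();
        System.out.println(t.getState()); // TERMINATED: run() has finished
    }
}

Exact output can vary with scheduling, which is why the comments hedge on what is usually printed.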

Common methods in multithreaded development

sleep
A method of the Thread class. It suspends the current thread for roughly the specified time and then resumes. It gives up the CPU but does not release any locks it holds. The thread enters TIMED_WAITING; when the sleep ends it becomes Runnable (ready) again.
yield
A method of the Thread class. It hints that the current thread is willing to pause so another thread can run. Like sleep, it gives up the CPU but does not release any locks. Purpose: it allows threads of the same priority to take turns executing, although rotation is not guaranteed. Note: the thread does not block; it stays Runnable and only needs to regain the CPU.
join
A method of the Thread class. If the main thread calls join() on another thread, the main thread waits (without releasing any object locks it already holds) until the thread whose join method was called has finished; only then does execution continue.
wait
A method of the Object class. The current thread calls wait() on an object, releases that object's lock, and enters the object's wait queue. It must be woken up with notify or notifyAll, or it wakes up automatically when a wait(timeout) expires.
notify
A method of the Object class. It wakes up a single thread waiting on the object's monitor; which one is chosen is arbitrary.
notifyAll
A method of the Object class. It wakes up all threads waiting on the object's monitor.
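To tie wait/notify together, here is a minimal sketch (class, field, and message names are illustrative) in which one thread waits on an object's monitor and another wakes it up:

public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {            // guard against spurious wakeups
                    try {
                        lock.wait();        // releases the lock and waits on its monitor
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("woken up in " + Thread.currentThread().getName());
            }
        });
        waiter.start();

        synchronized (lock) {
            ready = true;
            lock.notify();                  // wakes one thread waiting on lock's monitor
        }
    }
}

The while loop around wait() also handles the case where notify() happens before the waiter actually starts waiting.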

Thread state transition diagram

Ways to ensure thread safety in Java

  • Use locks, such as synchronized or ReentrantLock
  • Declare variables with volatile: lightweight synchronization that guarantees visibility but not atomicity
  • Use thread-safe classes: atomic classes (AtomicXXX) and concurrent/synchronized containers such as ConcurrentHashMap and CopyOnWriteArrayList
  • Use ThreadLocal for thread-private variables, Semaphore, etc. (a combined sketch follows this list)
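A brief sketch of a few of these approaches side by side (the counter, map key, and thread count are arbitrary choices for illustration):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadSafetyDemo {
    // atomic class: increments are lock-free and thread-safe
    private static final AtomicInteger counter = new AtomicInteger();

    // concurrent container: safe for concurrent reads and writes
    private static final Map<String, Integer> map = new ConcurrentHashMap<>();

    // ThreadLocal: each thread sees its own copy of the variable
    private static final ThreadLocal<Integer> local = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            counter.incrementAndGet();
            map.merge("hits", 1, Integer::sum);   // atomic update of the map entry
            local.set(local.get() + 1);           // only visible to the current thread
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get() + " " + map.get("hits")); // 2 2
    }
}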

Understanding the volatile keyword

volatile is a lightweight form of synchronization that guarantees the visibility of shared variables: if the value of a volatile variable changes, other threads see the change immediately, which prevents stale (dirty) reads. Why do stale reads happen at all? The Java Memory Model (JMM) specifies that all variables are stored in main memory and each thread has its own working memory; a thread never operates on variables in main memory directly. When a variable is declared volatile, every read must fetch the latest value from main memory and every write must be flushed to main memory immediately, so the newest value of a volatile variable is visible at any time: if thread 1 modifies variable v, thread 2 sees the change right away. volatile guarantees visibility but not atomicity; synchronized guarantees both visibility and atomicity.
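A small sketch of the visibility effect described above (the flag name and sleep duration are arbitrary); without the volatile modifier, the reader thread might never observe the update on some JVMs:

public class VolatileDemo {
    // volatile guarantees that the write below is visible to the reader thread
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait until another thread clears the flag
            }
            System.out.println("reader saw running = false");
        });
        reader.start();

        Thread.sleep(1000);
        running = false;   // flushed to main memory, so the reader exits its loop
        reader.join();
    }
}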

What is instruction reordering

Instruction reordering comes in two categories: compiler reordering and runtime reordering. When the JVM compiles Java code, or when the CPU executes JVM bytecode, it may reorder existing instructions to optimize performance (without changing the program's result).

For example:

int a = 3;         // step 1
int b = 4;         // step 2
int c = 5;         // step 3
int h = a * b * c; // step 4

The code defines the order 1, 2, 3, 4, but executing in the order 1, 3, 2, 4 or 2, 1, 3, 4 produces exactly the same result.

What is happens-before and why is it needed

If operation A happens-before operation B, we write HB(A, B). In the Java Memory Model, happens-before means that the result of the earlier operation is visible to the later operation. The JVM compiles and optimizes code, which leads to instruction reordering; to prevent such optimizations from breaking the safety of concurrent programs, the happens-before rules define a set of situations in which reordering is forbidden, guaranteeing the correctness of concurrent code.

The eight happens-before rules

1. Program order rule: within a single thread, the results are produced as if the code executed in program order. Instructions may be rearranged, but whatever the JVM does, the observable results match the order of our code.
2. Monitor lock rule: whether in a single-threaded or multi-threaded environment, for the same lock, once one thread unlocks it, another thread that subsequently locks it can see the results of the previous thread's operations.
3. volatile variable rule: if one thread writes a volatile variable and another thread then reads it, the result of the write is guaranteed to be visible to the reading thread.
4. Thread start rule: if main thread A starts child thread B, then the modifications A made to shared variables before starting B are visible to B.
5. Thread termination rule: if main thread A waits for child thread B to terminate, then the modifications B made to shared variables before terminating are visible to A. This is also called the thread join() rule.
6. Thread interruption rule: a call to interrupt() happens-before the interrupted thread's code detects that the interruption occurred, which it can check via Thread.interrupted().
7. Transitivity rule: happens-before is transitive; if HB(A, B) and HB(B, C), then HB(A, C).
8. Object finalization rule: the completion of an object's initialization, that is, the end of its constructor, happens-before its finalize() method.
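As a concrete illustration of rule 3, the volatile variable rule, here is a minimal sketch (field names are illustrative): the volatile write to ready happens-before the read that observes it, so the plain write to data is also guaranteed to be visible.

public class HappensBeforeDemo {
    private static int data = 0;
    private static volatile boolean ready = false;

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            data = 42;       // ordinary write...
            ready = true;    // ...published by the volatile write
        });
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the volatile read sees true */ }
            System.out.println(data); // guaranteed to print 42
        });
        writer.start();
        reader.start();
    }
}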

Three elements of concurrent programming

Atomicity: "atomic" means indivisible. Atomicity means that one or more operations either all succeed or all fail without being interrupted in the middle; a thread context switch in the middle of such an operation is exactly what causes atomicity problems.

int num = 1;  // an atomic operation
num++;        // not atomic: it reads num from main memory into the thread's working memory, adds 1, and writes num back to main memory

Solutions: use the atomic classes in java.util.concurrent.atomic, or use synchronized or a Lock (for example ReentrantLock) to turn the multi-step operation into an atomic one.

public class Test {
    private int num = 0;
    private final ReentrantLock lock = new ReentrantLock();

    public void add1() {
        lock.lock();
        try {
            num++;
        } finally {
            lock.unlock();
        }
    }

    public synchronized void add2() {
        num++;
    }
}

Solution idea: Treat a method or block of code as a whole, ensuring that it is indivisible
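The atomic-class route mentioned above can look like the following sketch (loop counts and field names are arbitrary):

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    private static final AtomicInteger num = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                num.incrementAndGet();   // atomic read-modify-write, implemented with CAS
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(num.get()); // always 2000, with no explicit lock
    }
}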

Orderliness: the program should appear to execute in the order the code was written; in reality the compiler and processor may reorder instructions.

When the JVM compiles Java code or the CPU executes JVM bytecode, it reorders existing instructions with the primary purpose of optimizing performance (without changing program results).

int a = 3;         // step 1
int b = 4;         // step 2
int c = 5;         // step 3
int h = a * b * c; // step 4

In the example above, executing in the order 1, 2, 3, 4 gives the same result as 2, 1, 3, 4. Instruction reordering can improve execution efficiency, but with multiple threads it may affect the result.

// thread 1
before();      // initialization
flag = true;   // signals that before() has finished

// thread 2
while (flag) {
    run();     // core business code
}

If instruction reordering changes this order, as below, the program misbehaves in ways that are hard to troubleshoot:

// thread 1 (after reordering)
flag = true;
before();      // the initialization that run() depends on now happens after the flag is set

// thread 2
while (flag) {
    run();     // may execute before the initialization in before() has completed
}
Visibility: a change made to a shared variable by thread A should be immediately visible to thread B.

int num = 0;

// thread A executes
num++;

// thread B executes
System.out.print("num: " + num);

If thread A executes num++ and thread B then prints num, thread B may see either of two results, 0 or 1. The increment happens in thread A's working memory and may not yet have been written back to main memory, so thread B, which reads from main memory, may print 0; or thread A may already have flushed the update to main memory, in which case thread B sees 1. This is why thread visibility is needed, and synchronized, Lock, and volatile can all guarantee it.

Common process scheduling algorithms

First-come, first-served (FCFS): jobs/processes are scheduled in the order in which they arrive, i.e. the job that has waited longest in the system is served first. A short process that arrives after a long one is queued behind it, so this algorithm is unfavorable to short jobs/processes.

Shortest-job-first (SJF): short processes/jobs (those requiring the least service time) make up a large proportion of real workloads, so this algorithm runs them first; it is unfriendly to long jobs.

Highest-response-ratio-next (HRRN): at each scheduling decision the priority of every job is computed first: response ratio = (waiting time + required service time) / required service time. Since waiting time plus service time is the job's response time, the response ratio also equals response time / required service time. Choosing the job to serve requires computing this priority for every job, which increases system overhead.

Round-robin (time-slice) scheduling: each process is served in turn so that every process gets a response within a certain time interval; because processes are switched frequently, the overhead is higher, and the algorithm does not distinguish how urgent tasks are.

Priority scheduling: tasks are scheduled by urgency; high-priority tasks are handled first and low-priority tasks later. If high-priority tasks are numerous and keep being generated, low-priority tasks may be served very slowly or starve.
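For the highest-response-ratio calculation above, a tiny sketch (the job list and times are invented) of how the next job would be picked:

public class ResponseRatioDemo {
    public static void main(String[] args) {
        // {waiting time, required service time} for three hypothetical jobs
        double[][] jobs = { {10, 5}, {4, 1}, {30, 20} };
        int best = 0;
        double bestRatio = 0;
        for (int i = 0; i < jobs.length; i++) {
            // response ratio = (waiting time + service time) / service time
            double ratio = (jobs[i][0] + jobs[i][1]) / jobs[i][1];
            System.out.println("job " + i + " ratio = " + ratio);
            if (ratio > bestRatio) {
                bestRatio = ratio;
                best = i;
            }
        }
        System.out.println("schedule job " + best + " next");
    }
}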

Common thread scheduling algorithms

Thread scheduling is the process by which the system assigns the CPU to threads. There are two main kinds.

Cooperative thread scheduling (time-sharing): the thread's execution time is controlled by the thread itself; after a thread finishes its own work it must explicitly notify the system to switch to another thread. The biggest advantage is simplicity, and because switches happen only where the thread itself decides, there are no thread synchronization problems. The disadvantage is that execution time is out of the system's control: if one thread has a problem, it can stay blocked and block everything else.

Preemptive thread scheduling: each thread is allocated execution time by the system, and thread switching is not decided by the thread itself (in Java, Thread.yield() can give up execution time, but a thread cannot demand more). Execution time is controlled by the system, so no single thread can block the whole process. Java uses preemptive scheduling: the runnable thread with the highest priority is scheduled first, and if several runnable threads share the same priority, one is chosen at random. So if we want some threads to get more time and others less, we can do so by setting thread priorities.

The priority of a Java thread is an integer from 1 to 10 (Thread.MIN_PRIORITY through Thread.MAX_PRIORITY). When multiple threads are runnable, the JVM typically runs the thread with the highest priority. When two threads are in the ready (runnable) state at the same time, the one with the higher priority is more likely to be selected and executed, but priority is not a 100% guarantee, only a higher probability.
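A short sketch of setting priorities (which thread actually runs first still depends on the OS scheduler, as noted above):

public class PriorityDemo {
    public static void main(String[] args) {
        Runnable task = () ->
                System.out.println(Thread.currentThread().getName()
                        + " priority = " + Thread.currentThread().getPriority());

        Thread low = new Thread(task, "low");
        Thread high = new Thread(task, "high");
        low.setPriority(Thread.MIN_PRIORITY);   // 1
        high.setPriority(Thread.MAX_PRIORITY);  // 10
        low.start();
        high.start();   // "high" is more likely, but not guaranteed, to run first
    }
}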

Common locks in Java multithreading

Pessimistic lock: when a thread operates on data, it assumes other threads will modify the data, so it locks every time it accesses the data, and other threads block when they try to access it. synchronized is an example.

Optimistic lock: each time data is read, the thread assumes no one else will modify it; only when updating does it check whether someone else has changed the data in the meantime, typically via a version number, and the update is rejected if the data has been modified. CAS is optimistic in spirit, though strictly speaking it is not a lock: it guarantees the atomicity of a single update, much as a database optimistic lock uses version control. CAS does not provide general thread synchronization; optimistic locking simply assumes no other thread will interfere during the update. Summary: pessimistic locks suit write-heavy scenarios, optimistic locks suit read-heavy scenarios, and optimistic locks have higher throughput than pessimistic locks.

Fair lock: multiple threads acquire the lock in the order in which they requested it, so every thread in the group is guaranteed to eventually get the lock. Example: ReentrantLock constructed as a fair lock (FIFO: first in, first out).

Unfair lock: the order of lock acquisition is effectively random, so there is no guarantee that every thread gets the lock; some threads may starve and never acquire it. Examples: synchronized and ReentrantLock (by default). Summary: unfair locks perform better than fair locks and make better use of CPU time.

Reentrant lock: also called a recursive lock; after the outer method has acquired the lock, inner methods called by the same thread can still acquire it, so no deadlock occurs. Non-reentrant lock: if the current thread has already acquired the lock in one method, an attempt to acquire it again in another method blocks. Summary: reentrant locks avoid deadlock to some extent; synchronized and ReentrantLock are both reentrant.

Spin lock: when a thread tries to acquire the lock and the lock is already held by another thread, the thread loops, repeatedly checking whether the lock can be acquired, and only exits the loop once it succeeds; at most one execution unit holds the lock at any time. Summary: a spinning thread does not switch thread state and stays in user mode, which reduces the cost of thread context switches; the disadvantage is that the loop consumes CPU. Common spin locks: TicketLock, CLHLock, MCSLock.
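Here is a brief sketch contrasting a fair ReentrantLock with an optimistic, CAS-style update via AtomicInteger (the method names and retry loop are illustrative, not a definitive implementation):

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class LockKindsDemo {
    // fair lock: threads acquire it roughly in request (FIFO) order
    private static final ReentrantLock fairLock = new ReentrantLock(true);

    // optimistic, CAS-based update: no blocking, retry on conflict
    private static final AtomicInteger value = new AtomicInteger(0);

    static void pessimisticIncrement() {
        fairLock.lock();          // blocks until the lock is acquired
        try {
            value.set(value.get() + 1);
        } finally {
            fairLock.unlock();
        }
    }

    static void optimisticIncrement() {
        int current;
        do {
            current = value.get();                              // read the current value
        } while (!value.compareAndSet(current, current + 1));   // retry if someone else updated it
    }

    public static void main(String[] args) {
        pessimisticIncrement();
        optimisticIncrement();
        System.out.println(value.get()); // 2
    }
}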

Shared lock: also called an S lock or read lock; the data can be read but not modified or deleted. After it is taken, other users can still read and query the data concurrently, but cannot modify, add, or delete it. This lock can be held by multiple threads at once, sharing the resource.

Mutex: also called an X lock, exclusive lock, or write lock. It can be held by only one thread at a time; any other thread that tries to lock it blocks until the current holder unlocks. For example, if thread A puts an exclusive lock on data1, no other thread can put any kind of lock on data1. The thread holding the mutex can both read and modify the data.

Deadlock: a situation in which two or more threads block during execution, either because they compete for resources or because of faulty communication between them, and cannot proceed without outside intervention.

For synchronized, the JVM applies the following three-level lock escalation to make acquiring and releasing locks more efficient. The lock state is recorded in a field of the object header (the object monitor), and the escalation is irreversible.

Biased lock: when a single thread repeatedly enters the same synchronized block, it automatically acquires the lock at very low cost; the lock is biased toward that thread.

Lightweight lock: when a biased lock is accessed by a second thread, it is upgraded to a lightweight lock. Other threads then try to acquire the lock by spinning instead of blocking, which keeps performance high.

Heavyweight lock: while the lock is lightweight, other threads spin, but spinning cannot go on forever; when a thread has spun a certain number of times without acquiring the lock, it blocks and the lock is upgraded to a heavyweight lock. A heavyweight lock makes the other waiting threads block, so performance drops.

Other lock granularities you may encounter: segment locks, row locks, table locks.

Write a multithreaded deadlock example

Deadlock: one thread has acquired lock A and, without releasing it, tries to acquire lock B, while another thread has acquired lock B and must acquire lock A before it will release B. A closed cycle forms and the threads deadlock.

public class DeadLockDemo {
    private static String locka = "locka";
    private static String lockb = "lockb";

    public void methodA() {
        synchronized (locka) {
            System.out.println("methodA holds locka, " + Thread.currentThread().getName());
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            synchronized (lockb) {
                System.out.println("methodA holds lockb, " + Thread.currentThread().getName());
            }
        }
    }

    public void methodB() {
        synchronized (lockb) {
            System.out.println("methodB holds lockb, " + Thread.currentThread().getName());
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            synchronized (locka) {
                System.out.println("methodB holds locka, " + Thread.currentThread().getName());
            }
        }
    }

    public static void main(String[] args) {
        System.out.println("main start, " + Thread.currentThread().getName());
        DeadLockDemo deadLockDemo = new DeadLockDemo();
        new Thread(() -> {
            deadLockDemo.methodA();
        }).start();
        new Thread(() -> {
            deadLockDemo.methodB();
        }).start();
        System.out.println("main end, " + Thread.currentThread().getName());
    }
}

There are two common ways to resolve the deadlock in the example above:

  • Adjust the range of lock applications
  • Adjust the sequence of applying for locks
public class FixDeadLockDemo {
    private static String locka = "locka";
    private static String lockb = "lockb";

    public void methodA() {
        synchronized (locka) {
            System.out.println("methodA holds locka, " + Thread.currentThread().getName());
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        // the second lock is now requested outside the first synchronized block
        synchronized (lockb) {
            System.out.println("methodA holds lockb, " + Thread.currentThread().getName());
        }
    }

    public void methodB() {
        synchronized (lockb) {
            System.out.println("methodB holds lockb, " + Thread.currentThread().getName());
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        synchronized (locka) {
            System.out.println("methodB holds locka, " + Thread.currentThread().getName());
        }
    }

    public static void main(String[] args) {
        System.out.println("main start, " + Thread.currentThread().getName());
        FixDeadLockDemo deadLockDemo = new FixDeadLockDemo();
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                deadLockDemo.methodA();
            }).start();
            new Thread(() -> {
                deadLockDemo.methodB();
            }).start();
        }
        System.out.println("main end, " + Thread.currentThread().getName());
    }
}

Four necessary conditions for deadlocks:

  • Mutual exclusion: a resource assigned to one process cannot be accessed by other processes; they can only wait until the process occupying the resource releases it
  • Hold and wait: a process that has already obtained some resource requests another resource that is held by a different process; the request blocks, but the process keeps holding the resource it already has
  • No preemption: resources a process has acquired cannot be taken away before it is finished with them; they can only be released by the process itself
  • Circular wait: when deadlock occurs, several processes form a circular chain in which each waits for a resource held by the next

These four conditions are all necessary for deadlock: whenever a deadlock occurs on the system, all of them hold, and as long as any one of them is broken, no deadlock can occur.

Design a simple example of a non-reentrant lock

Non-reentrant lock: if the current thread has already acquired the lock in one method, an attempt to acquire it again in another method will not succeed and the thread will block, deadlocking on itself.

/**
 * A non-reentrant lock: the same thread cannot acquire it twice.
 * methodA acquires the lock and then calls methodB, which tries to acquire the same lock.
 */
public class UnreentrantLock {
    private boolean isLocked = false;

    public synchronized void lock() throws InterruptedException {
        System.out.println("enter lock(), " + Thread.currentThread().getName());
        while (isLocked) {
            System.out.println("enter wait, " + Thread.currentThread().getName());
            wait();
        }
        // acquire the lock
        isLocked = true;
    }

    public synchronized void unlock() {
        System.out.println("unlock, " + Thread.currentThread().getName());
        isLocked = false;
        // wake up one thread in the object's wait set
        notify();
    }
}

public class Main {
    private UnreentrantLock unreentrantLock = new UnreentrantLock();

    public void methodA() {
        try {
            unreentrantLock.lock();
            System.out.println("methodA is called");
            methodB();
        } catch (InterruptedException e) {
            e.fillInStackTrace();
        } finally {
            unreentrantLock.unlock();
        }
    }

    public void methodB() {
        try {
            unreentrantLock.lock();
            System.out.println("methodB is called");
        } catch (InterruptedException e) {
            e.fillInStackTrace();
        } finally {
            unreentrantLock.unlock();
        }
    }

    public static void main(String[] args) {
        new Main().methodA();
    }
}
// The same thread tries to acquire the lock it already holds and waits forever: a self-deadlock

Design a simple example of a reentrant lock

Reentrant lock: also called a recursive lock; after the lock is acquired in an outer method, the inner method can still acquire it, and no deadlock occurs.

/**
 * A reentrant lock: the owning thread can acquire it again without blocking.
 */
public class ReentrantLock {
    private boolean isLocked = false;
    // the thread that currently owns the lock
    private Thread lockedOwner = null;
    // how many times the owner has acquired the lock
    private int lockedCount = 0;

    public synchronized void lock() throws InterruptedException {
        System.out.println("enter lock(), " + Thread.currentThread().getName());
        Thread thread = Thread.currentThread();
        while (isLocked && lockedOwner != thread) {
            System.out.println("enter wait, " + Thread.currentThread().getName());
            System.out.println("isLocked = " + isLocked);
            System.out.println("lockedCount = " + lockedCount);
            wait();
        }
        // acquire the lock, or re-acquire it if this thread already owns it
        isLocked = true;
        lockedOwner = thread;
        lockedCount++;
    }

    public synchronized void unlock() {
        System.out.println("unlock, " + Thread.currentThread().getName());
        Thread thread = Thread.currentThread();
        // only the owner may release the lock
        if (thread == this.lockedOwner) {
            lockedCount--;
            if (lockedCount == 0) {
                isLocked = false;
                lockedOwner = null;
                // wake up one thread in the object's wait set
                notify();
            }
        }
    }
}
public class Main {
    // private UnreentrantLock unreentrantLock = new UnreentrantLock();
    private ReentrantLock reentrantLock = new ReentrantLock();

    public void methodA() {
        try {
            reentrantLock.lock();
            System.out.println("methodA is called");
            methodB();
        } catch (InterruptedException e) {
            e.fillInStackTrace();
        } finally {
            reentrantLock.unlock();
        }
    }

    public void methodB() {
        try {
            reentrantLock.lock();
            System.out.println("methodB is called");
        } catch (InterruptedException e) {
            e.fillInStackTrace();
        } finally {
            reentrantLock.unlock();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            new Main().methodA();
        }
    }
}

Additions and corrections are welcome!

Finally

Thank you for reading this far. If anything is unclear, feel free to ask me in the comments section. If you found the article helpful, remember to give it a thumbs up. We share Java-related technical articles or industry news every day, so you are welcome to follow and share!