1. Thread safety

1. Atomicity problems caused by thread switching are solved by using synchronized or Lock to serialize access between threads.

2. Visibility problems caused by CPU caching can be solved with synchronized, volatile, or Lock.

3. Ordering problems are solved by the happens-before rules.
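
As a minimal sketch of the visibility problem and the volatile fix (class and field names are illustrative): without volatile, the worker thread might never observe the main thread's write to the flag.

```java
public class VisibilityDemo {
    // Without volatile, the worker may loop forever because it
    // never re-reads the flag from main memory.
    private static volatile boolean stopRequested = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stopRequested) {
                // busy-wait until the write below becomes visible
            }
            System.out.println("Worker observed the flag and stopped.");
        });
        worker.start();
        Thread.sleep(100);
        stopRequested = true; // volatile write: immediately visible to the worker
        worker.join();
    }
}
```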

2. What is the difference between parallelism and concurrency?

Concurrency: multiple tasks on the same CPU core execute in turn (alternately) in finely divided time slices; logically, the tasks appear to run simultaneously. Parallelism: multiple processors or cores handle multiple tasks at the same instant, which is truly "simultaneous".

3. Advantages and disadvantages of multi-threading

Advantages: better CPU utilization. In a multithreaded program, when one thread has to wait, the CPU can run other threads instead of idling, which greatly improves the program's efficiency and lets a single program create multiple concurrently executing threads to complete their respective tasks.

Disadvantages:

Threads are part of a program, so each thread takes up memory, and the more threads there are, the more memory they consume;

Multithreading requires coordination and management, so the CPU spends time tracking and switching between threads;

Threads that access shared resources affect one another, and contention for shared resources must be resolved.

4. The difference between processes and threads

Fundamental difference: a process is the basic unit of operating-system resource allocation, while a thread is the basic unit of processor scheduling and execution.

Resource overhead: each process has its own code and data space (program context), so switching between processes is expensive. Threads can be regarded as lightweight processes: threads of the same process share the code and data space, while each thread has its own independent run-time stack and program counter (PC), so switching between threads has little overhead.

Inclusion: if a process contains multiple threads, execution proceeds along multiple lines (threads) rather than one. A thread is part of a process, which is why threads are also called lightweight processes.

Memory allocation: threads of the same process share the process's address space and resources, while the address space and resources of different processes are independent of one another.

Impact on each other: in protected mode, the crash of one process does not affect other processes, but the crash of one thread can bring down its entire process, so multi-process designs are more robust than multithreaded ones.

Execution: each independent process has its own program entry point, sequential execution flow, and program exit. A thread cannot execute independently; it must live inside an application, which controls the execution of its threads. Both processes and threads can execute concurrently.

5. What is context switching? In multithreaded programming the number of threads is generally greater than the number of CPU cores, and a core can be used by only one thread at any moment. To let all threads make progress, the CPU allocates a time slice to each thread and rotates among them. When a thread's time slice runs out, the thread returns to the ready state and the CPU is handed to another thread; this handover is a context switch.

To summarize: after using up its CPU time slice, the current task saves its state before switching to another task, so that the state can be reloaded when the task is switched back in later. The journey from saving a task's state to reloading it is one context switch.

Context switching is usually computationally intensive: it requires a significant amount of processor time. Switches happen tens or even hundreds of times per second, each taking on the order of nanoseconds, so context switching consumes a lot of CPU time and may be one of the most expensive operations in an operating system.

6. Four necessary conditions for deadlocks

1. Mutual exclusion: a resource can be held by only one process at a time. If other processes request the resource, they must wait until the holding process releases it.

2. Hold and wait: a process already holds at least one resource but requests a new one that is held by another process; the requesting process blocks without releasing the resources it already holds.

3. No preemption: a resource already held by a process cannot be taken away just because another process needs it; it can only be released voluntarily by its holder.

4. Circular wait: a circular chain of processes forms in which each waits for a resource held by the next (for example, in a set of processes, A waits for B, B waits for C, and C waits for A).
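
A minimal sketch that satisfies all four conditions and typically deadlocks (names are illustrative): each thread holds one lock and waits for the other, producing a circular wait.

```java
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        // Thread 1 holds lockA and waits for lockB ...
        new Thread(() -> {
            synchronized (lockA) {
                sleep(100);
                synchronized (lockB) { System.out.println("1 got both"); }
            }
        }).start();
        // ... while thread 2 holds lockB and waits for lockA: circular wait.
        new Thread(() -> {
            synchronized (lockB) {
                sleep(100);
                synchronized (lockA) { System.out.println("2 got both"); }
            }
        }).start();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}
```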

7. How to avoid thread deadlocks

1. Avoid having one thread acquire multiple locks at the same time.

2. Avoid having one thread occupy multiple resources inside a lock; try to make each lock guard only one resource.

3. Prefer timed locks: use lock.tryLock(timeout) instead of the intrinsic lock mechanism (see the sketch below).
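
A hedged sketch of point 3 using java.util.concurrent.locks.ReentrantLock (method and field names are illustrative): if the second lock cannot be acquired within the timeout, the thread backs off and releases what it already holds, breaking the hold-and-wait condition.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock first = new ReentrantLock();
    private final ReentrantLock second = new ReentrantLock();

    /** Returns true if both locks were acquired and the work was done. */
    public boolean transfer() throws InterruptedException {
        if (first.tryLock(1, TimeUnit.SECONDS)) {
            try {
                if (second.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        // ... critical section using both resources ...
                        return true;
                    } finally {
                        second.unlock();
                    }
                }
            } finally {
                first.unlock();
            }
        }
        return false; // timed out: back off instead of waiting forever
    }
}
```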

8. Four ways to create threads

1. Extend the Thread class;

2. Implement the Runnable interface;

3. Implement the Callable interface (wrapped in a FutureTask);

4. Use an anonymous inner class (all four are sketched below).
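
A minimal runnable sketch of all four approaches (class names are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CreateThreads {
    public static void main(String[] args) throws Exception {
        // 1. Extend Thread
        Thread t1 = new MyThread();

        // 2. Implement Runnable
        Thread t2 = new Thread(new MyRunnable());

        // 3. Implement Callable: wrap it in a FutureTask to get a result back
        FutureTask<Integer> task = new FutureTask<>(new MyCallable());
        Thread t3 = new Thread(task);

        // 4. Anonymous inner class (or a lambda, since Runnable is functional)
        Thread t4 = new Thread(new Runnable() {
            @Override public void run() { System.out.println("t4 running"); }
        });

        t1.start(); t2.start(); t3.start(); t4.start();
        System.out.println("callable result = " + task.get()); // blocks until t3 finishes
    }

    static class MyThread extends Thread {
        @Override public void run() { System.out.println("t1 running"); }
    }
    static class MyRunnable implements Runnable {
        @Override public void run() { System.out.println("t2 running"); }
    }
    static class MyCallable implements Callable<Integer> {
        @Override public Integer call() { return 42; }
    }
}
```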

9. What is the difference between run() and start() for a thread? Each thread performs its work through the run() method of a particular Thread object, called the thread body, and is started by calling the start() method of the Thread class.

The start() method starts the thread, while the run() method contains the code the thread executes at run time. run() can be called repeatedly, whereas start() can be called only once.

start() starts a new thread and thus truly achieves multithreading. Calling start() does not wait for the run body to finish; the caller continues executing its own code while the new thread sits in the ready state, not yet running. Once the thread is scheduled, it executes its run() method; when run() returns, the thread terminates and the CPU schedules other threads.

run() is just an ordinary method of the thread object, not a new thread. Calling run() directly is like calling any normal function: the caller must wait for run() to return before executing the code that follows, so there is only one execution path and no thread behavior at all. Therefore, use the start() method, not run(), for multithreaded execution.
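
A tiny sketch of the difference: calling run() directly executes the body in the calling thread, while start() executes it in a new thread.

```java
public class StartVsRun {
    public static void main(String[] args) {
        Thread t = new Thread(
                () -> System.out.println("executed by: " + Thread.currentThread().getName()));
        t.run();    // prints "executed by: main" -- ordinary method call, no new thread

        Thread t2 = new Thread(
                () -> System.out.println("executed by: " + Thread.currentThread().getName()));
        t2.start(); // prints "executed by: Thread-1" -- a genuinely new thread
    }
}
```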

10. Why do we call run() when we call start()? Why can’t we call run() directly?

Creating a Thread with new puts it in the new state. Calling start() starts the thread, moving it to the ready state; it runs when it is allocated a time slice. start() performs the necessary preparation for the thread and then automatically executes the contents of run(), which is true multithreading.

Executing run() directly, however, treats run() as an ordinary method of the current thread; nothing executes in a new thread, so it is not multithreading.

Conclusion: calling start() starts a thread and puts it into the ready state, while calling run() directly is just an ordinary method call executed in the main thread.

11. Thread status

1. New: a thread object has been created.

2. Runnable (ready): after the thread object is created and its start() method is called, the thread is in the ready state, waiting for the thread scheduler to select it and grant it the CPU.

3. Running: a ready thread obtains a time slice and executes its code. Note: the ready state is the only entrance to the running state; to run, a thread must first be ready.

4. Blocked: a running thread temporarily gives up its use of the CPU and stops executing for some reason. It does not return to the running state until it leaves the blocked state, re-enters the ready state, and is scheduled again.

There are three types of blocking:

(1) Waiting block: the running thread executes wait(), and the JVM places the thread in the waiting queue, putting it into the waiting state. (2) Synchronized block: if a thread fails to acquire a synchronized lock (because the lock is held by another thread), the JVM puts the thread into the lock pool, and the thread enters the synchronized-blocked state. (3) Other blocking: a thread also blocks when it calls sleep() or join(), or issues an I/O request. When sleep() times out, when the thread being joined terminates or the join() times out, or when the I/O completes, the thread returns to the ready state.

5. Dead (terminated): the thread's run() method (or main() for the main thread) returns, or run() exits with an uncaught exception; the thread's life cycle ends. A dead thread cannot be resurrected.
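
For reference, java.lang.Thread.State models this slightly differently (NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED); a quick way to observe states:

```java
public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
        });
        System.out.println(t.getState()); // NEW
        t.start();
        Thread.sleep(100);
        System.out.println(t.getState()); // TIMED_WAITING (inside sleep)
        t.join();
        System.out.println(t.getState()); // TERMINATED
    }
}
```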

12. What are the process/job scheduling algorithms?

1. First-come, first-served (FCFS) scheduling.
2. Shortest-job-first scheduling.
3. Highest-response-ratio-first scheduling. Response ratio (the priority) = (waiting time + required service time) / required service time = total response time / required service time. Disadvantage: computing the ratio consumes resources.
4. Time-slice round-robin scheduling.
5. Priority scheduling. Disadvantage: low-priority jobs can wait a long time to be processed.

13. What are the common thread scheduling algorithms? Which does Java use?

A computer typically has a single CPU that can execute only one machine instruction at any moment, and a thread executes instructions only while it holds the CPU. So-called concurrent multithreading means that, viewed macroscopically, threads take turns obtaining the CPU and executing their respective tasks. In the runnable pool, several threads in the ready state wait for the CPU, and one of the Java virtual machine's jobs is thread scheduling: allocating CPU time to multiple threads according to a particular policy.

There are two scheduling models: time-sharing and preemptive scheduling.

1. In the time-sharing model, all threads take turns acquiring the CPU, and the CPU time slices are allocated evenly among the threads.

2. The Java virtual machine uses the preemptive model: the runnable thread with the highest priority occupies the CPU first, and if several runnable threads share the same priority, one of them is chosen at random. A running thread keeps running until it has to give up the CPU.

14. The thread scheduler selects the highest-priority thread to run, but the running thread gives up the CPU if:

(1) the thread calls yield() to give up its use of the CPU;

(2) the thread calls sleep() and enters the sleeping state;

(3) the thread is blocked by an IO operation;

(4) another thread of higher priority becomes runnable;

(5) on a system that supports time slicing, the thread's time slice runs out.

15. What's the difference between sleep() and yield() on a thread?

(1) sleep() gives other threads a chance to run without considering their priority, so lower-priority threads also get a chance; yield() only gives threads of the same or higher priority a chance to run;

(2) a thread enters the (timed) blocked state after calling sleep(), but goes straight back to the ready state after calling yield();

(3) sleep() declares that it throws InterruptedException, while yield() declares no exceptions;

(4) sleep() is more portable than yield() (whose behavior depends on the operating system's CPU scheduling), and yield() is generally not recommended for controlling the execution of concurrent threads.

16. How do I stop a running thread?

There are three ways to terminate a running thread in Java:

1. Use an exit flag so the thread exits normally, i.e. the thread terminates when its run() method completes.

2. Use the stop() method to force termination. This is not recommended: stop(), like suspend() and resume(), is a deprecated method.

3. Use the interrupt() method to interrupt the thread.
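
A common sketch combining the flag and interrupt approaches (names are illustrative): interrupt() sets the interrupted status and also wakes the thread from blocking calls such as sleep().

```java
public class StopDemo implements Runnable {
    @Override
    public void run() {
        // Exit when the interrupted status is set.
        while (!Thread.currentThread().isInterrupted()) {
            try {
                Thread.sleep(200); // simulate blocking work
            } catch (InterruptedException e) {
                // sleep() clears the interrupted status when it throws,
                // so restore it to make the loop condition see the request.
                Thread.currentThread().interrupt();
            }
        }
        System.out.println("worker exiting cleanly");
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new StopDemo());
        worker.start();
        Thread.sleep(500);
        worker.interrupt(); // request termination
        worker.join();
    }
}
```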

17. What is the difference between notify() and notifyAll()?

If a thread calls an object's wait() method, it enters that object's wait pool, and threads in the wait pool do not compete for the object's lock.

notifyAll() wakes up all waiting threads; notify() wakes up only one.

notifyAll() moves every thread in the wait pool into the lock pool, where they then compete for the lock; a thread that wins the competition continues executing, while a thread that loses remains in the lock pool waiting for the lock to be released. notify() wakes only one thread, and which one is up to the virtual machine.

18. The difference between as-if-serial and happens-before rules

1. The as-if-serial semantics guarantee that the execution result of a single-threaded program is not changed; the happens-before relation guarantees that the execution result of a correctly synchronized multithreaded program is not changed.

2. The as-if-serial semantics create the illusion, for programmers writing single-threaded programs, that the program executes in program order. The happens-before relation creates the corresponding illusion for programmers writing correctly synchronized multithreaded programs: such programs appear to execute in the order specified by happens-before.

3. The purpose of both is to increase the parallelism of program execution as much as possible without changing the result of the program.

19. The underlying implementation principle of synchronized?

Under the hood, synchronized is implemented through a monitor object. With synchronized you do not acquire or release the lock manually; it is simple to use, and the lock is released automatically when an exception is thrown, which prevents deadlocks caused by a lock never being released.

Every object has a monitor lock. A synchronized region is locked whenever its monitor is held, and an entering thread attempts to acquire ownership of the monitor. The procedure:

1. If the monitor's entry count is 0, the thread enters the monitor and sets the count to 1; that thread becomes the monitor's owner.

2. If the thread already owns the monitor and is just re-entering, the entry count is incremented by 1.

3. If the monitor is held by another thread, the thread blocks until the entry count drops back to 0, then tries again to acquire ownership of the monitor.
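
Concretely, a synchronized block compiles to monitorenter/monitorexit bytecodes, while a synchronized method is instead marked with the ACC_SYNCHRONIZED flag; both forms can be inspected with javap -c -v:

```java
public class SyncForms {
    private final Object lock = new Object();
    private int count;

    // Synchronized block: javap -c shows monitorenter/monitorexit around the
    // body (plus an extra monitorexit on the exception path).
    public void incrementBlock() {
        synchronized (lock) {
            count++;
        }
    }

    // Synchronized method: no explicit monitor bytecodes; the JVM checks the
    // method's ACC_SYNCHRONIZED flag and locks 'this' implicitly.
    public synchronized void incrementMethod() {
        count++;
    }
}
```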

20. What is spinning?

A lot of synchronized code is very simple and executes very quickly. In that case, blocking a waiting thread may not be worthwhile, because blocking involves a switch between user mode and kernel mode. Since the synchronized code finishes so fast, it can be better not to block the thread waiting for the lock, but to run a busy loop at the boundary of the synchronized region; this is called spinning. If the lock still cannot be acquired after several iterations, blocking may be the better strategy.

Busy loop: the programmer uses a loop to make a thread wait. Unlike the traditional wait(), sleep(), or yield(), which give up control of the CPU, a busy loop does not give up the CPU; it simply runs an empty loop. The point is to preserve the CPU caches: in a multi-core system, a waiting thread may wake up on a different core and have to rebuild its cache. Spinning can avoid rebuilding the cache and save the time that rebuilding would cost.
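
A minimal spin-lock sketch built on AtomicBoolean (illustrative, not production code): waiting threads busy-loop on compareAndSet instead of blocking.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-loop until we flip the flag from false to true.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // JDK 9+ hint to the CPU that we are spinning
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

Usage follows the usual pattern: lock(), then the critical section in a try block, then unlock() in finally.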

21. The role of the volatile keyword

For visibility, Java provides the volatile keyword, which guarantees visibility and forbids instruction reordering. volatile provides a happens-before guarantee so that changes made by one thread are visible to other threads. When a shared variable is declared volatile, every write is flushed to main memory immediately, and a thread that reads the variable sees the new value.

In practice, an important use of volatile is in combination with CAS, which guarantees atomicity; for details see the classes in the java.util.concurrent.atomic package, such as AtomicInteger.

volatile is typically used for single operations (a lone read or write) on a shared variable in multithreaded environments.

22. How do you understand optimistic and pessimistic locking, and how is each implemented?

Pessimistic locking: always assume the worst. Every time you fetch the data you assume someone else will modify it, so you lock it on every access, and any other thread that wants the data blocks until it gets the lock. Traditional relational databases use many such locking mechanisms (row locks, table locks, read locks, write locks), all taken before the operation. The synchronized keyword in Java is another implementation of a pessimistic lock.

Optimistic locking: as the name implies, be optimistic. Every time you fetch the data, assume that no one will modify it, so take no lock; only when updating do you check whether anyone else modified the data in the meantime, e.g. with a version-number mechanism. Optimistic locks suit read-heavy applications and improve throughput. Databases provide mechanisms similar to write_condition; in Java, the atomic variable classes in the java.util.concurrent.atomic package are implemented with CAS, one form of optimistic locking.
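
A hedged sketch of the optimistic CAS pattern, essentially the retry loop that AtomicInteger.getAndIncrement performs internally:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    /** Optimistic update: read, compute, then CAS; retry on conflict. */
    public int increment() {
        for (;;) {
            int current = value.get();          // read without locking
            int next = current + 1;             // compute the new value
            if (value.compareAndSet(current, next)) {
                return next;                    // nobody changed it: success
            }
            // Another thread won the race; loop and try again.
        }
    }
}
```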

23. What does a thread pool do?

Thread pools handle sudden bursts of tasks: a bounded set of long-lived threads serves a large number of operations, which cuts the time otherwise spent creating and destroying threads and so improves efficiency.

24. What are the advantages of thread pools?

1. Reduce resource consumption: reuse existing threads to reduce the overhead of creating and destroying thread objects.

2. Improve response speed: when a task arrives, it can be executed immediately without waiting for a thread to be created. A pool also effectively caps the maximum number of concurrent threads, which improves the use of system resources and avoids excessive contention and congestion.

3. Improve thread manageability: threads are a scarce resource, and creating them without limit not only consumes system resources but also reduces system stability. A thread pool allows uniform allocation, tuning, and monitoring.

4. Additional functions: scheduled execution, periodic execution, single-threaded execution, concurrency control, and so on.

25. ThreadPoolExecutor parameters

corePoolSize: the number of core threads.
maximumPoolSize: the maximum number of threads.
keepAliveTime: how long an idle non-core thread is kept alive.
unit: the time unit of keepAliveTime.
workQueue: the queue that holds tasks waiting to be executed.
threadFactory: the factory used to create new threads.
handler: the rejection policy applied when the pool and queue are saturated.
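
A sketch of constructing a pool by hand with these seven parameters (the concrete values are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4,                                   // corePoolSize
                8,                                   // maximumPoolSize
                60L, TimeUnit.SECONDS,               // keepAliveTime + unit
                new ArrayBlockingQueue<>(100),       // bounded workQueue
                Executors.defaultThreadFactory(),    // threadFactory
                new ThreadPoolExecutor.AbortPolicy() // rejection handler
        );
        pool.execute(() ->
                System.out.println("task running in " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}
```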

26. What are the differences and characteristics of the four thread pool factory methods?

1. newCachedThreadPool

Features: newCachedThreadPool creates a cacheable thread pool. It flexibly reclaims idle threads when the pool is larger than the current workload requires, and adds new threads on demand without bounding the pool size.

Disadvantages: although it can create new threads without limit, it easily exhausts memory outside the heap (native thread stacks), because its maximum size is initialized to Integer.MAX_VALUE and ordinary machines cannot sustain that many threads. Knowing where the problem lies, you can of course construct the pool yourself with a smaller maximum.

Summary: the pool size is unbounded, and if the first task has already completed when a second task is submitted, the thread that ran the first task is reused rather than a new thread being created each time.

2.newFixedThreadPool

Features: creates a fixed-size thread pool that bounds the maximum number of concurrent threads; excess tasks wait in the queue. The fixed size is best chosen according to system resources.

Disadvantages: the number of threads is fixed, but the blocking queue is unbounded. If requests pile up, the queue grows longer and longer, which easily leads to OOM.

Summary: the request backlog must be matched to the allocated pool size, which is best set according to system resources, e.g. Runtime.getRuntime().availableProcessors().

3.newScheduledThreadPool

Features: creates a fixed-size thread pool that supports scheduled and periodic task execution, similar to Java's Timer class.

Disadvantages: tasks scheduled onto the same worker thread execute serially, only one at a time, so a delay or exception in an earlier task affects subsequent ones (for example, if a periodic task throws an exception, its subsequent runs are cancelled).

4.newSingleThreadExecutor

Features: creates a single-threaded pool that uses exactly one worker thread to execute tasks. If the sole worker terminates due to an exception, a new thread replaces it. Each task is guaranteed to complete before the next begins, so all tasks execute in the order imposed by the queue (FIFO, LIFO, priority).

Disadvantages: obviously, it is single-threaded and rather weak for high concurrency.

Summary: it guarantees that all tasks execute in the specified order, and if the unique thread terminates due to an exception, a new thread replaces it.
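
For reference, the four factory calls on java.util.concurrent.Executors side by side:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FactoryDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService cached = Executors.newCachedThreadPool();      // grows on demand, reclaims idle threads
        ExecutorService fixed  = Executors.newFixedThreadPool(4);      // 4 threads + unbounded queue
        ScheduledExecutorService sched = Executors.newScheduledThreadPool(2); // timed/periodic tasks
        ExecutorService single = Executors.newSingleThreadExecutor();  // one worker, strict FIFO

        sched.scheduleAtFixedRate(() -> System.out.println("tick"), 0, 200, TimeUnit.MILLISECONDS);
        Thread.sleep(700); // let the periodic task fire a few times

        cached.shutdown(); fixed.shutdown(); single.shutdown();
        sched.shutdown(); // by default, pending periodic runs are cancelled on shutdown
    }
}
```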

27. What are the states of a thread pool?

RUNNING: the normal state; the pool accepts new tasks and processes tasks in the waiting queue.

SHUTDOWN: the pool accepts no new task submissions but continues to process tasks already in the waiting queue.

STOP: the pool accepts no new tasks, stops processing tasks in the waiting queue, and interrupts the threads executing tasks.

TIDYING: all tasks have terminated and the worker count is 0; on the transition to TIDYING the pool executes the terminated() hook method.

TERMINATED: the state the pool enters after terminated() completes.

28. What is the difference between the submit() and execute() methods in a thread pool?

Similarities: both hand a task to a pool thread for execution.

Differences: accepted parameters: execute() can only run Runnable tasks, while submit() accepts both Runnable and Callable tasks. Return value: submit() returns a Future holding the computed result, while execute() returns nothing. Exception handling: submit() makes exception handling easier, since exceptions thrown by the task are captured in the Future.
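
A small sketch of the difference:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitVsExecute {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        pool.execute(() -> System.out.println("execute: fire and forget"));

        Future<Integer> future = pool.submit(() -> 21 * 2); // Callable with a result
        // get() blocks for the result; task exceptions arrive wrapped in ExecutionException
        System.out.println("submit result = " + future.get());

        pool.shutdown();
    }
}
```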

29. What are the ThreadPoolExecutor rejection policies?

ThreadPoolExecutor.AbortPolicy (the default): throws RejectedExecutionException to reject the new task.

ThreadPoolExecutor.CallerRunsPolicy: runs the task on the submitting thread itself, so no task request is dropped. This slows the rate at which new tasks are submitted and affects overall performance, but it gives the queue time to drain. Choose this policy if your application can tolerate the delay and cannot afford to drop any task request.

ThreadPoolExecutor.DiscardPolicy: silently discards the new task without handling it.

ThreadPoolExecutor.DiscardOldestPolicy: discards the oldest pending task (the head of the queue) and then retries the submission.

30. Thread pool processing flow

Submit a task to a thread pool. The thread pool process is as follows:

1. Determine whether all core threads in the pool are busy executing tasks. If not, create a new worker thread to execute the task. If all core threads are busy, move to the next step.

2. Determine whether the work queue is full. If not, store the newly submitted task in the work queue. If the queue is full, move to the next step.

3. Determine whether all threads in the pool (up to the maximum) are busy. If not, create a new worker thread to execute the task. If the pool is full, hand the task to the saturation (rejection) policy.

31. How do you set the number of threads in a thread pool?

The guiding idea is to distinguish IO-intensive from CPU-intensive workloads; in practice, rely on load testing.

For IO-intensive tasks, CPU computation takes only a small share of the time and most of the time is spent on IO, so more threads can be used, e.g. 2 × the number of CPU cores.

For CPU-intensive tasks, which need a lot of CPU processing time, extra threads just waste time on context switching, so the pool size is generally set roughly equal to the number of CPU cores.

In practice, on one project the size was found by load testing: an 8-core CPU ended up with 50 core threads.
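
A sketch of these rules of thumb in code (the multipliers are the heuristics above, not hard rules; always validate with load tests):

```java
public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        int cpuBoundSize = cores;     // CPU-bound: about one thread per core
        int ioBoundSize  = cores * 2; // IO-bound: threads mostly wait, so use more

        System.out.println("cores=" + cores
                + " cpuBound=" + cpuBoundSize
                + " ioBound=" + ioBoundSize);
    }
}
```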

32. What is the difference between SynchronizedMap and ConcurrentHashMap?

SynchronizedMap locks the entire table for every operation to guarantee thread safety, so only one thread can access the map at a time.

ConcurrentHashMap uses lock striping (segmented locking) to preserve performance under multiple threads.

ConcurrentHashMap locks one bucket at a time. In the classic (pre-JDK 8) implementation it divides the hash table into 16 segments by default, and common operations such as get, put, and remove lock only the segment currently in use.

As a result, where previously only one writer could enter, 16 writers can now enter at the same time, and the improvement in concurrency is noticeable.

ConcurrentHashMap also iterates differently. With this style of iteration, an iterator no longer throws ConcurrentModificationException when the collection changes after the iterator is created. Instead, modifications build new data without affecting the old; when the modification finishes, the head pointer is swapped to the new data, so the iterating thread keeps using the old data while writer threads modify concurrently.

33. Blocking queues and non-blocking queues

ArrayDeque (non-blocking, array-based double-ended queue): the JDK's deque implementation backed by an array. It does not allow null values, and it is an excellent choice for queues, deques, and stacks, performing better than LinkedList.

PriorityQueue (non-blocking queue): an unbounded priority queue. Its elements are ordered by their natural ordering, or by a Comparator supplied at construction time, depending on which constructor is used. The queue allows neither null elements nor objects that cannot be compared.

ConcurrentLinkedQueue (non-blocking concurrent queue based on linked nodes): suitable for high-concurrency scenarios, it achieves high performance through lock-free (CAS-based) operations and usually outperforms the BlockingQueue implementations. It is an unbounded, thread-safe, first-in-first-out queue that does not allow null elements.

Blocking queue:

DelayQueue (blocking queue ordered by expiry time): an unbounded BlockingQueue whose elements must implement the Delayed interface. When a producer thread calls a method such as put() to add an element, the compareTo() method of the Delayed interface is triggered to sort, so the elements in the queue are ordered by expiration time rather than by insertion order. The element at the head of the queue expires first; later elements expire later.

ArrayBlockingQueue (array-based bounded blocking queue): a bounded blocking queue implemented internally as an array. Bounded means its capacity is finite: it must be specified at initialization and cannot be changed afterwards. ArrayBlockingQueue stores data first-in, first-out.

LinkedBlockingQueue (linked-list-based blocking queue): its size is optional. If a size is specified at initialization, it is bounded; otherwise it is unbounded, with a default capacity of Integer.MAX_VALUE. Its internal implementation is a linked list.

LinkedBlockingDeque (linked-list-based double-ended blocking queue): a two-way blocking queue built on a linked structure that can insert and remove elements at both ends. Having a second entry point into the queue roughly halves contention when many threads enqueue at the same time. Compared with other blocking queues, LinkedBlockingDeque adds addFirst, addLast, peekFirst, peekLast, and so on: methods ending in First insert, retrieve, or remove the first element of the deque, and methods ending in Last do the same for the last element. Its size can be set at initialization to prevent excessive growth; if unspecified, the default is Integer.MAX_VALUE.

PriorityBlockingQueue (priority blocking queue): an unbounded queue with no capacity limit; as many elements can be added as memory allows. It is also a priority queue, ordering elements via the Comparable interface that inserted objects must implement (or a Comparator passed to the constructor).

SynchronousQueue: a queue with no internal capacity. A thread inserting an element blocks until another thread takes that element from the queue; likewise, a thread trying to take an element blocks until another thread inserts one.
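
A producer/consumer sketch with ArrayBlockingQueue (put() blocks when the queue is full, take() blocks when it is empty):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // bounded: capacity 2

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i); // blocks while the queue is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println("consumed " + queue.take()); // blocks while empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```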

34. ThreadLocal: implementation principle, and why it can leak memory

ThreadLocal is an internal data store class that can store data in a specified thread. Once the data is stored, only the specified thread can retrieve the stored data. Other threads cannot retrieve the data.

(1) ThreadLocal is only an entry point for variable access;

(2) Each Thread object has a ThreadLocalMap object that holds a reference to the object.

(3) ThreadLocalMap takes the current threadLocal object as the key and the real storage object as the value. The get() method finds the replica object bound to the current thread through the threadLocal instance.

A ThreadLocal stores its entries in a Map that is attached to the current thread, so the Map belongs to that thread alone.

Data isolation in many scenarios, such as cookies and sessions, is implemented through ThreadLocal.

Spring uses ThreadLocal to ensure that the database operations within a single thread use the same connection. At the same time, the business layer does not need to be aware of or manage connection objects when using transactions, and switching, suspending, and resuming among multiple transaction configurations can be managed cleverly through propagation levels.

The underlying storage of ThreadLocalMap is an array (collisions are resolved by linear probing) rather than a HashMap-style structure.

Memory leaks

When a value is saved, the ThreadLocal uses itself as the key in the ThreadLocalMap. Normally both key and value would be strongly referenced from outside, but the key is deliberately designed as a WeakReference.

A ThreadLocal with no strong external references is reclaimed during GC, and if the thread that used the ThreadLocal keeps running, the value in the Entry object may never be reclaimed, resulting in a memory leak.

How to solve it?

Call remove() when you have finished using the value, to clear the entry.
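
A typical safe-usage sketch (names are illustrative): always clear the value in a finally block, especially in pooled threads that never die.

```java
public class ThreadLocalDemo {
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    public static void handleRequest(String input) {
        try {
            StringBuilder sb = BUFFER.get(); // per-thread copy, no synchronization needed
            sb.append(input);
            System.out.println(Thread.currentThread().getName() + " -> " + sb);
        } finally {
            BUFFER.remove(); // prevent leaks: pooled threads outlive the request
        }
    }

    public static void main(String[] args) {
        new Thread(() -> handleRequest("a")).start();
        new Thread(() -> handleRequest("b")).start();
    }
}
```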

35. ReentrantLock

36. What is AQS?

AQS is AbstractQueuedSynchronizer, the framework on which synchronizers such as ReentrantLock, Semaphore, and CountDownLatch are built.

37. CAS

1. The ABA problem. When CAS operates on a value, it checks whether the memory value has changed and updates it only if it has not. But if the value was A, then became B, and then became A again, the CAS check sees no change even though the value did change. The solution is to attach a version number to the variable and increment it on every update, so the history becomes "1A-2B-3A" instead of "A-B-A". Since JDK 1.5, the AtomicStampedReference class solves the ABA problem: its compareAndSet() first checks whether the current reference and current stamp equal the expected reference and expected stamp, and only if both match does it atomically set the reference and the stamp to the given new values.

2. Long spin times are costly. If a CAS fails for a long time, it keeps spinning, which imposes a large CPU overhead.

3. Atomicity is guaranteed for only one shared variable. CAS guarantees an atomic operation on a single shared variable, but cannot guarantee atomicity across several shared variables. The JDK provides the AtomicReference class to guarantee atomicity for a reference object, so multiple variables can be placed inside one object and updated with a single CAS.
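
A sketch of AtomicStampedReference detecting an ABA change via the stamp (the values are illustrative):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int[] stampHolder = new int[1];
        Integer value = ref.get(stampHolder); // read value and stamp together
        int stamp = stampHolder[0];

        // Another thread performs A -> B -> A, bumping the stamp each time.
        ref.compareAndSet(100, 200, 0, 1);
        ref.compareAndSet(200, 100, 1, 2);

        // A plain value comparison would succeed here, but the stamp exposes ABA.
        boolean updated = ref.compareAndSet(value, 300, stamp, stamp + 1);
        System.out.println("updated = " + updated); // false: stamp is now 2, not 0
    }
}
```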

38. AQS queue implementation principles: which design pattern does AQS use? How are fair and unfair locks, exclusive and shared locks, and read-write locks implemented? Why do unfair locks perform better than fair locks?

39. Write a program in which two threads alternately print the numbers 1 to 100; multiple implementations are expected. One wait/notify implementation:

```java
public class Solution2 {
    private static final Object lock = new Object();
    private volatile int index = 1;
    private volatile boolean aHasPrint = false;

    class A implements Runnable {
        @Override
        public void run() {
            for (int i = 0; i < 50; i++) {
                synchronized (lock) {
                    while (aHasPrint) { // wait until it is A's turn
                        try {
                            lock.wait();
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                    System.out.println("A: " + index);
                    index++;
                    aHasPrint = true; // A has printed; now it is B's turn
                    lock.notifyAll();
                }
            }
        }
    }

    class B implements Runnable {
        @Override
        public void run() {
            for (int i = 0; i < 50; i++) {
                synchronized (lock) {
                    while (!aHasPrint) { // block until A has printed
                        try {
                            lock.wait();
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                    System.out.println("B: " + index);
                    index++;
                    aHasPrint = false; // B has printed: a new round begins with A
                    lock.notifyAll();
                }
            }
        }
    }

    public static void main(String[] args) {
        Solution2 solution2 = new Solution2();
        Thread threadA = new Thread(solution2.new A());
        Thread threadB = new Thread(solution2.new B());
        threadA.start();
        threadB.start();
    }
}
```