1. Talk about the differences between processes, threads, and coroutines

In short, a process is the basic unit of resource allocation during program execution: a program has at least one process, and a process has at least one thread. Each process has its own separate memory space, while the threads within a process share that process's memory, which makes switching between threads cheaper and more efficient than switching between processes. A thread is an entity within a process and is the basic unit of CPU scheduling and dispatch; it is a unit smaller than a process that can run independently, and multiple threads in the same process can execute concurrently. A coroutine is lighter still: it is scheduled in user space by the program itself rather than by the operating system kernel, so switching between coroutines is even cheaper than switching between threads.

2. Do you know daemons? How is it different from a non-daemon thread

When a program finishes running, the JVM waits for all non-daemon threads to complete before shutting down, but it does not wait for daemon threads. The most typical example of a daemon thread is the GC thread.

3. What is multithreaded context switching?

Context switching in multithreading is the process of switching CPU control from one thread that is already running to another thread that is ready and waiting for CPU execution.

4. What are the two ways to create a thread? What’s the difference?

By implementing the java.lang.Runnable interface or by extending the java.lang.Thread class. Implementing the Runnable interface is generally preferable to extending Thread, for two reasons (a short sketch of both approaches follows the list):

  • Java does not support multiple inheritance. Extending Thread therefore means the subclass cannot extend any other class, whereas a class that implements the Runnable interface can still extend another class.

  • A class may only need to be executable as a task; inheriting the entire Thread class just for that is unnecessarily heavyweight.
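
A minimal sketch of both approaches (the class name is illustrative):

```java
public class CreateThreadDemo {
    public static void main(String[] args) {
        // 1. Extend Thread and override run()
        Thread byExtending = new Thread() {
            @Override
            public void run() {
                System.out.println("running in " + Thread.currentThread().getName());
            }
        };
        byExtending.start();

        // 2. Implement Runnable (here as a lambda) and hand it to a Thread
        Runnable task = () -> System.out.println("running in " + Thread.currentThread().getName());
        new Thread(task).start();
    }
}
```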

5. What is the difference between the start() and run() methods in Thread?

The start() method starts the newly created thread, and run() is then invoked inside that new thread; this is not the same as calling run() directly. If you call run() directly, it executes in the calling thread like an ordinary method call and no new thread is started, whereas start() creates and starts a new thread.
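
A small sketch of the difference (names are illustrative):

```java
public class StartVsRunDemo {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println("executed by " + Thread.currentThread().getName());

        Thread t = new Thread(task, "worker");
        t.run();   // runs task in the main thread; no new thread is started
        t.start(); // starts the "worker" thread, which then calls run()
    }
}
```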

6. How do I check whether a thread holds an object’s monitor?

The Thread class provides a holdsLock(Object obj) method that returns true if and only if the monitor of obj is held by the current thread. Note that this is a static method, so “a thread” here means the current thread.
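
A minimal example of holdsLock() (names are illustrative):

```java
public class HoldsLockDemo {
    private static final Object monitor = new Object();

    public static void main(String[] args) {
        System.out.println(Thread.holdsLock(monitor)); // false: current thread does not own the monitor
        synchronized (monitor) {
            System.out.println(Thread.holdsLock(monitor)); // true: current thread owns the monitor
        }
    }
}
```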

7. Difference between Runnable and Callable

The run() method in the Runnable interface returns void; all it does is execute the code in run(). The call() method in the Callable interface returns a value of a generic type and can be used together with Future and FutureTask to retrieve the result of asynchronous execution. This is actually a very useful feature, because one of the big reasons multithreading is more difficult and complex than single threading is that multithreading is unpredictable: has a thread executed? How long did it run? Has the data we expect already been assigned by the time the thread runs? We cannot know; all we can do is wait for the task to finish. With Callable plus Future/FutureTask, however, we can easily obtain the result of a multithreaded computation, and cancel the task if waiting for the result takes too long.
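
A minimal sketch of Callable with FutureTask (the task and values are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        Callable<Integer> task = () -> {
            Thread.sleep(500);              // simulate a slow computation
            return 42;
        };
        FutureTask<Integer> future = new FutureTask<>(task);
        new Thread(future).start();         // FutureTask is also a Runnable
        Integer result = future.get();      // blocks until the result is available
        System.out.println("result = " + result);
        // future.get(timeout, unit) waits with a timeout, and
        // future.cancel(true) can abandon a task that takes too long.
    }
}
```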

8. What causes a thread to block?

Blocking means suspending the execution of a thread until a certain condition occurs (for example, a resource becomes ready); anyone who has studied operating systems will be familiar with it. Java provides a number of methods that support blocking; let’s look at each one.

  • sleep(): lets you specify a period of time in milliseconds during which the thread blocks and gets no CPU time; after that period, the thread returns to the runnable state. Typically, sleep() is used while waiting for a resource to become ready: after a test finds that the condition is not met, the thread blocks for a while and then retests, until the condition is met.

  • suspend() and resume(): the two are used together. suspend() puts a thread into a blocked state, and it does not resume automatically; the corresponding resume() must be called before the thread can return to the runnable state. Typically, suspend() and resume() are used while waiting for a result produced by another thread: after a test finds that the result has not been produced yet, the thread is suspended, and resume() is called once another thread has produced the result.

  • yield(): causes the current thread to give up the CPU time currently allocated to it, but it does not block the thread; the thread remains runnable and may be allocated CPU time again at any time. Calling yield() is equivalent to the scheduler deciding that the thread has run long enough and switching to another thread.

  • wait() and notify(): wait() blocks the thread, either for a specified number of milliseconds or, with no argument, indefinitely. With a timeout, the thread returns to the runnable state either when notify() is called or when the time elapses; without a timeout, notify() must be called to wake it.

9. The difference between wait()/notify() and suspend()/resume()

At first glance, wait() and notify() look no different from suspend() and resume(), but in fact they are very different. The crux is that suspend() and the other methods described above do not release the lock the thread holds (if any) while blocking, whereas wait() does. This core difference leads to a series of more detailed differences.

First, all of the methods described earlier belong to the Thread class, but this pair belongs to the Object class, which means every object has these methods. This may seem strange at first, but it is actually quite natural: since blocking with this pair releases a lock, and locks are held on objects, calling wait() on any object causes the current thread to block and release that object’s lock. Calling notify() on the object unblocks one thread, chosen arbitrarily from those that blocked by calling wait() on that object (although the awakened thread does not actually run until it reacquires the lock).

Second, the methods described earlier can be called from anywhere, but this pair must be called from within a synchronized method or block, for the simple reason that only inside a synchronized method or block does the current thread own the lock and is therefore able to release it. Correspondingly, the lock on the object on which this pair is called must be owned by the current thread for there to be a lock to release. The calls must therefore be placed inside a synchronized method or block whose lock object is the very object on which the pair of methods is called. If this condition is not met, the program will still compile, but an IllegalMonitorStateException will be thrown at run time.

Because of these properties, wait() and notify() are frequently used together with the synchronized keyword. Comparing them with the inter-process communication mechanisms of operating systems reveals their similarity: synchronized methods or blocks provide functionality similar to operating-system primitives, in that they execute without interference from other threads, and wait()/notify() are equivalent to block and wakeup primitives (both are called inside synchronized code). Their combination lets us implement the sophisticated inter-process communication algorithms found in operating systems (such as semaphore algorithms) and solve various complex inter-thread communication problems.

Two final points about wait() and notify():

First, the thread unblocked by calling notify() is selected arbitrarily from the threads blocked by calling wait() on this object. There is no way to predict which thread will be chosen, so take special care when programming to avoid problems caused by this uncertainty.

Second, there is also a notifyAll() method that performs a similar function. The only difference is that notifyAll() unblocks all of the threads blocked by calling wait() on the object. Of course, only the thread that then acquires the lock can actually continue executing.

You can’t talk about blocking without talking about deadlocks; a quick analysis shows that a deadlock can arise from calls to suspend() as well as from calls to wait() with no timeout specified. Unfortunately, Java does not support deadlock avoidance at the language level, so we must be careful to avoid deadlocks when programming.

We’ve looked at the various methods for implementing thread blocking in Java, focusing on wait() and notify(): they are the most powerful and flexible to use, but that also makes them less efficient and more error-prone. In practice, we should use the various methods flexibly to best achieve our goals.

Conditions that cause deadlocks

1. Mutual exclusion: a resource can be used by only one process at a time.
2. Hold and wait: a process blocked while requesting resources holds on to the resources it has already acquired.
3. No preemption: resources already acquired by a process cannot be forcibly taken away before the process has finished using them.
4. Circular wait: a circular chain of processes exists in which each process waits for a resource held by the next.
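
The same four conditions apply to threads and locks. For illustration, a minimal Java sketch in which two threads satisfy all of them and therefore deadlock (lock names are illustrative):

```java
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {            // holds A, then requests B
                sleep(100);
                synchronized (lockB) { System.out.println("t1 done"); }
            }
        }).start();
        new Thread(() -> {
            synchronized (lockB) {            // holds B, then requests A: circular wait
                sleep(100);
                synchronized (lockA) { System.out.println("t2 done"); }
            }
        }).start();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}
```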

12. Why must wait() and notify()/notifyAll() be called in synchronized blocks?

This is mandated by the JDK: wait() and notify()/notifyAll() must acquire the object’s lock before they can be called.

What is the difference between wait() and notify()/notifyAll() in giving up the object monitor?

The difference is that wait() releases the object monitor immediately, while notify()/notifyAll() gives up the object monitor only after the rest of the thread’s synchronized code has finished executing.

What is the difference between wait() and sleep()

Both of these have been described in detail above, and here is a summary:

  • sleep() comes from Thread, and wait() comes from Object. A thread does not release the object lock while in sleep(); a thread calling wait() releases the object lock.

  • sleep() does not give up system resources while sleeping; wait() gives them up so that other threads can occupy the CPU.

  • sleep(milliseconds) requires a sleep duration to be specified, and the thread wakes up automatically when it elapses; wait() is used together with notify() or notifyAll().

14. Why are wait, notify, and notifyAll not defined in Thread?

One obvious reason is that Java provides locks at the object level rather than the thread level. Every object has a lock, which is acquired by a thread. If a thread needs to wait for some lock, it makes sense to call wait() on that object. If wait() were defined in the Thread class, it would not be obvious which lock the thread is waiting on. Simply put, because wait, notify, and notifyAll are lock-level operations, they are defined in the Object class, because locks belong to objects.

15. How do I wake up a blocked thread?

If a thread is blocked because it called wait(), sleep(), or join(), you can interrupt it; it is woken by an InterruptedException being thrown in it. If the thread is blocked on IO, there is little you can do, because IO is implemented by the operating system and Java code has no direct access to the operating system’s handling of it.
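
A minimal sketch of interrupting a sleeping thread (names and timings are illustrative):

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(60_000);           // blocked; eligible for interruption
            } catch (InterruptedException e) {
                System.out.println("woken up by interrupt");
            }
        });
        sleeper.start();
        Thread.sleep(1000);
        sleeper.interrupt();                    // causes InterruptedException in the sleeping thread
    }
}
```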


17. The difference between synchronized and ReentrantLock

Synchronized is a keyword like if, else, for, and while, whereas ReentrantLock is a class; this is the essential difference between the two. Because ReentrantLock is a class, it offers more flexible features than synchronized: it can be inherited, can have methods, and can have various class variables. ReentrantLock is more extensible than synchronized in several respects: (1) ReentrantLock can set a waiting time for acquiring the lock, which helps avoid deadlock; (2) ReentrantLock can provide information about the state of the lock; (3) ReentrantLock can flexibly implement multi-way notification. The underlying mechanisms also differ: ReentrantLock ultimately blocks threads with Unsafe’s park() method, whereas synchronized operates on the mark word in the object header.
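
As an illustration of point (1) and (2), a sketch of acquiring a ReentrantLock with a bounded wait (names are illustrative; synchronized offers no equivalent):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    static void doWork() throws InterruptedException {
        // Acquisition can be bounded in time, which is one way to avoid waiting forever.
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                System.out.println("hold count: " + lock.getHoldCount()); // lock state can be inspected
            } finally {
                lock.unlock();                                            // always release in finally
            }
        } else {
            System.out.println("could not acquire the lock within 1s");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        doWork();
    }
}
```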

18. What is FutureTask

This was actually mentioned earlier: FutureTask represents an asynchronous computation task. A FutureTask can be constructed from a concrete implementation of Callable, and it can then be used to wait for the result of the asynchronous operation, check whether the task has completed, and cancel the task. Of course, since FutureTask also implements the Runnable interface, it can be submitted to a thread pool as well.

19. What if a thread has a runtime exception?

If the exception is not caught, the thread stops executing. Another important point is that if the thread held a monitor on some object, that object’s monitor is released immediately.

20. What kinds of locks are available in Java?

  • Spin lock: spin locking is enabled by default since JDK 1.6. It is based on the observation that the locked state of shared data usually lasts only a very short time, and suspending and resuming a thread for such a short period is wasteful. So the thread requesting the lock instead waits a little while without giving up the processor, to see whether the thread holding the lock releases it quickly. To make the thread wait, it performs a busy loop, which is the spin operation. Since JDK 6, adaptive spinning has been introduced: the wait time is no longer fixed but is determined by the previous spin time on the same lock and the state of the lock’s owner.

  • Biased lock: a lock optimization introduced in JDK 1.6 that eliminates synchronization primitives for data in uncontended situations, further improving program performance. A biased lock is biased toward the first thread that acquires it: if the lock is never acquired by another thread during subsequent execution, the thread holding the biased lock never needs to synchronize on it again. Biased locking improves the performance of programs that have synchronization but no contention; in other words, it is not always beneficial. If most of the locks in a program are accessed by multiple different threads, biased locking is redundant. Whether to use biased locking should be decided based on analysis of the specific situation.

  • Lightweight lock: to reduce the performance cost of acquiring and releasing locks, “biased lock” and “lightweight lock” were introduced, so in Java SE 1.6 there are four lock states: unlocked, biased lock, lightweight lock, and heavyweight lock, which escalate gradually as contention increases. Locks can be upgraded but not downgraded, meaning that a biased lock, once upgraded to a lightweight lock, cannot be downgraded again.

21. How do I share data between two threads

This is done by sharing objects between the threads, then blocking and waking them with wait/notify/notifyAll or await/signal/signalAll. BlockingQueue, for example, is designed for sharing data between threads.

22. How to use wait() correctly? Use if or while?

The wait() method should be called inside a loop, because the condition may still not hold by the time the thread gets the CPU and resumes execution, so it is better to re-check the condition in a loop before proceeding. Here is the standard idiom for using wait and notify:

```java
synchronized (obj) {
    while (<condition does not hold>)
        obj.wait();   // releases the lock, and reacquires it on wakeup
    ...               // perform action appropriate to condition
}
```

23. What is a thread-local variable (ThreadLocal)?

Thread-local variables are variables confined to a thread: they belong to the thread itself and are not shared among multiple threads. Java provides the ThreadLocal class to support thread-local variables as one way of achieving thread safety. However, be careful when using thread-local variables in a managed environment such as a web server, where the lifetime of a worker thread is longer than that of any application variable. If thread-local variables are not removed after the work is done, the Java application risks memory leaks.

24. What does ThreadLocal do?

Simply put, ThreadLocal trades space for time: each Thread maintains its own ThreadLocal.ThreadLocalMap, so data is isolated per thread rather than shared, and there is naturally no thread-safety problem.
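
A minimal ThreadLocal sketch (names are illustrative): each thread sees its own copy, and remove() is called afterwards to avoid the leaks mentioned above in managed environments.

```java
public class ThreadLocalDemo {
    // each thread gets its own value; the initial value is computed per thread
    private static final ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            counter.set(counter.get() + 1);
            System.out.println(Thread.currentThread().getName() + " -> " + counter.get());
            counter.remove();   // release the value to avoid leaks in pooled/managed threads
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
    }
}
```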

25. What is the role of the producer-consumer model?

(1) By balancing the producer’s production capacity against the consumer’s consumption capacity, it improves the running efficiency of the whole system; this is the most important function of the producer-consumer model. (2) Decoupling: this is an additional benefit of the model. Decoupling means the connection between producers and consumers is weakened; the less they are coupled, the more independently each side can change without being constrained by the other.

26. Write a producer-consumer queue

This can be done with a blocking queue or with wait/notify. Using a blocking queue:

```java
import java.util.Random;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Producer.java
public class Producer implements Runnable {
    private final BlockingQueue<Integer> queue;

    public Producer(BlockingQueue<Integer> q) {
        this.queue = q;
    }

    @Override
    public void run() {
        try {
            while (true) {
                Thread.sleep(1000);
                queue.put(produce());
            }
        } catch (InterruptedException e) {
        }
    }

    private int produce() {
        int n = new Random().nextInt(10000);
        System.out.println("Thread:" + Thread.currentThread().getId() + " produce:" + n);
        return n;
    }
}

// Consumer.java
public class Consumer implements Runnable {
    private final BlockingQueue<Integer> queue;

    public Consumer(BlockingQueue<Integer> q) {
        this.queue = q;
    }

    @Override
    public void run() {
        while (true) {
            try {
                Thread.sleep(2000);
                consume(queue.take());
            } catch (InterruptedException e) {
            }
        }
    }

    private void consume(Integer n) {
        System.out.println("Thread:" + Thread.currentThread().getId() + " consume:" + n);
    }

    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<Integer>(100);
        Producer p = new Producer(queue);
        Consumer c1 = new Consumer(queue);
        Consumer c2 = new Consumer(queue);
        new Thread(p).start();
        new Thread(c1).start();
        new Thread(c2).start();
    }
}
```

Use wait-notify to do this

This approach is the most classic one; a minimal sketch is given below.
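
A minimal wait/notify version (class and field names are illustrative): the shared buffer is guarded by the object’s monitor, the condition is re-checked in a while loop, and notifyAll() wakes the waiting side.

```java
import java.util.LinkedList;
import java.util.Queue;

public class WaitNotifyQueue {
    private final Queue<Integer> buffer = new LinkedList<>();
    private final int capacity = 10;

    public synchronized void put(int value) throws InterruptedException {
        while (buffer.size() == capacity) {   // wait in a loop; re-check the condition on wakeup
            wait();
        }
        buffer.add(value);
        notifyAll();                          // wake up consumers waiting on an empty buffer
    }

    public synchronized int take() throws InterruptedException {
        while (buffer.isEmpty()) {
            wait();
        }
        int value = buffer.remove();
        notifyAll();                          // wake up producers waiting on a full buffer
        return value;
    }
}
```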

27. What happens if the thread pool queue is full when you submit a task

If you are using an unbounded queue such as LinkedBlockingQueue, it does not matter: tasks simply keep being added to the blocking queue to await execution, because LinkedBlockingQueue can be thought of as a nearly infinite queue that can hold tasks indefinitely. If you are using a bounded queue such as ArrayBlockingQueue, tasks are first added to the ArrayBlockingQueue; when it is full, the pool grows up to its maximum size, and once both the queue and the pool are full, the RejectedExecutionHandler policy handles the overflow. The default policy is AbortPolicy.
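
A sketch of a pool with a bounded queue and the default AbortPolicy (sizes and names are illustrative): once the queue and the pool are both full, further submissions are rejected.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                                   // core and maximum pool size
                60, TimeUnit.SECONDS,                   // keep-alive for idle non-core threads
                new ArrayBlockingQueue<>(2),            // bounded queue: at most 2 waiting tasks
                new ThreadPoolExecutor.AbortPolicy());  // default policy: throw on rejection

        for (int i = 0; i < 10; i++) {
            final int id = i;
            try {
                pool.execute(() -> {
                    try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
                    System.out.println("task " + id + " done");
                });
            } catch (RejectedExecutionException e) {    // thrown once queue and pool are full
                System.out.println("task " + id + " rejected");
            }
        }
        pool.shutdown();
    }
}
```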

28. Why use thread pools

It avoids frequent creation and destruction of threads by reusing thread objects. In addition, a thread pool lets you flexibly control the degree of concurrency according to your project’s needs.

29. What is the thread scheduling algorithm used in Java

Preemptive. After a thread exhausts its CPU time slice, the operating system computes a total priority based on thread priority, thread starvation, and other factors, and allocates the next time slice to a particular thread.

30. What does Thread.sleep(0) do?

Because of Java’s preemptive thread scheduling, it can happen that one thread frequently obtains control of the CPU. To give lower-priority threads a chance to run, Thread.sleep(0) can be called to manually trigger the operating system’s reallocation of time slices. It is another way of balancing control of the CPU.

31. What is CAS

CAS is short for Compare And Swap. Suppose there are three operands: the memory value V, the old expected value A, and the new value B. The memory value is changed to B and true is returned if and only if the expected value A equals the memory value V; otherwise nothing is done and false is returned. Of course, the variable operated on by CAS must be volatile so that the latest value in main memory is read each time; otherwise a thread’s old expected value A would stay stale for that thread, and as long as the CAS keeps failing it would never succeed.
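
A minimal illustration of the same compare-and-swap semantics using AtomicInteger (values are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(5);

        // succeeds only if the current value (V) equals the expected value (A)
        boolean swapped = value.compareAndSet(5, 8);
        System.out.println(swapped + ", value = " + value.get());   // true, value = 8

        // fails because the expected value no longer matches
        swapped = value.compareAndSet(5, 10);
        System.out.println(swapped + ", value = " + value.get());   // false, value = 8
    }
}
```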

32. What are optimistic locking and pessimistic locking?

Optimistic locking: an optimistic lock assumes that contention will not always occur, so it does not need to hold a lock; it attempts to modify the variable in memory using compare-and-swap as a single atomic operation. Failure means there was a conflict, and the operation should be retried.

Pessimistic locks: Pessimistic locks assume that competition always occurs and therefore hold an exclusive lock every time they operate on a resource, just as synchronized does.

33. What is the concurrency level of ConcurrentHashMap?

The concurrency level of ConcurrentHashMap is the number of segments, 16 by default, which means that up to 16 threads can operate on a ConcurrentHashMap at the same time. This is ConcurrentHashMap’s biggest advantage over Hashtable: with a Hashtable, can two threads ever access its data at the same time? They cannot.

34. How ConcurrentHashMap works

ConcurrentHashMap is implemented differently in JDK 1.6 than in JDK 1.8.

ConcurrentHashMap is thread-safe, but it is implemented differently from Hashtable. Hashtable locks the entire hash table structure and is blocking: while one thread holds the lock, other threads must block and wait for it to be released. ConcurrentHashMap uses lock splitting: instead of locking the whole hash table, it locks only a local portion, so a local lock held by one thread does not affect other threads’ access to other parts of the table. A ConcurrentHashMap is made up of multiple Segments, each locked independently.

In JDK 1.8, ConcurrentHashMap no longer uses Segment separation; it synchronizes with an optimistic CAS-based approach instead. The underlying structure is an array of buckets holding linked lists, where a list is converted to a red-black tree once it grows long.

37. How are CyclicBarrier and CountDownLatch different?

These two classes are very similar: both live under java.util.concurrent and both can be used to indicate that code has reached a certain point. The differences are (see the sketch after this list):

  • With a CyclicBarrier, once a thread reaches the barrier point it stops and does not continue until all threads have reached that point. A CountDownLatch is different: when a thread reaches the point it simply decrements the count by 1 and keeps running.

  • A CyclicBarrier can trigger only one task, while a CountDownLatch can trigger multiple tasks.

  • A CyclicBarrier is reusable, while a CountDownLatch is not: once the count reaches 0, the CountDownLatch can no longer be used.
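
A minimal sketch of both classes (thread counts and names are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;

public class LatchVsBarrierDemo {
    public static void main(String[] args) throws InterruptedException {
        // CountDownLatch: the main thread waits until 3 workers have counted down; the latch cannot be reset
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " finished");
                latch.countDown();             // decrement and keep running
            }).start();
        }
        latch.await();
        System.out.println("all workers finished");

        // CyclicBarrier: each thread blocks at await() until all 3 arrive, then they continue together;
        // the barrier resets itself and can be reused
        CyclicBarrier barrier = new CyclicBarrier(3, () -> System.out.println("everyone arrived"));
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try {
                    barrier.await();
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```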

38. Is the ++ operator thread-safe in Java?

No, it is not a thread-safe operation. It involves multiple instructions, such as reading the variable’s value, incrementing it, and then writing it back to memory, and multiple threads can interleave within this sequence.

39. What are your best practices for multithreaded development?

  • Naming a thread

  • Minimize synchronization range

  • Use volatile in preference

  • Use higher-level concurrency tools such as BlockingQueue and Semaphore instead of wait() and notify() for thread communication whenever possible

  • Use concurrent containers in preference to synchronous containers.

  • Consider using thread pools

About the volatile keyword

1. Can volatile arrays be created?

You can create volatile arrays in Java, but volatile applies only to the reference to the array, not to the whole array. Changing which array the reference points to is protected by volatile, but if multiple threads change the elements of the array at the same time, volatile provides no protection.

2. Can volatile make a nonatomic operation atomic?

A typical example is a member variable of type long in a class. If you know that this member variable will be accessed by multiple threads, such as a counter or a price, it is best to declare it volatile. Why? Because reading a long in Java is not atomic and takes two steps: if one thread is modifying the value of the long, another thread might see only half of it (the first 32 bits). Reads and writes of a volatile long or double, however, are atomic.

One use of volatile, then, is to make reads and writes of long and double variables atomic: both are 64 bits wide, so a read may be split into two parts, first the first 32 bits and then the remaining 32 bits, and this process is not atomic; in Java, however, reads and writes of volatile long or double variables are atomic. Another use of volatile is to provide a memory barrier, as in some distributed frameworks. In simple terms, the Java memory model inserts a write barrier when you write a volatile variable and a read barrier when you read one. This means that when you write to a volatile field, any thread is guaranteed to see the value you wrote, and all writes made before the volatile write are also visible to other threads, because the memory barrier flushes the previously written values to main memory.

3. What guarantees do volatile variables provide?

Volatile serves two main purposes: preventing instruction reordering and guaranteeing visibility. For example, the JVM or JIT may reorder statements for better performance, but a volatile variable will not be reordered with other statements around its assignment, even outside synchronized blocks. Volatile also provides a happens-before guarantee, ensuring that a change made by one thread is visible to other threads. In some cases volatile even provides atomicity, for example when reading 64-bit data types such as long and double, which are not atomic by default (low 32 bits and high 32 bits); reads and writes of volatile long and double, however, are atomic.
