catalogue

  • 5.0.0.1 What are the advantages and disadvantages of thread pools? Why does starting a large number of threads reduce performance? What can be done to improve performance?
  • 5.0.0.3 What is the difference between the start and run methods in a thread? What are the differences between wait and sleep? What is the difference between sleep(), join(), and yield()?
  • 5.0.0.4 How would you write a Java program that produces a deadlock? How can deadlocks be prevented?
  • 5.0.0.5 What does the ThreadLocal class do? Why does ThreadLocal use the current ThreadLocal object as the key when storing values?
  • 5.0.0.6 What is Thread safety? What are the levels of thread safety? What can be done to ensure thread safety? The difference between ReentrantLock and synchronized?
  • 5.0.0.7 What are volatile and synchronized used for? What are the differences? How does synchronized implement its locking mechanism at the bytecode level?
  • 5.0.0.8 The difference between wait() and sleep()? What are the usage scenarios? How do I wake up a blocked thread? What does Thread.sleep(0) do?
  • 5.0.0.9 Concepts of synchronous and asynchronous, blocking and non-blocking? What are the usage scenarios? Tell me how you understand the difference between them.
  • 5.0.1.0 What are the states of threads? Please draw a flow chart of this state. What about the execution life cycle flow of a thread? What happens to a thread if it gets a runtime exception?
  • 5.0.1.1 What does synchronized lock? What do synchronized code blocks and synchronized methods essentially lock, and why?
  • 5.0.1.3 What is CAS? How does CAS work? What are the problems with CAS implementing atomic operations? Can CAS guarantee atomicity for multiple shared variables?
  • 5.0.1.4 If there are N network threads and you need to do data processing after the completion of N network threads, how will you solve this problem?
  • 5.0.1.5 What is the difference between a Runnable Interface and a Callable interface? How are thread exceptions handled in Callable? How do I detect Runnable exceptions?
  • 5.0.1.6 What happens if the thread pool queue is full when the task is submitted? What is the thread scheduling algorithm?
  • 5.0.1.7 What are Optimistic locks and pessimistic locks? What are the problems with pessimistic locking? How does optimistic locking implement conflict detection and data update?
  • 5.0.1.8 Which thread calls the constructor, static block of the thread class? Which is a better choice, synchronous method or synchronous block? Is it better to synchronize as little as possible?
  • 5.0.1.9 Synchonized (this) and synchonized(object)? Is there a difference between a method Synchronize and a static method?
  • 5.0.2.0 What is volatile? What is volatile used for? When does a thread write to main memory after performing operations in working memory?
  • 5.0.2.1 How are volatile variables updated in multiple threads? Understand the happens-before relationship for volatile? Memory state after volatile reads and writes in multiple threads?
  • 5.0.2.2 How does volatile work? Is a volatile int safe for multiple threads to operate on? How can i++ be made thread-safe?
  • 5.0.2.3 What can be done in the Java memory model to ensure atomicity, visibility, and order of concurrent processes?

Good news

  • A summary of blog notes (from October 2015 to the present), covering Java fundamentals and advanced topics, Android technical posts, Python study notes, and a record of bugs encountered in daily development. In my spare time I have also collected many interview questions, which are updated, maintained, and corrected over the long term and continuously improved… The open-source files are in Markdown format! My personal blog, kept since 2012, has accumulated some 500 articles (nearly a million words) and will also be published online; please credit the source when reprinting. Thank you!
  • Link address:Github.com/yangchong21…
  • If you find it useful, please star the repository, thank you! Suggestions are of course welcome; everything starts small, and quantitative change leads to qualitative change! All blog posts will be open-sourced on GitHub!

5.0.0.1 What are the advantages and disadvantages of thread pools? Why does starting a large number of threads reduce performance? What can be done to improve performance?

  • Thread pool benefits:
    • 1) Reduce resource consumption;
    • 2) Improve response speed;
    • 3) Improve thread manageability.
  • Thread pool implementation principle:
    • When a new task is submitted, the pool first checks whether every thread in the core pool is already executing a task. If not, a new core thread is created to run the task; if the core pool is fully busy, it moves on to the next step.
    • It then checks whether the work queue is full. If not, the newly submitted task is stored in the work queue; if the queue is full, it moves on to the next step.
    • Finally it checks whether the pool has reached its maximum size. If not, a new worker thread is created to run the task; if the pool is saturated, the task is handed to the saturation (rejection) policy.
  • How does thread pooling improve performance? By reusing a fixed set of worker threads, the pool avoids the cost of repeatedly creating and destroying threads, bounds the number of live threads (limiting memory use and context-switch overhead, which is why starting a very large number of threads hurts performance), and lets tasks begin immediately on an idle thread.
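The three-step flow above can be sketched with `ThreadPoolExecutor` directly; the core size, maximum size, queue capacity, and saturation policy below are illustrative values, not recommendations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {

    // Submit taskCount trivial tasks and report how many actually ran.
    static int runTasks(int taskCount) {
        AtomicInteger done = new AtomicInteger();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                        // core and maximum pool size
                60L, TimeUnit.SECONDS,       // idle timeout for non-core threads
                new ArrayBlockingQueue<>(8), // bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // saturation policy
        for (int i = 0; i < taskCount; i++) {
            pool.execute(done::incrementAndGet); // reuses pooled threads
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10));
    }
}
```

With CallerRunsPolicy, overflow tasks run on the submitting thread instead of being dropped; AbortPolicy (the default) would throw RejectedExecutionException instead.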

5.0.0.3 What is the difference between the start and run methods in a thread? What are the differences between wait and sleep? What is the difference between sleep(), join(), and yield()?

  • What is the difference between the start and run methods in a thread
    • Why does calling the start() method end up running the run() method, and why can’t we just call run() directly? This is a classic Java multithreading interview question. Calling start() creates a new thread and executes the code in run() on that new thread. Calling run() directly, however, creates no new thread: run() executes on the current thread like any ordinary method call.
  • The difference between wait and sleep
    • The big difference is that wait releases the lock while sleep holds it throughout. wait is usually used for inter-thread interaction; sleep is usually used to pause execution.
  • 1. Sleep () method
    • Suspends execution of the currently executing thread for the specified number of milliseconds, subject to the precision and accuracy of system timers and schedulers. It gives other threads a chance to run, but it does not release any object lock: if the sleeping thread is inside a synchronized block, other threads still cannot access the shared data. Note that sleep() throws InterruptedException, which the caller must catch or declare.
    • For example, if there are two threads executing at the same time (no Synchronized), one with the MAX_PRIORITY priority and the other with the MIN_PRIORITY priority, if there is no Sleep() method, the low-priority thread can execute only after the high-priority thread completes its execution. But when the higher-priority thread is sleep(5000), the lower-priority thread has a chance to execute.
    • In short, sleep() allows low-priority threads to execute, as well as same-priority, high-priority threads.
  • 2. yield() method
    • The yield() method is similar to the sleep() method in that it does not release the “lock flag”. The difference is that it takes no arguments. In other words, the yield() method simply causes the current thread to return to the executable state. Also, unlike sleep(), yield() allows only the same or higher priority threads to execute.
  • 3. Join () method
    • Thread’s nonstatic method join() causes one Thread B to “join” the end of another Thread A. B cannot work until A is finished.
    • Thread t = new MyThread(); t.start(); t.join(); Ensures that the current thread stops execution until the thread it joined completes. However, if the thread it joins is not alive, the current thread does not need to stop.
  • What does Thread join() do?
    • The join() of Thread means to wait for the Thread to terminate, that is, to suspend the execution of the calling Thread until the called object completes its execution. For example, if there are two threads, T1 and T2, the following code indicates that T1 is started first, and t2 is not started until the end of t1.
    t1.start();
    t1.join(); 
    t2.start();
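A complete, runnable version of the snippet above (the class and method names are illustrative):

```java
public class JoinDemo {

    // Run t1 and t2 strictly in sequence using join().
    static String runInOrder() {
        StringBuilder order = new StringBuilder();
        Thread t1 = new Thread(() -> order.append("t1"));
        Thread t2 = new Thread(() -> order.append("t2"));
        try {
            t1.start();
            t1.join();  // the main thread waits here until t1 terminates
            t2.start(); // t2 is only started after t1 has finished
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return order.toString();
    }

    public static void main(String[] args) {
        System.out.println(runInOrder()); // t1t2
    }
}
```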

5.0.0.4 What is the solution to the deadlock problem caused by writing a program in Java? How can deadlocks be prevented?

  • What’s a deadlock
    • Thread A and thread B wait for each other to hold the lock causing the program to loop indefinitely.
  • Understand deadlocks in depth
    • Two threads hold two Object objects: lock1 and lock2. These two locks act as locks for synchronized code blocks;
    • In thread 1’s run(), the synchronized code acquires the lock1 object lock, calls Thread.sleep(xxx), and then tries to acquire the lock2 object lock. The sleep is there mainly to stop thread 1 from acquiring lock1 and lock2 back-to-back before thread 2 has started
    • Thread 2 is waiting for thread 1 to release the lock of lock1. The lock of lock1 is already held by thread 1
  • Deadlock simple code
    • Create two String objects (obj1 and obj2) and two threads (a and b). Each thread uses synchronized to lock one of the strings. If a locks obj1 while b locks obj2, and a then tries to lock obj2 while b tries to lock obj1, neither can proceed and a deadlock occurs.
    • Print result: As you can see, Lock1 gets obj1 and Lock2 gets obj2, but neither of them can get another obj because they are waiting for the other to release the lock first, which is a deadlock.
    public class DeadLock {
        public static String obj1 = "obj1";
        public static String obj2 = "obj2";
        public static void main(String[] args){
            Thread a = new Thread(new Lock1());
            Thread b = new Thread(new Lock2());
            a.start();
            b.start();
        }    
    }
    class Lock1 implements Runnable{
        @Override
        public void run(){
            try{
                System.out.println("Lock1 running");
                while(true){
                    synchronized(DeadLock.obj1){
                        System.out.println("Lock1 lock obj1");
                        Thread.sleep(3000);
                        synchronized(DeadLock.obj2){
                            System.out.println("Lock1 lock obj2");
                        }
                    }
                }
            }catch(Exception e){
                e.printStackTrace();
            }
        }
    }
    class Lock2 implements Runnable{
        @Override
        public void run(){
            try{
                System.out.println("Lock2 running");
                while(true){
                    synchronized(DeadLock.obj2){
                        System.out.println("Lock2 lock obj2");
                        Thread.sleep(3000);
                        synchronized(DeadLock.obj1){
                            System.out.println("Lock2 lock obj1");
                        }
                    }
                }
            }catch(Exception e){
                e.printStackTrace();
            }
        }
    }
    • What if we just run Lock1? Modify main to comment out thread B.
  • How can deadlocks be prevented?
    • There are four necessary conditions for deadlock, and breaking any one of them prevents deadlock from occurring; this is what makes the problem solvable. The usual approaches are deadlock prevention, avoidance, detection, and recovery (note: detection and recovery together count as one method). Deadlock prevention is a strategy for ensuring the system never enters a deadlocked state: its basic idea is to require processes to follow certain protocols when requesting resources, so that one or more of the four necessary conditions is broken.
    • Break the mutual exclusion condition. That is, allowing processes to access certain resources at the same time. However, some resources, such as printers, are not allowed to be accessed at the same time, because of the properties of the resource itself. Therefore, this method has no practical value.
    • Break the non-preemption condition. That is, allow resources to be forcibly taken from their holder. Concretely, when a process that already occupies some resources makes a new request that cannot be satisfied immediately, it must release all the resources it holds and re-apply later; the freed resources can then be allocated to other processes. This effectively allows the process’s resources to be preempted. This prevention approach is difficult to implement and can degrade system performance.
    • Break the conditions of possession and application. A resource pre-allocation strategy can be implemented. That is, a process requests all the resources it needs from the system at once before it runs. If all resources required by a process are not met, no resources are allocated and the process does not run temporarily. Only when the system can meet all the resource requirements of the current process, all the applied resources are allocated to the process at a time. Since the running process has already occupied all the resources it needs, it does not occupy resources and then claim resources, so no deadlocks occur. However, this strategy also has the following disadvantages:
      • In many cases, it is impossible for a process to know all the resources it needs before it executes. This is because processes are dynamic and unpredictable at execution time;
      • The resource usage is low. No matter when the allocated resources are used, a process cannot execute until it has all the resources it needs. Even if some resources are used by the process only once at last, the process keeps them for the duration of its life, resulting in a situation where they are not used for a long time. This is obviously a huge waste of resources;
      • Reduces the concurrency of processes. Because resources are limited, coupled with waste, the number of processes that can be allocated to all the resources required is necessarily low.
    • Break the circular waiting condition and implement the strategy of orderly resource allocation. Using this strategy, the resources are numbered and allocated according to the number in advance, so that the process does not form a loop when applying for or occupying resources. Requests for resources from all processes must be made strictly in ascending order. Processes that use small resources can apply for large resources, so there is no loop and deadlocks are prevented. Compared with the previous strategy, this strategy greatly improves resource utilization and system throughput, but it also has the following disadvantages:
      • It limits the process’s request for resources, and it is difficult to properly number all resources in the system, and increases the system overhead.
      • To comply with the application sequence by number, resources that are not in use must be applied in advance, which increases the resource occupation time of the process.
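One way to break the circular-wait condition in the DeadLock example above is to make every thread acquire the two locks in the same global order; a minimal sketch (the lock and method names are illustrative):

```java
public class OrderedLocks {
    static final Object LOCK_1 = new Object();
    static final Object LOCK_2 = new Object();

    // Both threads take LOCK_1 before LOCK_2, so no lock cycle can form
    // and the circular-wait condition for deadlock is broken.
    static void doWork(StringBuilder log, String name) {
        synchronized (LOCK_1) {
            synchronized (LOCK_2) {
                log.append(name);
            }
        }
    }

    static String run() {
        StringBuilder log = new StringBuilder();
        Thread a = new Thread(() -> doWork(log, "a"));
        Thread b = new Thread(() -> doWork(log, "b"));
        try {
            a.start(); b.start();
            a.join(); b.join(); // both threads finish; no deadlock
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return log.toString();
    }
}
```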

5.0.0.5 What does the ThreadLocal class do? Why does ThreadLocal use the current ThreadLocal object as the key when storing values?

  • ThreadLocal is a thread variable
    • ThreadLocal maintains a local variable for each thread.
    • It trades space for time and is used for data isolation between threads. It provides each thread that uses a variable with its own independent copy, so every thread can change its copy without affecting the copies held by other threads. From a thread’s point of view, the target variable behaves like the thread’s local variable, which is what “Local” in the class name means. ThreadLocal is implemented as a key-value structure with the ThreadLocal object as the key and an arbitrary object as the value. The structure is attached to the thread, meaning a thread can look up the value bound to it using a ThreadLocal object as the key.
  • ThreadLocal is backed by a per-thread Map
    • Each Thread object holds a ThreadLocalMap whose entries are keyed by ThreadLocal objects (held via weak references), with the values being that thread’s copies of the variables. This is why the current ThreadLocal object is designed as the key: the map belongs to the thread, so different threads looking up the same ThreadLocal key each find their own value, and when a thread dies its map and all its copies can be reclaimed with it.
    • ThreadLocal plays a huge role in Spring, with its presence in managing beans, transaction management, task scheduling, AOP, and other modules in the Request scope.
    • Most beans in Spring can be declared Singleton scoped and wrapped with ThreadLocal, so stateful beans can work Singleton style in multithreading.
  • See the blog for more details: a deep dive into the java.lang.ThreadLocal class

5.0.0.6 What is Thread safety? What are the levels of thread safety? What can be done to ensure thread safety? The difference between ReentrantLock and synchronized?

  • What is thread safety
    • An object is thread-safe if, when multiple threads access it, it behaves correctly without the caller having to consider how those threads are scheduled or interleaved by the runtime environment, and without requiring any additional synchronization or other coordination on the caller’s side.
  • There are several levels of thread safety
    • Immutable:
      • Instances of classes such as String, Integer, and Long are immutable and cannot be changed by any thread unless a new instance is created, so immutable objects can be used directly in a multithreaded environment without any synchronization
    • Absolute thread safety
      • Regardless of the runtime environment, callers do not need additional synchronization measures. There’s a lot of extra cost to doing this, and most of the classes in Java that label themselves as thread-safe are actually not thread-safe, but there are some classes that are thread-safe, like CopyOnWriteArrayList, CopyOnWriteArraySet
    • Relative thread safety
      • Relative thread safety is what we usually mean by “thread-safe”. For example, a Vector’s add and remove methods are atomic operations that cannot be interrupted, but the guarantee goes no further than that: if one thread is iterating over a Vector while another thread modifies it at the same time, a ConcurrentModificationException will be thrown in almost all cases; this is the fail-fast mechanism.
    • Thread unsafe
      • ArrayList, LinkedList, HashMap, and so on are thread-unsafe classes.
  • What are the means to ensure thread safety? To ensure thread safety, we can start from the three characteristics of multi-threading:
    • Atomicity: Either all or none of a single or multiple operations are performed
      • Lock: Ensures that only one thread can acquire the Lock at a time and execute the code that applies for and releases the Lock
      • Synchronized: Assigns an exclusive lock to a thread, allowing only one thread to access classes/methods/variables that it modifies
    • Visibility: When one thread changes the value of a shared variable, other threads are immediately aware of the change
      • Volatile: Ensures that new values are immediately synchronized to main memory and flushed from main memory immediately before each use.
      • Synchronized: updates the new value of working memory to main memory before releasing the lock
    • Ordering: Program code executes according to instructions
      • Volatile: Inherently contains semantics that prohibit instruction reordering
      • Synchronized: ensures that a variable can be locked by only one thread at a time, so that two synchronized blocks holding the same lock can only enter serially
  • The difference between ReentrantLock and synchronized
    • ReentrantLock differs from synchronized in that:
      • Interruptible wait: When the thread holding the lock does not release the lock for a long time, the waiting thread can choose to abandon the wait and process something else instead.
      • Fair lock: Multiple threads waiting for the same lock must acquire the lock in the order in which the lock was applied. Synchronized is unfair in that any thread waiting for the lock has a chance to acquire it when the lock is released. ReentrantLock is also unfair by default, but you can switch to a fair lock using a constructor with a Boolean value.
      • Locks can bind multiple conditions: one ReentrantLock object can bind several Condition objects through repeated calls to newCondition(). With synchronized, the lock object’s wait() and notify()/notifyAll() implement only a single implicit condition; to associate more than one condition, an additional lock must be added.
    • Synchronized is a pessimistic locking mechanism. java.util.concurrent.locks.ReentrantLock, by contrast, is built on an optimistic core: each acquisition first attempts a compare-and-swap assuming no conflict, and only if that fails because of contention does the thread queue up and retry until it succeeds.
      • ReentrantLock Application scenario
      • A thread needs to interrupt while waiting for control of a lock
      • The Condition application in ReentrantLock can control which thread of notify. The lock can bind multiple conditions.
      • With fair locking, each incoming thread will queue up.
    • See blog: The Difference between Lock and synchronized for more details
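The fair-lock and Condition points above can be sketched as a minimal producer-consumer handoff; this is an illustration, not production code:

```java
import java.util.ArrayDeque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock
    private final Condition notEmpty = lock.newCondition();
    private final ArrayDeque<Integer> queue = new ArrayDeque<>();

    void put(int value) {
        lock.lock();
        try {
            queue.add(value);
            notEmpty.signal();   // wake one thread waiting on this condition
        } finally {
            lock.unlock();       // unlike synchronized, release is explicit
        }
    }

    int take() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                notEmpty.await(); // releases the lock while waiting
            }
            return queue.poll();
        } finally {
            lock.unlock();
        }
    }
}
```

More Condition objects (e.g. a notFull condition for a bounded buffer) could be created from the same lock, which is exactly the "multiple conditions" advantage over synchronized.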

5.0.0.7 What are Volatile and Synchronized used for? What are the differences? Synchronize How to implement a locking mechanism at compile time?

  • What are the respective uses of Volatile and Synchronized? What are the differences?
    • 1) Different granularity: volatile applies to variables, while synchronized locks objects and classes.
    • 2) synchronized can block threads; volatile never causes a thread to block.
    • 3) synchronized guarantees atomicity, visibility, and ordering; volatile does not guarantee atomicity.
    • 4) Accesses to a volatile variable are excluded from compiler reordering optimizations, while synchronized code can still be optimized by the compiler. volatile has two features:
      • 1. Ensure that the variable is visible to all threads. When one thread changes the value of the variable, the new value is visible to other threads, but it is not multithreaded safe.
      • 2. Disable instruction reordering optimization.
    • How Volatile ensures memory visibility:
      • 1. When a volatile variable is written, the JMM flushes the shared variable from the thread’s local memory to main memory.
      • 2. When a volatile variable is read, the JMM invalidates the thread’s local memory. The thread will next read the shared variable from main memory.
    • Synchronization: The completion of one task depends on another task. The dependent task can be completed only after the dependent task is completed.
    • Asynchronous: there is no need to wait for the dependent task to finish; the caller only tells it what work to do and considers its own work complete once the request is issued, and the dependent task reports back later whether it completed. Asynchrony is characterized by notification. A phone call versus a text message is a common metaphor for synchronous versus asynchronous.
    • Block: The CPU stops and waits for a slow operation to complete before continuing to do other work.
    • Non-blocking: the CPU does other work while the slow operation executes, and picks up the next step once that operation completes.
    • Non-blocking leads to an increase in thread switching, and it needs to be considered whether the increased CPU usage time can compensate for the system switching costs.
  • How does synchronized implement its locking mechanism at the bytecode level?
    • When a synchronized block is compiled, the bytecode instructions monitorenter and monitorexit are emitted at the start and end of the block respectively. When monitorenter executes, the thread first tries to acquire the object’s lock: if the object is not locked, or the current thread already owns the lock, the lock counter is incremented by one. Correspondingly, monitorexit decrements the counter by one, and when the counter reaches zero the lock is released. If acquiring the object lock fails, the current thread blocks until the lock is released by another thread.
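The monitorenter/monitorexit pairing can be seen by disassembling a class like the following with `javap -c`; the class itself is just an illustration:

```java
public class SyncDemo {
    private final Object lock = new Object();
    private int count;

    public void increment() {
        synchronized (lock) { // javap -c shows a monitorenter here
            count++;
        }                     // ...and a monitorexit here, plus a second
                              // monitorexit on the exception-handler path
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }
}
```

Note that synchronized methods are handled differently from blocks: the compiler sets the ACC_SYNCHRONIZED flag on the method rather than emitting explicit monitor instructions, and the JVM performs the equivalent monitor entry and exit around the invocation.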

5.0.0.8 The difference between wait() and sleep()? What are the usage scenarios? How do I wake up a blocked thread? What does Thread.sleep(0) do?

  • sleep() comes from Thread, and wait() comes from Object
    • A thread does not release the object lock while in sleep(); a thread that calls wait() releases the object lock
    • A sleeping thread keeps hold of its monitors, so those resources stay unavailable to others; a waiting thread gives up the lock so other threads can acquire it
    • sleep(milliseconds) sleeps for the specified time and wakes up automatically; wait() waits until notify()/notifyAll() (or a timeout variant) wakes it
  • In plain terms
    • Both wait and sleep cause some form of pause in a Java program, and they serve different needs. The wait() method is used for inter-thread communication: it releases the lock, and the thread resumes when its wait condition is satisfied and another thread wakes it. The sleep() method merely gives up CPU time and pauses the current thread for a period; it does not release any lock.
  • How do I wake up a blocked thread?
    • If a thread is blocked because it called wait(), sleep(), or join(), you can interrupt it, which wakes it by throwing InterruptedException. If the thread is stuck in an I/O block, there is little to be done, because I/O is implemented by the operating system and Java code has no direct access to it.
  • What does Thread.sleep(0) do?
    • Because of Java’s preemptive thread scheduling, one thread may repeatedly win CPU control. To give lower-priority threads a chance to run, Thread.sleep(0) can be used to manually trigger the operating system’s time-slice allocation, which is one way of rebalancing CPU control.

5.0.0.9 Concepts of synchronous and asynchronous, blocking and non-blocking? What are the usage scenarios? Tell me how you understand the difference between them.

  • Synchronous and asynchronous
    • Synchronous and asynchronous describe the message notification mechanism. Synchronous: when method A calls method B, it must wait for B to return its result before continuing with subsequent operations. Asynchronous: method A calls method B and continues immediately; B notifies A of the outcome later through a callback.
  • Blocking and non-blocking
    • Blocking and non-blocking focus on the state of waiting for a message: blocking means suspending the current thread until the result is returned; Non-blocking means you can do something else while waiting, polling to see if a result has been returned

5.0.1.0 What are the states of threads? Please draw a flow chart of this state. What about the execution life cycle flow of a thread? What happens to a thread if it gets a runtime exception?

  • A thread can have only one of these states at any point in time
    • New: The thread has not been started since it was created
    • Runnable: the thread is running, or is waiting for the CPU to allocate it execution time
    • Waiting: the thread is not allocated CPU execution time and waits to be explicitly woken by another thread. The following methods put a thread into an indefinite wait:
    Object.wait() with no timeout; Thread.join() with no timeout; LockSupport.park()
    • Timed Waiting: A thread that is not allocated CPU execution time but is automatically woken up after a certain amount of time. The following methods cause the thread to enter the finite wait state:
    Thread.sleep(); Object.wait() with a timeout; Thread.join() with a timeout; LockSupport.parkNanos(); LockSupport.parkUntil()
    • Blocked: the thread is blocked. Unlike the waiting states, the blocked state means the thread is waiting to acquire an exclusive lock currently held by another thread, and it ends when that thread gives up the lock; the waiting states mean the thread is waiting for a period of time to elapse or for a wakeup action to occur.
    • Terminated: The thread has Terminated execution
  • Draw a flowchart for the state
  • What happens to a thread if it gets a runtime exception?
    • If the exception is not caught, the thread stops executing. Another important point is that if this thread holds a monitor for an object, the object monitor is immediately released
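This behavior can be observed with Thread.setUncaughtExceptionHandler; without a handler, the JVM prints the stack trace and the thread simply dies (the names below are illustrative):

```java
public class CrashDemo {

    // A runtime exception escapes run(); the uncaught-exception handler
    // is the last chance to observe it before the thread terminates.
    static String runAndCatch() {
        final String[] seen = new String[1];
        Thread t = new Thread(() -> { throw new RuntimeException("boom"); });
        t.setUncaughtExceptionHandler((thread, e) -> seen[0] = e.getMessage());
        try {
            t.start();
            t.join(); // the crashed thread is gone; this thread keeps running
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }
}
```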

5.0.1.1 What does synchronized lock? What do synchronized code blocks and synchronized methods essentially lock, and why?

  • What synchronized locks
    • For normal synchronous methods, the lock is the current instance object;
    • For statically synchronized methods, the lock is the Class object of the current Class;
    • For synchronous method blocks, the lock is the object configured in parentheses;
    • When a thread attempts to access a synchronized block of code, it must first acquire the lock and release it when it exits or throws an exception. The lock state is stored in the Mark Word of the Java object header, which is usually 32 or 64 bits, with the low 2 bits serving as the lock flag.
  • What is essentially locked is an object’s monitor.
    • In the Java Virtual machine, every object and class is logically associated with a monitor, and synchronized is essentially an acquisition of an object monitor. When a synchronized block or method is executed, the thread executing the method must obtain the object’s monitor before it can enter the synchronized block or method; The thread that did not acquire the object monitor will enter the blocking queue until the thread that successfully acquired the object monitor completes its execution and releases the lock. The blocking queue thread will wake up and make it try to acquire the object monitor again.

5.0.1.3 What is the principle of CAS? What are the problems with CAS implementing atomic operations? Can CAS guarantee atomicity for multiple shared variables?

  • How does CAS work
    • CAS is short for compare-and-swap. CAS has three operands: the memory value V, the expected old value A, and the new value B to write. The memory value V is changed to B if and only if the expected value A and the memory value V are the same; otherwise nothing is done. A spin means retrying the CAS operation until it succeeds.
  • What are the problems with CAS implementing atomic operations
    • The ABA problem. CAS checks whether the value has changed and updates it if not. But if a value that was originally A changes to B and then back to A, CAS sees no change even though the value has in fact been modified. ABA problems can be solved by adding a version number. Starting with Java 1.5, the JDK’s atomic package provides the AtomicStampedReference class to address ABA issues.
    • Long spin times carry high CPU overhead; the JVM can mitigate this on processors that support the pause instruction.
    • CAS guarantees atomic operations on only a single shared variable. Multiple variables can be combined into one object (for example via AtomicReference) so that one CAS covers them together.
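The ABA fix mentioned above can be sketched with AtomicStampedReference, which pairs the value with a version stamp; the values below are arbitrary:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicStampedReference;

public class CasDemo {

    // Plain CAS: succeeds only while the value equals the expected one.
    static boolean plainCas() {
        AtomicInteger v = new AtomicInteger(1);
        return v.compareAndSet(1, 2) && v.get() == 2;
    }

    // Stamped CAS: a version number makes an A -> B -> A change visible.
    static boolean abaDetected() {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp();
        // Another thread changes 100 -> 101 -> 100; the stamp keeps advancing.
        ref.compareAndSet(100, 101, stamp, stamp + 1);
        ref.compareAndSet(101, 100, stamp + 1, stamp + 2);
        // The value is 100 again, but a CAS with the old stamp now fails.
        return !ref.compareAndSet(100, 102, stamp, stamp + 1);
    }
}
```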

5.0.1.4 If there are N network threads and you need to do data processing after the completion of N network threads, what will you do?

  • This is a multithreaded synchronization problem, and Thread.join() can be used: the join method blocks until the target thread terminates. For more complex cases, CountDownLatch also works. CountDownLatch’s constructor takes an int as the counter, and each call to the countDown method decrements the counter by one; the thread doing the final processing calls the await method, which blocks until the counter reaches zero.
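The CountDownLatch approach described above can be sketched as follows; the N workers here just add a number to stand in for network results:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {

    // Wait for n worker threads, then process their combined result.
    static int gather(int n) {
        CountDownLatch latch = new CountDownLatch(n);
        AtomicInteger sum = new AtomicInteger();
        for (int i = 1; i <= n; i++) {
            final int part = i;
            new Thread(() -> {
                sum.addAndGet(part); // stand-in for a network result
                latch.countDown();   // one more worker has finished
            }).start();
        }
        try {
            latch.await();           // blocks until the counter hits zero
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum.get();            // safe: all n workers are done
    }
}
```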

5.0.1.5 What is the difference between a Runnable Interface and a Callable interface? How are thread exceptions handled in Callable? How do I detect Runnable exceptions?

  • The difference between Runnable and Callable interfaces
    • The run() method in the Runnable interface returns void; all it does is execute the code in run(). The call() method in the Callable interface returns a value of a generic type and can be used together with Future and FutureTask to retrieve the result of asynchronous execution.
    • And this is actually a very useful feature, because one of the big reasons that multithreading is more difficult and more complex than single threading is because multithreading is so unpredictable. Does a thread execute? How long did a thread execute? Is the expected data already assigned when a thread executes? We don’t know. All we can do is wait for the multithreaded task to finish. Callable+Future/FutureTask can retrieve the results of multiple threads. It can cancel the task if it waits too long to retrieve the required data, which is really useful.
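Both points can be sketched with FutureTask: call() returns a typed result, and an exception thrown inside call() comes back to the waiting thread wrapped in ExecutionException (the method names are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class CallableDemo {

    // call() returns a value, retrieved later via Future.get().
    static int square(int x) {
        Callable<Integer> task = () -> x * x;
        FutureTask<Integer> future = new FutureTask<>(task);
        new Thread(future).start();
        try {
            return future.get(); // blocks until the result is ready
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    // An exception thrown in call() surfaces as ExecutionException.
    static String failureMessage() {
        FutureTask<Integer> future =
                new FutureTask<>(() -> { throw new IllegalStateException("bad"); });
        new Thread(future).start();
        try {
            future.get();
            return "no exception";
        } catch (ExecutionException e) {
            return e.getCause().getMessage(); // the original exception
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```

With a plain Runnable, by contrast, exceptions never reach the submitter; they must be caught inside run() or observed via an UncaughtExceptionHandler.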

5.0.1.6 What happens if the thread pool queue is full when the task is submitted? What is the thread scheduling algorithm?

  • What happens if the thread pool queue is full when the task is submitted?
    • If you are using LinkedBlockingQueue, that is, an unbounded queue, it does not matter: tasks simply keep being added to the blocking queue to await execution, because LinkedBlockingQueue can be thought of as a nearly infinite queue that can hold tasks indefinitely.
    • If you are using a bounded queue such as ArrayBlockingQueue, tasks are first added to the queue. When the queue is full, the number of threads is increased up to maximumPoolSize. If the queue still fills up even with the extra threads, further tasks are handled by the RejectedExecutionHandler policy, which is AbortPolicy by default.
  • What is the thread scheduling algorithm?
    • Preemptive. After a thread’s time slice is used up, the operating system computes an overall priority based on thread priority, thread starvation, and so on, and allocates the next time slice to a particular thread.
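The bounded-queue path described above can be sketched as a runnable experiment (pool sizes, class name `RejectDemo`, and the sleeping task are ours). With corePoolSize = 1, maximumPoolSize = 2, and a queue of capacity 1, a burst of 5 long-running tasks leaves exactly 2 for the default AbortPolicy to reject:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectDemo {
    public static void main(String[] args) throws InterruptedException {
        // core=1, max=2, bounded queue of 1: at most 3 tasks can be accepted at once
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<Runnable>(1));
        Runnable sleepy = new Runnable() {
            @Override
            public void run() {
                try { Thread.sleep(500); } catch (InterruptedException ignored) { }
            }
        };
        int rejected = 0;
        for (int i = 0; i < 5; i++) {
            try {
                pool.execute(sleepy);
            } catch (RejectedExecutionException e) {
                rejected++; // AbortPolicy throws once threads and queue are both full
            }
        }
        System.out.println("rejected = " + rejected); // 2 of the 5 tasks are rejected
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```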

5.0.1.7 What are Optimistic locks and pessimistic locks? What are the problems with pessimistic locking? How does optimistic locking implement conflict detection and data update?

  • What are optimistic locks and pessimistic locks?
    • Optimistic locking: as its name suggests, it is optimistic about the thread-safety problems arising from concurrent operations. It assumes that contention does not always happen, so it does not hold a lock; instead it treats compare-and-replace as a single atomic operation that tries to modify the variable in memory, and if that fails, it signals a conflict that should be handled by retry logic.
    • Pessimistic lock: As its name suggests, pessimistic locks are pessimistic about thread-safety issues arising from concurrent operations. Pessimistic locks assume that contention always occurs and therefore hold an exclusive lock every time a resource is operated on, just like synchronized, which operates on the resource directly.
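The compare-and-replace-with-retry loop described for optimistic locking can be sketched with the JDK's AtomicInteger (class name `OptimisticDemo` and the balance/withdraw scenario are ours for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticDemo {
    private final AtomicInteger balance = new AtomicInteger(100); // hypothetical state

    // Optimistic update: read, compute, then compare-and-swap; retry on conflict
    public int withdraw(int amount) {
        while (true) {
            int current = balance.get();
            int next = current - amount;
            if (balance.compareAndSet(current, next)) {
                return next; // no other thread changed the value in between
            }
            // CAS failed: another thread won the race; loop and retry with the fresh value
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final OptimisticDemo demo = new OptimisticDemo();
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new Runnable() {
                @Override
                public void run() {
                    demo.withdraw(1);
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(demo.balance.get()); // 100 - 10 = 90, no increment lost
    }
}
```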

5.0.1.8 Which thread calls the constructor, static block of the thread class? Which is a better choice, synchronous method or synchronous block? Is it better to synchronize as little as possible?

  • Which thread calls the constructor, static block of the thread class?
    • A thread class’s constructor and static block are called by the thread that constructs (news) the object, while the code inside the run method is called by the thread itself.
  • For example
    • If Thread1 is new in Thread2 and Thread2 is new in main, then:
      • Thread2’s constructor, static block, is called by the main thread, and Thread2’s run() method is called by Thread2 itself
      • Thread1’s constructor, static block, is called by Thread2, and Thread1’s run() method is called by Thread1 itself
  • Which is a better choice, synchronous method or synchronous block?
    • Synchronized blocks. Code outside the synchronized block can be executed concurrently by other threads, which is more efficient than synchronizing the entire method. As a rule of thumb, the smaller the scope of synchronization, the better.
  • Is it better to synchronize as little as possible?
    • Yes. While a smaller scope is generally better, the Java virtual machine has an optimization called lock coarsening that enlarges the scope. StringBuffer is a useful example: it is a thread-safe class whose most common method, append(), is synchronized. When code appends to the same StringBuffer repeatedly, it would repeatedly lock and then unlock, which is bad for performance, since in the worst case each lock/unlock can mean a switch between user mode and kernel mode. The Java virtual machine therefore coarsens the repeated append() calls into a single lock, extending one synchronized region across the whole run of append operations. This reduces the number of lock/unlock transitions and effectively improves execution efficiency.
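The "smaller scope is better" rule above can be made concrete with a sketch (class name `ScopeDemo` and the sleep standing in for slow, thread-confined work are ours). Only the shared-counter update needs the lock, so the block version lets the slow part of many threads overlap while a fully synchronized method would serialize it:

```java
public class ScopeDemo {
    private int counter = 0;

    // Whole method synchronized: the lock (this) is held even during the slow,
    // thread-confined work, so concurrent callers serialize on the sleep as well.
    public synchronized void slowIncrement() throws InterruptedException {
        Thread.sleep(10); // slow work that does not touch shared state
        counter++;
    }

    // Smaller scope: the slow work runs concurrently; only the shared-state
    // mutation is guarded (also on this, so the two methods stay consistent).
    public void fastIncrement() throws InterruptedException {
        Thread.sleep(10);
        synchronized (this) {
            counter++;
        }
    }

    public synchronized int getCounter() {
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        final ScopeDemo demo = new ScopeDemo();
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        demo.fastIncrement();
                    } catch (InterruptedException ignored) {
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // The 10 sleeps overlapped, yet the narrow block still keeps the count correct
        System.out.println("counter = " + demo.getCounter());
    }
}
```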

5.0.1.9 synchronized(this) and synchronized(object)? Is there a difference between a synchronized method and a static synchronized method?

  • synchronized(this) and synchronized(object)
    • In fact, synchronized(this) is just a special case of synchronized(object). Both are used to lock a code block, which is more efficient than simply marking the whole method synchronized (see below); the only difference is which object serves as the lock.
    • Some understanding of synchronized(this)
      • 1. When two concurrent threads access the synchronized(this) block of the same object, only one thread can execute at a time. The other thread must wait for the current thread to finish executing the code block before executing it.
      • 2. However, when one thread accesses a synchronized(this) block of an object, another thread can still access a non-synchronized (this) block of that object.
      • 3. In particular, when one thread accesses a synchronized(this) block of an object, access to all other synchronized(this) blocks in the object is blocked by other threads.
      • 4. When a thread accesses a synchronized(this) block of an object, it acquires the object lock. As a result, access by other threads to all synchronized code parts of the object is temporarily blocked.
  • How a synchronized method differs from a static synchronized method
    • The test code is shown below
    private void test() {
        final TestSynchronized test1 = new TestSynchronized();
        final TestSynchronized test2 = new TestSynchronized();
        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                test1.method01("a");
                //test1.method02("a");
            }
        });
        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                test2.method01("b");
                //test2.method02("a");
            }
        });
        t1.start();
        t2.start();
    }

    private static class TestSynchronized {
        private int num1;

        // Instance method: locks on the object (this)
        public synchronized void method01(String arg) {
            try {
                if ("a".equals(arg)) {
                    num1 = 100;
                    System.out.println("tag a set number over");
                    Thread.sleep(1000);
                } else {
                    num1 = 200;
                    System.out.println("tag b set number over");
                }
                System.out.println("tag = " + arg + "; num = " + num1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

        private static int num2;

        // Static method: locks on the Class object
        public static synchronized void method02(String arg) {
            try {
                if ("a".equals(arg)) {
                    num2 = 100;
                    System.out.println("tag a set number over");
                    Thread.sleep(1000);
                } else {
                    num2 = 200;
                    System.out.println("tag b set number over");
                }
                System.out.println("tag = " + arg + "; num = " + num2);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    // Calling method01 (object lock, two different objects, no contention):
    //   tag a set number over
    //   tag b set number over
    //   tag = b; num = 200
    //   tag = a; num = 100
    // Calling method02 (class lock, the two threads contend):
    //   tag a set number over
    //   tag = a; num = 100
    //   tag b set number over
    //   tag = b; num = 200
    • A static synchronized method locks on the class: the lock it acquires belongs to the Class object, which is shared by all instances.
    • A non-static synchronized method acquires a lock belonging to the current object (this).
    • Conclusion: the class lock is different from the object lock. A synchronized instance method (without static) locks a single object, so different objects do not contend; a static synchronized method locks the class, so all objects of that class compete for the same lock.

5.0.2.0 What is volatile? What is volatile used for? When does a thread write to main memory after performing operations in working memory?

  • What is volatile?
    • A lightweight synchronization mechanism. Synchronized is blocking synchronization that escalates to a heavyweight lock when threads contend; volatile is arguably the lightest synchronization mechanism the Java virtual machine provides.
  • What is volatile used for?
    • Volatile variables ensure that each thread gets the latest value of the variable, preventing dirty reads.
  • When does a thread write to main memory after performing operations in working memory?
    • The Java memory model tells us that individual threads copy shared variables from main memory to working memory, and the execution engine performs operations based on the data in working memory.
    • For ordinary variables this timing is not specified, but for volatile variables the Java virtual machine makes a special guarantee: a change to a volatile variable is immediately perceived by other threads, which prevents dirty reads and ensures the “visibility” of the data.
  • What are the characteristics of assembly code generated by volatile modified variables?
    • In the generated assembly, a write to a volatile shared variable produces an instruction with a Lock prefix.
    • This Lock prefix is the key: what does a Lock-prefixed instruction do on a multicore processor? It has two main effects:
      • 1. The data in the current processor’s cache line is written back to system memory.
      • 2. The write-back invalidates that data in the caches of other CPUs.
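The two effects above are what make a volatile flag reliably visible across threads. A small, testable illustration (class name `VolatileFlagDemo` and method `runOnce` are ours): a worker spins on a volatile flag, and the main thread's write is seen promptly because of the Lock-prefixed write-back and cache invalidation described above.

```java
public class VolatileFlagDemo {
    private static volatile boolean stop = false;

    // Returns true if the worker observed the flag change and exited.
    static boolean runOnce() throws InterruptedException {
        stop = false;
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                while (!stop) {
                    // busy-wait; each volatile read fetches the up-to-date value
                }
            }
        });
        worker.start();
        Thread.sleep(100);
        stop = true;      // volatile write: flushed to main memory, other
                          // processors' cached copies are invalidated
        worker.join(2000);
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped = " + runOnce());
    }
}
```

Without `volatile` on `stop`, the worker is permitted to keep reading a stale cached value and may never terminate.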

5.0.2.1 How are volatile variables updated in multiple threads? Understand the happens-before relationship for volatile? Memory state after volatile reads and writes in multiple threads?

  • How do volatile variables get the latest values in multiple threads?
    • When a volatile variable is written, the JVM sends the processor a Lock-prefixed instruction that writes the variable’s cache line back to system memory. But even after the write-back, other processors may still hold the old value in their caches, and computing with that stale value would be a problem. So on a multiprocessor, to keep every processor’s cache consistent, a cache coherence protocol is implemented: each processor sniffs the data propagated on the bus to check whether its cached value has become stale. When a processor finds that the memory address backing one of its cache lines has been modified, it marks that cache line invalid, and the next time it operates on that data it re-reads it from system memory into its cache.
    • Therefore, after analysis, we can draw the following conclusions:
      • 1. Instructions prefixed with Lock cause the processor cache to be written back to memory;
      • 2. If the cache of one processor is written back to the memory, the cache of other processors will become invalid.
      • 3. When the processor finds that the local cache is invalid, it rereads the variable data from the memory, that is, it can get the latest value.
    • This mechanism allows each thread to obtain the latest value of a volatile variable. That is, the “visibility” of the data is satisfied.
  • How to understand the happens-before relationship for volatile?
  • Let’s start with the first of the two core concepts: the happens-before relationship for volatile.
    • Volatile variable rule: Writes to a volatile field, happens-before any subsequent reads to that volatile field. In combination with the specific code, we use this rule to derive the following:
    private void test3() {
        // Share one instance so writer and reader operate on the same volatile flag
        // (creating a separate instance per thread would mean no shared state at all)
        final VolatileExample example = new VolatileExample();
        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                example.writer();
            }
        });
        Thread thread2 = new Thread(new Runnable() {
            @Override
            public void run() {
                example.reader();
            }
        });
        thread1.start();
        thread2.start();
    }

    public class VolatileExample {
        private int a = 0;
        private volatile boolean flag = false;

        public void writer() {
            a = 1;           //1
            LogUtils.e("Test volatile data 1--" + a);
            flag = true;     //2
            LogUtils.e("Test volatile data 2--" + flag);
        }

        public void reader() {
            LogUtils.e("Test volatile data 3--" + flag);
            if (flag) {      //3
                int i = a;   //4
                LogUtils.e("Test volatile data 4--" + i);
            }
        }
    }
    • The following information is displayed
    // First run (the reader executed before the volatile write, so flag was still false)
    2019-03-07 17:17:30.294 E/TestFirstActivity: Test volatile data 3--false
    2019-03-07 17:17:30.294 E/TestFirstActivity: Test volatile data 1--1
    2019-03-07 17:17:30.295 E/TestFirstActivity: Test volatile data 2--true
    // Second run
    2019-03-07 17:18:01.965 E/TestFirstActivity: Test volatile data 1--1
    2019-03-07 17:18:01.965 E/TestFirstActivity: Test volatile data 3--false
    2019-03-07 17:18:01.966 E/TestFirstActivity: Test volatile data 2--true
    • The happens-before relationship for the example code above is shown below:
    • Analyze the above code execution process
      • Thread A executes the writer method first, and thread B then executes the reader method. For each arrow in the graph, the two nodes it connects are in a happens-before relationship: the black arrows are derived from the program order rule, the red ones from the volatile rule (a write to a volatile field happens-before any subsequent read of it), and the blue ones from the transitivity rule.
      • Here 2 happens-before 3. By the definition of happens-before (if A happens-before B, then the result of A is visible to B and A is ordered before B), the result of operation 2 is visible to operation 3: thread B quickly perceives that thread A has set the volatile flag to true.
  • Memory state after volatile reads and writes in multiple threads?
    • Continuing with the second of the two core concepts: having examined the happens-before relationship, we now examine the memory semantics of volatile further. Suppose thread A executes writer first and thread B then executes reader. The following graph shows the state of memory after thread A’s volatile write.
      • Memory state graph for thread A after volatile writes
    • When a volatile variable is written, the shared variable in the thread’s local memory is invalidated, so thread B needs to read the latest value of the variable from main memory. The following diagram shows the memory changes of thread B reading the same volatile variable.
      • Thread B reads the memory state graph after volatile
    • Results analysis
      • Viewed as a whole, thread A and thread B are communicating. When thread A writes the volatile variable, it essentially sends a message to thread B telling it that its current values are stale; when thread B reads the volatile variable, it receives that message. Since its local copy is stale, thread B naturally has to fetch the value from main memory.

5.0.2.2 How does Volatile Work? Is a volatile int safe for multiple threads to operate on? So how do i++ thread safety?

  • What volatile does and how
    • Java code is compiled into Java bytecode, which is loaded by the classloader into the JVM, which executes the bytecode, which ultimately needs to be translated into assembly instructions for execution on the CPU.
    • Volatile is the lightweight synchronized (volatile does not cause thread context switching and scheduling) that ensures “visibility” of shared variables in multiprocessor development. Visibility means that when one thread modifies a shared variable, another thread can read the changed value.
    • Memory access is far slower than the CPU, so to improve processing speed the processor does not communicate with memory directly; instead it reads data from system memory into its internal cache before operating on it. After an ordinary shared variable is modified, it is uncertain when the new value is written back to main memory, so another thread reading it may still see the old value: visibility is not guaranteed. Writing a volatile variable, by contrast, causes the JVM to send the processor a Lock-prefixed instruction that writes the current processor’s cache line back to system memory.
  • Is a volatile int safe for multiple threads to operate on
    • unsafe
    • Example code is shown below (the printed results are omitted)
    • Volatile only guarantees visibility, not atomicity.
    • i++ is actually split into multiple steps:
      • 1) read the value of i;
      • 2) compute i + 1;
      • 3) assign the result back to i.
    • Volatile does not make these three steps one atomic action. With multiple threads, two threads may both read i at the same time, each compute i + 1, and both write back the same result, so although the +1 operation was performed twice, i only increases by 1.
    private volatile int a = 0;

    for (int x = 0; x <= 100; x++) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                a++; // not atomic, even though a is volatile
                Log.e("Thread-------------", "" + a);
            }
        }).start();
    }
  • How can i++ thread be safe
    • Use the atomic classes under the java.util.concurrent.atomic package, such as AtomicInteger. They are implemented with CAS spin operations to update the value.
    private final AtomicInteger atomicInteger = new AtomicInteger(0);

    for (int x = 0; x <= 100; x++) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                // incrementAndGet performs an atomic i++ via a CAS spin loop;
                // note the counter must be shared, not created per thread
                int i = atomicInteger.incrementAndGet();
                Log.e("Thread-------------", "" + i);
            }
        }).start();
    }

5.0.2.3 What can be done in the Java memory model to ensure atomicity, visibility, and order of concurrent processes?

  • Atomicity: An operation is either performed at all or not at all.
    • The atomic variable operations guaranteed directly by the model are read, load, assign, use, store, and write, so reads and writes of basic data types can be considered atomic.
    • To ensure a wider range of atomicity, lock and unlock are used implicitly via the higher-level bytecode instructions monitorenter and monitorexit, which appear in Java code as the synchronized keyword.
  • Visibility: When one thread changes the value of a shared variable, other threads are immediately aware of the change.
    • This is achieved by synchronizing the new value back to main memory after the variable is modified and refreshing the value from main memory before the variable is read.
    • Three keywords are provided to ensure visibility: volatile ensures that new values are immediately synchronized to main memory and flushed from main memory immediately before each use; Synchronized can synchronize a variable back into main memory before performing unlock; Fields modified by final can be seen in other threads once initialized and no reference to this is passed by the constructor.
  • Ordering: the program appears to execute in the order of its code.
    • If observed in this thread, all operations are ordered, meaning that “threads behave as serial semantics”; If you observe another thread in one thread, all operations are out of order, referring to the phenomenon of “instruction reordering” and “working memory and main memory synchronization delay”.
    • Two keywords are provided to ensure order: volatile inherently contains semantics that prohibit instruction reordering; Synchronized guarantees that a variable can only be locked by one thread at a time, so that two synchronized blocks holding the same lock can only enter serially.
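A minimal sketch tying the three properties together, assuming nothing beyond the synchronized keyword (class name `AtomicityDemo` is ours): the monitor makes each count++ a single atomic read-modify-write, and the unlock/lock pair also gives the visibility and ordering guarantees described above.

```java
public class AtomicityDemo {
    private int count = 0;

    // synchronized compiles to monitorenter/monitorexit around the method body,
    // making the read-modify-write of count atomic and its result visible
    public synchronized void increment() {
        count++;
    }

    public synchronized int getCount() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        final AtomicityDemo demo = new AtomicityDemo();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int j = 0; j < 1000; j++) {
                        demo.increment();
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(demo.getCount()); // always 4000: no increment is lost
    }
}
```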

Other information

01. About blog summary links

  • 1. Tech blog round-up
  • 2. Open source project summary
  • 3. Life Blog Summary
  • 4. Himalayan audio summary
  • 5. Other summaries

02. About my blog

  • My personal websites: www.yczbj.org, www.ycbjie.cn
  • GitHub: github.com/yangchong21…
  • Zhihu: www.zhihu.com/people/yang…
  • Jianshu: www.jianshu.com/u/b7b2c6ed9…
  • CSDN: my.csdn.net/m0_37700275
  • Ximalaya: www.ximalaya.com/zhubo/71989…
  • OSChina: my.oschina.net/zbj1618/blo…
  • Jcodecraeer: www.jcodecraeer.com/member/cont.
  • Email: [email protected]
  • Aliyun blog: yq.aliyun.com/users/artic… 239.headeruserinfo.3.dT4bcV
  • Segmentfault: segmentfault.com/u/xiangjian…
  • Juejin: juejin.cn/user/197877…