• Java’s memory model and thread scheduling mechanism

    The Java Memory Model (JMM) is defined by the Java Virtual Machine specification to mask differences in memory access across different hardware and operating systems, so that Java programs behave consistently on all platforms.

    Each thread running in the Java Virtual Machine has its own thread stack, which holds information about the current execution point of each method the thread has called. A thread can only access its own thread stack: local variables created by one thread are invisible to every other thread. Even if two threads execute exactly the same code, each still creates that code's local variables in its own thread stack. Therefore, each thread has its own private copy of every local variable.

  • Threads come in two types: daemon threads and user threads

    • Daemon threads are usually used to perform unimportant background tasks, such as monitoring the health of other threads; the GC thread is a daemon thread
    • setDaemon() must be called before the thread starts; otherwise the JVM throws an IllegalThreadStateException
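
    A minimal sketch of this rule (the monitoring task is hypothetical):

    public class DaemonDemo {
        public static void main(String[] args) {
            Thread monitor = new Thread(() -> {
                while (true) { /* periodically check the health of other threads */ }
            });
            monitor.setDaemon(true); // must be set before start()
            monitor.start();         // calling setDaemon() after start() would throw IllegalThreadStateException
            // main exits immediately; the JVM does not wait for daemon threads
        }
    }
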
  • Priority of a thread

    The Java thread priority ranges from 1 to 10. The default value is 5.

  • Six methods for threads

    Three instance (non-static) methods: start(), run(), and join(); and three static methods: currentThread(), yield(), and sleep()

    1. The join() method is used to wait for another thread to finish executing. If thread A calls thread B’s join() method, thread A waits until thread B finishes running

    2. The yield() method is a static method that makes the current thread give up the processor, roughly equivalent to temporarily lowering the thread’s priority; the scheduler is free to ignore the hint
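
    A minimal join() sketch (the class and messages are illustrative):

    public class JoinDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread b = new Thread(() -> System.out.println("B finishes its work"));
            b.start();
            b.join(); // the calling thread (A, here main) blocks until B terminates
            System.out.println("A continues only after B is done");
        }
    }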

  • Thread state

    There are six states of a thread: NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, and TERMINATED

    1. Blocked state. A thread enters the blocked state when it (see the sketch after this list):

    • Initiates a blocking I/O operation
    • Requests a lock held by another thread
    • Fails to enter a synchronized method or block
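
    A small sketch that drives a thread into the BLOCKED state by contending for a monitor held by main (the class name and the 100 ms sleep are illustrative):

    public class BlockedDemo {
        public static void main(String[] args) throws InterruptedException {
            final Object lock = new Object();
            Thread t = new Thread(() -> {
                synchronized (lock) {
                    // not reached while main holds the lock
                }
            });
            synchronized (lock) {
                t.start();
                Thread.sleep(100);                // give t time to reach the contended monitor
                System.out.println(t.getState()); // prints BLOCKED
            }
        }
    }
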
  • Java memory model

    The Java Memory Model (JMM) specifies that all variables are stored in main memory and that each thread has its own working memory.

  • The cache

    Modern processors far outstrip DRAM: in the time it takes to complete a single main-memory read or write, a processor can execute hundreds of instructions. To bridge this gap between the processor and main memory, hardware designers place caches between them.

    A cache is a small, hardware-implemented hash table whose keys are memory addresses and whose values are copies of memory data, or data waiting to be written back to memory.

    Structurally, a cache is like a chained hash table: it consists of buckets, each containing cache entries.

  • Thread scheduling and security issues

    The JVM uses a preemptive scheduling model: the thread with the highest priority occupies the CPU first, and if several threads share the same priority, one of them is chosen at random to occupy the CPU

    Thread-safety problems stem from data races; three properties ensure thread safety: atomicity, visibility, and ordering

    Atomicity: an atomic operation cannot be divided. Whether on a multi-core or single-core machine, only one thread can perform an atomic operation on a value at a time; in other words, the operation cannot be interrupted by the thread scheduler part-way through. For example, a++ is not atomic (it is a read-modify-write), while a = 1 is atomic.
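
    A small sketch showing that a++ is not atomic (the counts are illustrative; the printed result is typically less than 20,000 because increments are lost):

    public class AtomicityDemo {
        static int a = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 10_000; i++) a++; // read-modify-write, not atomic
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(a); // usually < 20000
        }
    }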

    Visibility: when one thread modifies the value of a shared variable, other threads can see the change in a timely manner

    Ordering (reordering): processors and compilers perform code optimizations, such as instruction reordering, that improve performance without affecting the correctness of single-threaded programs, but that can break the correctness of multithreaded programs and cause thread-safety issues.

  • Thread safety

    Common approaches to thread safety are locks and atomic types; the locking mechanisms can be classified as internal locks, explicit locks, read/write locks, and volatile

  • The internal lock

    1. Because thread synchronization with synchronized is implemented through a monitor, internal locks are also called monitor locks

    2. The internal lock uses an unfair scheduling policy by default, which avoids the extra context-switch overhead that strict fairness would add (see the sketch below).
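
    A minimal sketch of the internal lock, both as a synchronized method and as a synchronized block (the counter is illustrative):

    public class SyncCounter {
        private int count = 0;

        // the monitor (internal lock) of `this` guards count
        public synchronized void increment() {
            count++;
        }

        public void decrement() {
            synchronized (this) { // same monitor, taken as a block
                count--;
            }
        }
    }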

  • The explicit lock

    An explicit lock is an instance of the Lock interface, which abstracts explicit locking; ReentrantLock is its implementation class

    • Reentrant: the lock is reentrant, meaning a thread that already holds the lock can successfully acquire it again
    • Manual acquire/release: the difference between an explicit lock and an internal lock is that we acquire and release an explicit lock ourselves. To avoid lock leakage, release the lock in a finally block (see the sketch after this list).
    • Critical section: the code between the lock() and unlock() calls is the explicit lock’s critical section
    • Fair/unfair lock: an explicit lock lets us choose the lock-scheduling strategy. ReentrantLock has a constructor that takes a fairness flag; when true, a fair lock is created. Since fair locks cost more than unfair locks, ReentrantLock’s default scheduling policy is unfair.
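
    A minimal explicit-lock sketch following the points above (the counter class is illustrative):

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class ExplicitLockCounter {
        private final Lock lock = new ReentrantLock(); // pass true for a fair lock
        private int count = 0;

        public void increment() {
            lock.lock();
            try {
                count++; // critical section: code between lock() and unlock()
            } finally {
                lock.unlock(); // always release in finally to avoid lock leakage
            }
        }
    }
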
  • Read-write lock

    The implementation class of the ReadWriteLock interface is ReentrantReadWriteLock, which contains two static inner classes, ReadLock and WriteLock. A thread that only reads shared variables is called a reader thread, and a thread that updates shared variables is called a writer thread
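
    A minimal read-write lock sketch (the class is illustrative):

    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class SharedValue {
        private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
        private int value;

        public int read() {                 // reader threads share the read lock
            rwLock.readLock().lock();
            try { return value; } finally { rwLock.readLock().unlock(); }
        }

        public void write(int v) {          // writer threads take the exclusive write lock
            rwLock.writeLock().lock();
            try { value = v; } finally { rwLock.writeLock().unlock(); }
        }
    }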

  • Reentrant and non-reentrant locks

    Reentrant locks are held per thread rather than per invocation: once a thread acquires the object lock, that same thread can acquire it again while other threads cannot. Both synchronized and ReentrantLock are reentrant, which prevents a thread from deadlocking against itself

    Reentrancy is implemented by associating each lock with a request counter and the thread that owns it. When the count is zero, the lock is free; when a thread requests a free lock, the JVM records the owner and sets the counter to 1. If the same thread requests the lock again, the counter is incremented; each time the owning thread exits a synchronized block, the counter is decremented, and when it reaches zero the lock is released.

    With a non-reentrant lock, once a thread acquires the lock, any attempt to re-enter a method guarded by it blocks until the lock is released, even from the same thread
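
    A small sketch of reentrancy with synchronized (method names are illustrative):

    public class ReentrancyDemo {
        public synchronized void outer() {
            inner(); // the same thread re-acquires the monitor it already holds (count 1 -> 2)
        }

        public synchronized void inner() {
            // with a non-reentrant lock, the call from outer() would deadlock here
        }
    }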

  • Pessimistic locks and optimistic locks

    A broad classification of locks is pessimistic versus optimistic. Pessimistic locking and optimistic locking are not specific lock implementations but two different strategies for handling concurrency.

    A pessimistic lock assumes the worst: every time it reads data, it assumes someone else will modify it, so it locks the data on every access. Anyone else who wants the data then blocks until the pessimistic lock is released.

    An optimistic lock assumes the best: every time it reads data, it assumes no one else will modify it, so it takes no lock. When it wants to update the data, it first checks whether anyone else changed the data between the read and the update. If so, it reads again and retries the update, repeating these steps until the update succeeds.

    **Pessimistic locks block; optimistic locks roll back and retry**

  • CAS

    CAS (compare-and-swap) allows multiple threads to read simultaneously (there is no lock at all), but only one thread succeeds in updating the data; the other threads roll back and retry. CAS guarantees atomicity at the hardware level through CPU instructions.
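
    A minimal sketch of the CAS retry pattern using AtomicInteger.compareAndSet (AtomicInteger already provides incrementAndGet, which does this internally; the manual loop just makes the pattern visible):

    import java.util.concurrent.atomic.AtomicInteger;

    public class CasCounter {
        private final AtomicInteger value = new AtomicInteger();

        public int increment() {
            for (;;) {
                int current = value.get();
                int next = current + 1;
                // succeeds only if no other thread changed the value in between;
                // otherwise we "roll back and retry" optimistically
                if (value.compareAndSet(current, next)) {
                    return next;
                }
            }
        }
    }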

  • Biased lock → lightweight lock → heavyweight lock

    When a synchronized block is executed for the first time, the lock object becomes a biased lock (the lock flag bits in the object header are modified via CAS). After the synchronized block finishes, the thread does not actively release the biased lock. When the same synchronized block is reached a second time, the thread checks whether the lock’s owner is itself (the owning thread’s ID is also stored in the object header), and if so it simply proceeds; since the lock was never released, there is no need to re-acquire it. If only one thread ever uses the lock, biased locking adds almost no overhead and performs well.

    As soon as a second thread contends for the lock, the biased lock is upgraded to a lightweight lock, and threads that fail to grab the lock spin: they loop repeatedly, checking whether the lock has become available. Long periods of spinning are very resource-intensive; one thread holding the lock while the others burn CPU doing no useful work is called busy-waiting. If multiple threads use the same lock but there is little or no contention, synchronized uses the lightweight lock, trading a short period of busy-waiting for the cost of switching threads between user mode and kernel mode. The spinning is bounded (a counter records the number of spins, 10 iterations by default, adjustable via JVM parameters). If a thread reaches the maximum spin count, the lightweight lock is upgraded to a heavyweight lock (again CAS changes the lock flag bits, but the owning thread ID is left unchanged). When a later thread tries to acquire the lock and finds it held as a heavyweight lock, it is suspended (instead of busy-waiting) and waits to be woken up later.

    A lock can only be upgraded along biased lock → lightweight lock → heavyweight lock **(this is also called lock inflation)**; it cannot be downgraded.

  • The Volatile keyword

    Volatile literally means “subject to change”. Reads and writes of a volatile variable always go through the cache or main memory; the value is never held in a register

    Volatile costs less than locking, so it is also called a lightweight lock

    Reading a volatile variable is more expensive than reading a regular variable, because the value must be loaded from the cache or main memory on every read and cannot be kept temporarily in a register

    A volatile read acts as an acquire (load) barrier, and a volatile write acts as a release (store) barrier

    Volatile guarantees visibility and ordering, and atomicity only for single reads and writes of the variable itself (compound operations such as i++ remain non-atomic)

    Volatile variables disallow instruction reordering optimization
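
    A minimal sketch of volatile used as a stop flag (class and method names are illustrative):

    public class StoppableWorker implements Runnable {
        // without volatile, the worker thread might cache `running` in a register
        // and never observe the writer thread's update
        private volatile boolean running = true;

        @Override
        public void run() {
            while (running) {
                // do work
            }
        }

        public void stop() {
            running = false; // immediately visible to the worker thread
        }
    }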

  • Atomic types

    JUC ships an atomic package containing a set of atomic classes. Using the methods of these classes keeps threads safe without locking; internally they achieve thread safety at the hardware level through the CAS operations of the Unsafe class. The package includes AtomicInteger, AtomicBoolean, AtomicReference, AtomicReferenceFieldUpdater, etc.

  • Thread blocking and waking

    Threads can be blocked and woken with Object’s wait()/notify()/notifyAll() and with the Condition interface’s await()/signal()/signalAll().

    An instance of Condition can be obtained with lock.newCondition().
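
    A minimal Condition sketch (the BoundedBuffer class and its counter are illustrative):

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class BoundedBuffer {
        private final Lock lock = new ReentrantLock();
        private final Condition notEmpty = lock.newCondition();
        private int items = 0;

        public void put() {
            lock.lock();
            try {
                items++;
                notEmpty.signal(); // wake one thread waiting on this condition
            } finally { lock.unlock(); }
        }

        public void take() throws InterruptedException {
            lock.lock();
            try {
                while (items == 0) {
                    notEmpty.await(); // releases the lock and waits, like Object.wait()
                }
                items--;
            } finally { lock.unlock(); }
        }
    }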

    CountDownLatch allows one or more threads to wait until a specific set of operations completes before continuing.
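
    A small CountDownLatch sketch (the worker count and task body are illustrative):

    import java.util.concurrent.CountDownLatch;

    public class LatchDemo {
        public static void main(String[] args) throws InterruptedException {
            final int workers = 3;
            final CountDownLatch latch = new CountDownLatch(workers);
            for (int i = 0; i < workers; i++) {
                new Thread(() -> {
                    // ... do initialization work ...
                    latch.countDown(); // one piece of work finished
                }).start();
            }
            latch.await(); // the main thread blocks until the count reaches zero
            System.out.println("All workers finished");
        }
    }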

    CyclicBarrier can be used when multiple threads need to wait for each other to reach a certain point in their code.

    A thread that waits with CyclicBarrier.await() is called a party; every thread that calls CyclicBarrier.await() is suspended except the last one to arrive, whose call trips the barrier.

    Unlike CountDownLatch, CyclicBarrier is reusable: once one round of waiting completes, another round can begin

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;

    public class ClimbDemo {

        static final int parties = 3;
        static final Runnable barrierAction = new Runnable() {
            @Override
            public void run() {
                System.out.println("The men are gathered, and the climb begins.");
            }
        };
        static final CyclicBarrier barrier = new CyclicBarrier(parties, barrierAction);

        public void tryCyclicBarrier() {
            firstDayClimb();
            secondDayClimb();
        }

        private void firstDayClimb() {
            new PartyThread("First day of climbing, Lao Li comes first.").start();
            new PartyThread("Lao Wang has arrived, but Xiao Zhang has not.").start();
            new PartyThread("Here comes Xiao Zhang.").start();
        }

        private void secondDayClimb() {
            new PartyThread("Climb the mountain the next day, Lao Wang first.").start();
            new PartyThread("Xiao Zhang is here, Lao Li is not here yet.").start();
            new PartyThread("Here comes Lao Li.").start();
        }

        public static class PartyThread extends Thread {
            private final String content;

            public PartyThread(String content) {
                this.content = content;
            }

            @Override
            public void run() {
                System.out.println(content);
                try {
                    // block until all parties have arrived; the last arrival
                    // trips the barrier and runs barrierAction
                    barrier.await();
                } catch (BrokenBarrierException | InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    // Possible output (thread interleaving may vary):
    // First day of climbing, Lao Li comes first.
    // Lao Wang has arrived, but Xiao Zhang has not.
    // Here comes Xiao Zhang.
    // The men are gathered, and the climb begins.
    // Climb the mountain the next day, Lao Wang first.
    // Xiao Zhang is here, Lao Li is not here yet.
    // Here comes Lao Li.
    // The men are gathered, and the climb begins.
  • ConcurrentHashMap

    Key points: read/write separation via segment locks with locking on read failure (JDK 5-7), volatile reads plus CAS writes (JDK 7-8), and weak consistency: a newly added element may not be immediately visible to readers, and traversal after clear() may still observe elements.
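
    A small usage sketch, assuming JDK 8+ (the key name is illustrative); merge() performs an atomic read-modify-write without external locking:

    import java.util.concurrent.ConcurrentHashMap;

    public class MapDemo {
        public static void main(String[] args) {
            ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
            counts.merge("requests", 1, Integer::sum); // atomic even under contention
            counts.merge("requests", 1, Integer::sum);
            System.out.println(counts.get("requests")); // 2
        }
    }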

  • What are the guidelines for using threads

    1. Do not create threads directly

    2. Provide a basic thread pool for each line of business to use

    3. Prefer an asynchronous approach where possible

    4. Threads must be given meaningful names

    5. Pay attention to priority settings

  • HandlerThread

    HandlerThread executes tasks serially on its internal queue, which makes it suitable for long-lived scenarios where tasks are continuously taken from a queue and executed (see the sketch below)
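
    A minimal HandlerThread sketch on Android (class and method names are illustrative):

    import android.os.Handler;
    import android.os.HandlerThread;

    public class WorkerQueue {
        private final HandlerThread thread = new HandlerThread("worker-thread");
        private final Handler handler;

        public WorkerQueue() {
            thread.start();                            // must start before asking for its Looper
            handler = new Handler(thread.getLooper());
        }

        public void submit(Runnable task) {
            handler.post(task); // tasks run one at a time, in order, on the worker thread
        }

        public void quit() {
            thread.quitSafely(); // stop the looper once pending messages are handled
        }
    }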

  • The thread pool

    1. The significance of thread pools

    Thread pools reduce the number of threads created and destroyed: threads are reused, which avoids the overhead of thread creation, and tasks can be executed concurrently. The number of worker threads in the pool can be adjusted to the system’s capacity.

    2. The top-level thread pool interface in Java is Executor.

    The main classes and interfaces:

    ExecutorService – the actual thread pool interface
    ScheduledExecutorService – similar to Timer/TimerTask; for problems that require repeated execution
    ThreadPoolExecutor – the default implementation of ExecutorService
    ScheduledThreadPoolExecutor – a ThreadPoolExecutor that implements the ScheduledExecutorService interface, used to schedule periodic tasks

    3. Java provides several factory methods that create default thread pools

    • 1. newSingleThreadExecutor

      Creates a single-threaded pool in which only one worker thread runs; all tasks wait in a queue and execute serially.

    • 2.newFixedThreadPool

      Creates a fixed-size thread pool: a new thread is created each time a task is submitted until the pool reaches its maximum size, after which the pool size remains constant.

    • 3.newCachedThreadPool

      Creates a cacheable thread pool. If the pool grows larger than the number of threads needed to process the tasks, idle threads are reclaimed; as the number of tasks increases, the pool adds new threads to process them. The pool itself places no limit on its size, which depends entirely on the maximum number of threads the operating system (or JVM) can create.

    • 4.newScheduledThreadPool

      Creates a thread pool of unbounded size that supports scheduled and periodic task execution

    4. Explain the parameters of ThreadPoolExecutor

    Method signature:

     public ThreadPoolExecutor(int corePoolSize,
                               int maximumPoolSize,
                               long keepAliveTime,
                               TimeUnit unit,
                               BlockingQueue<Runnable> workQueue,
                               ThreadFactory threadFactory,
                               RejectedExecutionHandler handler) { ... }

    corePoolSize – the number of core threads kept in the pool, even when they are idle

    maximumPoolSize – the maximum number of threads allowed in the pool

    keepAliveTime – the maximum time an idle thread is kept alive when the thread count exceeds the core size

    unit – the time unit for keepAliveTime

    workQueue – the queue used to hold tasks before they execute; it holds only the Runnable tasks submitted via the execute method

    threadFactory – the factory used to create new threads

    handler – the handler invoked when execution is rejected because the thread bounds and queue capacity are exceeded
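
    A construction sketch using the parameters above (all sizing values are illustrative):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class PoolDemo {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2,                                // corePoolSize: kept alive even when idle
                    4,                                // maximumPoolSize
                    60L, TimeUnit.SECONDS,            // keepAliveTime + unit for excess idle threads
                    new ArrayBlockingQueue<>(100),    // workQueue for pending Runnables
                    r -> new Thread(r, "biz-worker"), // threadFactory: give threads a name
                    new ThreadPoolExecutor.AbortPolicy()); // handler: throw when saturated

            pool.execute(() -> System.out.println(Thread.currentThread().getName()));
            pool.shutdown(); // stop accepting new tasks, let queued ones finish
        }
    }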

    5. Shutting down the thread pool

    shutdown(): stops accepting new tasks and lets already-submitted tasks finish. shutdownNow(): attempts to stop all tasks, whether they are executing or not, and returns the tasks that were awaiting execution.

  • Several ways to create threads

    1. Extend the Thread class

    2. Implement the Runnable interface

    3. Create threads with Callable and Future

    (1) Create an implementation class of the Callable interface and implement the call() method, which acts as the thread’s execution body and returns a value.

    (2) Wrap the Callable object in a FutureTask, which encapsulates the return value of the Callable’s call() method.

    (3) Use the FutureTask as the target of a Thread object to create and start a new thread

    (4) Call FutureTask’s get() method to obtain the return value after the child thread finishes executing
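
    A sketch of the four steps above (the summing task is illustrative):

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.FutureTask;

    public class CallableDemo {
        public static void main(String[] args) throws InterruptedException, ExecutionException {
            // (1) a Callable whose call() is the thread body and returns a value
            Callable<Integer> task = () -> {
                int sum = 0;
                for (int i = 1; i <= 100; i++) sum += i;
                return sum;
            };
            // (2) wrap the Callable in a FutureTask
            FutureTask<Integer> futureTask = new FutureTask<>(task);
            // (3) a FutureTask is a Runnable, so it can be the target of a Thread
            new Thread(futureTask).start();
            // (4) get() blocks until call() finishes and returns its result
            System.out.println(futureTask.get()); // 5050
        }
    }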

This is a summary of multithreading knowledge, recorded for my own reference. If I have misunderstood anything, corrections are welcome. Thank you; let’s grow and push forward together.