Thread safety

If an object is thread-safe, its users never need to coordinate access to it themselves: concurrent writes, parallel reads, and other interleavings all work correctly without any extra synchronization, such as adding synchronized blocks or explicit locks.

Run result problem

Let’s start with some code:

public class WrongResult {

    private static int temp = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable r = () -> {
            // increment 10000 times
            for (int i = 0; i < 10000; i++) {
                temp++;
            }
        };
        Thread t1 = new Thread(r);
        Thread t2 = new Thread(r);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(temp);
    }
}

Code logic: temp is initialized to 0, and two threads each increment it 10,000 times, so the result should be 20000.

When you actually run this code, however, you will often see a value smaller than 20000. Not at all what we expected.

Analysis:

The temp++ operation actually consists of three steps:

  • first, read the current value;
  • second, increase it by 1;
  • third, write the new value back.

Suppose there are two threads, T1 and T2, and temp starts at 1:

(1) T1 reads temp and increases it to 2, but has not yet written the result back.

(2) At this moment a thread context switch occurs and T2 performs its own temp++. Because T1 has not written back, T2 still reads temp as 1.

(3) T2 completes all three steps and writes back, so temp becomes 2.

(4) The context switches back to T1, which writes back its own result, also 2.

So temp++ was executed twice, yet temp only went from 1 to 2: one increment was lost.
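One common fix is to make the whole read-increase-write sequence atomic. The sketch below (class and method names are illustrative, not from the original article) replaces the int counter with java.util.concurrent.atomic.AtomicInteger, whose incrementAndGet performs all three steps as a single atomic operation:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CorrectResult {

    public static int run() throws InterruptedException {
        // AtomicInteger makes each increment an indivisible read-modify-write
        AtomicInteger temp = new AtomicInteger(0);
        Runnable r = () -> {
            for (int i = 0; i < 10000; i++) {
                temp.incrementAndGet(); // atomic: no increment can be lost
            }
        };
        Thread t1 = new Thread(r);
        Thread t2 = new Thread(r);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return temp.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // always prints 20000
    }
}
```

Wrapping the loop body in a synchronized block would also work; AtomicInteger is simply the lighter-weight choice for a single counter.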

Data initialization problem

Let’s start with some code:

public static void main(String[] args) {
    List<String> list = new ArrayList<>();
    new Thread(() -> {
        list.add("1");
        list.add("2");
        list.add("3");
        list.add("4");
        list.add("5");
        list.add("6");
    }).start();
    System.out.println(list.get(5));
}

The expected result is 6, but running the code actually throws java.lang.IndexOutOfBoundsException.

Analysis:

The reason is that the main thread calls list.get(5) before the list has been fully initialized. Since the initialization thread and the main thread are two different threads with no ordering between them, the main thread may read the list while it is still empty or only partially filled.

Liveness problems

A liveness problem is a situation in which the program never makes progress and never produces a result.

Deadlock

A deadlock occurs when two threads each hold a resource the other needs and wait for each other to release it, so neither can ever proceed.

Let’s start with some code:

public static void main(String[] args) {
    Object lock1 = new Object();
    Object lock2 = new Object();

    new Thread(() -> {
        System.out.println(Thread.currentThread().getName() + "\t try to get lock1");
        synchronized (lock1) {
            System.out.println(Thread.currentThread().getName() + "\t has obtained lock1");
            System.out.println(Thread.currentThread().getName() + "\t try to get lock2\n");
            try {
                // Sleep for 2 seconds to simulate a delay
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            synchronized (lock2) {
                System.out.println(Thread.currentThread().getName() + "\t has obtained lock2");
            }
        }
    }, "T1").start();

    new Thread(() -> {
        System.out.println(Thread.currentThread().getName() + "\t try to get lock2");
        synchronized (lock2) {
            System.out.println(Thread.currentThread().getName() + "\t has obtained lock2");
            System.out.println(Thread.currentThread().getName() + "\t try to get lock1");
            synchronized (lock1) {
                System.out.println(Thread.currentThread().getName() + "\t has obtained lock1");
            }
        }
    }, "T2").start();
}

The T1 thread locks lock1 first and then lock2. T2 locks in the opposite order: lock2 first, then lock1.

Running results:

The T1 thread holds lock1 and wants to acquire lock2; the T2 thread holds lock2 and wants to acquire lock1. Each thread waits for the other to release its lock, resulting in a deadlock.
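The standard cure for this kind of deadlock is a consistent global lock order: if every thread always acquires lock1 before lock2, a circular wait cannot form. A minimal sketch under that assumption (class and field names are illustrative):

```java
public class LockOrdering {
    private static final Object lock1 = new Object();
    private static final Object lock2 = new Object();
    private static final StringBuilder log = new StringBuilder();

    // Both threads acquire lock1 first, then lock2, so no circular wait is possible.
    private static void doWork(String mark) {
        synchronized (lock1) {
            synchronized (lock2) {
                log.append(mark); // safe: both locks are held here
            }
        }
    }

    public static String run() throws InterruptedException {
        Thread t1 = new Thread(() -> doWork("A"), "T1");
        Thread t2 = new Thread(() -> doWork("B"), "T2");
        t1.start();
        t2.start();
        t1.join(); // both joins always return: no deadlock
        t2.join();
        return log.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // "AB" or "BA", but the program never hangs
    }
}
```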

Livelock

A livelock resembles a deadlock in that the program never produces a result, but in contrast to a deadlock, a livelocked thread is alive: it is not blocked and keeps running, yet it never makes progress.

As an example, suppose a message queue holds all sorts of messages waiting to be processed, and one malformed message always fails processing. The queue's retry mechanism puts the failed message back at the head of the queue for priority retry, but no matter how many times it runs, the message can never be processed correctly; each failure puts it back at the head to be retried again. The thread stays busy the whole time, yet the program never gets a result: that is a livelock.
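A common fix for this retry livelock is to cap the retry count and move hopeless messages to a dead-letter list instead of requeueing them forever. The sketch below is illustrative (all names are invented, and a message body starting with "bad" stands in for one that can never be processed):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RetryWithLimit {
    static final int MAX_RETRIES = 3;

    // Drains the queue; messages that keep failing are parked in a
    // dead-letter list after MAX_RETRIES attempts, so the loop terminates.
    public static List<String> drain(Deque<String> queue) {
        List<String> deadLetters = new ArrayList<>();
        Map<String, Integer> attempts = new HashMap<>();
        while (!queue.isEmpty()) {
            String m = queue.poll();
            boolean ok = !m.startsWith("bad"); // stand-in for real processing
            if (!ok) {
                int n = attempts.merge(m, 1, Integer::sum);
                if (n >= MAX_RETRIES) {
                    deadLetters.add(m);  // give up and park it for inspection
                } else {
                    queue.addFirst(m);   // priority retry at the head, as in the scenario above
                }
            }
        }
        return deadLetters;
    }

    public static void main(String[] args) {
        Deque<String> q = new ArrayDeque<>(List.of("ok-1", "bad-1", "ok-2"));
        System.out.println(drain(q)); // [bad-1], and the loop terminates instead of livelocking
    }
}
```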

Starvation

Starvation is the problem of a thread being unable to run because it cannot obtain the resources it needs, especially CPU time.

One cause is a priority that is too low, so the thread is not scheduled onto the CPU for a long time.

Another cause is that the lock on a resource is held by one thread for a long time, so the threads that want that resource can never acquire the lock and keep waiting, which is similar to starvation.

Which scenarios need to be considered for thread safety

Access shared variables or resources

Typical scenarios include accessing fields of a shared object, accessing static variables, accessing a shared cache, and so on. Because such data can be accessed by multiple threads at the same time, not just one, concurrent reads and writes can cause thread-safety problems.

Take, for example, the run-result problem above.

Operations that have dependencies

As anyone familiar with the singleton pattern should know, one common way of writing it is thread-unsafe:

public class Singleton {
    private static Singleton singleton;

    private Singleton() {}

    public static Singleton getInstance() {
        if (singleton == null) {
            singleton = new Singleton();
        }
        return singleton;
    }
}

Analysis:

Suppose there are two threads, T1 and T2, and the singleton is null and has not been initialized.

(1) T1 executes if (singleton == null); the check is true, so T1 enters the branch to construct the object.

(2) Note that, for some reason (a thread context switch, etc.), T1 has not yet executed the constructor, so singleton is still null.

(3) T2 now executes if (singleton == null); singleton is still null, so T2 also enters the branch and constructs an object.

(4) The result is that T1 and T2 each construct an object. Is this still a singleton?

So dependent operations must be thread-safe!

The dependency here is that the object is constructed only when singleton is null.
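The standard thread-safe version of this pattern is double-checked locking with a volatile field; a minimal sketch (the class name is illustrative):

```java
public class SafeSingleton {
    // volatile prevents reordering between constructing the object
    // and publishing the reference, so no thread can see a half-built instance
    private static volatile SafeSingleton instance;

    private SafeSingleton() {}

    public static SafeSingleton getInstance() {
        if (instance == null) {                   // first check, without the lock
            synchronized (SafeSingleton.class) {
                if (instance == null) {           // second check, under the lock
                    instance = new SafeSingleton();
                }
            }
        }
        return instance;
    }
}
```

The second check is what removes the dependency race: even if two threads pass the first check together, only one can be inside the synchronized block at a time, and the loser re-checks and finds the instance already built.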

Thread-unsafe classes

Some classes in the JDK are not thread-safe, such as ArrayList, HashMap, and StringBuilder, so be careful when using them in a concurrent environment. If necessary, switch to their thread-safe counterparts, such as CopyOnWriteArrayList, ConcurrentHashMap, and StringBuffer.
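As a quick illustration of such a swap, the sketch below (names are illustrative) counts hits from two threads with a ConcurrentHashMap; its merge method updates each key atomically, whereas the same code on a plain HashMap could lose updates or corrupt internal state:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SafeMapDemo {

    public static int count() throws InterruptedException {
        Map<String, Integer> hits = new ConcurrentHashMap<>();
        Runnable r = () -> {
            for (int i = 0; i < 10000; i++) {
                hits.merge("key", 1, Integer::sum); // atomic per-key read-modify-write
            }
        };
        Thread t1 = new Thread(r);
        Thread t2 = new Thread(r);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return hits.get("key");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(count()); // always 20000 with ConcurrentHashMap
    }
}
```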

Performance issues

Isn’t the main reason we use multithreading to improve performance? If multiple threads working at the same time speed up the application, why would they cause performance problems?

This is because a single-threaded program works alone and needs no interaction with other threads, whereas multiple threads require scheduling and cooperation, and that scheduling and cooperation bring performance overhead of their own.

Scheduling overhead

Thread context switch

In actual development, the number of threads is often greater than the actual number of CPU cores. In this case, the operating system will allocate time slices to each thread according to a certain scheduling algorithm, so that each thread has a chance to run. Context switching occurs during scheduling.

If each task is very short, such as a simple calculation, the performance cost of a context switch may exceed the cost of executing the task itself.

If a program frequently contends for locks, or blocks frequently on IO reads and writes, it will require more context switches and therefore more overhead; this should be avoided.

Collaboration overhead

When threads share data, ensuring thread safety requires extra work: preventing compiler and CPU optimizations such as instruction reordering, and repeatedly flushing data from a thread's working memory to main memory and then from main memory into other threads' working memory, and so on.

The volatile keyword:

(1) guarantees visibility; (2) forbids instruction reordering; (3) does not guarantee atomicity.
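The visibility guarantee of point (1) can be seen with a stop flag. A minimal sketch (class and method names are illustrative), where the worker spins on a volatile boolean and the main thread flips it:

```java
public class VolatileFlag {
    // Without volatile, the worker could cache `running` and never see the update
    private static volatile boolean running = true;

    public static boolean runAndStop() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // spin: each iteration re-reads the volatile flag
            }
        });
        worker.start();
        Thread.sleep(100);
        running = false;   // this write becomes visible to the worker
        worker.join(2000); // should return well before the 2-second timeout
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAndStop()); // true: the worker saw the flag change and exited
    }
}
```

Note that volatile only covers visibility and ordering: a compound update such as running++ would still need a lock or an atomic class, which is exactly point (3) above.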

Thread safety takes priority over raw speed, so the extra work needed to guarantee it indirectly degrades performance.