A class is thread-safe if it behaves correctly no matter how multiple threads access it, without any additional synchronization in the calling code.

Thread safety can be implemented in the following ways:

Immutable

Immutable objects are inherently thread-safe and need no thread-safety safeguards. As long as an immutable object is constructed correctly, it can never be observed in an inconsistent state by any thread. In multithreaded environments, objects should be made immutable wherever possible to help ensure thread safety.
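As an illustration only (the Point class below is a made-up example, not from the original text), a minimal sketch of a hand-written immutable class: the class and its fields are final, there are no setters, and any "modification" returns a new object instead of mutating the existing one.

public final class Point {

    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Modification" returns a new object rather than mutating this one.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}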

Immutable types include:

Primitive data types declared with the final keyword

String

Enumerated types

Number subclasses such as the numeric wrapper types Long and Double, and the arbitrary-precision types BigInteger and BigDecimal. Note, however, that AtomicInteger and AtomicLong, although also subclasses of Number, are mutable.

For collection types, you can use the Collections.unmodifiableXXX() methods to obtain an unmodifiable collection.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class ImmutableExample {

    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        Map<String, Integer> unmodifiableMap = Collections.unmodifiableMap(map);
        unmodifiableMap.put("a", 1);
    }
}

Exception in thread "main" java.lang.UnsupportedOperationException
    at java.util.Collections$UnmodifiableMap.put(Collections.java:1457)
    at ImmutableExample.main(ImmutableExample.java:9)

Collections.unmodifiableXXX() returns a wrapper around the original collection; every method that would modify the collection simply throws an exception.

public V put(K key, V value) {
    throw new UnsupportedOperationException();
}

Mutual exclusion synchronization

synchronized and ReentrantLock.
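A minimal sketch of the two mechanisms (the Counter class and its fields are invented for illustration): synchronized is built into the language, while ReentrantLock from java.util.concurrent.locks must be released explicitly, typically in a finally block.

import java.util.concurrent.locks.ReentrantLock;

public class Counter {

    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    // Mutual exclusion via the synchronized keyword.
    public synchronized void incrementWithSynchronized() {
        count++;
    }

    // Mutual exclusion via ReentrantLock; unlock in finally so the lock
    // is always released, even if the critical section throws.
    public void incrementWithLock() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }
}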

Non-blocking synchronization

The main problem with mutual exclusion synchronization is the performance cost of blocking and waking up threads, which is why it is also called blocking synchronization.

Mutual exclusion synchronization is a pessimistic concurrency strategy: it assumes that if you do not perform proper synchronization, problems are bound to occur. Whether or not the shared data is actually contended, it incurs locking (conceptually; the virtual machine optimizes away a large portion of unnecessary locking), user-mode to kernel-mode transitions, lock-counter maintenance, and checks for blocked threads that need to be woken up.

1. CAS

As hardware instruction sets have evolved, we can instead use an optimistic concurrency strategy based on conflict detection: perform the operation first; if no other thread contends for the shared data, the operation succeeds, otherwise take a compensating measure (most commonly, retry until it succeeds). Because many implementations of this optimistic strategy do not need to block threads, this kind of synchronization is called non-blocking synchronization.

Optimistic locking requires the operation and the conflict detection to be atomic as a single step. This can no longer be guaranteed by mutual exclusion synchronization; it has to be provided by the hardware. The most typical hardware-supported atomic operation is compare-and-swap (CAS). A CAS instruction takes three operands: the memory address V, the expected old value A, and the new value B. When the instruction executes, the value at V is updated to B only if it is still equal to A.
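As a sketch of how this looks at the Java level (the class and method names below are made up for illustration), AtomicInteger.compareAndSet() exposes exactly this "update only if the current value still equals the expected value" semantics, and a failed CAS is simply retried in a loop:

import java.util.concurrent.atomic.AtomicInteger;

public class CasExample {

    private final AtomicInteger value = new AtomicInteger(0);

    // Double the current value using a CAS retry loop.
    public void doubleValue() {
        int oldValue;
        int newValue;
        do {
            oldValue = value.get();   // read the expected old value A
            newValue = oldValue * 2;  // compute the new value B
        } while (!value.compareAndSet(oldValue, newValue)); // retry on conflict
    }
}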

2. AtomicInteger

The methods of AtomicInteger, the atomic integer class in the J.U.C package, use the CAS operations of the Unsafe class.

The following code uses AtomicInteger to perform the increment operation.

private AtomicInteger cnt = new AtomicInteger();

public void add() {
    cnt.incrementAndGet();
}

The following code is the source of incrementAndGet(), which calls Unsafe's getAndAddInt().

public final int incrementAndGet() {
    return unsafe.getAndAddInt(this, valueOffset, 1) + 1;
}

The following code is the source of getAndAddInt(). var1 is the object (its memory address), var2 is the offset of the field relative to that address, and var4 is the amount to add, which is 1 here. getIntVolatile(var1, var2) reads the expected old value, and compareAndSwapInt() performs the CAS comparison: if the value at the field's memory location is still equal to var5, the variable at memory address var1 + var2 is updated to var5 + var4.

As you can see, getAndAddInt() runs in a loop; on a conflict it simply retries until the CAS succeeds.

public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));

    return var5;
}

3. ABA

If a variable is first read with a value of A, then changed to B, and then changed back to A, a CAS operation will consider that it has never been changed.

The J.U.C package addresses this problem with a stamped atomic reference class, AtomicStampedReference, which guarantees the correctness of CAS by also tracking a version (stamp) of the variable's value. In most situations, the ABA problem does not affect the correctness of concurrent code; if it really has to be solved, switching to traditional mutual exclusion synchronization may be more efficient than the atomic classes.
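A minimal sketch of AtomicStampedReference (the values below are arbitrary and chosen from the small-integer cache, since compareAndSet compares references): each compareAndSet() must match both the expected reference and the expected stamp, so a value that went A -> B -> A is still detected because the stamp has advanced.

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaExample {

    public static void main(String[] args) {
        // Initial value 1 with version stamp 0.
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);

        int stamp = ref.getStamp();

        // Simulate A -> B -> A: 1 -> 2 -> 1, bumping the stamp each time.
        // (Small integers are used because compareAndSet compares references.)
        ref.compareAndSet(1, 2, stamp, stamp + 1);
        ref.compareAndSet(2, 1, stamp + 1, stamp + 2);

        // This CAS fails: the value is 1 again, but the stamp is no longer 0.
        boolean success = ref.compareAndSet(1, 3, stamp, stamp + 1);
        System.out.println(success); // false
    }
}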

No-synchronization schemes

Being thread-safe does not necessarily require synchronization. If a method does not involve shared data at all, it naturally needs no synchronization measures to be correct.

1. Stack closure

There are no thread-safety issues when multiple threads access local variables of the same method because local variables are stored in the virtual machine stack and are thread-private.

public class StackClosedExample {

    public void add100() {
        int cnt = 0;
        for (int i = 0; i < 100; i++) {
            cnt++;
        }
        System.out.println(cnt);
    }
}

public static void main(String[] args) {
    StackClosedExample example = new StackClosedExample();
    ExecutorService executorService = Executors.newCachedThreadPool();
    executorService.execute(() -> example.add100());
    executorService.execute(() -> example.add100());
    executorService.shutdown();
}

100

100

2. Thread Local Storage

If the data needed by one piece of code must be shared with other code, check whether the code that shares the data is guaranteed to execute in the same thread. If it is, we can limit the visibility of the shared data to that single thread, so that no synchronization is needed to guarantee there is no data contention between threads.

Applications that fit this pattern are not uncommon; most architectural patterns that use consumption queues (such as the producer-consumer pattern) try to consume a product within a single thread. One of the most important applications is the thread-per-request processing of the classic Web interaction model; the widespread use of this processing model allows many Web server applications to use thread-local storage to solve their thread-safety problems.

You can use the java.lang.ThreadLocal class to implement thread-local storage.

In the following code, thread1 sets threadLocal to 1 and thread2 sets it to 2. Some time later, thread1 still reads threadLocal as 1, unaffected by thread2.

public class ThreadLocalExample {

    public static void main(String[] args) {
        ThreadLocal<Integer> threadLocal = new ThreadLocal<>();

        Thread thread1 = new Thread(() -> {
            threadLocal.set(1);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(threadLocal.get());
            threadLocal.remove();
        });

        Thread thread2 = new Thread(() -> {
            threadLocal.set(2);
            threadLocal.remove();
        });

        thread1.start();
        thread2.start();
    }
}

1

To understand ThreadLocal, look at the following code:

public class ThreadLocalExample1 {

    public static void main(String[] args) {
        ThreadLocal<Integer> threadLocal1 = new ThreadLocal<>();
        ThreadLocal<Integer> threadLocal2 = new ThreadLocal<>();

        Thread thread1 = new Thread(() -> {
            threadLocal1.set(1);
            threadLocal2.set(1);
        });

        Thread thread2 = new Thread(() -> {
            threadLocal1.set(2);
            threadLocal2.set(2);
        });

        thread1.start();
        thread2.start();
    }
}

Its corresponding underlying structure diagram is as follows:



Each Thread has a ThreadLocal.ThreadLocalMap object.

/* ThreadLocal values pertaining to this thread. This map is maintained

* by the ThreadLocal class. */

ThreadLocal.ThreadLocalMap threadLocals = null;

When you call a ThreadLocal's set(T value) method, it gets the current thread's ThreadLocalMap object and inserts the ThreadLocal-to-value key-value pair into that map.

public void set(T value) {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    if (map != null)
        map.set(this, value);
    else
        createMap(t, value);
}

The get() method is similar.

public T get() {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    if (map != null) {
        ThreadLocalMap.Entry e = map.getEntry(this);
        if (e != null) {
            @SuppressWarnings("unchecked")
            T result = (T)e.value;
            return result;
        }
    }
    return setInitialValue();
}

Strictly speaking, ThreadLocal is not designed to solve the problem of concurrent access to shared data, because with ThreadLocal there is no contention between threads in the first place.

In some scenarios (especially when thread pools are used), the underlying data structure of ThreadLocal.ThreadLocalMap can cause a ThreadLocal to leak memory, because pooled threads live for a long time and keep their maps alive. You should therefore call remove() manually after each use of a ThreadLocal, to avoid the classic risks of leaking memory or even having stale values bleed into later, unrelated business logic.
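A minimal sketch of that cleanup pattern (the ThreadLocal name and the task body are made up for illustration): remove the value in a finally block so a pooled thread does not carry it over to the next task.

public class ThreadLocalCleanupExample {

    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public void handleRequest(String user) {
        CONTEXT.set(user);
        try {
            // ... business logic that reads CONTEXT.get() ...
        } finally {
            // Always remove the value so a pooled thread does not leak it
            // into the next task it executes.
            CONTEXT.remove();
        }
    }
}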

3. Reentrant Code

Reentrant code, also known as pure code, can be interrupted at any point of its execution to run another piece of code (including a recursive call to itself), and the original program runs without error once control returns.

Reentrant code has some common characteristics: it does not rely on data stored on the heap or on shared system resources, it uses only state passed in through its parameters, and it does not call non-reentrant methods.
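A small sketch contrasting the two (the class and method names are invented for this example): the reentrant version depends only on its parameters, while the non-reentrant one reads and writes shared mutable state.

public class ReentrantExample {

    private static int lastResult = 0; // shared mutable state

    // Reentrant (pure): depends only on its parameters, keeps no state,
    // and calls no non-reentrant methods.
    public static int square(int x) {
        return x * x;
    }

    // Not reentrant: reads and writes shared state, so interleaved or
    // recursive invocations can observe each other's partial results.
    public static int squareAndRemember(int x) {
        lastResult = x * x;
        return lastResult;
    }
}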