
For the full series, see www.codercc.com

Concurrent programming has always seemed unfathomable to beginners, so I wanted to write something down to deepen my own understanding of it. Why is concurrency needed? Every coin has two sides: what are the trade-offs, that is, the disadvantages of concurrent programming? And what concepts should you know and master when doing concurrent programming? This article addresses these three questions.

1. Why concurrency

Hardware has advanced at the rapid pace famously summarized by Moore's Law. It may seem odd to bring up hardware when discussing concurrent programming, but the rise of multi-core CPUs is precisely what provides the hardware foundation for concurrency. Moore's Law is not a law of nature or physics, but a prediction of the future based on observation. At the predicted rate, computing power would grow exponentially, and we would soon have enormous processing power. Then, in 2004, Intel announced it was postponing its plans for a 4GHz chip until 2005, and in the fall of 2004 it canceled the 4GHz plans altogether. Moore's Law, after holding for decades, had run into a wall. Hardware engineers did not stop there, however: instead of pursuing ever-faster single computing units, they integrated multiple computing units onto one chip, producing multi-core CPUs. Within just over a decade, home CPUs such as the Intel i7 reached four or even eight cores, and professional servers typically carry several independent CPUs with up to eight cores each. In this sense, Moore's Law seems to live on through the growth in core counts. Against this multi-core background, concurrent programming has become a trend: through concurrency, the computing power of a multi-core CPU can be fully exploited and performance improved.

Donald Ervin Knuth, a top computer scientist, has this to say about the situation: "It seems to me that this phenomenon is more or less the result of hardware designers running out of ideas and putting the blame for Moore's Law on software developers."

In addition, some business scenarios are inherently well suited to concurrent programming. Take image processing: a 1024×768 image contains more than 786,000 pixels, and even a single traversal of all of them takes a long while, so computation this heavy needs to make full use of multi-core power. Or take online shopping: to improve response time, operations such as deducting inventory and generating orders can be split apart, and that split can be implemented with multiple threads. Faced with complex business models, a parallel program fits the requirements better than a serial one, and concurrent programming suits this kind of business decomposition well. It is because of these advantages that multithreading has earned so much attention, and it is something every CS learner should master:

  • It makes full use of the computing power of multi-core CPUs;
  • It facilitates business splitting and improves application performance.

2. What are the disadvantages of concurrent programming

Given all the benefits of multithreading, does it have no drawbacks at all, and can it be used in every situation? Apparently not.

2.1 Frequent context switching

A time slice is the slot of time the CPU allocates to each thread. Because each slice is very short, usually tens of milliseconds, the CPU switches among threads constantly, which makes it feel as if multiple threads are executing at the same time. Each switch requires saving the current thread's state so it can be restored later, and this switching costs real performance; if it happens too often, the advantages of multithreaded programming are lost. Context switching can generally be reduced by lock-free concurrent programming, CAS algorithms, using as few threads as necessary, and using coroutines.

  • Lock-free concurrent programming: consider ConcurrentHashMap's lock-segmentation idea, where different threads work on different segments of the data, reducing the context switches caused by lock contention among threads.

  • CAS algorithms: the java.util.concurrent.atomic classes use CAS to update data, an optimistic-locking approach that avoids part of the unnecessary context switching caused by lock contention (see the sketch after this list).

  • Use as few threads as necessary: avoid creating threads you do not need; for example, creating a large number of threads for a small number of tasks leaves many threads sitting idle in a waiting state.

  • Coroutines: schedule multiple tasks within a single thread and keep the switching between tasks inside that one thread.
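
To make the CAS bullet concrete, here is a minimal sketch using the standard JDK class java.util.concurrent.atomic.AtomicInteger, whose compareAndSet is backed by a hardware CAS instruction (the CasCounter class and its methods are my own illustration, not code from the original article):

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    // Lock-free increment: read the current value, compute the next one,
    // and attempt to swap it in atomically. If another thread changed the
    // value first, the CAS fails and we retry instead of blocking, so no
    // context switch is forced on us.
    public int increment() {
        for (;;) {
            int current = count.get();
            int next = current + 1;
            if (count.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public int get() {
        return count.get();
    }
}

In practice you would simply call count.incrementAndGet(), which performs the same kind of retry loop internally.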

Because context switching is itself a relatively expensive operation, an experiment in the book "The Art of Concurrent Programming in Java" shows that concurrent accumulation is not necessarily faster than serial accumulation. You can use Lmbench3 to measure the duration of a context switch, and vmstat to measure the number of context switches.
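
To make this concrete, here is a minimal sketch in the spirit of the book's experiment (the class name, loop count, and structure are my own, not the book's exact code): it times a serial accumulation against a version that splits the work across two threads. For small counts, thread creation and context switching can make the concurrent version slower.

public class ConcurrencyTest {
    // Try raising this count: the concurrent version only starts to pay
    // off once the work dwarfs the cost of creating and switching threads.
    private static final long COUNT = 100_000L;

    public static void main(String[] args) throws InterruptedException {
        serial();
        concurrent();
    }

    private static void serial() {
        long start = System.currentTimeMillis();
        long a = 0, b = 0;
        for (long i = 0; i < COUNT; i++) {
            a += 5;
            b -= 5;
        }
        System.out.println("serial:     " + (System.currentTimeMillis() - start) + " ms");
    }

    private static void concurrent() throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread helper = new Thread(() -> {
            long a = 0;
            for (long i = 0; i < COUNT; i++) {
                a += 5;
            }
        });
        helper.start();
        long b = 0;
        for (long i = 0; i < COUNT; i++) {
            b -= 5;
        }
        helper.join(); // wait for the helper thread before stopping the clock
        System.out.println("concurrent: " + (System.currentTimeMillis() - start) + " ms");
    }
}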

2.2 Thread Safety

In multithreaded programming, the hardest thing to get right is thread safety in critical sections. A moment's carelessness can produce a deadlock, and once a deadlock occurs, the affected system functionality becomes unavailable.

public class DeadLockDemo {
    private static String resource_a = "A";
    private static String resource_b = "B";

    public static void main(String[] args) {
        deadLock();
    }

    public static void deadLock() {
        Thread threadA = new Thread(new Runnable() {
            @Override
            public void run() {
                synchronized (resource_a) {           // holds resource_a ...
                    System.out.println("get resource a");
                    try {
                        Thread.sleep(3000);
                        synchronized (resource_b) {   // ... then waits for resource_b
                            System.out.println("get resource b");
                        }
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        });
        Thread threadB = new Thread(new Runnable() {
            @Override
            public void run() {
                synchronized (resource_b) {           // holds resource_b ...
                    System.out.println("get resource b");
                    synchronized (resource_a) {       // ... then waits for resource_a
                        System.out.println("get resource a");
                    }
                }
            }
        });
        threadA.start();
        threadB.start();
    }
}

In the demo above, two threads, threadA and threadB, are started. threadA holds resource_a and waits for threadB to release resource_b, while threadB holds resource_b and waits for threadA to release resource_a. The two threads therefore wait on each other forever, producing a deadlock. This can be confirmed with jps (to find the process ID) and jstack (to dump the thread stacks):

"Thread-1":
  waiting to lock monitor 0x000000000b695360 (object 0x00000007d5ff53a8, a java.lang.String),
  which is held by "Thread-0"
"Thread-0":
  waiting to lock monitor 0x000000000b697c10 (object 0x00000007d5ff53d8, a java.lang.String),
  which is held by "Thread-1"

Java stack information for the threads listed above:
===================================================
"Thread-1":
        at learn.DeadLockDemo$2.run(DeadLockDemo.java:34)
        - waiting to lock <0x00000007d5ff53a8> (a java.lang.String)
        - locked <0x00000007d5ff53d8> (a java.lang.String)
        at java.lang.Thread.run(Thread.java:722)
"Thread-0":
        at learn.DeadLockDemo$1.run(DeadLockDemo.java:20)
        - waiting to lock <0x00000007d5ff53d8> (a java.lang.String)
        - locked <0x00000007d5ff53a8> (a java.lang.String)
        at java.lang.Thread.run(Thread.java:722)

Found 1 deadlock.

As the output above shows, the deadlock is plainly visible.

Deadlocks can usually be avoided in the following ways:

  1. Avoid having one thread acquire multiple locks at the same time.
  2. Avoid having one thread occupy multiple resources inside a lock; try to ensure each lock guards only one resource.
  3. Prefer timed locks: with lock.tryLock(timeout), the current thread will not block forever while waiting (see the sketch after this list).
  4. For database locks, locking and unlocking must happen on the same database connection, otherwise the unlock will fail.
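
To illustrate item 3, here is a minimal sketch using java.util.concurrent.locks.ReentrantLock (the class and lock names are illustrative; tryLock(long, TimeUnit) is the standard Lock API). If the second lock cannot be acquired within the timeout, the thread releases what it holds and backs off instead of blocking forever:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final Lock LOCK_A = new ReentrantLock();
    private static final Lock LOCK_B = new ReentrantLock();

    // Returns true if both locks were acquired and the work was done;
    // returns false if a lock could not be obtained in time, in which
    // case the caller can retry or back off -- no deadlock is possible.
    public static boolean doWork() throws InterruptedException {
        if (LOCK_A.tryLock(1, TimeUnit.SECONDS)) {
            try {
                if (LOCK_B.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        // ... critical section using both resources ...
                        return true;
                    } finally {
                        LOCK_B.unlock();
                    }
                }
            } finally {
                LOCK_A.unlock();
            }
        }
        return false;
    }
}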

There is, then, a great deal to learn about using multithreading correctly: how to ensure thread safety, and how to understand the problems arising from the atomicity, ordering, and visibility rules of the Java Memory Model (JMM), such as dirty reads and double-checked locking (DCL), which are covered in later sections. Multithreaded programming rewards deep study.

3. Concepts to know

3.1 Synchronous vs. Asynchronous

Synchronous and asynchronous usually describe a method call. With a synchronous call, the caller must wait for the called method to finish before the code after the call site can execute. With an asynchronous call, the caller continues executing regardless of whether the called method has completed, and is notified when it finishes. An analogy: when shopping at a supermarket, if an item is out of stock, you must wait while the warehouse staff fetch it for you, and only after they hand it over can you continue to the cashier; that resembles a synchronous call. Online shopping is like an asynchronous call: after you pay, you go about your business without worrying about anything, and when the goods arrive you are notified to collect them.
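
Here is a small sketch of the difference in code (the fetchPrice method is made up for illustration; CompletableFuture is the standard JDK facility for asynchronous calls):

import java.util.concurrent.CompletableFuture;

public class SyncAsyncDemo {
    public static void main(String[] args) {
        // Synchronous call: the caller waits until fetchPrice() returns
        // before the next line can run.
        int price = fetchPrice();
        System.out.println("sync result: " + price);

        // Asynchronous call: the caller continues immediately and is
        // notified via a callback when the result is ready.
        CompletableFuture<Integer> future =
                CompletableFuture.supplyAsync(SyncAsyncDemo::fetchPrice);
        future.thenAccept(p -> System.out.println("async result: " + p));

        System.out.println("caller keeps doing other work...");
        future.join(); // only here so the demo JVM does not exit early
    }

    private static int fetchPrice() {
        try {
            Thread.sleep(1000); // simulate a slow operation
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return 42;
    }
}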

3.2 Concurrency and parallelism

Concurrency and parallelism are easily confused. Concurrency means multiple tasks execute alternately, while parallelism means they execute at literally the same time. If a system has only one CPU and uses multiple threads, the tasks cannot run in parallel in any real sense; they can only alternate via time-slice switching, executing concurrently. True parallelism is possible only on systems with multiple CPUs (or multiple cores).

3.3 Blocking and non-blocking

Blocking and non-blocking usually describe how threads affect one another. If one thread occupies a critical-section resource, other threads that need that resource must wait for it to be released, and the waiting threads are suspended; this is blocking. Non-blocking is the opposite: it emphasizes that no thread can hold up the others, so all threads keep attempting to make forward progress.

3.4 Critical section

A critical section protects a common resource, or shared data, that can be used by multiple threads; however, once a critical-section resource is occupied by one thread, all other threads that need it must wait.
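
Here is a minimal sketch of a critical section in Java (the names are illustrative): the synchronized block ensures that at most one thread at a time executes the code touching the shared data, while any other thread wanting in must wait.

public class CriticalSectionDemo {
    private final Object lock = new Object();
    private long sharedCounter = 0; // the shared resource

    public void increment() {
        synchronized (lock) {
            // Critical section: only one thread can be here at a time;
            // any other thread calling increment() blocks until the
            // lock is released.
            sharedCounter++;
        }
    }

    public long value() {
        synchronized (lock) {
            return sharedCounter;
        }
    }
}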