Foreword

It is now my junior year. I am at an ordinary second-tier school, the job market is tough, and the competition keeps intensifying, so I feel pretty useless… A while ago my friends suggested that I compile the "eight-part essay" interview questions into one collection. I refused, because there are simply too many!

Instead, I collected a number of "eight-part essay" outlines. But an outline is only an outline; the details still need explaining, and that is how this article came about. It is not a recitation handout of the questions themselves, just my own notes: for each point that may come up in an interview, I give a brief explanation, and the source is a link 🔗 to a more in-depth article.

This article is continuously updated… You can bookmark or like it to stay notified of updates.

Eight-part essay on GitHub

  • ⚽️ JavaGuide
  • 🏀 Java3y
  • 🏐 aXiu's campus recruitment notes

Operating systems

Process/thread

A process is the most basic unit of execution. It has all the basic attributes of an execution unit: a program counter (PC), registers, a stack, a set of open files, a PID, and so on. A process can fork child processes; on Linux, a child initially shares its parent's memory read-only, and copy-on-write is applied when the child wants to write data.

Each process has a main thread. Threads are the entities that actually execute within a process; a process can have multiple threads, and threads belonging to the same process share all of the process's resources. How the kernel schedules a thread depends on the kind of thread it is.

First, there are two kinds of threads: user threads and kernel threads. Kernel threads are created by the kernel (the kernel is essentially a process, and all the threads it creates are kernel threads); user threads are created by ordinary, non-kernel processes. For kernel threads the scheduling unit is the thread itself; for user threads the scheduling unit is the process they belong to. User threads within the same process must therefore be scheduled by the process's own user-space scheduling rules, and if one user thread blocks, the whole process blocks.

To push the burden of user-thread scheduling onto the kernel, lightweight processes (LWPs) emerged. An LWP has only one main thread, and each time a "thread" is created, a new LWP is created rather than a thread inside the existing process. As mentioned above, the scheduling unit for user threads is the process; if each thread corresponds to its own (lightweight) process, then every thread can be scheduled by the kernel. That is exactly what the LWP achieves, and it is lighter to create than an ordinary process.

Both Linux and Windows implement this one-thread-per-lightweight-process strategy, which lets the kernel schedule user threads while keeping creation cheaper than for full processes.
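As a quick illustration of threads sharing process resources, here is a minimal Java sketch (class and method names are made up for the example): two threads update the same object on the shared heap, while each keeps its loop counter on its own private stack.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedHeapDemo {
    // The counter lives on the heap, which all threads of the process share
    static int runTwoThreads() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Runnable task = () -> {
            // i is a local variable on each thread's private stack
            for (int i = 0; i < 1000; i++) {
                counter.incrementAndGet();
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Both threads updated the same heap object: prints 2000
        System.out.println(runTwoThreads());
    }
}
```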

  • 🪂 User threads and kernel threads
  • 🏇 Several ways of interprocess communication and inter-thread communication

Memory management

The file system

I/O

Synchronization and locks

We only talk about deadlocks here.

Four conditions for deadlock

  • 🤺 Mutual exclusion: a resource is held by at most one process at a time (resources that can be held by multiple threads simultaneously cannot cause deadlock).
  • ⚾️ Hold and wait: a process that already holds resources can continue to request other resources.
  • 🏓 No preemption: resources already obtained by a process cannot be forcibly taken away by other processes.
  • 🏸 Circular wait: in a deadlocked system, there must be two or more processes each waiting for another to release the resources it wants.
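Since all four conditions must hold for a deadlock, breaking any one of them prevents it. One common technique is to break circular wait by imposing a global lock acquisition order; a minimal Java sketch (all names hypothetical): every thread always takes lockA before lockB, so a wait cycle can never form.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderingDemo {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();

    // Global rule: lockA is always acquired before lockB,
    // so the circular-wait condition can never be satisfied
    static int transfer() {
        lockA.lock();
        try {
            lockB.lock();
            try {
                return 42; // critical section touching both resources
            } finally {
                lockB.unlock();
            }
        } finally {
            lockA.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LockOrderingDemo::transfer);
        Thread t2 = new Thread(LockOrderingDemo::transfer);
        t1.start(); t2.start();
        t1.join(); t2.join(); // completes; no deadlock is possible
    }
}
```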

Once a program is running, the common way to detect deadlock is cycle detection on a directed graph: an edge A->B means A is waiting for a resource held by B, and the processes together form a directed graph (the wait-for graph).
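The cycle check on the wait-for graph can be sketched in a few lines of Java; this is an illustrative DFS-based version (all names hypothetical), not any real kernel's implementation:

```java
import java.util.*;

public class WaitForGraph {
    // edges.get(p) = list of processes whose resources p is waiting for
    private final Map<Integer, List<Integer>> edges = new HashMap<>();

    void addWait(int from, int to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // A deadlock exists iff the wait-for graph contains a directed cycle
    boolean hasDeadlock() {
        Set<Integer> visited = new HashSet<>();
        Set<Integer> onStack = new HashSet<>();
        for (Integer node : edges.keySet()) {
            if (dfs(node, visited, onStack)) return true;
        }
        return false;
    }

    private boolean dfs(int node, Set<Integer> visited, Set<Integer> onStack) {
        if (onStack.contains(node)) return true;   // back edge => cycle
        if (visited.contains(node)) return false;  // already fully explored
        visited.add(node);
        onStack.add(node);
        for (int next : edges.getOrDefault(node, List.of())) {
            if (dfs(next, visited, onStack)) return true;
        }
        onStack.remove(node);
        return false;
    }
}
```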

Before a program runs, the banker's algorithm is commonly used to avoid deadlock, by checking whether the system would stay in a safe state while letting multiple processes execute. A safe state means there exists some order in which every thread, from the first to the last, can run to completion without deadlock; if no such order exists, the state is unsafe.

The single-resource banker's algorithm is relatively simple, so let's look directly at the multi-resource version:

Suppose there are M processes and N resources. The total amount of each resource is Res[N] (a one-dimensional array), and the amount of each resource each process requires in total is Req[M][N] (a two-dimensional array). The amount of each resource each process currently holds is Have[M][N], and the amount it still needs is Need[M][N].

  • 1️⃣ Find a process i whose j-th resource demand can be met, i.e. Need[i][j] <= Res[j] – (Have[0][j] + … + Have[M-1][j]); that is, the amount required <= the amount remaining.
  • 2️⃣ j = j + 1
  • 3️⃣ Repeat 1️⃣
  • 4️⃣ If process i cannot be satisfied, set i = i + 1 and repeat 1️⃣
  • 5️⃣ If j == N, release all resources held by i and continue from 1️⃣
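The steps above amount to a safety check: repeatedly pick any process whose remaining needs fit into the currently available resources, pretend it runs to completion and releases what it holds, and report the state safe iff every process can finish this way. A Java sketch using the article's Res/Have/Need notation (class name hypothetical):

```java
public class Banker {
    // Returns true if the system is in a safe state.
    // res[j]     = total amount of resource j
    // have[i][j] = amount of resource j currently held by process i
    // need[i][j] = amount of resource j process i still needs
    static boolean isSafe(int[] res, int[][] have, int[][] need) {
        int m = have.length, n = res.length;

        // Available = total minus everything currently held
        int[] avail = res.clone();
        for (int[] h : have)
            for (int j = 0; j < n; j++) avail[j] -= h[j];

        boolean[] done = new boolean[m];
        int finished = 0;
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < m; i++) {
                if (done[i]) continue;
                boolean ok = true;
                for (int j = 0; j < n; j++)
                    if (need[i][j] > avail[j]) { ok = false; break; }
                if (ok) {
                    // Pretend process i finishes and releases everything it holds
                    for (int j = 0; j < n; j++) avail[j] += have[i][j];
                    done[i] = true;
                    finished++;
                    progress = true;
                }
            }
        }
        return finished == m; // safe iff every process could finish
    }
}
```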

Other topics, such as two-phase locking, communication deadlock, and livelock, have relatively simple solutions.

Processor Architecture (lower level)

Program optimization (lower level)

Java

  • 🐶 Java interview knowledge points (recitation version, ~240 questions, ~70K words)
  • 🐱 JVM: locks, the JMM, and concurrency
  • 🦆 JVM: the whole GC family
  • 🐍 Java locks
  • 🦅 Memory-allocation problems under concurrency
  • 🐓 Analysis of the key points of how ConcurrentHashMap works in Java 8

MySQL

In MySQL, the undolog records the data as it was before a modification, in order to support rollback. The redolog exists to make modifications durable: data must be loaded into the buffer pool before it is modified and is only written back to disk later, so the redolog is written append-only (sequential I/O), which is fast. The sequence of operations is: write to the buffer pool => write the redolog => flush to disk asynchronously (by periodic flushing).

In addition to deadlock detection via InnoDB's resource-request (wait-for) graph, MySQL also provides two-phase locking: in the growing phase locks are only acquired, never released, and in the shrinking phase locks are only released, never acquired. Acquiring all needed locks up front in this way breaks the hold-and-wait condition and thus effectively prevents deadlock.

  • 🍎 InnoDB storage structure
  • 🍐 The four indexes of InnoDB
  • 🍋 Possible locks in InnoDB
  • 🍌 MySQL two-phase locking protocol, deadlock, and deadlock detection
  • 🍉 The relationship between transaction isolation levels and locks in InnoDB (this article ≈ 🍋 + 🍌)
  • 🍇 Common query patterns that make a MySQL index fail
  • 🍓 redolog and undolog
  • 🥝 Optimizing MySQL paged queries with LIMIT

NoSQL

  • πŸ’ How to ensure Redis dual-write consistency with MySQL?
  • πŸ‘ hardcore! 16000 words Redis interview knowledge point summary, suggestions collection!
  • πŸ₯¦ Distributed lock -Redis/Zookeeper?

Spring

  • 🥬 What is the lifecycle of beans in Spring? – Big Idle Person Chai Maomao's answer – Zhihu
  • 🥒 Spring source-code analysis: how dependency injection is implemented
  • 🌽 How is Spring AOP implemented? – bravo1988's answer – Zhihu

Distributed

Kafka

  • 🚲 Kafka interview

Other

  • 🚗 Rate-limiting algorithms
  • 🚑 Bloom filters

The cluster

Zookeeper

  • 🚌 Zookeeper briefly
  • 🛴 The ZAB protocol of Zookeeper

Network

  • 🚀 30 diagrams: TCP retransmission, sliding windows, flow control, congestion control

First, be clear that TCP sends a segment and waits for an acknowledgement (ACK) of receipt. If no ACK arrives within a certain time after sending, we assume something has gone wrong; this is a timeout. There are two possible causes: packet loss and network congestion.

Why are TCP's congestion-control algorithms still being refined? Largely because when a timeout occurs we cannot tell whether it was caused by simple packet loss or by network congestion: the former only needs a retransmission, while the latter needs congestion recovery. Current algorithms therefore try to infer which case applies from network bandwidth, traffic, delay, and ACK responses, and that is where the algorithms keep improving.

When a timeout occurs, a retransmission is required. Both causes of timeout, loss of the outgoing packet and loss of the returning ACK, are handled by resending. This raises the question of how long to wait, which is the retransmission timeout (RTO); the RTO is derived from the round-trip time (RTT). Linux has its own way of computing the RTO, and on each successive timeout the RTO is doubled.
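For reference, the standard RTT-based RTO computation (as specified in RFC 6298, which Linux roughly follows) plus the doubling-on-timeout backoff can be sketched like this; the 200 ms floor is an assumption modeled on Linux's TCP_RTO_MIN, and the class name is made up:

```java
public class RtoEstimator {
    // RFC 6298 smoothing constants: alpha = 1/8, beta = 1/4
    private double srtt = -1;   // smoothed RTT estimate (ms); -1 means "no sample yet"
    private double rttvar = 0;  // RTT variance (ms)
    private double rto = 1000;  // initial RTO before any measurement (ms)

    // Feed in a new RTT measurement (ms) and recompute the RTO
    void onRttSample(double rtt) {
        if (srtt < 0) {                // first sample initializes both estimators
            srtt = rtt;
            rttvar = rtt / 2;
        } else {
            rttvar = 0.75 * rttvar + 0.25 * Math.abs(srtt - rtt);
            srtt = 0.875 * srtt + 0.125 * rtt;
        }
        // RTO = SRTT + 4 * RTTVAR, clamped to a floor (assumed 200 ms here)
        rto = Math.max(srtt + 4 * rttvar, 200);
    }

    // On timeout, back off: double the RTO, as described above
    void onTimeout() {
        rto *= 2;
    }

    double rto() { return rto; }
}
```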

Additionally, if one packet is lost in transit but the receiver gets all of the packets after it, the receiver can, for each new packet it receives, keep sending the ACK for the last packet before the gap, thereby notifying the sender. When the sender sees the same ACK three times in a row, it concludes that the packet was lost and retransmits immediately; this is called fast retransmit.

To know whether a fast retransmit should resend everything or just the one missing packet, SACK (selective acknowledgement) is used: a field in the TCP options that tells the sender exactly which ranges of data the receiver has already received, so the sender can choose what to retransmit.

SACK alone, however, cannot handle retransmissions caused by lost ACKs; it only covers loss on the sending path. If the data arrives successfully but the ACK reply is lost, D-SACK (Duplicate SACK) is needed. D-SACK also solves a problem caused by network delay: if a packet is merely delayed, a fast retransmit is triggered and a second copy is sent, and then the delayed original arrives; D-SACK informs the sender that the retransmission was caused by delay, not packet loss.

If we had to wait for an acknowledgement after every single segment, communication would be very inefficient, which is why sliding windows are used. The send window maintains four sections:

  • 1️⃣ Sent and acknowledged
  • 2️⃣ Sent but not yet acknowledged
  • 3️⃣ Not yet sent, but within the window and may be sent
  • 4️⃣ Not yet sent, outside the window, and may not be sent

Similarly, the receive window maintains four sections:

  • 1️⃣ Received and already consumed by the application
  • 2️⃣ Received and acknowledged
  • 3️⃣ Not yet received but able to be received
  • 4️⃣ Not yet received and not able to be received

With these two windows we know how to plan sending and receiving. The size of the send window is approximately equal to the size of the receive window, and the receive window size is advertised in the Window field of the response's TCP header.

Each time an ACK arrives, the sender marks the data up to the position specified in the ACK as received and slides its window to the right; the receiver likewise slides its window to the right as data is received.
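The send-window bookkeeping described above can be sketched as follows (a deliberate simplification: byte granularity, cumulative ACKs only, and a fixed window size; all names are made up for the example):

```java
public class SendWindow {
    private final int size;    // window size granted by the receiver
    private long sendBase = 0; // first byte sent but not yet acknowledged (start of section 2)
    private long nextSeq = 0;  // first byte not yet sent (start of section 3)

    SendWindow(int size) { this.size = size; }

    // Bytes we may still send: section 3 (not sent, but inside the window)
    long usable() { return sendBase + size - nextSeq; }

    // Send `bytes` bytes if they fit in the window
    boolean send(long bytes) {
        if (bytes > usable()) return false; // would fall into section 4
        nextSeq += bytes;
        return true;
    }

    // A cumulative ACK up to `ackSeq` slides the window to the right
    void onAck(long ackSeq) {
        if (ackSeq > sendBase && ackSeq <= nextSeq) {
            sendBase = ackSeq;
        }
    }
}
```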

Netty

  • 🚅 Designing a million-level message push system

I/O

  • ✈️ Talk about Java network programming

Design patterns

Algorithms and data structures

Sorting algorithm

  • 🍺 Top 10 classic sorting algorithms
    public static void insertSort(int[] nums) {
        // Invariant: [0, i] is already sorted
        for (int i = 0; i < nums.length - 1; ++i) {
            int j = i + 1;
            int tmp = nums[i + 1];
            // Find the right position for element i + 1 and insert it
            for (; j - 1 >= 0 && tmp < nums[j - 1]; --j) {
                nums[j] = nums[j - 1];
            }
            nums[j] = tmp;
        }
    }

    public static void selectSort(int[] nums) {
        for (int i = 0; i < nums.length; ++i) {
            int min = nums[i];
            int minIdx = i;
            // In [i + 1, nums.length - 1], find the smallest element and swap it into place
            for (int j = i + 1; j < nums.length; ++j) {
                if (nums[j] < min) {
                    min = nums[j];
                    minIdx = j;
                }
            }
            nums[minIdx] = nums[i];
            nums[i] = min;
        }
    }

    public static void bubbleSort(int[] nums) {
        for (int i = 0; i < nums.length; ++i) {
            // Repeatedly swap adjacent out-of-order pairs
            for (int j = 0; j < nums.length - 1; ++j) {
                if (nums[j] > nums[j + 1]) {
                    swap(nums, j, j + 1);
                }
            }
        }
    }

    public static void shellSort(int[] nums) {
        for (int gap = nums.length / 2; gap > 0; gap /= 2) {
            // Same idea as insertion sort, but over elements `gap` apart,
            // with the gap shrinking on every pass
            for (int i = gap; i < nums.length; ++i) {
                int tmp = nums[i];
                int j = i;
                for (; j - gap >= 0 && tmp < nums[j - gap]; j -= gap) {
                    nums[j] = nums[j - gap];
                }
                nums[j] = tmp;
            }
        }
    }

    public static void mergeSort(int[] nums) {
        mergeSort0(nums, 0, nums.length - 1);
    }

    // [from, to]
    private static void mergeSort0(int[] nums, int from, int to) {
        if (from >= to) {
            return ;
        }
        int mid = (from + to) / 2;
        mergeSort0(nums, from, mid);
        mergeSort0(nums, mid+1, to);
        int[] tmp = new int[to-from+1];
        int idxA = from, idxB = mid+1, idx = 0;
        while (idxA <= mid && idxB <= to) {
            if (nums[idxA] < nums[idxB]) {
                tmp[idx++] = nums[idxA++];
            } else {
                tmp[idx++] = nums[idxB++];
            }
        }
        while (idxA <= mid) {
            tmp[idx++] = nums[idxA++];
        }
        while (idxB <= to) {
            tmp[idx++] = nums[idxB++];
        }
        System.arraycopy(tmp, 0, nums, from, tmp.length);
    }

    public static void quickSort(int[] nums) {
        quickSort0(nums, 0, nums.length-1);
    }

    // [from, to]
    private static void quickSort0(int[] nums, int from, int to) {
        if (from >= to) {
            return ;
        }
        int cmp = nums[from];
        int l = from, r = to;
        while (l < r) {
            while (l <= r && nums[l] <= cmp) {
                ++l;
            }
            while (l <= r && nums[r] > cmp) {
                --r;
            }
            if (l < r) {
                swap(nums, l, r);
            }
        }
        swap(nums, from, r);
        quickSort0(nums, from, r-1);
        quickSort0(nums, r+1, to);
    }

    // Sift up: place val at index i of a max-heap and restore the heap property
    public void add(int[] nums, int i, int val){
        nums[i] = val;
        int curIndex = i;
        while (curIndex > 0) {
            int parentIndex = (curIndex - 1) / 2;
            if (nums[parentIndex] < nums[curIndex]) {
                swap(nums, parentIndex, curIndex);
                curIndex = parentIndex;
            } else {
                break;
            }
        }
    }

    // Remove the max element from a max-heap of `size` elements
    public int remove(int[] nums, int size){
        int result = nums[0];
        nums[0] = nums[size - 1];
        int curIndex = 0;
        while (true) {
            int leftIndex = curIndex * 2 + 1;
            int rightIndex = curIndex * 2 + 2;
            if (leftIndex >= size) break;
            int maxIndex = leftIndex;
            if (rightIndex < size && nums[maxIndex] < nums[rightIndex])
                maxIndex = rightIndex;
            if (nums[curIndex] < nums[maxIndex])
                swap(nums, curIndex, maxIndex);
            else break;
            curIndex = maxIndex;
        }
        return result;
    }

    private static void swap(int[] nums, int idxA, int idxB) {
        int tmp = nums[idxA];
        nums[idxA] = nums[idxB];
        nums[idxB] = tmp;
    }