The main contents of this article are as follows:

Preface

Everyone is talking about distributed systems. In interviews especially, whether you are applying as a junior or a senior software engineer, you will be asked about distributed systems and expected to have worked with them. So what is this much-hyped "distribution", and what are its advantages?

Explaining distribution with ninjutsu

Anyone who has watched Naruto knows Naruto's signature ninjutsu: the Multiple Shadow Clone Technique.

  • One particularly powerful aspect of this technique is that all clones share the same experiences and feelings. For example, when one clone goes to Kakashi (Naruto's teacher) to ask a question, the other clones also learn the answer.
  • Naruto has another super-powerful ninjutsu that takes several shadow clones to complete: Wind Release: Rasenshuriken. This jutsu is performed by three Narutos working together.

What do these two ninjutsu have to do with distribution?

  • Systems or services distributed in different places are interrelated.

  • A distributed system is a cooperative system.

Case study:

  • For example, Redis's sentinel mechanism can tell you which Redis node in the cluster has gone down.
  • Kafka's leader election mechanism: if the leader node goes down, a new leader is elected from the followers. (The leader serves as the entry point for writing data; the followers serve reads.)

So what is the downside of shadow clones?

  • It consumes a lot of chakra. Distributed systems have the same problem: they require several times more resources to support.

A popular understanding of distribution

  • It’s a way of working
  • A collection of independent computers that act as a single related system to the user
  • Distribute different businesses in different places

Advantages can be considered in two ways: one is macro, the other is micro.

  • Macro level: a system whose functional modules are lumped together is split into services, decoupling the calls between them.
  • Micro level: the services provided by the modules are distributed across different machines or containers, scaling out service capacity.

Everything has a Yin and a Yang, so what are the problems of distribution?

  • More skilled engineers who understand distributed systems are needed, raising labor costs
  • Architecture design becomes extremely complex, with a steep learning curve
  • Deployment and maintenance costs rise significantly
  • Call chains across multiple services grow longer, making development and troubleshooting harder
  • Higher demands on environment reliability
  • Data idempotence problems
  • Data ordering problems
  • And so on

When we talk about distributed systems, we have to know the CAP theorem and the BASE theory.

The CAP theorem

In theoretical computer science, the CAP theorem states that a distributed computing system cannot satisfy all three of the following guarantees at once:

  • Consistency
    • All nodes access the same up-to-date copy of data.
  • Availability
    • Every request receives a non-error response, but there is no guarantee that it contains the most recent data
  • Partition tolerance
    • The system keeps operating despite network partitions. (If data consistency cannot be achieved within the time limit, a partition has occurred, and you must choose between C and A for the current operation.)

BASE theory

BASE is an abbreviation of Basically Available, Soft state, and Eventually consistent. BASE theory is an extension of AP in CAP: it sacrifices strong consistency to gain availability. When a failure occurs, it allows parts of the system to be unavailable while keeping core functions available; it allows data to be inconsistent for a period of time, as long as it eventually reaches a consistent state. Transactions that satisfy BASE theory are called flexible transactions.

  • Basic availability: when a distributed system fails, some functionality may be lost so long as core functions remain available. For example, if an e-commerce site has payment problems, goods can still be browsed normally.
  • Soft state: because strong consistency is not required, BASE allows intermediate states (soft states) to exist in the system without affecting its availability, such as the "paying" or "data synchronizing" states of an order. Once the data finally becomes consistent, the state changes to "success".
  • Eventual consistency: over a period of time, all nodes become consistent. For example, an order's "paying" status eventually changes to "payment succeeded" or "payment failed", so the order status matches the actual transaction result, but it takes some delay and waiting.

One, the pits of distributed message queues

How are message queues distributed?

The messages in a message queue are split across multiple nodes (machines or containers), and the union of all nodes' queues contains all the messages.

1. The non-idempotence pit

(1) Idempotence

Idempotence means that no matter how many times you perform an operation, the result is the same as performing it once. If a message is consumed multiple times, the data can easily become inconsistent. If repeated consumption is unavoidable but we developers can guarantee data consistency through technical means, that is acceptable. It reminds me of the ABA problem in Java concurrent programming: as long as the consistency of all the data can be guaranteed in the end, it is acceptable (see the ABA article listed at the end of this post).

(2) Scenario analysis

RabbitMQ, RocketMQ, and Kafka message queue middleware are all prone to message duplication. This problem is not solved by the MQ itself; it must be guaranteed by the developer.

These middleware products are among the best distributed message queues in the world, yet idempotent consumption still has to be handled by us. Let's take Kafka as an example and see how to guarantee idempotent consumption.

Kafka has the concept of an offset, which is the sequence number of a message: every message written to the queue has one. As consumers consume data, they periodically commit the offsets of the messages they have consumed, saying in effect "I have consumed up to here; next time, start from after this offset."

Pit: the consumer shuts down after consuming a message but before committing its offset, so the messages with uncommitted offsets are consumed again after restart.

For example, suppose data A, B, and C in the queue correspond to offsets 100, 101, and 102 respectively, and all three have been consumed. However, only offset 100 (data A) was committed successfully; the other two offsets were not committed in time because the system restarted.

After the restart, the consumer resumes from the last committed offset and fetches messages starting at offset 101, so data B and data C are consumed a second time.


(3) Pit avoidance guide

  • WeChat Pay result notification scenario
    • WeChat's official documentation notes that payment result notifications may be pushed multiple times, so developers must guarantee idempotence. On the first notification we can directly update the order state (e.g. paying -> payment succeeded); on later notifications we check the order state first, and if it is no longer "paying", we skip the order-processing logic.
  • Database insert scenario
    • Before inserting, check whether the database already contains the record's primary key ID. If it does, update the record instead.
  • Redis write scenario
    • Redis SET operations are naturally idempotent, so there is no need to worry about writing data to Redis.
  • Other scenarios
    • Have the producer attach a globally unique ID (similar to an order ID) to each message. Before consuming, check whether the ID already exists in Redis. If not, process the message normally and save the ID to Redis; if it does exist, the message was consumed before, so skip it. A minimal sketch of this approach follows this list.
    • Different business scenarios may call for different idempotence solutions; pick the right one. The above are just common options.
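Below is a minimal sketch of the unique-ID deduplication approach using Jedis. The message shape (messageId, payload), the key prefix, and the one-day TTL are all illustrative assumptions, not part of any framework:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class IdempotentConsumer {

    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Returns true only the first time this messageId is seen. */
    private boolean markIfFirstTime(String messageId) {
        // SET key value NX EX 86400: succeeds only if the key does not exist yet;
        // the TTL keeps dedup keys from piling up forever (assumption: duplicates
        // arrive within one day).
        String result = jedis.set("dedup:" + messageId, "1",
                SetParams.setParams().nx().ex(86400));
        return "OK".equals(result);
    }

    public void onMessage(String messageId, String payload) {
        if (!markIfFirstTime(messageId)) {
            return; // already consumed once, so skip instead of reprocessing
        }
        // ... normal business processing of payload goes here ...
    }
}
```

Note the trade-off: if processing crashes right after the ID is marked, the redelivered message would be skipped, so ideally the mark is written in the same transaction as the business update.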

2. The message-loss pit

Pit: what happens when messages are lost? Losing messages about order placement, payment result notification, or fee deduction can cause financial losses; if the amounts are large, the losses to the client can be huge.

Does a message queue guarantee that messages are never lost? No. There are three main scenarios in which messages get lost.

(1) Messages lost while the producer is sending them

The solution

  • Transaction mechanism (synchronous, not recommended)

For RabbitMQ, enable the transaction mechanism with channel.txSelect before the producer sends data. If the message fails to reach the queue, the producer receives an error, rolls back with channel.txRollback, and then sends the message again. If the queue received the message, commit the transaction with channel.txCommit. However, this is a synchronous operation and hurts performance.
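A minimal sketch of the transaction mechanism with the RabbitMQ Java client; the queue name and error handling are illustrative:

```java
import com.rabbitmq.client.Channel;

public class TxProducer {
    /** channel is an already-opened com.rabbitmq.client.Channel. */
    public void sendInTransaction(Channel channel, byte[] body) throws Exception {
        channel.txSelect();                       // open a transaction on this channel
        try {
            channel.basicPublish("", "order-queue", null, body);
            channel.txCommit();                   // the broker has the message
        } catch (Exception e) {
            channel.txRollback();                 // roll back, then resend later
            throw e;
        }
    }
}
```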

  • Confirm mechanism (recommended, asynchronous)

We can use another mode, confirm mode, to avoid the synchronous performance problem. Each message the producer sends is assigned a unique ID. If the message reaches a RabbitMQ queue, RabbitMQ sends back an ack indicating that the message was received successfully. If RabbitMQ fails to handle the message, the nack callback is invoked and you need to send the message again.

You can also maintain a timeout per message ID to implement retry-after-timeout. One remaining problem: the ack callback itself can fail to arrive, causing the message to be sent twice, so the consumer must still consume messages idempotently.
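A minimal sketch of confirm mode with the RabbitMQ Java client; the resend bookkeeping is left out and the queue name is illustrative:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.ConfirmListener;

public class ConfirmProducer {
    public void sendWithConfirm(Channel channel, byte[] body) throws Exception {
        channel.confirmSelect();                  // switch the channel into confirm mode
        channel.addConfirmListener(new ConfirmListener() {
            @Override
            public void handleAck(long deliveryTag, boolean multiple) {
                // the broker stored the message -- nothing more to do
            }
            @Override
            public void handleNack(long deliveryTag, boolean multiple) {
                // the broker failed to handle it -- look the message up
                // by deliveryTag in our own outstanding map and resend it
            }
        });
        channel.basicPublish("", "order-queue", null, body);
        // publishing returns immediately; acks/nacks arrive asynchronously
    }
}
```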

Differences between the transaction mechanism and confirm mode:

  • The transaction mechanism is synchronous: the producer blocks until the commit completes.
  • Confirm mode receives notifications asynchronously, but a notification may never arrive, so you must handle the case where no notification is received.

(2) The message queue loses messages

Messages in a message queue can live in memory, or be persisted from memory to disk (such as a database); usually they are both in memory and on disk. If they are only in memory, all messages are lost when the machine restarts. With disk persistence there is still an extreme case: the message queue crashes while moving data from memory to disk, failing to persist some messages.

The solution

  • Set the Queue to durable when creating it (see the sketch below; questions about this part are welcome for discussion).
  • When sending a message, set its deliveryMode to 2 (persistent).
  • Enable the producer's confirm mode so that failed messages can be resent.
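A minimal sketch of queue durability plus persistent delivery mode with the RabbitMQ Java client; the queue name is illustrative:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.MessageProperties;

public class PersistentProducer {
    public void declareAndSend(Channel channel, byte[] body) throws Exception {
        // durable = true: the queue definition survives a broker restart
        channel.queueDeclare("order-queue", true, false, false, null);
        // PERSISTENT_TEXT_PLAIN carries deliveryMode = 2, so the message body
        // is written to disk as well
        channel.basicPublish("", "order-queue",
                MessageProperties.PERSISTENT_TEXT_PLAIN, body);
    }
}
```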

(3) The consumer loses messages

The consumer has just received the data and has not yet begun processing it when the process exits unexpectedly, and the consumer gets no chance to receive the message again.

The solution

  • Disable RabbitMQ's automatic ack. (With auto-ack on, the consumer automatically sends an ack back to the queue as soon as it receives a message.)
  • Have the consumer ack manually only after it has processed the message, telling the queue "I'm done" (see the sketch below).
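A minimal sketch of manual acks with the RabbitMQ Java client; the queue name and the process method are illustrative:

```java
import com.rabbitmq.client.Channel;

public class ManualAckConsumer {
    public void consume(Channel channel) throws Exception {
        boolean autoAck = false;                  // disable automatic ack
        channel.basicConsume("order-queue", autoAck,
                (consumerTag, delivery) -> {
                    long tag = delivery.getEnvelope().getDeliveryTag();
                    try {
                        process(delivery.getBody());  // business logic first
                        channel.basicAck(tag, false); // ack only after success
                    } catch (Exception e) {
                        // requeue so the message can be retried
                        channel.basicNack(tag, false, true);
                    }
                },
                consumerTag -> { /* consumer cancelled */ });
    }

    private void process(byte[] body) { /* ... */ }
}
```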

Question: what loopholes remain with manual acks? What if the consumer dies right after processing, before the ack is sent?

It may be consumed again, which is where idempotent processing is needed.

Q: What if the message keeps getting reconsumed?

If the number of retries exceeds a threshold, stop retrying: record the message in an exception table, or send an alert to the on-duty staff.

(4) Summary of RabbitMQ message loss

(5) Kafka message loss

Scenario: one of Kafka's brokers goes down and the leader (the node written to) is re-elected. If the old leader fails while some followers have not finished syncing its data, the queue loses that unsynced data once a follower becomes the new leader.

The solution

  • For each topic, set replication.factor to a value greater than 1, so every partition has at least 2 replicas.
  • On the Kafka server, set min.insync.replicas to a value greater than 1, meaning a leader must have at least one follower still in sync with it (see the sketch below).
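A minimal sketch of producer settings that are commonly paired with these broker/topic settings; note that acks=all and unlimited retries are additions of mine here, not from the list above:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ReliableProducerConfig {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // acks=all: the leader replies only after all in-sync replicas have the record
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // keep retrying transient failures instead of dropping the record
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        return new KafkaProducer<>(props);
    }
}
// Broker/topic side, matching the bullets above:
//   replication.factor  > 1  (e.g. 3) -- at least 2 replicas per partition
//   min.insync.replicas > 1  (e.g. 2) -- a write needs the leader plus >= 1 follower
```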

3. The out-of-order message pit

Pit: a user places an order successfully and then cancels it. If the two messages are processed in reverse order, the database ends up with a successfully placed order that should have been canceled.

The RabbitMQ scene:

  • The producer sends two messages to the message queue in sequence: message 1: add data A, and message 2: delete data A.
  • Expected result: Data A is deleted.
  • But if there are two consumers and the consumption order ends up being message 2, then message 1, the final result is that data A has been added, not deleted.

RabbitMQ solution:

  • Split the Queue and create multiple memory queues. Message 1 and message 2 are in the same Queue.
  • Create multiple consumers with a Queue for each consumer.

Kafka scene:

  • A topic is created with three partitions.
  • An order record is created with the order ID as the message key, so all messages about the same order go to the same partition; messages from the same producer to one partition arrive in order.
  • To consume messages quickly, multiple consumers are created, and for efficiency each consumer may spawn multiple threads that fetch and process messages in parallel, which can put them out of order.

Kafka solution:

  • The solution is similar to RabbitMQ's: use multiple in-memory queues, one per thread (see the sketch below).
  • Messages with the same key go into the same queue.
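A minimal sketch of routing by key to per-thread in-memory queues; the String message type and the worker count are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OrderedDispatcher {
    private final List<BlockingQueue<String>> queues = new ArrayList<>();

    public OrderedDispatcher(int workerCount) {
        for (int i = 0; i < workerCount; i++) {
            BlockingQueue<String> q = new LinkedBlockingQueue<>();
            queues.add(q);
            // one worker thread per queue: messages inside a queue stay ordered
            new Thread(() -> {
                try {
                    while (true) { process(q.take()); }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "worker-" + i).start();
        }
    }

    /** Messages with the same key (e.g. order ID) always land in the same queue. */
    public void dispatch(String key, String message) throws InterruptedException {
        queues.get(Math.floorMod(key.hashCode(), queues.size())).put(message);
    }

    private void process(String message) { /* handle one message */ }
}
```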

4. The message-backlog pit

Message backlog: There are many messages in the message queue that cannot be consumed.

Scenario 1: there is a problem on the consumer side, e.g. the consumer process has died, so nothing is consuming and messages pile up in the queue.

Scenario 2: there is a problem on the consumer side, e.g. consumption is too slow, so messages back up.

Pit: suppose an order promotion is running in production and all orders go through the message queue. If messages back up and orders don't complete, a lot of business is lost.

Solution: whoever tied the bell must untie it; fix things on the consumer side.

  • Fix the consumer's problem at the code level so that subsequent consumption recovers its speed, or gets as fast as possible.
  • Take the existing consumers offline.
  • Temporarily create five times the number of queues.
  • Temporarily create five times the number of consumers.
  • Move all the backlogged messages into the temporary queues and let the new consumers work through them.

5. The expired-message pit

Pit: RabbitMQ lets you set an expiration time on messages. If a message is not consumed within that time, RabbitMQ cleans it up and the message is lost.

Solution:

  • Prepare a batch re-push program.
  • Manually re-push the lost messages in batches during off-peak hours.

6. The queue-full pit

Pit: when a backlog leaves the message queue nearly full, it cannot accept new messages, and messages produced from then on are discarded.

Solution:

  • Decide which messages are useless; RabbitMQ supports a Purge Message operation for them.
  • If the messages are useful, consume them quickly and dump their contents into a database.
  • Prepare a program to re-push the messages from the database back into the message queue.
  • Re-push them during off-peak hours.

Two, the pits of distributed caching

In scenarios with frequent database access, we add a caching layer between the business layer and the data layer to absorb database load; after all, disk I/O is very slow. For example, a cache lookup might finish in 5 ms while the same query against the database takes 50 ms, an order of magnitude apart. Under high concurrency the database may also lock data, slowing access further.

The distributed cache we use most is Redis.

1. The Redis data-loss pit

The sentinel mechanism

Redis achieves cluster high availability through the sentinel mechanism. So what do sentinels do?

  • The English name is sentinel; in Chinese, 哨兵 (sentry).
  • Cluster monitoring: checks that the primary and standby processes are running normally.
  • Message notification: reports fault information to operations staff.
  • Failover: promotes a standby node when the primary fails.
  • Configuration center: notifies clients of the new primary node's address.
  • Distributed: multiple sentinels run across the primary and standby nodes and cooperate.
  • Distributed election: a majority of sentinels must agree before a primary/standby switchover happens.
  • High availability: even if some sentinel nodes go down, the sentinel cluster still works.

Pit: When the active node is faulty, an active/standby switchover is required, which may cause data loss.

Data loss caused by asynchronous replication

When the active node asynchronously synchronizes data to the standby node, the active node breaks down and some data is not synchronized to the standby node. The slave node is elected as the master node, and some data is lost.

Data loss due to a split brain

The machine on which the master node resides is disconnected from the cluster network and is actually running on its own. But when the sentinel elects the standby node as the master node, there are two master nodes running. It’s like two brains telling the cluster what to do. This is a split brain.

So how does a split brain cause data loss? If the client is still connected to the first master node before switching to the new master node, some data is still written to the first master node, but the new master node does not have the data. After the first primary node recovers, it will be connected to the cluster environment as a standby node, and its data will be cleared and data will be replicated from the new primary node. The new primary node does not have the data written by the client before, so some of the data is lost.

Pit avoidance guide

  • Configure min-slaves-to-write 1: writes require at least one connected standby node.
  • Configure min-slaves-max-lag 10: the standby's replication lag must not exceed 10 seconds, so at most 10 seconds of data can be lost (see the snippet below).
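For reference, the two directives as they would appear in the master's redis.conf; the threshold values are the ones from the bullets above, and should be tuned to your workload:

```
# redis.conf on the master
# Refuse writes unless at least one replica is connected...
min-slaves-to-write 1
# ...and its replication lag is at most 10 seconds.
min-slaves-max-lag 10
```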

Note: cache avalanche, cache penetration, and cache breakdown are not unique to distributed systems; they can also occur on a single machine, so they are not counted among the distributed pits here.

Three, the pits of splitting databases and tables

1. The pit of splitting and scaling databases and tables

Database splitting, table splitting, vertical splits, and horizontal splits

  • Database splitting: because one database supports only a limited number of concurrent accesses, you can spread its data across multiple databases to raise the concurrency ceiling.

  • Table splitting: when a single table holds so much data that even indexed queries struggle, you can split its data across multiple tables; each query then hits just one of the smaller tables, improving SQL performance.

  • Advantages of splitting: after splitting, concurrency capacity grows severalfold, disk usage per node drops sharply, single-table data volume shrinks, and SQL execution efficiency improves noticeably.

  • Horizontal splitting: splitting one table's rows across multiple databases that share the same table structure, using multiple databases to handle higher concurrency. For example, if the order table accumulates 5 million rows a month, it can be split horizontally each month, moving last month's data into another database.

  • Vertical split: To split a table with many fields into multiple tables on the same library or libraries. High-frequency access fields go to one table, and low-frequency access fields go to another table. Use the database cache to cache frequently accessed row data. For example, an order table with many fields can be broken up into several tables with different fields (there can be redundant fields).

  • Common ways to shard databases and tables (see the sketch below):

    • Shard by tenant.
    • Shard by time range.
    • Shard by ID modulo.
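A minimal sketch of ID-modulo routing; the shard counts and the db/table naming scheme are made up for illustration:

```java
public class ShardRouter {
    private static final int DB_COUNT = 2;     // number of databases
    private static final int TABLE_COUNT = 4;  // tables per database

    /** Route an order ID to a physical table, e.g. "db1.t_order_0". */
    public static String route(long orderId) {
        long db = orderId % DB_COUNT;
        long table = (orderId / DB_COUNT) % TABLE_COUNT;
        return "db" + db + ".t_order_" + table;
    }

    public static void main(String[] args) {
        // 1001 % 2 = 1 -> db1; (1001 / 2) % 4 = 0 -> t_order_0
        System.out.println(route(1001L)); // prints db1.t_order_0
    }
}
```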

Pit: splitting databases and tables is an operational undertaking; sometimes it means taking the system down in the small hours to upgrade. The team may stay up until dawn, and if the upgrade fails they have to roll back, which is genuinely painful for a technical team.

How can the process be automated to save time?

  • Dual-write migration: during migration, every insert, delete, and update is applied to both the old and the new databases.
  • Use Sharding-JDBC to do the sharding work.
  • Use a program to compare the data in the two databases until they are consistent.

Pit: splitting looks shiny and appealing, but what new problems does it introduce?

The problem with vertical splitting

  • There is still the problem of too much data in a single table.
  • Some tables cannot be queried by associated query and can only be solved by interface aggregation, which increases the complexity of development.
  • Distributed processing is complex.

The problem with horizontal split

  • Poor performance of associated queries across libraries.
  • Capacity expansion requires repeated data migration, and maintenance effort is heavy.
  • Transaction consistency across shards is difficult to guarantee.

2. The unique-ID pit after splitting

Why split databases and tables need unique IDs

  • Once tables are split, the primary key ID must be globally unique. Take an order table split across databases A and B: if both order tables auto-increment from 1, queries become chaotic, because many order IDs are duplicated even though they belong to different orders.
  • One expected outcome of splitting is that data accesses are spread across the databases. Some scenarios require an even spread, so when inserting into multiple databases, unique IDs need to be generated in alternation to distribute requests evenly across all databases.

Pit: there are many ways to generate unique IDs, each with its own use; don't pick the wrong one.

Principles for generating unique IDs

  • Global uniqueness
  • Increasing trend
  • Monotone increasing
  • Information security

Several ways to generate unique IDs

  • Database auto-increment ID: each record added to a database increments the ID by 1.

    • disadvantages
      • IDs from multiple databases may collide, so this scheme can be rejected outright; it is unsuitable for generating IDs after splitting.
      • Information insecurity
  • Use a UUID as the unique ID.

    • disadvantages
      • The UUID is too long and occupies too much space.
      • When used as a primary key, writes are not sequential appends but random inserts: the whole B+Tree node must be read into memory, the record inserted, and the node written back to disk. Performance is poor when records are large.
  • Use the current system time as the unique ID.

    • disadvantages
      • In high concurrency, there may be multiple identical ids within 1 ms.
      • Information insecurity
  • Twitter Snowflake: Twitter's open-source distributed ID generation algorithm. The 64-bit long ID is divided into 4 parts (a minimal sketch appears after this list):

    • 1 bit: The value is 0

    • 41 bits: a millisecond timestamp, enough to cover about 69 years.

    • 10 bits: 5 bits indicates the machine room ID, and 5 bits indicates the machine ID. A maximum of 32 equipment rooms can be configured. Each equipment room can be configured with a maximum of 32 machines.

    • 12 bits: a sequence number within the same millisecond, supporting up to 4096 auto-incremented IDs per millisecond.

    • Advantages:

      • The timestamp sits in the high bits and the sequence in the low bits, so IDs trend upward over time.

      • It does not rely on third-party systems such as databases. It is deployed as a service, with higher stability and high PERFORMANCE in ID generation.

      • It can assign bits according to its own service characteristics, which is very flexible.

    • Disadvantages:

      • It relies heavily on the machine clock. If the clock rolls back (search for the 2017 leap second, 7:59:60), it can produce duplicate IDs or make the service unavailable.
  • Baidu’s UIDGenerator algorithm.

    • Snowflake based optimization algorithm.
    • Uses future time and a double buffer to solve clock rollback and generation performance problems, combined with MySQL for ID allocation.

  • Meituan's Leaf-Snowflake algorithm.

    • Why it’s called Leaf: From the mathematician Leibniz: “There are no two identical leaves in the world,” which means that the ID generated by this algorithm is unique.

    • IDs are obtained in batches (number segments) from the database via a proxy service.
    • Double buffering: when 10% of the current segment has been consumed, the next segment is fetched from the database and cached, so the current segment can be used up without waiting.
    • Advantages:
      • Leaf services can easily scale linearly, and the performance is sufficient to support most business scenarios.
      • IDs are 8-byte, 64-bit numbers that increase in trend, meeting the primary-key requirements of the databases above.
      • High disaster recovery: Leaf service has internal number segment cache. Even if DB breaks down, Leaf can still provide external services normally in a short time.
      • You can customize the size of max_ID to facilitate service migration from the original ID mode.
      • Occasional network jitter will not affect the next number segment update.
    • Disadvantages:
      • The IDs are not random enough and can leak information, such as order volume, so they are not very safe.
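A minimal sketch of the Snowflake layout described above; the clock-rollback handling (throwing) is an illustrative choice, and any fixed past instant works as the epoch:

```java
public class Snowflake {
    private static final long EPOCH = 1288834974657L; // Twitter's epoch

    private final long datacenterId; // 5 bits, 0..31
    private final long machineId;    // 5 bits, 0..31
    private long lastTimestamp = -1L;
    private long sequence = 0L;      // 12 bits, 0..4095

    public Snowflake(long datacenterId, long machineId) {
        this.datacenterId = datacenterId;
        this.machineId = machineId;
    }

    public synchronized long nextId() {
        long ts = System.currentTimeMillis();
        if (ts < lastTimestamp) {
            // the clock-rollback pit from the disadvantages above
            throw new IllegalStateException("clock moved backwards");
        }
        if (ts == lastTimestamp) {
            sequence = (sequence + 1) & 0xFFF;   // 4096 IDs per millisecond
            if (sequence == 0) {                 // exhausted: spin to the next millisecond
                while ((ts = System.currentTimeMillis()) <= lastTimestamp) { }
            }
        } else {
            sequence = 0;
        }
        lastTimestamp = ts;
        // 41-bit timestamp | 5-bit datacenter | 5-bit machine | 12-bit sequence
        return ((ts - EPOCH) << 22) | (datacenterId << 17) | (machineId << 12) | sequence;
    }
}
```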

Four, the pits of distributed transactions

How to understand transactions?

  • A transaction can be simply understood as either the whole thing has been done, or the whole thing has not been done at all, as if it had not happened.

  • In a distributed world, services call each other and the chains can be long. If any party fails, the operations in the other services must be rolled back. For example: after an order is placed successfully in the order service, a voucher is issued through the marketing center's coupon interface, but the WeChat Pay deduction fails; the issued voucher must be taken back, and the order status changed to an abnormal order.

Pit: How to ensure the correct execution of distributed transactions is a big problem.

There are several major approaches to distributed transactions:

  • XA scheme (two-phase commit)
  • TCC scheme (Try, Confirm, Cancel)
  • Saga scheme
  • Reliable-message eventual consistency scheme
  • Best-effort notification scheme

Principle of XA scheme

  • A transaction manager coordinates the transaction across multiple databases: are all of them ready? If yes, each database performs its operation; if any one is not ready, all of them roll back.
  • Suitable for monolithic applications, not for microservice architectures, because each service may access only its own database and cross-access to other microservices' databases is not allowed.

TCC scheme

  • In the Try phase, resources of each service are checked and locked or reserved.
  • Confirm phase: The actual operations are performed in the various services.
  • Cancel phase: if any service's business method fails, the previously successful steps are rolled back (compensated). A minimal interface sketch follows.
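A hypothetical TCC participant interface, just to make the three phases concrete; the names are illustrative and not taken from any specific framework:

```java
/** One participant in a TCC transaction, e.g. the account service. */
public interface TccParticipant {

    /** Try: check and reserve resources, e.g. freeze the payment amount. */
    boolean tryReserve(String txId);

    /** Confirm: actually perform the operation using the reserved resources. */
    void confirm(String txId);

    /** Cancel: release the reservation because some participant failed. */
    void cancel(String txId);
}
```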

Application scenario:

  • When dealing with payments and transactions, you must ensure that the funds are in the right scenario.
  • High requirements for consistency.

Disadvantages:

  • Writing all the compensation logic takes a lot of code and is hard to maintain, so this approach is not recommended for other scenarios.

Saga scheme

Basic principles:

  • Each step in the business flow commits its local transaction; if any step fails, compensating operations are executed for the previously successful steps.

Application scenario:

  • Business processes are long and numerous.
  • Participants include other companies or legacy system services.

Advantage:

  • The first phase commits local transactions, is lock-free, and has high performance.
  • Participants can execute asynchronously, with high throughput.
  • Compensation services are easy to implement.

Disadvantages:

  • Transaction isolation is not guaranteed.

Reliable-message eventual consistency scheme

Basic principles:

  • Use the message middleware RocketMQ to implement transactional messages (a minimal sketch follows this list).
  • Step 1: system A sends a message to MQ, which marks its status as prepared (a half message); the message cannot be subscribed to yet.
  • Step 2: MQ responds to system A, confirming that the message was received.
  • Step 3: system A executes its local transaction.
  • Step 4: if system A's local transaction succeeds, the prepared message is changed to commit (a committed transaction message), and system B can subscribe to it.
  • Step 5: MQ also periodically polls all prepared messages and calls back to system A to ask how the local transaction is going: should the message keep waiting or be rolled back?
  • Step 6: system A checks the execution result of its local transaction.
  • Step 7: if system A's local transaction failed, MQ receives a Rollback signal and discards the message; if it succeeded, MQ receives a Commit signal.
  • After system B receives the message, it executes its own local transaction. If that fails, it retries automatically until it succeeds; alternatively, system B rolls back and notifies system A to roll back by other means.
  • System B's processing must be idempotent.
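A minimal sketch of a RocketMQ transactional-message producer for system A; the group name, topic, and local-transaction body are illustrative:

```java
import org.apache.rocketmq.client.producer.LocalTransactionState;
import org.apache.rocketmq.client.producer.TransactionListener;
import org.apache.rocketmq.client.producer.TransactionMQProducer;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.common.message.MessageExt;

public class TxMessageProducer {
    public static void main(String[] args) throws Exception {
        TransactionMQProducer producer = new TransactionMQProducer("system-a-group");
        producer.setNamesrvAddr("localhost:9876");
        producer.setTransactionListener(new TransactionListener() {
            @Override
            public LocalTransactionState executeLocalTransaction(Message msg, Object arg) {
                try {
                    doLocalTransaction();                          // steps 3-4
                    return LocalTransactionState.COMMIT_MESSAGE;   // B may now subscribe
                } catch (Exception e) {
                    return LocalTransactionState.ROLLBACK_MESSAGE; // discard the half message
                }
            }
            @Override
            public LocalTransactionState checkLocalTransaction(MessageExt msg) {
                // step 5: MQ polls prepared messages and asks how the transaction went
                return LocalTransactionState.COMMIT_MESSAGE;
            }
        });
        producer.start();
        // step 1: send the half (prepared) message
        producer.sendMessageInTransaction(
                new Message("order-topic", "order created".getBytes()), null);
    }

    private static void doLocalTransaction() { /* system A's local transaction */ }
}
```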

Best-effort notification scheme

Basic principles:

  • System A sends a message to MQ after its local transaction completes.
  • MQ persists the message.
  • If system B's local transaction fails, a best-effort notification service periodically retries the call to system B, trying its best to get B to succeed. If it still fails after many retries, it gives up and hands the case to developers for troubleshooting and subsequent manual compensation.

How to choose among these schemes

  • When dealing with payments and transactions, TCC is preferred.
  • Large systems, but with less stringent requirements, consider message transactions or SAGA scenarios.
  • For monolithic applications, XA two-phase commit is recommended.
  • The best-effort notification scheme is suggested as a supplement; after all, you can't just dump every failure on developers to investigate. Let the system retry a few times first and see whether it recovers.

Final words

This article is only a brief summary. These pits show that distribution has both advantages and disadvantages; whether to adopt it depends entirely on the business, the time and cost available, and the overall strength of the development team. I will continue to share some of the underlying principles of distributed systems, along with more tips for avoiding the pits.

References: Meituan's Leaf-Snowflake algorithm; Baidu's UIDGenerator algorithm; Advanced-Java.

Hello, I am Brother Wukong: "7 years of project development experience, full-stack engineer, development team leader, fond of illustrating the underlying principles of programming."

I have also hand-built two mini programs: a Java quiz mini program and a PMP quiz mini program; open them from my official account's menu! In addition, 111 architect documents and 1,000 Java interview questions are available as PDFs. Follow the official account "Wukong chat structure" and reply "Wukong" to get these materials.

"Share -> View -> Like -> Bookmark -> Comment!!" is the biggest support for me!

Java Concurrency must-know series:

1. Countering the interviewer | 14 diagrams so you'll never fear being asked about volatile again!

2. A programmer despised by his wife at midnight: is the CAS principle really that simple?

3. The ABA problem, illustrated | even my wife understood it!

4. Sliced razor-thin | 21 pictures to appreciate the thread-unsafety of collections

5. 5,000 words and 24 pictures to thoroughly understand the 21 kinds of locks in Java

6. A rundown of 18 kinds of Queue

Chat ๐Ÿ† technology project stage v | distributed those things…

Chat ๐Ÿ† technology project stage v | distributed those things…