After a month of painful restructuring, we were finally split apart.

The original order module, inventory module, points module, payment module…… all of a sudden became independent systems.

The masters gave these independent systems a snappy name: microservices!

Some microservices are the masters' darlings and get to "hog" one or more machines all to themselves. Others, like my points module... oh no, points system now... are less favored and have to share a machine with a few other guys.

The masters said that we are now a distributed system and must work together to get the original job done.

Everyone used to live in the same JVM, where modules called each other with direct function calls. Now everyone exposes an HTTP-based API: to call someone, you prepare JSON data, send it over HTTP, and once it's processed you get a JSON response back.

Even the simplest communication now has to cross the network. What a hassle.

Just thinking about this network makes my blood boil. Back when we all lived in one process, calls were blazingly fast. Now? For one thing it's slow as a snail, and for another it's unreliable: things go wrong all the time.

Thirty milliseconds ago, the order guy called my interface to add 200 points to a user called U0002, which I gladly did.

POST /xxx/BonusPoint/U0002

{"value": 200}
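In case you're wondering what the order guy actually does on his side, here is a rough sketch in Java 11+ (the points-service host name is made up; only the path and the JSON body come from the request above):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AddPointsCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Build the same POST as above: a JSON body, sent over HTTP
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://points-service/xxx/BonusPoint/U0002"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"value\": 200}"))
                .build();
        // The reply comes back as JSON too; just print the raw body here
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}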

However, when I tried to tell the order system the result, I found the network had gone down and the response never got through. What to do? Well, I had already done my part. Forget it!

But the order guy had no idea what had happened on my side. Figuring the call might have failed, he resolutely fired off the same call again.

To me, this new call had nothing to do with the previous one. (Don't forget, HTTP is stateless.) So I honestly executed it again.

As a result, user U0002's points got added twice for no good reason!

The order guy said, "No, you have to remember that I already made that call, so the second time you shouldn't execute it again!"

"You're kidding! HTTP is stateless. How am I supposed to remember your calls?"

"We can add a little state ourselves. On every call I'll send you a transaction ID, a TxID, and you save the TxID, user ID, points value, and the rest of the information in your database."

POST /xxx/BonusPoint/U0002

{"txid": "T0001", "value": 200}

I said, "What good does that do?"

"Before each execution, check the database. If the same TxID is already there, it means the call was executed before, so there's no need to execute it again. If it isn't there, execute the call and record the TxID."

This was a good idea. It meant a bit of extra work for me and a bit of extra storage (and a better server!), but it came with a nice property: no matter how many times the same TxID is called, the effect is as if it had been executed exactly once.
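My handler ended up looking roughly like this (a minimal sketch: an in-memory map stands in for the real database table, and names like addPoints are made up):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PointsService {
    // txid -> "already applied" marker; in real life this is a database
    // table with a unique constraint on the txid column
    private final Map<String, Boolean> processed = new ConcurrentHashMap<>();
    private final Map<String, Integer> balances = new ConcurrentHashMap<>();

    public void addPoints(String txid, String userId, int value) {
        // putIfAbsent is atomic: only the first call with a given TxID wins,
        // so any retry of the same call has no additional effect
        if (processed.putIfAbsent(txid, Boolean.TRUE) != null) {
            return; // seen this TxID before: already executed, nothing to do
        }
        balances.merge(userId, value, Integer::sum); // actually add the points
    }
}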

We later learned that humans call this property idempotence.

In general, as long as the back-end data doesn't change, read operations are naturally idempotent: read as many times as you like and you get the same result. Write operations, on the other hand, change the data every time they run. For a write to be safely executed multiple times without side effects, you have to keep track of whether it has already been performed.

I announced the new API to everyone: you must pass me a TxID, or don't blame me for refusing to handle your request!

One day, I received two HTTP calls. The first one looked like this:

POST /xxx/BonusPoint/U0002

{"txid": "T0010", "value": 200}

I happily executed it and saved the TxID T0010.

Then the second call came in, exactly the same as the first:

POST /xxx/BonusPoint/U0002

{"txid": "T0010", "value": 200}

I looked up T0010, found it already in the database, and knew I didn't need to process it again. I just told the caller: processing complete.

To my surprise, a user soon complained: my points should have been added twice (200 each time), so why did they only go up once?

This was definitely not my fault. Nothing was wrong on my side; everything ran exactly as designed. I said, "Who made those calls just now? Check the call logs!"

After digging through the callers' logs, it turned out the two calls had come from two different systems!

As luck would have it, the two systems had generated the same TxID, T0010. What I took for two attempts at the same call were in fact two completely different calls.


How do you generate a unique ID in a distributed system?

The order guy said, "That's easy: use UUIDs. A UUID can fold in the NIC's MAC address, a timestamp, random numbers, and other information, guaranteeing uniqueness across both space and time. It will never repeat."

A UUID can also be generated locally without any remote calls, which makes it extremely efficient.

844A6D2B-CF7B-47C9-9B2B-2AC5C1B1C56B
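Generating one in Java takes a single line (note: randomUUID() produces a version 4, purely random UUID; the MAC-address-plus-timestamp flavor the order guy describes is version 1):

import java.util.UUID;

public class UuidDemo {
    public static void main(String[] args) {
        // Generated entirely locally: no database, no remote call
        UUID txid = UUID.randomUUID();
        System.out.println(txid); // e.g. 844a6d2b-cf7b-47c9-9b2b-2ac5c1b1c56b
    }
}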

I said, "But those 128 bits come out as a jumble of digits and letters. They aren't ordered and can't be sorted (and in a database, ordered IDs make rows much easier to locate)."

Everyone nodded, and the UUID idea was voted down.

MySQL spoke up: "You forgot about me! I support auto_increment columns. Natural IDs, folks, and guaranteed to be in order."

"Huh? Use the database? What if you go on strike? Then we'd have no IDs at all and couldn't get anything done!" Nobody liked the idea of depending on that slow old man and handing him the power of life and death over everyone.

Nginx said, "If you're afraid of him going on strike, just run more than one MySQL, say two.

The first starts at 1 and increments by 2 each time, producing IDs 1, 3, 5, 7……

The second starts at 2 and increments by 2 each time, producing IDs 2, 4, 6, 8, 10……

Then put an ID generation service in front of them. If one MySQL is down, it uses the other."
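A rough sketch of such an ID generation service in Java, assuming the MySQL JDBC driver is available (everything here is illustrative: the JDBC URLs, credentials, and a tx_seq table with an AUTO_INCREMENT primary key; each MySQL instance would be configured with auto_increment_increment=2 and auto_increment_offset set to 1 and 2 respectively):

import java.sql.*;

public class TxIdService {
    // Two MySQL instances: one produces 1, 3, 5, 7…… and the other
    // 2, 4, 6, 8…… (URLs and credentials are made up)
    private static final String[] URLS = {
        "jdbc:mysql://mysql-a:3306/ids",
        "jdbc:mysql://mysql-b:3306/ids"
    };

    public long nextId() {
        for (String url : URLS) { // try each instance; fail over if one is down
            try (Connection con = DriverManager.getConnection(url, "user", "pass");
                 Statement st = con.createStatement()) {
                // assumed table:
                // CREATE TABLE tx_seq (id BIGINT AUTO_INCREMENT PRIMARY KEY, stub CHAR(1));
                st.executeUpdate("INSERT INTO tx_seq (stub) VALUES ('a')",
                                 Statement.RETURN_GENERATED_KEYS);
                try (ResultSet rs = st.getGeneratedKeys()) {
                    if (rs.next()) return rs.getLong(1);
                }
            } catch (SQLException e) {
                // this MySQL is unreachable; fall through and try the next one
            }
        }
        throw new IllegalStateException("no MySQL instance available");
    }
}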

"What if the ID generation service dies too?" someone asked.

"Then deploy a few more ID generation services. Isn't that supposed to be the advantage of your microservices?" said Nginx.

Nginx does load balancing for a living, but this scheme was quite clever: not only does it improve availability, the IDs also keep trending upward.

"But every time I need a TxID, I have to go to the database. That's so slow!" said the order guy.

"Don't go to the database every time," said Redis, who is in charge of caching. "Do what I do: cache some of the data in memory."

"Cache? Cache it how?"

"Each time you do access the database, fetch a whole batch of IDs, say 10 of them, and keep them in memory. Then callers can take IDs without touching the database at all. The database, of course, has to record what the current maximum ID is," said Redis.

Suppose the maximum ID starts at 0. Fetch a batch of 10 IDs, namely 1, 2, 3…… 10, and hold them in memory; the recorded maximum ID becomes 10.

Next time, fetch 10 more, namely 11, 12, 13…… 20, and the maximum becomes 20.
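In code, the batching idea might look something like this (a minimal sketch: the fetchBatchFromDb method below is a stand-in for one real round trip that atomically advances the stored maximum by the batch size):

public class SegmentIdAllocator {
    private static final int BATCH = 10;
    private long current = 0; // last ID handed out from the cached batch
    private long max = 0;     // last ID in the batch we currently own

    public synchronized long nextId() {
        if (current >= max) {              // batch used up: one trip to the database
            max = fetchBatchFromDb(BATCH); // e.g. max goes 10, 20, 30……
            current = max - BATCH;         // fresh batch covers (max - BATCH, max]
        }
        return ++current; // hand out IDs from memory, no database involved
    }

    // Stand-in for "UPDATE ... SET max_id = max_id + BATCH" plus a read of
    // the new value; a real version would run this against MySQL
    private long dbMax = 0;
    private long fetchBatchFromDb(int batch) {
        dbMax += batch;
        return dbMax;
    }
}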

"But if that one MySQL goes on strike, the whole system still grinds to a halt!" I said.

Nginx said, "Well, even with a Slave, if the data isn't replicated from the Master in time and the Master stops working, the maximum ID on the Slave won't be up to date, and then you might have a problem. Maybe a dual-Master setup……"

Alas, how complicated it is to generate a unique ID in a distributed system!

While Nginx mumbled on, nobody noticed that a new service had come online. It said: "Hi everyone, I'm Snowflake……"

(Note: this is running long, so let's stop here. Next time: Snowflake……)
