The title is a bit grand, but fortunately this article is mainly about using Go to optimize an HTTP service, so it only skirts the edge of the topic ~

Background

The rapid growth of feature data caused high latency on the interface that returns all features of a city. As shown below, the interface response time exceeded 5s at its slowest.

Cache optimization

Code optimization ideas:

1. Use caching.

1.1 Why use memory instead of Redis?

Analyzing the service requirements: the data to be stored is ObjectIds, each a string of about 14 bytes; assume 16 bytes on average. About 300,000 ObjectIds need to be stored, so the total memory required is roughly 16 B × 300,000 ≈ 5 MB, which is trivial to hold in process memory. Operating on local memory avoids network overhead and is more efficient than going through Redis.

1.2 Cache initialization: When the service starts, the local cache is initialized to empty.

1.3 The concept of a cache version.

The cache version comes from the offline feature pipeline: each time the production task finishes updating the features, it writes a new data version to the DB.

The following three solutions all store ObjectId data based on memory and have different strategies for memory updates.

Scheme 1

2.1 Cache Update

Create a scheduled task to proactively update the cache. Check the DB data version every minute. If the database version is updated, update the cache data.

2.2 Disadvantages

Running a dedicated cache-update thread makes the code harder to maintain, and if the scheduled task dies it can go unnoticed. In addition, the relevant parameters must be configured in the code in advance or pulled in from the configuration center, which adds maintenance cost.

Scheme 2

3.1 Cache Update

A passively triggered cache-update strategy is adopted, driven by interface calls. When a request comes in, the version of the data in the cache is compared with the version in the DB. If the DB version is newer, all data for the city in the current request is re-read into the cache, and the updated data is then returned to the caller.

3.2 Disadvantages

Because the update is passively triggered and performed synchronously, a call that arrives just as the version changes must wait for the data to be reloaded into memory, which easily causes occasional latency spikes.

3.3 Business execution sequence diagram

Scheme 3 (The final scheme adopted)

4.1 Cache Update

A passively triggered cache-update strategy is adopted, driven by interface calls. If there is data in the cache, it is returned to the caller immediately; the version of the cached data is then compared with the version in the DB asynchronously. If the DB version is newer, all feature data for the requested city is re-read into the cache; otherwise the cache-update logic terminates there.

4.2 Sequence diagram of service execution

Concurrent optimization scheme

Use Goroutine to optimize our serial logic

Go’s greatest feature is its language-level support for concurrency via goroutines. The goroutine is the most basic unit of execution in Go; virtually every Go program has at least one, the main goroutine, which is created automatically when the program starts.

To better understand Goroutine, let’s talk about threads and coroutines:

Threads: sometimes called lightweight processes (LWP), threads are the smallest unit of a program’s execution flow. A standard thread consists of a thread ID, the current instruction pointer (PC), a register set, and a stack. A thread is an entity within a process and the basic unit that the system schedules and dispatches independently; it owns almost no system resources of its own, only what is essential while running, but it shares all of the process’s resources with the other threads in that process.

Each thread has its own private stack but shares the heap with the other threads in its process (shared heap, private stacks); thread switching is generally scheduled by the operating system.

Like subroutines (or functions), a coroutine is a program component. Coroutines are more general and flexible than subroutines, but are not as widely used in practice.

Like threads, coroutines share the heap but not the stack; unlike threads, coroutine switches are generally controlled explicitly by the programmer in code. Coroutines avoid the extra cost of context switches, retain the advantages of multithreading, and simplify highly concurrent programs.

Maps in Go are not thread-safe

Obviously, we can use locking to make concurrent reads and writes to a map safe. Wrap the map in a struct with a lock, like this:

// M wraps a plain map with a read-write lock so it is safe to use
// from multiple goroutines.
type M struct {
	Map  map[string]string
	lock sync.RWMutex
}

// Set ...
func (m *M) Set(key, value string) {
	m.lock.Lock()
	defer m.lock.Unlock()
	m.Map[key] = value
}

// Get ... (reads must also take the lock, or they race with Set)
func (m *M) Get(key string) string {
	m.lock.RLock()
	defer m.lock.RUnlock()
	return m.Map[key]
}

In the code above, we introduced a lock to make the map safe to use across multiple goroutines.

Use policy patterns to optimize our logic

The main issue was that the code had too many if/else branches, so we used the strategy pattern to restructure it. (Here's an article I found online; I'll write up our refactor separately when I have time.) The optimized code is about 50% shorter than before, clearer, and easier to maintain. Here is the effect after the optimized code went live: request time is under 100ms.

Summary

The above is a general overview of how we handle an interface that takes too long. Of course, every case is different: when an interface responds slowly, analyze the specific cause of the slowness first, then apply the right remedy!
