An overview of object pooling

If we had a fast enough CPU and a large enough memory, object pooling would be completely unnecessary, and so would every kind of garbage collection. But there is always a "but": resources are always limited, and humanity has been exploring how to get the best results out of limited resources since its beginnings.

Tomcat: an efficient ring queue

Tomcat is a common web container in the Java technology stack, and its NIO (non-blocking I/O) model achieves higher performance than the traditional BIO (blocking I/O) model. Its NIO model is shown in the figure below.

Acceptors accept connections in blocking mode. After a connection is accepted, a Poller is selected to handle the subsequent I/O processing. (The number of Pollers is fixed.)

How would you implement the process of selecting a Poller? Think about it first.

In Tomcat, the Pollers are stored in a ring queue, and an atomic variable is used to step to the next element in the queue, as shown in the figure below.


Ring queues are physically stored as linear arrays (or linked lists), not as circles.

We can use JavaScript to get a quick taste of a circular array.

let pollers = [1, 2, 3, 4, 5, 6]
let index = 0
let getNext = function () {
    // wrap around with modulo; Math.abs guards against the counter going negative on overflow
    return pollers[Math.abs(index++) % pollers.length]
}
for (let i = 0; i < 10086; i++) {
    console.log(getNext())
}

As you can see, the point of a ring queue is that we can always fetch a next element: when we reach the end of the queue, we simply wrap around to the head and start again.

Is that all? This is perfectly fine in JS, because JavaScript in the browser executes on a single thread and has no concurrency issues; but for a backend programmer, concurrency and thread safety are important considerations.

Analyzing the code, we can see that we need to maintain an index to mark the current position, so if we store that index in an atomic class, we avoid thread-safety issues without taking a lock.

In Tomcat, pollerRotater is such an atomic variable (an AtomicInteger), which lets us obtain the next index without locking, in a thread-safe manner.

    public Poller getPoller0() {
        int idx = Math.abs(pollerRotater.incrementAndGet()) % pollers.length;
        return pollers[idx];
    }
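To see the whole technique end to end, here is a self-contained round-robin selector in the same style as Tomcat's getPoller0(). The class and field names are mine, for illustration only; only the index expression mirrors Tomcat's code.

```java
import java.util.concurrent.atomic.AtomicInteger;

// A minimal lock-free round-robin selector in the style of Tomcat's
// getPoller0(); names here are illustrative, not Tomcat's own.
public class RoundRobin {
    private final String[] pollers = {"poller-0", "poller-1", "poller-2"};
    private final AtomicInteger rotater = new AtomicInteger(0);

    // incrementAndGet() is atomic, so concurrent callers each observe a
    // distinct counter value; Math.abs keeps the index non-negative after
    // the int counter eventually wraps around.
    public String next() {
        int idx = Math.abs(rotater.incrementAndGet()) % pollers.length;
        return pollers[idx];
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin();
        for (int i = 0; i < 5; i++) {
            System.out.println(rr.next());  // cycles poller-1, poller-2, poller-0, ...
        }
    }
}
```

Because the only shared mutable state is the AtomicInteger, no lock is needed even when many threads call next() at once.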

Jetty: a real-estate investor on a budget

If you were a real-estate investor on a tight budget, how would you invest to maximize your returns? Generally speaking, there are a couple of options:

  • Buy new property at peak prices, borrowing from every relative and friend and emptying the family's "six wallets" along the way, quite possibly ending up with nothing
  • Buy second-hand houses of all kinds and renovate them into various types of rentals (studio, two-bedroom, three-bedroom, and so on) to rent to customers, giving you a steady income every year (I've talked with a landlord in an urban village; hundreds of thousands a year is realistic)

Similarly, memory and CPU are precious resources for a computer. If you allocate a lot of objects up front, they will take up a lot of resources, and there is a good chance most of them will never be reused; your memory will run out and the service will crash. (Of course, if your server memory is large enough, never mind.)

Therefore, we can instead recycle objects that are no longer needed and store them in an object pool; an object being recycled needs to be restored to its original state. (When a tenant moves out, we clean up the house before renting it to the next customer.)
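The recycle-and-reset idea can be sketched in a few lines of Java. This is a toy pool of my own, not Jetty's implementation; the point is only that release() wipes the object clean before it goes back into the pool.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A toy object pool illustrating "clean the house before re-renting":
// returned objects are reset to a blank state before being reused.
class Room {
    StringBuilder contents = new StringBuilder();

    void reset() {               // restore the object to its original state
        contents.setLength(0);
    }
}

public class RoomPool {
    private final Deque<Room> pool = new ArrayDeque<>();

    Room acquire() {
        Room r = pool.poll();                 // reuse a pooled object if any
        return (r != null) ? r : new Room();  // otherwise create a new one
    }

    void release(Room r) {
        r.reset();                            // "clean up" before pooling again
        pool.push(r);
    }

    public static void main(String[] args) {
        RoomPool pool = new RoomPool();
        Room r = pool.acquire();
        r.contents.append("tenant data");
        pool.release(r);
        Room r2 = pool.acquire();             // same instance, wiped clean
        System.out.println(r == r2);              // true
        System.out.println(r2.contents.length()); // 0
    }
}
```

Note this sketch is not thread-safe; a concurrent pool would use a structure such as ConcurrentLinkedDeque, which is in fact what Jetty does below.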

In addition, we can further refine and categorize the objects to meet different kinds of needs (for example, single customers usually rent studios, while those with a wife and children rent larger places).

So how does object pooling work in Jetty?

We know that both NIO and BIO require a buffer for reading and writing data, and that these buffers are used frequently, so Jetty designed ByteBufferPool, an object pool for buffers.

ArrayByteBufferPool

ArrayByteBufferPool is one implementation of ByteBufferPool.

By default, the structure of an ArrayByteBufferPool is shown below

As shown in the figure above, the Buckets are held in a linear array, each holding ByteBuffers of a different size to serve different buffer-size requests. By default there are 64 Buckets, and the base ByteBuffer size is called the factor; in this figure the factor is 1024.

Why classify buffers by size? The reason is simple: to make the best use of resources (renting a three-bedroom apartment to a bachelor does no harm, but it is wasteful; if money is no object, forget it).

I also used Fiddler to gather some quick statistics on the packet sizes involved in visiting the Nuggets home page.

  • The request packets are mostly between 100 B and 400 B (note that there is room for optimization if we use Jetty, e.g. adjusting factor down to 512 to save space)
  • The response packets range from 100 B to 1000 KB; the biggest are mainly static resources (JS, CSS), which can be placed on a CDN to reduce the load on the web container

As you can see, sorting ByteBuffers by size allows us to make full use of resources and further optimize by tuning the Factor parameter to reduce memory footprint.

Note that the ByteBuffers are actually held in a ConcurrentLinkedDeque; the figure shows only the interface name because the full class name is too long.

Bucket index = (requested buffer size - 1) / factor

In this case, factor is 1024, and if you want a buffer of size 10086, you get (10086 - 1) / 1024 = 9 (integer division). That means the Bucket array index is 9, i.e. the 10th element in the Bucket array, and the size of its ByteBuffers is 1024 * 10 = 10240.

Why subtract 1 from the requested buffer size?
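The answer falls out of the arithmetic, so here is a small sketch of the formula from the text (method and constant names are mine): without the -1, a request for exactly factor bytes would land in the second bucket and waste half its capacity.

```java
// Demonstrates the bucket-index formula: index = (size - 1) / factor,
// where bucket i holds buffers of capacity (i + 1) * factor.
public class BucketIndex {
    static final int FACTOR = 1024;  // Jetty's default factor

    static int bucketIndex(int requestedSize) {
        return (requestedSize - 1) / FACTOR;
    }

    static int bucketCapacity(int index) {
        return (index + 1) * FACTOR;
    }

    public static void main(String[] args) {
        System.out.println(bucketIndex(10086));                  // 9
        System.out.println(bucketCapacity(bucketIndex(10086)));  // 10240, enough for 10086
        // Exact multiples of factor fit their own bucket thanks to the -1:
        System.out.println(bucketIndex(1024));                   // 0 -> capacity 1024, a perfect fit
        // Without the -1, 1024 would map to index 1 (capacity 2048), wasting half the buffer:
        System.out.println(1024 / FACTOR);                       // 1
    }
}
```

So the -1 ensures that a request for exactly n * factor bytes is served by the bucket whose buffers are exactly n * factor bytes, rather than the next size up.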

It is worth noting that ArrayByteBufferPool does not allocate ByteBuffers for all Buckets up front. Instead, when a buffer is needed, it checks whether a ByteBuffer of the target size is available: if so, it returns one from the corresponding Bucket to the caller; if not, it creates a new one. The caller voluntarily returns the buffer to the ArrayByteBufferPool when it is no longer needed.
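That lazy acquire/release flow can be sketched as follows. This is a stripped-down toy of my own, not Jetty's actual class: the real ArrayByteBufferPool also tracks direct buffers and memory limits, while this version shows only "take from the bucket if present, else allocate".

```java
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedDeque;

// A simplified bucketed buffer pool: buffers are allocated lazily on
// acquire() and cached per size class on release().
public class TinyBufferPool {
    private static final int FACTOR = 1024;
    private final Queue<ByteBuffer>[] buckets;

    @SuppressWarnings("unchecked")
    public TinyBufferPool(int maxBuckets) {
        buckets = new Queue[maxBuckets];
        for (int i = 0; i < maxBuckets; i++) {
            // same holder type Jetty uses, so acquire/release are thread-safe
            buckets[i] = new ConcurrentLinkedDeque<>();
        }
    }

    public ByteBuffer acquire(int size) {
        int idx = (size - 1) / FACTOR;
        ByteBuffer b = buckets[idx].poll();           // reuse if the bucket has one
        if (b == null) {
            b = ByteBuffer.allocate((idx + 1) * FACTOR);  // else create lazily
        }
        return b;
    }

    public void release(ByteBuffer b) {
        b.clear();                                    // reset position/limit before pooling
        buckets[b.capacity() / FACTOR - 1].offer(b);
    }

    public static void main(String[] args) {
        TinyBufferPool pool = new TinyBufferPool(64);
        ByteBuffer b = pool.acquire(10086);
        System.out.println(b.capacity());             // 10240
        pool.release(b);
        System.out.println(pool.acquire(10086) == b); // true: the buffer was reused
    }
}
```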

In addition, you can specify a maximum memory size for the ArrayByteBufferPool (to avoid exhausting memory and causing an overflow). When the total size of the cached ByteBuffers exceeds this value, a cleanup is performed to release the older buffers.

If you are interested, you can read the source of the corresponding class:

org.eclipse.jetty.io.ArrayByteBufferPool

Conclusion

Divide and conquer is arguably humanity's basic problem-solving methodology. If you are familiar with the segment-based locking of ConcurrentHashMap, Jetty's ByteBufferPool design philosophy will feel familiar: it is best practice in the divide-and-conquer tradition.