
Create

ByteBuf can be created by choosing an allocator through ByteBufAllocator and calling the corresponding buffer() method. By default this returns a direct-memory ByteBuf with an initial capacity of 256 bytes; an initial capacity can also be specified explicitly.

public class ByteBufStudy {
    public static void main(String[] args) {
        // Create ByteBuf
        ByteBuf buffer = ByteBufAllocator.DEFAULT.buffer(16);
        ByteBufUtil.log(buffer);

        // Write data to buffer
        StringBuilder sb = new StringBuilder();
        for(int i = 0; i < 20; i++) {
            sb.append("a");
        }
        buffer.writeBytes(sb.toString().getBytes(StandardCharsets.UTF_8));

        // View the write result
        ByteBufUtil.log(buffer);
    }
}

The log output visualizes the contents of the buffer. Two points to note:

  • When the capacity of ByteBuf cannot contain all data, ByteBuf expands the capacity

  • If ByteBuf is created inside a handler, it is recommended to use ctx.alloc().buffer() of the ChannelHandlerContext to create it, as in the sketch below
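
For example, a minimal sketch of allocating inside a handler (the handler class and the greeting content are illustrative, not part of the original example):

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.nio.charset.StandardCharsets;

public class GreetingHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // Allocate through the channel's configured allocator instead of ByteBufAllocator.DEFAULT
        ByteBuf greeting = ctx.alloc().buffer(16);
        greeting.writeBytes("hello".getBytes(StandardCharsets.UTF_8));
        // writeAndFlush hands the buffer to the pipeline, which releases it once it has been written out
        ctx.writeAndFlush(greeting);
    }
}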

Direct memory vs. heap memory

A ByteBuf created in the following way is based on direct memory:

ByteBuf buffer = ByteBufAllocator.DEFAULT.buffer(16);

In addition, the following code creates a pooled ByteBuf based on heap memory:

ByteBuf buffer = ByteBufAllocator.DEFAULT.heapBuffer(16);

You can also use the following code to create a pooled ByteBuf based on direct memory:

ByteBuf buffer = ByteBufAllocator.DEFAULT.directBuffer(16);
  • Creating and destroying direct memory is relatively expensive, but it offers better read/write performance (one less memory copy), which makes it a good fit for pooling
  • Direct memory puts less pressure on the GC, since it is not managed by JVM garbage collection, but it must be released proactively and in a timely manner (a short sketch follows)
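
A small sketch of the difference, using standard ByteBuf methods to tell heap and direct buffers apart:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

public class HeapVsDirect {
    public static void main(String[] args) {
        ByteBuf heap = ByteBufAllocator.DEFAULT.heapBuffer(16);
        ByteBuf direct = ByteBufAllocator.DEFAULT.directBuffer(16);

        System.out.println(heap.hasArray());    // true  - backed by a byte[] on the JVM heap
        System.out.println(direct.hasArray());  // false - backed by off-heap (direct) memory
        System.out.println(direct.isDirect());  // true

        // Direct memory is not reclaimed by ordinary GC, so release it promptly
        direct.release();
        heap.release();
    }
}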

Pooled versus unpooled

Similar to the idea behind thread pools, pooling creates expensive resources in advance, saving the time and steps of creating them, and requires them to be returned once they are no longer needed. The greatest significance of pooling is the reuse of ByteBuf, which brings the following advantages:

  • Without pooling, a new instance of ByteBuf would have to be created each time, which is expensive for direct memory and, even for heap memory, increases GC stress
  • With pooling, ByteBuf instances in the pool can be reused, and a memory allocation algorithm similar to jemalloc is used to improve allocation efficiency
  • With high concurrency, pooling saves memory and reduces the possibility of memory overflow
public class ByteBufStudy {
    public static void main(String[] args) {
        ByteBuf buffer = ByteBufAllocator.DEFAULT.buffer(16);
        System.out.println(buffer.getClass());

        buffer = ByteBufAllocator.DEFAULT.heapBuffer(16);
        System.out.println(buffer.getClass());

        buffer = ByteBufAllocator.DEFAULT.directBuffer(16);
        System.out.println(buffer.getClass());
    }
}
// Pooled direct memory is used
class io.netty.buffer.PooledUnsafeDirectByteBuf
// Pooled heap memory is used
class io.netty.buffer.PooledUnsafeHeapByteBuf
// Pooled direct memory is used
class io.netty.buffer.PooledUnsafeDirectByteBuf
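
To compare pooled and unpooled allocation explicitly rather than relying on ByteBufAllocator.DEFAULT, a sketch along these lines should work (assuming Netty 4.1; the exact class names printed may differ by platform):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;

public class AllocatorCompare {
    public static void main(String[] args) {
        // Choose the allocator explicitly
        ByteBuf pooled = PooledByteBufAllocator.DEFAULT.directBuffer(16);
        ByteBuf unpooled = UnpooledByteBufAllocator.DEFAULT.heapBuffer(16);

        System.out.println(pooled.getClass());   // a Pooled...DirectByteBuf implementation
        System.out.println(unpooled.getClass()); // an Unpooled...HeapByteBuf implementation

        // Pooled buffers must be released so their memory can return to the pool
        pooled.release();
        unpooled.release();
    }
}

As an aside, if I recall correctly the default allocator type can also be switched with the io.netty.allocator.type system property (pooled or unpooled).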

Composition

ByteBuf has the following main components

  • Maximum capacity and current capacity

    • When constructing a ByteBuf, two parameters can be passed in: the initial capacity and the maximum capacity. If the second parameter (maximum capacity) is not passed, the maximum capacity defaults to Integer.MAX_VALUE
    • If the capacity of ByteBuf cannot hold all the data, it is expanded. If the expansion would exceed the maximum capacity, a java.lang.IndexOutOfBoundsException is thrown
  • Unlike ByteBuffer, which uses a single position pointer to control both reads and writes, ByteBuf is controlled by two pointers, one for reading and one for writing

    • There is no need to switch modes (as with ByteBuffer's flip) when alternating between reads and writes

      • The part before the read pointer is the discarded part: content that has already been read
      • The part between the read pointer and the write pointer is the readable part
      • The part between the write pointer and the current capacity is the writable part
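
A minimal sketch of these indices in action (the values in the comments follow from the definitions above):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

public class IndexDemo {
    public static void main(String[] args) {
        // initial capacity 10, maximum capacity 16
        ByteBuf buf = ByteBufAllocator.DEFAULT.buffer(10, 16);

        buf.writeBytes(new byte[]{1, 2, 3, 4});
        System.out.println(buf.readerIndex());   // 0  - nothing read yet
        System.out.println(buf.writerIndex());   // 4  - four bytes written
        System.out.println(buf.readableBytes()); // 4  - writerIndex - readerIndex
        System.out.println(buf.writableBytes()); // 6  - capacity - writerIndex
        System.out.println(buf.maxCapacity());   // 16

        buf.readByte();                          // readerIndex moves to 1; byte 0 is now in the discarded part
        System.out.println(buf.readableBytes()); // 3

        buf.release();
    }
}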

Write

Common methods are as follows

| Method signature | Meaning | Note |
| --- | --- | --- |
| writeBoolean(boolean value) | Writes a boolean value | A single byte: 01 for true, 00 for false |
| writeByte(int value) | Writes a byte value | |
| writeShort(int value) | Writes a short value | |
| writeInt(int value) | Writes an int value | Big Endian: 0x250 is written as 00 00 02 50 |
| writeIntLE(int value) | Writes an int value | Little Endian: 0x250 is written as 50 02 00 00 |
| writeLong(long value) | Writes a long value | |
| writeChar(int value) | Writes a char value | |
| writeFloat(float value) | Writes a float value | |
| writeDouble(double value) | Writes a double value | |
| writeBytes(ByteBuf src) | Writes a Netty ByteBuf | |
| writeBytes(byte[] src) | Writes a byte[] | |
| writeBytes(ByteBuffer src) | Writes an NIO ByteBuffer | |
| int writeCharSequence(CharSequence sequence, Charset charset) | Writes a string | CharSequence is a parent interface of String; the second argument is the character set |

Note

  • Unless otherwise specified, these methods return the ByteBuf itself, so calls can be chained to write different kinds of data (a short sketch follows below)
  • In network transmission the default byte order is Big Endian, so writeInt(int value) is the more commonly used form
  • CharSequence is a parent interface of String, StringBuffer, and StringBuilder
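
A short sketch of chained writes and the two byte orders (note that writeCharSequence returns an int, so it cannot sit in the middle of a chain):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

import java.nio.charset.StandardCharsets;

public class WriteChain {
    public static void main(String[] args) {
        ByteBuf buf = ByteBufAllocator.DEFAULT.buffer();

        // The write methods return the ByteBuf itself, so calls can be chained
        buf.writeBoolean(true)   // 01
           .writeInt(0x250)      // 00 00 02 50 (Big Endian, the network default)
           .writeIntLE(0x250)    // 50 02 00 00 (Little Endian)
           .writeLong(7L);       // 00 00 00 00 00 00 00 07

        // writeCharSequence returns the number of bytes written, not the ByteBuf
        int written = buf.writeCharSequence("ab", StandardCharsets.UTF_8);
        System.out.println(written); // 2

        buf.release();
    }
}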

The following example writes bytes, ints, and longs in turn; note that writing the long triggers a capacity expansion:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

import static io.netty.buffer.ByteBufUtil.appendPrettyHexDump;
import static io.netty.util.internal.StringUtil.NEWLINE;

public class ByteBufWriteTest {

    public static void main(String[] args) {
        // Create ByteBuf with an initial capacity of 16 and a maximum capacity of 20
        ByteBuf buffer = ByteBufAllocator.DEFAULT.buffer(16, 20);

        // Write bytes to the buffer
        buffer.writeBytes(new byte[]{1, 2, 3, 4});
        // Write a 4-byte int (Big Endian) to the buffer
        buffer.writeInt(5);
        // Write a 4-byte int (Little Endian) to the buffer
        buffer.writeIntLE(6);
        // Write an 8-byte long to the buffer
        buffer.writeLong(7);
        log(buffer);
    }

    private static void log(ByteBuf buffer) {
        // Visualize the ByteBuf: indices, capacity and a hex dump of the readable bytes
        int length = buffer.readableBytes();
        int rows = length / 16 + (length % 16 == 0 ? 0 : 1) + 4;
        StringBuilder buf = new StringBuilder(rows * 80 * 2)
                .append("read index:").append(buffer.readerIndex())
                .append(" write index:").append(buffer.writerIndex())
                .append(" capacity:").append(buffer.capacity())
                .append(NEWLINE);
        appendPrettyHexDump(buf, buffer);
        System.out.println(buf.toString());
    }
}

Capacity

When the capacity of ByteBuf cannot accommodate the data written to it, the capacity of ByteBuf is expanded.

buffer.writeLong(7);
log(buffer);

// Before expansion
read index:0 write index:12 capacity:16
...

// After expansion
read index:0 write index:20 capacity:20
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 01 02 03 04 00 00 00 05 06 00 00 00 00 00 00 00 |................|
|00000010| 00 00 00 07                                     |....            |
+--------+-------------------------------------------------+----------------+

Expansion rule

  • If the required capacity after the write does not exceed 512 bytes, the capacity is expanded to the next multiple of 16. For example, if 12 bytes are needed, the capacity is expanded to 16 bytes

  • If the required capacity exceeds 512 bytes, the capacity is expanded to the next power of two, 2^n. For example, if 513 bytes are needed, the capacity is expanded to 2^10 = 1024 bytes (2^9 = 512 is not enough)

  • Expansion cannot exceed maxCapacity; otherwise a java.lang.IndexOutOfBoundsException is thrown (a rough sketch of the rule follows)
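
A rough sketch of the rule described above, for illustration only (this is not Netty's actual internal calculation):

public class ExpansionRule {
    // Computes the expanded capacity for a required size, following the rule stated above
    static int expandedCapacity(int needed) {
        if (needed <= 512) {
            return (needed + 15) / 16 * 16;  // round up to the next multiple of 16
        }
        int capacity = 1024;                 // 2^10, the first power of two above 512
        while (capacity < needed) {
            capacity <<= 1;                  // keep doubling until it fits
        }
        return capacity;
    }

    public static void main(String[] args) {
        System.out.println(expandedCapacity(12));  // 16
        System.out.println(expandedCapacity(513)); // 1024
    }
}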

Read

Netty supports reading the same data repeatedly. To do so, call buffer.markReaderIndex() to mark the read pointer, then restore the read pointer to the marked position with buffer.resetReaderIndex(). For details, see the following code:

public static void main(String[] args) {
        // Create ByteBuf
        ByteBuf buffer = ByteBufAllocator.DEFAULT.buffer(16, 20);

        // Write data to buffer
        buffer.writeBytes(new byte[]{1, 2, 3, 4});
        buffer.writeInt(5);
        ByteBufferUtil.log(buffer);

        // Read 4 bytes
        System.out.println(buffer.readByte());
        System.out.println(buffer.readByte());
        System.out.println(buffer.readByte());
        System.out.println(buffer.readByte());
        ByteBufferUtil.log(buffer);

        // Repeat reading with mark and reset
        buffer.markReaderIndex();
        System.out.println(buffer.readInt());
        ByteBufferUtil.log(buffer);

        // Revert to mark
        buffer.resetReaderIndex();
        ByteBufferUtil.log(buffer);
    }

Release

Netty uses reference counting to control reclaimed memory, and each ByteBuf implements the ReferenceCounted interface

  • Each ByteBuf object starts with a count of 1
  • Calling the release method decrements the count by 1; when the count reaches 0, the ByteBuf's memory is reclaimed
  • Calling the retain method increments the count by 1, so that even if other handlers call release, the memory is not reclaimed until the caller has finished using it
  • When the count reaches 0, the underlying memory is reclaimed and the ByteBuf's methods can no longer be used, even though the object itself still exists (see the sketch below)
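
A small sketch of the counting behaviour; refCnt() is the standard method for reading the current count:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

public class RefCountDemo {
    public static void main(String[] args) {
        ByteBuf buf = ByteBufAllocator.DEFAULT.buffer(16);
        System.out.println(buf.refCnt()); // 1 - initial count

        buf.retain();
        System.out.println(buf.refCnt()); // 2

        buf.release();
        System.out.println(buf.refCnt()); // 1 - still usable

        buf.release();                    // count drops to 0, memory is reclaimed
        System.out.println(buf.refCnt()); // 0 - further reads/writes throw IllegalReferenceCountException
    }
}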

Release rules

Because of the pipeline, a ByteBuf is usually passed on to the next ChannelHandler. If every ChannelHandler called release, the ByteBuf would lose its transitivity (unless it has already served its purpose within that ChannelHandler and no longer needs to be passed on). So the basic rule is: whoever uses the ByteBuf last is responsible for releasing it.

  • Inbound ByteBuf processing principles (see the sketch after this list)

    • If the original ByteBuf is not processed, pass it backwards with ctx.fireChannelRead(msg) and do not release it
    • If the original ByteBuf is converted into some other Java object, it is no longer needed and must be released
    • If ctx.fireChannelRead(msg) is not called to pass it backwards, it must also be released
    • If an exception prevents the ByteBuf from being passed to the next ChannelHandler, it must still be released
    • TailContext releases unprocessed messages (the raw ByteBuf), assuming the message is always passed backwards
  • Outbound ByteBuf processing principle

    • Outbound messages are eventually converted into a ByteBuf for output; they travel forwards through the pipeline and are released by HeadContext after flush
  • Exception handling principle

    • Sometimes it is not clear how many times a ByteBuf has been retained, yet it must be released completely. In that case, call release in a loop until it returns true:

      while (!buffer.release()) {}
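
A minimal sketch of the inbound rules (the handler and the conversion are illustrative):

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.nio.charset.StandardCharsets;

public class DecodeHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf buf = (ByteBuf) msg;
        try {
            // The ByteBuf is converted to another object, so this handler is its last user
            String text = buf.toString(StandardCharsets.UTF_8);
            ctx.fireChannelRead(text); // pass the converted object to the next handler
        } finally {
            buf.release();             // release even if the conversion throws
        }
    }
}

Alternatively, extending SimpleChannelInboundHandler achieves much the same effect, since it releases the message automatically after channelRead0 returns.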

When a ByteBuf is passed all the way to the head or tail of the pipeline, the methods there ensure it is released completely.

Slice

A previous article, NIO network programming (10) – Zero-copy technology (juejin.cn), introduced NIO's zero-copy techniques. Here we introduce how Netty uses the slice method to achieve zero copy.

Slicing a ByteBuf is one form of zero copy: the original ByteBuf is sliced into multiple ByteBufs that still share the original memory, while each slice maintains its own independent read and write pointers.

  • After obtaining a sliced buffer, call its retain method to increment its internal reference count by one, to avoid the original ByteBuf's release making the slices unusable

  • Modifying the value of the original ByteBuf also affects the ByteBufs obtained by slicing

  • A slice cannot be expanded, which means no more data can be written to it

  • When using slices, call retain first and release them after use

public static void main(String[] args) {
        // Create ByteBuf
        ByteBuf buffer = ByteBufAllocator.DEFAULT.buffer(16, 20);

        // Write data to buffer
        buffer.writeBytes(new byte[]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10});

        // Slice the buffer into two parts; the parameters are start index and length.
        // No data is copied during slicing - the slices share the original memory
        ByteBuf slice1 = buffer.slice(0, 5);
        ByteBuf slice2 = buffer.slice(5, 5);

        // Increment each slice's reference count by one,
        // so that releasing the original buffer does not make the slices unusable
        slice1.retain();
        slice2.retain();

        ByteBufferUtil.log(slice1);
        ByteBufferUtil.log(slice2);

        // Change the value in the original buffer
        System.out.println("=========== Modify the original buffer value ===========");
        buffer.setByte(0, 5);

        System.out.println("= = = = = = = = = = = print slice1 = = = = = = = = = = =");
        ByteBufferUtil.log(slice1);
    }

The result of the above code shows that slice1 initially contains 01 02 03 04 05 and slice2 contains 06 07 08 09 0a. After buffer.setByte(0, 5), the first byte of slice1 also becomes 05, confirming that the slices share memory with the original ByteBuf.