
Preface

Recently I have been going through a number of videos and books about Netty and have learned a lot. I would like to share what I know so that we can improve together. Previous articles in this series on Java IO, BIO, NIO, and AIO:

An In-depth Analysis of Java IO (1): Overview

An In-depth Analysis of Java IO (2): BIO

An In-depth Analysis of Java IO (3): NIO

An In-depth Analysis of Java IO (4): AIO

In this article, we begin our in-depth analysis of Netty, starting with a look at the shortcomings of Java NIO and AIO.

The pain of Java's native APIs

While Java NIO and Java AIO provide support for multiplexed and asynchronous IO, they do not provide good encapsulation of upper-level "message formats." Implementing a real network application with these APIs alone is not easy.

Java NIO and Java AIO offer no built-in handling for reconnection, intermittent network disconnection, half-packet reads and writes, caching of failed messages, network congestion, or abnormal byte streams. All of that work falls to the developer.
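To get a feel for how much plumbing the native API leaves to you, here is a minimal sketch (class name and structure are mine, for illustration only) of the selector setup every plain Java NIO server must write by hand, before any of the reconnection or half-packet handling above is even addressed:

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class PlainNioBoilerplate {

    // One non-blocking poll of a freshly bound server channel;
    // returns the number of ready selection keys.
    static int pollOnce() throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);        // forget this and register() throws
        server.bind(new InetSocketAddress(0));  // port 0 = any free port
        server.register(selector, SelectionKey.OP_ACCEPT);

        // The real event loop -- accept/read dispatch, partial reads, buffer
        // compaction, reconnects -- would all still have to be written by hand here.
        int ready = selector.selectNow();       // non-blocking poll; no client yet
        server.close();
        selector.close();
        return ready;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ready keys: " + pollOnce());
    }
}
```

Everything above is only the ceremony; the dispatch loop and error handling that Netty provides out of the box would still need to be layered on top.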

In practice, AIO performs no better than NIO. AIO is implemented differently on different platforms: Windows uses an asynchronous IO technology called IOCP, while Linux has no such native asynchronous IO technology, so epoll is used to simulate it. As a result, AIO performance on Linux is not ideal. AIO also lacks support for UDP.

In summary, Java's native API is rarely used directly in large real-world Internet projects. Instead, most projects use a third-party Java framework: Netty.

The advantages of Netty

Netty provides an asynchronous, event-driven network application framework and tools for the rapid development of high-performance, reliable network servers and clients.

Non-blocking I/O

Netty is a network application framework built on the Java NIO API. It can be used to quickly and easily develop network applications, such as server and client programs, and it greatly simplifies network programming tasks like TCP and UDP socket service development.

Because its API is NIO-based, Netty provides non-blocking I/O operations, greatly improving performance. At the same time, Netty encapsulates the complexity of the Java NIO API internally and provides built-in thread management, making NIO application development far easier.

Rich protocol support

Netty provides a simple, easy-to-use API, but that does not mean applications built on it will be hard to maintain or perform poorly. Netty is a well-designed framework that draws lessons from the implementation of many protocols, such as FTP, SMTP, HTTP, and many traditional binary and text-based protocols.

Netty supports a variety of network protocols, such as TCP, UDP, HTTP, HTTP/2, WebSocket, SSL/TLS, and so on. These protocols are implemented out of the box. Therefore, Netty developers can achieve simplicity, high performance, and stability without loss of flexibility.

Asynchronous and event-driven

Netty is an asynchronous, event-driven framework, which means that all I/O operations are asynchronous: every I/O call returns immediately, with no guarantee that the operation has succeeded, and instead returns a ChannelFuture. Netty uses the ChannelFuture to report whether the call succeeded, failed, or was cancelled.

Because Netty is event-driven, the caller does not get the result immediately; instead, through the event listening mechanism, the result of an I/O operation can be retrieved actively or delivered via notification.

When the Future object is created, it is in an incomplete state. The caller can check the status of the operation through the returned ChannelFuture and register a listener to run when the operation completes.

  • The isDone method indicates whether the current operation is complete.
  • The isSuccess method indicates whether the completed operation succeeded.
  • The getCause method returns the reason a completed operation failed.
  • The isCancelled method indicates whether the operation was cancelled.
  • The addListener method registers a listener; when the operation completes (isDone returns true), the listener is notified. If the future is already complete, the listener is notified immediately.
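The same listener-based future pattern exists in the JDK itself. As a rough stdlib analogy (this is not Netty's API; the class and method names here are mine), a CompletableFuture callback behaves much like addListener, including the immediate notification when the future is already complete:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

public class FutureListenerDemo {

    // Register a callback that fires when the future completes, mirroring
    // ChannelFuture.addListener: if the future is already done, the
    // callback runs immediately in the calling thread.
    static String listen(CompletableFuture<String> future) {
        AtomicReference<String> result = new AtomicReference<>();
        future.whenComplete((value, cause) -> {
            if (cause == null) {
                result.set("success: " + value);             // cf. isSuccess()
            } else {
                result.set("failed: " + cause.getMessage()); // cf. getCause()
            }
        });
        return result.get();
    }

    public static void main(String[] args) {
        // An already-completed future: the listener is notified at once.
        System.out.println(listen(CompletableFuture.completedFuture("bound")));
    }
}
```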

For example, in the following code, binding a port is an asynchronous operation; when the bind completes, the registered listener is invoked:

serverBootstrap.bind(port).addListener(future -> {
    if (future.isSuccess()) {
        System.out.println("Port binding succeeded!");
    } else {
        System.out.println("Port binding failed!");
    }
});

The benefit of Netty's asynchronous processing is that it does not block threads: threads can do other work during I/O operations, which makes applications more stable and gives them higher throughput under high concurrency than traditional blocking I/O.

A well-designed API

Netty was designed from the start to offer users a carefully considered API and implementation.

For example, you might opt for a traditional blocking API when the number of users is small, since a blocking API is easier to use than Java NIO. However, when traffic grows exponentially and the server needs to handle thousands of client connections simultaneously, problems arise. At that point you might turn to Java NIO, but the complex NIO Selector programming interface is time-consuming and hinders rapid development.

Netty provides a unified asynchronous I/O programming interface, Channel, which abstracts all point-to-point communication operations. This means that an application built on one of Netty's transport implementations can also run on another. Common subinterfaces of Channel include SocketChannel, ServerSocketChannel, and DatagramChannel.

Rich buffer implementation

Instead of using Java NIO's ByteBuffer to represent a sequence of bytes, Netty uses its own buffer API. This approach has significant advantages over ByteBuffer.

Netty's new buffer type, ByteBuf, is designed to solve ByteBuffer's problems from the ground up while meeting the needs of everyday network application development.

ByteBuf has the following important features:

  • Custom buffer types are allowed.
  • Transparent zero-copy is built in via the composite buffer type.
  • Capacity expands on demand, giving it the same dynamic-growth behavior as StringBuffer.
  • Separate reader and writer indices mean there is no need to call the flip() method.
  • It is usually faster than ByteBuffer.
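To see why dropping flip() matters, here is the read-after-write dance required by the JDK's ByteBuffer, whose single position index must be flipped between writing and reading (a small sketch of my own; Netty's ByteBuf avoids this with separate readerIndex and writerIndex):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class FlipDemo {

    static String writeThenRead(String text) {
        ByteBuffer buffer = ByteBuffer.allocate(64);
        buffer.put(text.getBytes(StandardCharsets.UTF_8)); // write: position advances

        // Without this call, reading would start at the current position and
        // return garbage; ByteBuf's separate reader/writer indices make this
        // mode switch unnecessary.
        buffer.flip(); // limit = position, position = 0

        byte[] out = new byte[buffer.remaining()];
        buffer.get(out); // read from the start
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(writeThenRead("hello netty"));
    }
}
```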

Efficient network transmission

Java native serialization has the following disadvantages:

  • It cannot interoperate across languages.
  • The serialized byte stream is too large.
  • Serialization performance is poor.

There are a number of frameworks in the industry that address these issues, such as Google Protobuf, JBoss Marshalling, and Facebook Thrift. For each of these, Netty provides packages that integrate them into applications. Netty itself also provides a number of codec tools that are convenient for developers. On top of Netty, developers have built highly efficient network transport applications, such as the high-performance messaging middleware Apache RocketMQ and the high-performance RPC framework Apache Dubbo.
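The size complaint is easy to demonstrate with the JDK alone: serializing even a tiny object with ObjectOutputStream carries class metadata that dwarfs the payload (the Msg class below is hypothetical, used purely for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationSizeDemo {

    // A tiny message type used purely for illustration
    static class Msg implements Serializable {
        private static final long serialVersionUID = 1L;
        int id = 1;
    }

    // Returns the size in bytes of the Java-serialized form of obj
    static int serializedSize(Object obj) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws Exception {
        // The payload is a single int (4 bytes), but the stream also carries
        // the stream header, class name, serialVersionUID, and field metadata.
        System.out.println("serialized size: " + serializedSize(new Msg()) + " bytes");
    }
}
```

Binary codecs such as Protobuf avoid most of this overhead by agreeing on the schema out of band.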

Netty core concepts

Architecturally, Netty is mainly composed of three parts:

  • Core components
  • Transport services
  • Protocols

Core components

The core components include the event model, byte buffers, and the communication API.

The event model

Netty is asynchronous and event-driven: all I/O operations in the framework are asynchronous, and the caller does not get results immediately. Through event listening, the result of an I/O operation can be retrieved actively or delivered via notification.

Netty classifies all events according to their relevance to inbound or outbound data flows.

Events that can be triggered by inbound data or related state changes include the following:

  • The connection was activated or deactivated.
  • Data reading.
  • User events.
  • Error events.

Outbound events are the result of actions that will complete in the future, including the following:

  • Open or close a connection to a remote node.
  • Write or flush data to a socket.

Each event is dispatched to a user-implemented method of a ChannelHandler.

Byte buffer

Netty uses a new buffer type, ByteBuf, which differs from Java's ByteBuffer and provides a richer feature set.

Communication API

Netty's communication APIs are abstracted into the Channel interface, which provides a unified asynchronous I/O programming interface for all point-to-point communication operations.

Transport services

Netty has a number of built-in, out-of-the-box transports. Because not every transport supports every protocol, you must select one that is compatible with the protocol your application uses. The following are the transports provided by Netty.

NIO

The io.netty.channel.socket.nio package supports NIO. The implementations in this package use the java.nio.channels package (a selector-based approach) as their foundation.

epoll

The io.netty.channel.epoll package supports epoll-based non-blocking IO driven by JNI.

Note that this epoll transport is supported only on Linux. It offers features unavailable in NIO, such as SO_REUSEPORT, is faster than the NIO transport, and is completely non-blocking.

OIO

The io.netty.channel.socket.oio package supports blocking I/O based on the java.net package.

local

The io.netty.channel.local package supports local transport for communication through pipes within the same JVM.

embedded

The io.netty.channel.embedded package provides an embedded transport that allows ChannelHandlers to be exercised without a real network-based transport.

Protocol support

As described above, Netty ships with out-of-the-box implementations of many network protocols, including TCP, UDP, HTTP, HTTP/2, WebSocket, and SSL/TLS, so developers get simplicity, high performance, and stability without sacrificing flexibility.

A simple Netty application

Introduce the Maven dependency

<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.49.Final</version>
</dependency>

Server-side channel handler

public class NettyServerHandler extends ChannelInboundHandlerAdapter {

    // Read the actual data (here we can read the message sent by the client)
    /*
     * 1. ChannelHandlerContext ctx: the context object, containing the pipeline, channel, address, etc.
     * 2. Object msg: the data sent by the client, passed as an Object by default
     */
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("server ctx = " + ctx);
        Channel channel = ctx.channel();
        // Cast msg to a ByteBuf.
        // ByteBuf is provided by Netty; it is not NIO's ByteBuffer.
        ByteBuf buf = (ByteBuf) msg;
        System.out.println("The message sent by the client is: " + buf.toString(CharsetUtil.UTF_8));
        System.out.println("Client address: " + channel.remoteAddress());
    }

    // Called when the data has been fully read
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        // writeAndFlush is write + flush:
        // it writes the data to the buffer and then flushes it.
        // In general, we would encode the data we send.
        ctx.writeAndFlush(Unpooled.copiedBuffer("The company's recent account has no money, wait a few days!", CharsetUtil.UTF_8));
    }

    // Handle the exception; this usually requires closing the channel
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }
}

NettyServerHandler extends ChannelInboundHandlerAdapter, which implements the ChannelInboundHandler interface. ChannelInboundHandler provides a number of methods for event handling.

The channelRead() event-handling method is overridden here. It is called whenever new data is received from the client.

The channelReadComplete() event-handling method is called when the data has been read; it calls the writeAndFlush() method of ChannelHandlerContext to write a message to the channel and send it to the client.

The exceptionCaught() event-handling method is called when a Throwable is raised during processing.

The server main program

public class NettyServer {

    public static void main(String[] args) throws Exception {
        // Create the BossGroup and WorkerGroup.
        // Notes:
        // 1. Create two thread groups: bossGroup and workerGroup.
        // 2. bossGroup only handles connection requests; workerGroup handles the real business with the client.
        // 3. Both run infinite event loops.
        // 4. The default number of child threads (NioEventLoop) in bossGroup and workerGroup
        //    is the number of actual CPU cores x 2.
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            // Create a server bootstrap object and set its parameters
            ServerBootstrap bootstrap = new ServerBootstrap();
            // Configure it with chained calls
            bootstrap.group(bossGroup, workerGroup) // Set the two thread groups
                    .channel(NioServerSocketChannel.class) // Use NioServerSocketChannel as the server's channel implementation
                    .option(ChannelOption.SO_BACKLOG, 128) // Set the queue length for pending connection requests
                    .childOption(ChannelOption.SO_KEEPALIVE, true) // Keep child connections alive
                    .childHandler(new ChannelInitializer<SocketChannel>() { // Channel initializer for the workerGroup's SocketChannels (anonymous object)
                        // Set a handler for the pipeline
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            // The SocketChannels can be managed in a collection. When pushing messages, tasks can be
                            // added to the taskQueue or scheduleTaskQueue of the NioEventLoop that owns each channel.
                            ch.pipeline().addLast(new NettyServerHandler());
                        }
                    }); // Set the handler for the workerGroup's EventLoop pipelines

            System.out.println("... The server is ready...");
            // Bind a port and synchronize, producing a ChannelFuture
            // (this starts the server and binds the port)
            ChannelFuture cf = bootstrap.bind(7788).sync();
            // Register a listener on cf to monitor the events we care about
            cf.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    if (future.isSuccess()) {
                        System.out.println("Service started on port 7788...");
                    } else {
                        System.out.println("Service startup failed...");
                    }
                }
            });
            // Wait for the server channel to close
            cf.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}

NioEventLoopGroup is a multi-threaded event loop for handling I/O operations. Netty provides a number of different implementations of EventLoopGroup to handle different transports.

In the server application above, two NioEventLoopGroups are used. The first, called bossGroup, accepts incoming connections. The second, called workerGroup, handles connections that have already been accepted: once the bossGroup accepts a connection, it registers the connection with the workerGroup.

ServerBootstrap is the bootstrap class for NIO servers; it is used to set up the server Channel.

  • The group method sets the EventLoopGroups.
  • The channel method specifies the Channel type used for new connections: the NioServerSocketChannel class.
  • childHandler specifies the ChannelHandler, here the NettyServerHandler we implemented earlier.
  • option sets configuration parameters for the server Channel (the NioServerSocketChannel).
  • childOption sets Channel options for the child SocketChannels.
  • bind starts the service on the bound port.

Client channel handler

public class NettyClientHandler extends ChannelInboundHandlerAdapter {

    // This method is triggered when the channel is ready
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("client ctx = " + ctx);
        ctx.writeAndFlush(Unpooled.copiedBuffer("Boss, when do I get paid?", CharsetUtil.UTF_8));
    }

    // Triggered when the channel has a read event
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf buf = (ByteBuf) msg;
        System.out.println("The server replied to the message: " + buf.toString(CharsetUtil.UTF_8));
        System.out.println("Server address: " + ctx.channel().remoteAddress());
    }

    // Handle the exception; this usually requires closing the channel
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

The channelRead method converts the received message into a string and prints it to the console. The received message is of type ByteBuf, which provides a convenient method for converting it to a string.

Client main program

public class NettyClient {

    public static void main(String[] args) throws Exception {
        // The client needs a single event loop group
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            // Create the client bootstrap object.
            // Note that the client uses Bootstrap instead of ServerBootstrap.
            Bootstrap bootstrap = new Bootstrap();
            // Set the related parameters
            bootstrap.group(group) // Set the thread group
                    .channel(NioSocketChannel.class) // Set the client channel implementation class (instantiated via reflection)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(new NettyClientHandler()); // Add our own handler
                        }
                    });
            System.out.println("Client OK..");
            // Start the client and connect to the server.
            // The returned ChannelFuture reflects Netty's asynchronous model.
            ChannelFuture channelFuture = bootstrap.connect("127.0.0.1", 7788).sync();
            // Wait for the channel to close
            channelFuture.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}

The client only needs a NioEventLoopGroup.

A test run

Start the NettyServer program, then the NettyClient program.

Server console output:

... The server is ready...
Service started on port 7788...
server ctx = ChannelHandlerContext(NettyServerHandler#0, [id: 0xa1b2233c, L:/127.0.0.1:7788 - R:/127.0.0.1:63239])
Client address: /127.0.0.1:63239

Client console output:

Client OK..
client ctx = ChannelHandlerContext(NettyClientHandler#0, [id: 0x21d6f98e, L:/127.0.0.1:63239 - R:/127.0.0.1:7788])
Server address: /127.0.0.1:7788

With that, a simple Netty-based server and client are complete.

Conclusion

This article covered Netty's background, features, and core components, and showed how to quickly build your first Netty application.

In later articles we will analyze Netty's architecture design, Channel, ChannelHandler, the ByteBuf buffer, the thread model, codecs, the bootstrap classes, and more.

One last thing

I am a coder still grinding away and trying to improve. If this article helped you, remember to like and follow. Thank you!