
1 Introduction

Libevent encapsulates the underlying multiplexing interfaces, making it easy to use asynchronous network IO across platforms. Libevent also implements scheduled (timer) tasks, so you don't have to implement them yourself.

Libevent provides a tutorial, examples, and interface documentation. After reading through them, I started using it for a program responsible for communicating with Internet-of-Things devices: a common TCP/UDP server that accepts connection requests, receives and distributes data, authenticates devices, forwards device data, and manages connection timeouts, plus a few simple interfaces (the remaining features aren't worth listing). The program is quite similar to nginx. I had previously implemented many similar programs directly with epoll, but recently I noticed that a lot of software is developed with libevent, so I decided to give it a try.

And then the problems started. I tried to create multiple IO threads, but things didn't go as expected: event_del() blocked. Libevent's event operations can only be performed in the same thread as the dispatch loop, after the loop exits, or inside a callback function.

After some debugging I wrote this echo-server example, along with this note recording my debugging process.

2 Introduction to libevent

Libevent has two concepts, event_base and event.

An event describes a trigger condition (readable/writable/timeout/signal) and the function to execute once the condition is met. An event is roughly equivalent to an epoll_event in epoll.

The event loop runs on an event_base, which records the trigger conditions of all events, checks them in the loop, and calls the function specified in an event when its condition is met. An event_base is roughly equivalent to the structure created by epoll_create() in epoll.

For example, to read a file descriptor fd, create a read event that reads fd in the event’s callback function:

```c
void event_cb(evutil_socket_t fd, short what, void* arg) {
    /* handle the readable fd here */
}

void* arg = NULL;
struct event_base* ev_base = event_base_new();
struct event* ev = event_new(ev_base, fd, EV_READ, event_cb, arg);
event_add(ev, NULL);           /* register the event with the loop */
event_base_dispatch(ev_base);  /* start the event loop; the callback runs when the condition is met */
```

3 Design Structure

The structure can work in two ways for me. One is a single thread that listens for events while a thread pool handles them. The other is multiple threads that each listen for events and execute them directly in their own thread. I use the second one.

3.1 Single event loop, multiple processing threads

Operations on an event_base can only happen in the same thread as its event loop. To make event processing possible in multiple threads, the first idea is to create an event_base loop in an event thread whose callbacks hand event processing off to a thread pool. The event loop contains a timeout event A, whose callback executes the event-operation code sent over from the thread pool: if the thread pool needs to operate on events, it sends the code for those operations to the event thread and activates the timeout event so the code runs there.

I often use this approach with epoll: one thread listens for events, and triggered events are handed over to the thread pool for processing. If an event needs to be added or otherwise manipulated, the manipulating function is pushed onto a queue to be executed by the thread listening for events. This approach involves communicating across threads and incurs some performance penalty, but in my real projects that penalty is trivial compared to the cost of the business logic. This time, though, I used a different model.

3.2 Multi-event loops

Create multiple event threads, each with its own event_base event loop; when an event is triggered, it is executed directly in the event thread that detected it.

In a real project, after an event is triggered there are not only I/O operations but also many blocking tasks to handle, the most common being database requests. Database operations could also be written as asynchronous IO and placed in an event loop, but for convenience I always run them in a thread pool and, when they finish, queue the event operations back to the event thread, using a timeout event to process them as in the previous structure.

4 Code Description

The code has been uploaded; the business-related logic is removed, and only the echo-server function is kept.

4.1 buffer

buffer.cc and buffer.h implement variable-length read/write buffers. Libevent has its own evbuffer structure; the reason we implemented our own buffer is that our business programs don't use only libevent, and there is a lot of legacy IO code, so to keep a unified business interface above the I/O layer we still use our own buffer. Libevent's implementation is more convenient and efficient than mine (I don't know by how much); I'll consider switching to it in the future.

A buffer is essentially a cache queue in which data is inserted from the head and retrieved from the tail. Data is retrieved in the same order as data is inserted.

4.2 dispatcher

The Dispatcher is a wrapper around the event_base event loop. The event_base_loop() function runs here, and each IO thread has its own Dispatcher instance. The queue post_callbacks_ holds operations sent to the Dispatcher from other threads via post(). post() activates a timeout event, and the timeout event's callback takes the functions off the queue and executes them.

While only one queue is demonstrated in the code, business needs may require multiple queues for task priorities. In our business, operations such as closing connections and releasing resources are considered low priority, so we set up a separate queue for them and process its tasks only after all other queues are empty.

4.3 thread_pool

A simple thread pool implementation. Multiple threads loop, fetching tasks from the task queues and executing them. There are multiple task queues for priority purposes. In our business, device authentication involves a lot of cryptography and database operations and is time consuming, so we give it a very low priority; device data upload gets a higher priority. This way, existing services are not interrupted by a flood of device access requests.

4.4 listener

Opens the listening port and registers the event; when the event fires, it calls accept() to receive the new connection, then calls the specified handler function to process it.

The listener socket sets the SO_REUSEPORT option so that the same port can be listened on simultaneously in multiple threads.

4.5 handler

The connection-processing class. When the listener receives a new connection, it instantiates this class to process it.

The handler's read event receives the data sent by the client and puts it into read_buf; the data is then copied from read_buf to write_buf (a bit redundant here) and the write event is registered. In the write event, the contents of write_buf are sent back to the client. That completes the echo-server function.

In real applications, we often need to query a client connection and actively send data from the server to the client, so we store the handler in a hash table and set the reference count. But that’s not required in this case.

4.6 main

Program entry, do some initialization.

5 Conclusion

All these bells and whistles are no better than what a new employee who has been learning Go for a week could produce for a business application. But if you really need a program that is not business-heavy, doesn't change frequently, and must be highly efficient, try the approach described here.