About TCP packets

Many people have researched this topic and put a great deal of effort into implementation code and blog write-ups. There are also plenty of comments scattered around; I have read through them and summarized some of the lessons here.

First, here are some of the blogs I referred to:

Blog.csdn.net/stamhe/arti…

www.cppblog.com/tx7do/archi…

/ /………………………

Of course, there is far more material out there, and so much of it has been copied around that it is hard to tell whose version is the original.

My suggestion, after reading these blogs, is to also go through TCP/IP Illustrated, Volume 1 yourself.

The working principle of TCP:


Section 17.2 of TCP/IP Illustrated, Volume 1 gives a brief introduction to the TCP service.

Although TCP and UDP both use the same network layer (IP), TCP provides a completely different service to the application layer than UDP does. TCP provides a connection-oriented, reliable byte stream service.

Connection-oriented means that two applications (typically a client and a server) that use TCP must establish a TCP connection before exchanging data with each other. The process is very similar to making a phone call: you dial and wait for it to ring, wait for the other person to pick up and say "hello", and then say who you are. In Chapter 18 we will see how a TCP connection is established and how it is torn down when one party has finished communicating.

In a TCP connection, only two parties communicate with each other. Broadcast and multicast, described in Chapter 12, cannot be used with TCP.

Reliability is provided by:

• Application data is broken into blocks that TCP considers best suited for sending. This is quite different from UDP, where a datagram produced by the application keeps its length unchanged. The unit of information that TCP passes to IP is called a segment (see Figure 1-7). In Section 18.4 we will see how TCP determines the segment length.

• When TCP sends a segment, it starts a timer and waits for the destination to acknowledge receipt of the segment. If an acknowledgment is not received in time, the segment is retransmitted. Chapter 21 covers TCP's adaptive timeout and retransmission strategies.

• When TCP receives data from the other end of the connection, it sends an acknowledgment. This acknowledgment is not sent immediately; it is usually delayed by a fraction of a second, as discussed in Section 19.3.

• TCP maintains a checksum over its header and data. This is an end-to-end checksum intended to detect any change to the data in transit. If a segment arrives with an invalid checksum, TCP discards it and does not acknowledge it (expecting the sender to time out and retransmit).

• Since TCP segments are carried in IP datagrams, and IP datagrams can arrive out of order, TCP segments can also arrive out of order. The receiving TCP re-sequences the data if necessary, so that it is delivered to the application layer in the correct order.

• Since IP datagrams can be duplicated, the receiving TCP must discard duplicate data.

• TCP also provides flow control. Each side of a TCP connection has a fixed amount of buffer space, and the receiver only allows the other end to send as much data as its buffer can hold. This prevents a fast host from overflowing the buffers of a slower host.

The two applications exchange a stream of 8-bit bytes over a TCP connection. TCP does not insert record markers into the byte stream; we call this a byte stream service. If the application on one side writes 10 bytes, then 20 bytes, then 50 bytes, the application on the other side has no way of knowing how large each individual write was. The receiver might, for example, read those 80 bytes in four chunks of 20 bytes each. One end puts a byte stream into the TCP connection, and the identical byte stream comes out at the other end.

TCP also does not interpret the contents of the byte stream. It has no idea whether the bytes being exchanged are binary data, ASCII characters, EBCDIC characters, or something else; interpreting the byte stream is up to the applications on each side of the connection. This treatment of the byte stream is similar to the way the UNIX kernel treats files: the kernel does not interpret what an application reads or writes, and it cannot tell a binary file from a text file.
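As a minimal sketch of this byte-stream behaviour (the two functions below and the already connected Winsock SOCKET named sock are illustrative assumptions), the sender makes three writes of 10, 20 and 50 bytes, yet the receiver may see those 80 bytes arrive in a single recv() or split at arbitrary positions:

#include <winsock2.h>

// Sender: three separate writes of 10, 20 and 50 bytes.
void SendThreeChunks(SOCKET sock)
{
       char buf[50] = {0};            // dummy payload
       ::send(sock, buf, 10, 0);      // first write
       ::send(sock, buf, 20, 0);      // second write
       ::send(sock, buf, 50, 0);      // third write
}

// Receiver: the three writes are NOT visible as three reads; recv() may return
// anywhere from 1 to 80 bytes at a time until all 80 bytes have arrived.
void ReceiveEightyBytes(SOCKET sock)
{
       char buf[128];
       int total = 0;
       while (total < 80)
       {
              int n = ::recv(sock, buf, sizeof(buf), 0);
              if (n <= 0)             // error or connection closed
                     break;
              total += n;             // message boundaries are gone
       }
}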

How does TCP determine the segment length?

Again quoting Section 18.4 of TCP/IP Illustrated, Volume 1:

The maximum segment size (MSS) is the largest chunk of data that TCP will send to the other end. When a connection is established, each end can announce its MSS. We have already seen MSS values of 1024; the resulting IP datagram is then normally 40 bytes larger than that: 20 bytes for the TCP header and 20 bytes for the IP header.

Some texts refer to this as a "negotiated" option. It is not negotiated in any way. When a connection is established, each end can announce the MSS it expects to receive (an MSS option can only appear in a SYN segment). If one end does not receive an MSS value from the other, a default of 536 bytes is assumed (this default allows a 576-byte IP datagram to carry a 20-byte IP header, a 20-byte TCP header, and 536 bytes of data).

In general, the larger the MSS the better, as long as fragmentation does not occur (this is not always true; see Figures 24-3 and 24-4 for counterexamples). A larger segment carries more data per segment, which amortizes the cost of the IP and TCP headers and improves network utilization. When TCP sends a SYN, either because a local application wants to initiate a connection or because a connection request has arrived from another host, it can set the MSS to the MTU of the outgoing interface minus the fixed sizes of the IP and TCP headers. For Ethernet the MSS can be as large as 1460 bytes; with IEEE 802.3 encapsulation (see Section 2.2) the MSS can be up to 1452 bytes.

If the destination IP address is nonlocal, the MSS normally defaults to 536. Whether an address is local or nonlocal is determined as follows: a destination whose network ID and subnet ID both match ours is local; a destination whose network ID differs completely from ours is nonlocal; a destination with the same network ID but a different subnet ID could be either. Most implementations provide a configuration option (Appendix E and Figure E.1) that lets the system administrator specify whether different subnets are local or nonlocal. This option determines whether the announced MSS can be as large as possible (up to the outgoing interface's MTU minus the headers) or must fall back to the default of 536.

The MSS lets a host limit the size of datagrams that the other end sends it. Combined with the fact that a host can also limit the size of the datagrams it sends, this lets a host connected to a network with a small MTU avoid fragmentation.

This fragmentation avoidance only works, however, if one of the hosts is directly connected to a network with an MTU of less than 576 bytes.

If the hosts at both ends are attached to Ethernets and both announce an MSS of 536, but an intermediate network has an MTU of 296, fragmentation will still occur. Path MTU discovery (see Section 24.2) is the only way around this problem.

MSS = MTU of the outgoing interface - IP header - TCP header
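For example, on an Ethernet with an MTU of 1500 this gives MSS = 1500 - 20 - 20 = 1460 bytes, and with IEEE 802.3 encapsulation (MTU 1492) it gives MSS = 1492 - 20 - 20 = 1452 bytes.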

       

The final Ethernet frame is thus bounded by the MTU of our outgoing interface. When the destination host receives an Ethernet frame, the data climbs up the protocol stack from the bottom, with each layer removing the header its peer added. Each protocol layer examines the protocol identifier in its header to decide which upper-layer protocol should receive the data. This process is called demultiplexing, and Figure 1-8 shows how it happens.

So what is the MTU? It is actually a data link layer concept. Ethernet and 802.3, the two common LAN standards, both limit the size of a link-layer data frame:

• Maximum transmission unit (MTU)

 

As Figure 2-1 shows, Ethernet and 802.3 both limit the length of the data field of a frame, to 1500 and 1492 bytes respectively. This link-layer characteristic is called the MTU, the maximum transmission unit. Most networks of other types also have an upper limit.

If the IP layer has a datagram to send that is larger than the link-layer MTU, IP performs fragmentation, breaking the datagram into pieces so that each piece is smaller than the MTU. We discuss IP fragmentation in Section 11.5.

Figure 2-5 lists some typical MTU values, taken from RFC 1191 [Mogul and Deering 1990]. The MTU of a point-to-point link (e.g., SLIP or PPP) is not a physical property of the network media; rather, it is a logical limit chosen to provide adequate response time for interactive use. Section 2.10 shows how this limit is calculated, and in Section 3.9 we print the MTU of a network interface with the netstat command.

• Path MTU

 

When two hosts on the same network communicate, the MTU of that network matters directly. But when the two hosts communicate across multiple networks, each link may have a different MTU. What matters then is not the MTU of the networks the two hosts are attached to, but the smallest MTU on the path between them. This is called the path MTU.

The path between two hosts is not necessarily constant; it depends on the routes chosen at the time. Routing need not be symmetric (the route from A to B may differ from the route from B to A), so the path MTU need not be the same in the two directions.

RFC 1191 [Mogul and Deering 1990] specifies the path MTU discovery mechanism, a way to determine the path MTU at any time. We will see how it operates after ICMP and IP fragmentation have been introduced. In Section 11.6 we will look at the ICMP unreachable error used by this mechanism, and in Section 11.7 we will see how the traceroute program uses it to determine the path MTU to a destination. Sections 11.8 and 24.2 show how UDP and TCP operate when the implementation supports path MTU discovery.

TCP timeout and retransmission

When TCP sends a segment, it starts a timer and waits for the destination to acknowledge receipt of the segment. If an acknowledgement is not received in time, the segment will be retransmitted.

TCP provides a reliable transport layer. One of the ways it achieves this is by acknowledging the data it receives from the other end. But both data and acknowledgments can be lost. TCP handles this by setting a timer when it sends data; if no acknowledgment has arrived when the timer expires, the data is retransmitted. The crucial part of any implementation is the timeout and retransmission strategy: how to determine the timeout interval, and how frequently to retransmit.

TCP maintains four different timers for each connection:

1) A retransmission timer is used when waiting for an acknowledgment from the other end.

2) A persist timer keeps window-size information flowing even when the other end has closed its receive window.

3) A keepalive timer detects when the other end of an otherwise idle connection has crashed or rebooted.

4) A 2MSL timer measures how long a connection has been in the TIME_WAIT state.

The most important element of timeout and retransmission is measuring the round-trip time (RTT) of a given connection. Because routes and network traffic change, this time can vary frequently, and TCP should track these changes and adjust its timeout accordingly.

Most Berkeley-derived implementations measure only one RTT sample per connection at any one time: if the timer for a connection is already in use when a segment is sent, that segment is not timed.

Estimating the RTT precisely is difficult; see Chapter 21 of TCP/IP Illustrated, Volume 1 for the details.

TCP delayed acknowledgment

Interactive data is normally sent in segments smaller than the maximum segment size. For these small segments, the receiver uses delayed acknowledgments to see whether the acknowledgment can be held back and sent along with data going in the other direction. This usually reduces the number of segments.

Normally TCP does not send an ACK the instant it receives data; instead, it delays the ACK, hoping to send it along with data going in the same direction (this is sometimes called having the ACK piggyback on the data). Most implementations use a 200 ms delay: TCP waits up to 200 ms to see whether there is data to send along with the ACK.

Let’s take a look at this from another friend’s blog:

Abstract: When using TCP to transmit small packets, program design matters a great deal. If the design does not take TCP delayed acknowledgment, the Nagle algorithm, and Winsock buffering into account, performance will suffer badly. This article discusses these issues, walks through two cases, and offers some design guidelines for transmitting small packets.

Background: When the Microsoft TCP stack receives a packet, it starts a 200-millisecond timer. Once the ACK for that packet has been sent, the timer is reset, and the next received packet starts a new 200-millisecond timer. To improve transfer performance on intranets and the Internet, the Microsoft TCP stack uses the following policy to decide when to send the ACK for received data:

1. If a second packet arrives before the 200 ms timer expires, the ACK is sent immediately.
2. If there is data to send back to the sender before the timer expires, the ACK is piggybacked on that data and sent immediately.
3. When the timer expires, the ACK is sent immediately.

To avoid flooding the network with small packets, the Microsoft TCP stack enables the Nagle algorithm by default. This algorithm coalesces data from multiple Send calls and delays sending it until the ACK for the previous packet has arrived. There are two exceptions to the Nagle algorithm:

1. If the amount of data coalesced by the stack exceeds the MTU, it is sent immediately, without waiting for the ACK of the previous packet. (On Ethernet, the TCP maximum segment size is 1460 bytes.)
2. If the TCP_NODELAY option is set, the Nagle algorithm is disabled, and data passed to Send is delivered to the network immediately, with no delay.

To optimize performance at the application layer, Winsock copies the data passed to Send from the application buffer into the Winsock kernel buffer, and the Microsoft TCP stack then uses a Nagle-like policy to decide when to actually put the data on the network. The kernel buffer is 8K by default; its size can be changed with the SO_SNDBUF option, and Winsock can buffer more than SO_SNDBUF bytes if necessary. In most cases, completion of a Send call only means the data has been copied into the Winsock kernel buffer, not that it has reached the network. The one exception is when kernel buffering is disabled by setting SO_SNDBUF to 0.
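For reference, both options mentioned above are set with setsockopt. A minimal sketch, assuming an already connected SOCKET named sock (the TuneSmallPacketOptions wrapper is illustrative):

#include <winsock2.h>

// Sketch: tune the two socket options discussed above on a connected socket.
void TuneSmallPacketOptions(SOCKET sock)
{
       // Disable the Nagle algorithm so small packets are handed to the network
       // immediately instead of waiting for the previous packet's ACK.
       BOOL noDelay = TRUE;
       ::setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                    (const char*)&noDelay, sizeof(noDelay));

       // Adjust the Winsock kernel send buffer (8K by default). Setting it to 0
       // disables kernel buffering, which Case 1 below shows is usually harmful.
       int sndBuf = 8 * 1024;
       ::setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                    (const char*)&sndBuf, sizeof(sndBuf));
}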

Winsock uses the following rules to signal completion of a Send call to the application:

1. If the socket is still within its SO_SNDBUF quota, Winsock copies the data into the kernel buffer and completes the Send call.
2. If the socket has exceeded its SO_SNDBUF quota and only one previously buffered Send remains in the kernel buffer, Winsock still copies the data into the kernel buffer and completes the Send call.
3. If the socket has exceeded its SO_SNDBUF quota and the kernel buffer holds more than one pending Send, Winsock copies the data into the kernel buffer, but the Send call does not complete until the socket falls back below its SO_SNDBUF quota or only one pending Send remains in the kernel buffer.

Case 1: A Winsock TCP client needs to send 10,000 records to a Winsock TCP server, which stores them in a database. Record sizes range from 20 to 100 bytes. To keep the application logic simple, a possible design is:

1. The client sends in blocking mode, and the server receives in blocking mode.
2. The client sets SO_SNDBUF to 0, disabling Winsock kernel buffering, so that each record is sent as an individual packet.
3. The server calls Recv in a loop, passing a 200-byte buffer so that each record is retrieved in a single Recv call.

Performance: Testing showed that the client could send only about 5 records per second to the server. Sending all 10,000 records (roughly 976K bytes in total) took more than half an hour.

Analysis: Because the client does not set TCP_NODELAY, the Nagle algorithm forces the TCP stack to wait for the ACK of the previous packet before sending the next one. And because the client sets SO_SNDBUF to 0, kernel buffering is disabled, so the 10,000 Send calls can only proceed one packet (and one ACK) at a time. Each ACK is delayed by 200 ms, for the following reasons:

1. When the server receives a packet, it starts a 200 ms timer.
2. The server has no data of its own to send back, so the ACK cannot be piggybacked on outgoing data.
3. The client cannot send the next packet until it receives the ACK for the previous one.
4. Only when the server's timer expires is the ACK sent to the client.

How to improve performance: There are two problems with this design. First, latency: the client needs to be able to get two packets to the server within the 200 ms window. Since the client uses the Nagle algorithm by default, it should keep the default kernel buffer and not set SO_SNDBUF to 0; once the amount of buffered TCP data exceeds the MTU, it is sent immediately without waiting for the previous ACK. Second, the design calls Send once for every small record, which is inefficient. Instead, each record should be padded up to 100 bytes and 80 records should be sent per Send call. To let the server know how many records arrive in each batch, the client can prefix the batch with a header.

Case 2: A Winsock TCP client opens two connections to communicate with a Winsock TCP server that provides the stock quote service. The first connection acts as a command channel to transfer the stock number to the server. The second connection acts as a data channel to receive stock quotes. After the two connections are established, the client sends the stock number to the server over the command channel and waits for the stock quote information to return on the data channel. The client sends the next stock number request to the server after receiving the first stock quote information. The SO_SNDBUF and TCP_NODELAY options are not set on the client or server.

Performance: During the test, it was found that the client could only get 5 quotes per second.

Analysis:

In this design only one quote can be retrieved at a time. The first stock symbol is sent to the server over the command channel, and the quote the server returns over the data channel is received immediately. The client then immediately sends a second request; the Send call returns at once because the data is copied into the kernel buffer. However, the TCP stack cannot put this packet on the network yet, because the ACK for the previous packet has not arrived. After 200 milliseconds the server's timer expires, the ACK for the first request is sent back to the client, and the client's second request is finally delivered to the network. The quote for the second request comes back over the data channel right away, because by then the client's own timer has expired and the ACK for the first quote has already been sent to the server. This cycle repeats.

How to improve performance: A two-connection design is unnecessary here. If a single connection is used both to request and to receive quotes, the ACK for each stock request is carried back immediately, piggybacked on the returned quote. To improve performance further, the client should batch multiple stock requests into a single Send, and the server should return multiple quotes at once. If, for some special reason, two one-way connections must be used, the TCP_NODELAY option should be set on both client and server so that small packets are sent immediately without waiting for the previous packet's ACK.

Tips for improving performance: The two cases above illustrate some worst-case scenarios. When designing a solution that sends and receives large numbers of small packets, follow these recommendations:

1. If the data fragments are not time-critical, the application should splice them into larger blocks before calling Send. Since the send buffer is likely to be copied into the kernel buffer, the block should not be too large; a little under 8K is usually efficient. As soon as the Winsock kernel buffer holds more than an MTU's worth of data, several packets are sent immediately, leaving only the final partial packet; apart from that last packet, the sender is not held up by the 200-millisecond timer.
2. If possible, avoid one-way socket data flows.
3. Do not set SO_SNDBUF to 0 unless you must guarantee that data is handed to the network immediately after a Send. The default 8K buffer suits most situations; do not resize it unless testing shows a new size to be more efficient.
4. If delivery does not need to be guaranteed, use UDP.
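As a rough sketch of recommendation 1 combined with the fix suggested for Case 1 (the SendBatch helper, the BatchHead header and the fixed 100-byte record layout are assumptions for illustration), records can be padded and packed so that a single Send carries 80 of them:

#include <winsock2.h>
#include <cstring>

const int RECORD_SIZE       = 100;  // each record padded to 100 bytes (Case 1)
const int RECORDS_PER_BATCH = 80;   // 80 * 100 = 8000 bytes, just under the 8K kernel buffer

#pragma pack(push,1)
struct BatchHead { WORD nRecords; };          // tells the server how many records follow
#pragma pack(pop)

// Pack up to RECORDS_PER_BATCH padded records into one buffer and send them in one call.
int SendBatch(SOCKET sock, const char records[][RECORD_SIZE], int count)
{
       if (count > RECORDS_PER_BATCH)
              count = RECORDS_PER_BATCH;

       char buf[sizeof(BatchHead) + RECORDS_PER_BATCH * RECORD_SIZE];
       BatchHead head;
       head.nRecords = (WORD)count;
       std::memcpy(buf, &head, sizeof(head));
       for (int i = 0; i < count; ++i)
              std::memcpy(buf + sizeof(head) + i * RECORD_SIZE, records[i], RECORD_SIZE);

       return ::send(sock, buf, (int)(sizeof(head) + count * RECORD_SIZE), 0);
}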

Conclusion:

1. TCP provides a reliable transmission service for a continuous byte stream. TCP does not understand the content carried in the stream; interpreting it is up to the application layer.

2. The byte stream is continuous and unstructured, while our applications need ordered, structured data. We therefore have to define our own "rules" for interpreting the continuous byte stream: define our own packet type, and then map the byte stream onto that type.

To define a packet, let’s review the previous diagram of the encapsulation process for data entering the protocol stack:

In effect, a packet type gives the user data entering the protocol stack (shown in the figure above) a defined shape, so it can be identified and exchanged easily. It is similar to an envelope, which is itself a format for communication between people. An envelope looks like this:

Envelope format:

Addressee postcode

Addressee address

Name of recipient

letter

In C++, two kinds of type are suitable for expressing this concept: structs and classes. Many people online have discussed this and posted code; for example:

/************************************************************************/
/* Data packet information definition starts */
/************************************************************************/

#pragma pack(push,1)

 

/* Packet type enumeration [listed here as required] */

typedef enum{

              NLOGIN=1,

              NREG=2,

              NBACKUP=3,

              NRESTORE=3,

              NFILE_TRANSFER=4,

              NHELLO=5

} PACKETTYPE;

 

/* Packet header */

typedef struct tagNetPacketHead{

       BYTE version;      // version

       PACKETTYPE ePType; // packet type

       WORD nLen;         // packet body length

} NetPacketHead;

 

/* Packet object [header & body] */

typedef struct tagNetPacket{

       NetPacketHead netPacketHead; // packet header

       char *packetBody;            // packet body

} NetPacket;

 

#pragma pack(pop)

/**************** End of data packet information definition ****************/

3. Packet sending order and packet receiving

A) Because TCP decides the length of the segments it sends (as described above for the MSS), the data we hand to it may be split, or split and then recombined, before being passed to the network layer. The network layer transmits datagrams independently, so the order in which datagrams reach the destination's network layer is unpredictable. As a result, the receiver will see "half packet" and "sticky packet" problems. For example, if the sender writes Msg1 and Msg2 back to back, the sender's transport layer may do any of the following:

i. If both Msg1 and Msg2 are smaller than the TCP MSS, the two messages are sent in order, neither split nor merged.

ii. If Msg1 is too large, it is split into two TCP segments, Msg1-1 and Msg1-2; Msg2 is small enough to be sent in a single segment on its own.

iii. If Msg1 is too large, it is split into Msg1-1 and Msg1-2. Msg1-1 is sent first, and the remaining Msg1-2 is combined with the (smaller) Msg2 into one segment.

iv. If Msg1 is too large, it is split into Msg1-1 and Msg1-2. Msg1-1 is sent first, and even the remaining Msg1-2 combined with the (smaller) Msg2 is still small, so it may be merged with data that follows.

v. ... and many more combinations ...

B) Possible situations at the receiving end [transport layer]:

i. Msg1 is received first, then Msg2.

ii. Msg1-1, Msg1-2, and Msg2 are received in turn.

iii. Msg1, Msg2-1, and Msg2-2 are received in turn.

iv. Msg1 and Msg2-1 are received first, then Msg2-2.

v. // ... and many more ...

C) The order in which the receiver's network layer actually receives datagrams is likely to differ completely from the order in which the sender's network layer sent them. For example, the sender's network layer may send datagrams 1, 2, 3, 4, 5, while the receiver's network layer receives them as 2, 1, 5, 4, 3. The receiver's transport layer, however, guarantees ordering and reliability on the connection: it reassembles the out-of-order datagrams delivered by the network layer back into the order in which the sender's transport layer sent them, and only then hands the data to the receiver's application layer. So the application layer [the socket program we write] never has to worry about the order in which data arrives.

D) But, as noted above, sticky-packet and half-packet situations are unavoidable, and we have to handle them ourselves in the receiving application layer. The general approach is to define a buffer (or use a container from the standard library or a framework) into which received data is accumulated, then check whether the buffered data is at least a packet header long, and if so, whether the remaining buffered data is at least as long as the packet body declared in that header. The detailed steps are:

1. Append the received data to the end of the buffer.

2. Check whether the buffered data is at least the size of a packet header.

3. If it is not, go back to step 1. If it is, read out the packet header and check whether the remaining buffered data is at least the size of the packet body declared in that header; if not, go back to step 1.

4. If the buffer holds at least one full packet header plus packet body, take that header and body out for use. At this point you can copy the data elsewhere, or call a callback directly to save memory.

5. Remove the first packet from the buffer by copying the remaining buffered data to the front of the buffer, overwriting the header and body that were just consumed.

Many people have their own concrete implementations of sticky-packet and half-packet handling, such as the links listed earlier. Here I post one for reference:

Buffer implementation header file:

#include <windows.h>

 

#ifndef _CNetDataBuffer_H_

#define _CNetDataBuffer_H_

 

#ifndef TCPLAB_DECLSPEC

#define TCPLAB_DECLSPEC _declspec(dllimport)

#endif

 

/************************************************************************/
/* Data packet information definition starts */
/************************************************************************/

#pragma pack(push,1)

 

/* Packet type enumeration [listed here as required] */

typedef enum{

              NLOGIN=1,

              NREG=2,

              NBACKUP=3,

              NRESTORE=3,

              NFILE_TRANSFER=4,

              NHELLO=5

} PACKETTYPE;

 

/* Packet header */

typedef struct tagNetPacketHead{

       BYTE version;      // version

       PACKETTYPE ePType; // packet type

       WORD nLen;         // packet body length

} NetPacketHead;

 

/* Packet object [header & body] */

typedef struct tagNetPacket{

       NetPacketHead netPacketHead; // packet header

       char *packetBody;            // packet body

} NetPacket;

 

#pragma pack(pop)

/**************** End of data packet information definition ****************/

 

// The initial buffer size

#define BUFFER_INIT_SIZE 2048

 

// Buffer expansion coefficient [buffer expansion size = original size + coefficient * new data length]

#define BUFFER_EXPAND_SIZE 2

 

// A macro that calculates the length of the buffer except for the first header

#define BUFFER_BODY_LEN (m_nOffset-sizeof(NetPacketHead))

 

// Calculate whether the buffer data currently meets a full packet amount.

#define HAS_FULL_PACKET ( \

                                                 (sizeof(NetPacketHead)<=m_nOffset) && \

                                                 ((((NetPacketHead*)m_pMsgBuffer)->nLen) <= BUFFER_BODY_LEN) \

                                          )

 

// Check whether the packet is valid [the packet length is greater than zero and the packet is not empty]

#define IS_VALID_PACKET(netPacket) \

((netPacket.netPacketHead.nLen>0) && (netPacket.packetBody!=NULL))

 

// The length of the first packet in the buffer

#define FIRST_PACKET_LEN (sizeof(NetPacketHead)+((NetPacketHead*)m_pMsgBuffer)->nLen)

 

/* Data buffer */

class /*TCPLAB_DECLSPEC*/ CNetDataBuffer

{

/* Buffer operation related member */

private:

char *m_pMsgBuffer; // Data buffer

int m_nBufferSize; // Total buffer size

int m_nOffset; // Buffer data size

public:

int GetBufferSize() const; // Get the amount of data currently in the buffer

BOOL ReBufferSize(int); // Resize the buffer

BOOL IsFitPacketHeadSize() const; // Whether the buffered data fits the size of the packet header

BOOL IsHasFullPacket() const; // Does the buffer have complete packet data [including header and packet body]

BOOL AddMsg(char *pBuf,int nLen); // Add messages to the buffer

const char *GetBufferContents() const; // Get the buffer contents

void Reset(); // Buffer reset [clear buffer data, but not free buffer]

void Poll(); // Remove the first packet in the buffer header

public:

       CNetDataBuffer();

       ~CNetDataBuffer();

};

 

#endif

 

Buffer implementation file:

#define TCPLAB_DECLSPEC _declspec(dllexport)

 

#include "CNetDataBuffer.h"

 

/* Constructor */

CNetDataBuffer::CNetDataBuffer()

{

m_nBufferSize = BUFFER_INIT_SIZE; // Set the buffer size

m_nOffset = 0; // Set the data offset value [data size] to 0

       m_pMsgBuffer = NULL;

m_pMsgBuffer = new char[BUFFER_INIT_SIZE]; // Allocate the buffer to the initial size

ZeroMemory(m_pMsgBuffer,BUFFER_INIT_SIZE); // The buffer is cleared

}

 

/* Destructor */

CNetDataBuffer::~CNetDataBuffer()

{

if (m_pMsgBuffer != NULL) // free the buffer whenever it was allocated, not only when it holds data

       {

delete [] m_pMsgBuffer; // Release the buffer

              m_pMsgBuffer = NULL;

              m_nBufferSize=0;

              m_nOffset=0;

       }

}

 

 

 

/************************************************************************/
/* Description: Get the size of the data in the buffer */
/* Return: size of data in buffer */
/************************************************************************/

INT CNetDataBuffer::GetBufferSize() const

{

       return this->m_nOffset;

}

 

 

/************************************************************************/
/* Description: Check whether the buffered data is at least the size of a packet header */
/* Return: TRUE if it is, FALSE otherwise */
/************************************************************************/

BOOL CNetDataBuffer::IsFitPacketHeadSize() const

{

       return sizeof(NetPacketHead)<=m_nOffset;

}

 

/************************************************************************/
/* Description: Check whether the buffer holds a complete packet (header and body) */
/* Return: TRUE if the buffer contains a full packet, FALSE otherwise */
/************************************************************************/

BOOL CNetDataBuffer::IsHasFullPacket() const

{

// Return if the size of the header is not satisfied

//if (!IsFitPacketHeadSize())

       //     return FALSE;

 

return HAS_FULL_PACKET; // Macros are used here to simplify the code

}

 

/************************************************************************/
/* Description: Enlarge the buffer to make room for new data */
/* nLen: length of the new data */
/* Return: TRUE if the buffer was resized successfully */
/************************************************************************/

BOOL CNetDataBuffer::ReBufferSize(int nLen)

{

char *oBuffer = m_pMsgBuffer; // Save the original buffer address

       try

       {

nLen = (nLen<64 ? 64 : nLen); // Guarantee a minimum increment size

// New buffer size = increased buffer size + old buffer size

              m_nBufferSize = BUFFER_EXPAND_SIZE*nLen+m_nBufferSize;          

m_pMsgBuffer = new char[m_nBufferSize]; // Allocate the new buffer; m_pMsgBuffer now points to it

ZeroMemory(m_pMsgBuffer,m_nBufferSize); // The new buffer is cleared

CopyMemory(m_pMsgBuffer,oBuffer,m_nOffset); // Copy the contents of the original buffer to the new buffer

       }

       catch(...)

       {

              throw;

       }

 

delete []oBuffer; // Release the original buffer

       return TRUE;

}

 

/************************************************************************/
/* Description: Add a message to the buffer */
/* pBuf: the data to be added */
/* nLen: the length of the data to add */
/* Return: TRUE on success, FALSE otherwise */
/************************************************************************/

BOOL CNetDataBuffer::AddMsg(char *pBuf,int nLen)

{

       try

       {

// Check whether the buffer length meets the requirement. If not, adjust the buffer size

              if (m_nOffset+nLen>m_nBufferSize)

                     ReBufferSize(nLen);

             

// Copy new data to the end of the buffer

              CopyMemory(m_pMsgBuffer+sizeof(char)*m_nOffset,pBuf,nLen);

             

m_nOffset+=nLen; // Modify data offset

       }

       catch(...)

       {

              return FALSE;

       }

       return TRUE;

}

 

/* Get the buffer contents */

const char * CNetDataBuffer::GetBufferContents() const

{

       return m_pMsgBuffer;

}

 

/************************************************************************/
/* Buffer reset */
/************************************************************************/

void CNetDataBuffer::Reset()

{

      

       if (m_nOffset>0)

       {

              m_nOffset = 0;

              ZeroMemory(m_pMsgBuffer,m_nBufferSize);

       }

}

 

/************************************************************************/
/* Remove the first packet at the head of the buffer */
/************************************************************************/

void CNetDataBuffer::Poll()

{

       if(m_nOffset==0 || m_pMsgBuffer==NULL)

              return;

       if (IsFitPacketHeadSize() && HAS_FULL_PACKET)
       {
              int nFirstLen = FIRST_PACKET_LEN; // capture the packet length before the header is overwritten

              CopyMemory(m_pMsgBuffer, m_pMsgBuffer+nFirstLen*sizeof(char), m_nOffset-nFirstLen);

              m_nOffset -= nFirstLen; // the buffer now records that much less data
       }

}
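Before moving on to sending and receiving, here is a minimal usage sketch of CNetDataBuffer (the DemoSplitPacket function and the 5-byte "hello" body are illustrative assumptions): the first AddMsg delivers only half a packet, the second completes it, and Poll then removes it from the buffer.

#include "CNetDataBuffer.h"

// Usage sketch: feed the buffer a fragmented packet and take it out once complete.
void DemoSplitPacket()
{
       CNetDataBuffer buffer;

       // Header of one logical packet with a 5-byte body ("hello").
       NetPacketHead head;
       head.version = 1;
       head.ePType  = NHELLO;
       head.nLen    = 5;

       // First network read delivered only the header plus 2 body bytes (a half packet).
       char part1[sizeof(NetPacketHead) + 2];
       CopyMemory(part1, &head, sizeof(head));
       CopyMemory(part1 + sizeof(head), "he", 2);
       buffer.AddMsg(part1, sizeof(part1));
       // buffer.IsHasFullPacket() is FALSE here, so we would keep calling recv().

       // Second read delivers the remaining 3 body bytes.
       char part2[3] = { 'l', 'l', 'o' };
       buffer.AddMsg(part2, 3);

       if (buffer.IsHasFullPacket())
       {
              const NetPacketHead *p = (const NetPacketHead*)buffer.GetBufferContents();
              const char *body = buffer.GetBufferContents() + sizeof(NetPacketHead);
              // p->nLen is 5 and body points at "hello"; use them here.
              buffer.Poll(); // remove the consumed packet, keeping any bytes that follow it
       }
}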

 

TCP packet sending and receiving are encapsulated as follows:

The header file:

#include <windows.h>

#include "CNetDataBuffer.h"

 

// #ifndef TCPLAB_DECLSPEC

// #define TCPLAB_DECLSPEC _declspec(dllimport)

// #endif

 

 

#ifndef _CNETCOMTEMPLATE_H_

#define _CNETCOMTEMPLATE_H_

 

 

// Communication port

#define TCP_PORT 6000

 

/* Communication terminal [containing a Socket and a buffer object] */

typedef struct {

SOCKET m_socket; // Communication socket

CNetDataBuffer m_netDataBuffer; // The data buffer associated with this socket

} ComEndPoint;

 

/* The packet callback argument */

typedef struct{

       NetPacket *pPacket;

       LPVOID processor;

       SOCKET comSocket;

} PacketHandlerParam;

 

class CNetComTemplate{     

/* Socket operation related member */

private:

      

public:

void SendPacket(SOCKET m_connectedSocket,NetPacket &netPacket); // The packet sending function

BOOL RecvPacket(ComEndPoint &comEndPoint,void (*recvPacketHandler)(LPVOID)=NULL,LPVOID=NULL); // The packet receiving function

public:

       CNetComTemplate();

       ~CNetComTemplate();

};

 

#endif

 

 

Implementation file:

 

#include "CNetComTemplate.h"

 

CNetComTemplate::CNetComTemplate()

{

 

}

 

CNetComTemplate::~CNetComTemplate()

{

 

}

 

/************************************************************************/
/* Description: send a packet */
/* m_connectedSocket: the established, connected socket */
/* netPacket: the packet to be sent */
/************************************************************************/

void CNetComTemplate::SendPacket(SOCKET m_connectedSocket,NetPacket &netPacket)

{

if (m_connectedSocket==NULL || !IS_VALID_PACKET(netPacket)) // Exit if no connection has been established or the packet is invalid

       {

              return;

       }

::send(m_connectedSocket,(char*)&netPacket.netPacketHead,sizeof(NetPacketHead),0); // Send the packet header first

::send(m_connectedSocket,netPacket.packetBody,netPacket.netPacketHead.nLen,0); // Send the packet body

}
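One caveat about SendPacket: send() may accept fewer bytes than requested, so a more defensive version would loop until everything has been queued. A minimal sketch (the SendAll helper is an illustrative addition, not part of the original class):

// Sketch of a helper that keeps calling send() until all nLen bytes are queued
// (or an error occurs). Returns TRUE on success, FALSE on failure.
static BOOL SendAll(SOCKET s, const char *pBuf, int nLen)
{
       int nSent = 0;
       while (nSent < nLen)
       {
              int n = ::send(s, pBuf + nSent, nLen - nSent, 0);
              if (n == SOCKET_ERROR)      // give up on any socket error
                     return FALSE;
              nSent += n;
       }
       return TRUE;
}

// SendPacket could then call:
//   SendAll(m_connectedSocket, (char*)&netPacket.netPacketHead, sizeof(NetPacketHead));
//   SendAll(m_connectedSocket, netPacket.packetBody, netPacket.netPacketHead.nLen);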

 

/**************************************************************************/
/* Description: receive a packet */
/* comEndPoint: communication endpoint [containing socket and associated buffer] */
/* recvPacketHandler: callback invoked when a complete packet has been received */
/**************************************************************************/

BOOL CNetComTemplate::RecvPacket(ComEndPoint &comEndPoint,void (*recvPacketHandler)(LPVOID),LPVOID pCallParam)

{

       if (comEndPoint.m_socket==NULL)

              return FALSE;

      

       int nRecvedLen = 0;

       char pBuf[1024];

// Continue reading TCP segments from the socket if buffer data is insufficient for packet size

while (!(comEndPoint.m_netDataBuffer.IsHasFullPacket()))

       {

nRecvedLen = recv(comEndPoint.m_socket, pBuf, 1024, 0);

             

if (nRecvedLen == SOCKET_ERROR || nRecvedLen == 0) // stop reading on a socket error or if the peer has closed the connection normally

                     break;

comEndPoint.m_netDataBuffer.AddMsg(pBuf,nRecvedLen); // Store the newly received data into the buffer

       }

 

// There are three possible scenarios:

//1. The data read now contains at least one complete packet

//2. A SOCKET_ERROR occurred during the read

//3. The peer closed the connection before a complete packet was read

      

 

// If nothing was read, or no complete packet is available yet, return here

if (nRecvedLen==0 || (!(comEndPoint.m_netDataBuffer.IsHasFullPacket())))

       {

              return FALSE;

       }

 

if (recvPacketHandler != NULL)

       {

// Construct the packet to be passed to the callback function

              NetPacket netPacket;

              netPacket.netPacketHead = *(NetPacketHead*)comEndPoint.m_netDataBuffer.GetBufferContents();

netPacket.packetBody = new char[netPacket.netPacketHead.nLen]; // Dynamically allocate space for the packet body

              // Copy the body that follows the header in the buffer into the packet
              CopyMemory(netPacket.packetBody, comEndPoint.m_netDataBuffer.GetBufferContents()+sizeof(NetPacketHead), netPacket.netPacketHead.nLen);

             

// Construct the callback function argument

              PacketHandlerParam packetParam;

              packetParam.pPacket = &netPacket;

              packetParam.processor = pCallParam;

              packetParam.comSocket = comEndPoint.m_socket; // record which socket this packet arrived on

 

// Call the callback function

              recvPacketHandler(&packetParam);

 

              delete []netPacket.packetBody;

       }

 

// Remove the first packet from the buffer

       comEndPoint.m_netDataBuffer.Poll();

 

       return TRUE;

}
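As a usage sketch (the OnPacket callback, the ReceiveLoop function, and the already connected ComEndPoint are assumptions for illustration), a receiving side might drive RecvPacket like this:

#include "CNetComTemplate.h"

// Illustrative callback: invoked once a full packet has been assembled.
static void OnPacket(LPVOID pParam)
{
       PacketHandlerParam *p = (PacketHandlerParam*)pParam;
       // p->pPacket->netPacketHead.ePType tells us what kind of packet arrived;
       // p->pPacket->packetBody holds nLen bytes of body data.
}

// Illustrative receive loop over an already connected endpoint.
void ReceiveLoop(ComEndPoint &endPoint)
{
       CNetComTemplate netCom;
       // Keep pulling packets; RecvPacket returns FALSE on error or disconnect.
       while (netCom.RecvPacket(endPoint, OnPacket, NULL))
       {
              // Each successful call has already delivered one packet to OnPacket
              // and removed it from the buffer via Poll().
       }
}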

