What is RPC

Remote Procedure Call (RPC) is a protocol for inter-process communication. It makes it easier to build distributed applications and provides powerful remote-call capability without sacrificing the semantic simplicity of local calls. An RPC framework should provide a transparent call mechanism so that users do not have to explicitly distinguish between local calls and remote calls: calling a remote service should feel the same as calling a local service.

A basic RPC framework usually consists of two parts: a transport protocol and a serialization protocol. Mature RPC libraries also encapsulate advanced service-oriented features such as service discovery, load balancing, and circuit breaking/degradation.

Common transport protocols include HTTP/HTTP2 and TCP/UDP.

Common serialization protocols include text-based formats such as XML and JSON, and binary formats such as Protobuf and Hessian.

HTTP supports connection-pool reuse, and its payload can be serialized with the binary Protobuf protocol, but HTTP carries a lot of redundant request and response header data. A custom TCP protocol can control packet size precisely and improve communication efficiency.

Therefore, most RPC frameworks choose a custom TCP protocol plus a binary serialization protocol as the channel for inter-process communication.

What is gRPC

gRPC is Google's implementation of RPC. It uses HTTP/2 for transport and Protobuf for serialization. In addition, gRPC comes with a mature IDL scheme: interfaces and data structures are described in an IDL file (.proto), which is then compiled into code for different platforms through tooling.

gRPC uses HTTP/2 as its transport protocol. HTTP/2 brings the following optimizations:

  • Binary framing

HTTP/1.x uses newline-delimited plain text. HTTP/2 splits all transmitted information into smaller messages and frames and encodes them in binary. A frame is the smallest unit of HTTP/2 communication; each frame carries a frame header that identifies the message the frame belongs to. A message, such as a request message or a response message, consists of one or more frames.

  • Multiplexing

In HTTP/1.x, concurrent requests need multiple TCP connections at the same time, and the number of connections is limited. With HTTP/2's binary framing, all requests for the same domain can be made over a single connection. Data is sent as messages, a message may be composed of multiple frames, frames can be sent out of order, and the receiver reassembles the message according to the identifier in the frame header.

  • Stream (request) prioritization

In HTTP/2, each request can carry a 31-bit priority value, where 0 indicates the highest priority and larger values indicate lower priority. With this priority value, clients and servers can apply different strategies when handling different streams so that streams, messages, and frames are sent in an optimal order.

  • Server push

In contrast to HTTP/1.x, HTTP/2 allows the server to push multiple responses for a single client request.

  • Header compression

For the duration of the connection, the client and server each maintain a header table. Header data that has already been sent is not sent again; header key-value pairs are only updated or appended.

HTTP/2 addresses the problems of HTTP/1.x: it is close to raw TCP in efficiency yet more convenient than a custom TCP protocol. A custom TCP protocol would also have to solve problems such as controlling the number of concurrent connections, reconnecting after disconnects, handling intermittent network failures, crash protection, message caching, and retransmission.

HTTP2 reference link

Protocol Buffers (Protobuf for short) is a serialization protocol developed by Google. It is independent of development language and platform and has good extensibility. Like other serialization frameworks, Protobuf can be used for data storage and for communication protocols. Protobuf's serialized output is much smaller than XML or JSON, which carry a lot of descriptive information and therefore produce larger messages; Protobuf also uses varint encoding to further reduce the data size. Protobuf serialization and deserialization are also much faster than XML and JSON: Protobuf converts the binary stream directly into a complete object through bit operations, whereas XML and JSON must first be parsed into an XML or JSON object structure and then matched field by field.
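
As a rough illustration of the size difference, consider a tiny hypothetical message (the field name and value below are made up for this example):

message Temperature {
  // Setting celsius = 5 serializes to just two bytes: 0x08 0x05
  // (0x08 = field number 1 plus the varint wire type, 0x05 = the varint value 5).
  // The equivalent JSON text {"celsius":5} already takes 13 bytes.
  int32 celsius = 1;
}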

gRPC uses the HTTP/2 transport protocol to transfer Protobuf-serialized binary data, so it is highly efficient and uses few resources.

Protocol Buffer official document

How to write a proto file

The .proto file describes the service, its interface names, and the data structures used in requests and responses. It acts as an "interface contract" between services on different language platforms, and can be compiled into interface code for Go, Java, C++, Python, Objective-C, and other languages with the dedicated proto compilation tools. The proto syntax is very simple; here is a complete proto file:

// Specify the Protocol Buffers version
syntax = "proto3";

// Specify the Objective-C class prefix
option objc_class_prefix = "RTG";

// Namespace
package routeguide;

service RouteGuide {
  rpc GetFeature(Point) returns (Feature) {}
  rpc ListFeatures(Rectangle) returns (stream Feature) {}
  rpc RecordRoute(stream Point) returns (RouteSummary) {}
  rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
}

message Point {
  int32 latitude = 1;
  int32 longitude = 2;
}

message Rectangle {
  Point lo = 1;
  Point hi = 2;
}

message Feature {
  string name = 1;
  Point location = 2;
}

message RouteNote {
  Point location = 1;
  string message = 2;
}

message RouteSummary {
  int32 point_count = 1;
  int32 feature_count = 2;
  int32 distance = 3;
  int32 elapsed_time = 4;
}

A service definition uses service as the keyword, followed by the service name. A service is compiled into a separate object, and the interfaces within the service are compiled into methods of that object (in Objective-C).

service RouteGuide {
   ...
}

After the service is defined, interfaces can be added to it. An interface starts with the rpc keyword and contains three elements: the interface name, the request data structure, and the response data structure.

rpc GetFeature(Point) returns (Feature) {}

Both the request and the response of an interface can be defined as a continuous message stream, meaning that the client can send one request and the server can return multiple responses. Likewise, the client can send multiple requests in a row, and the server can return a single response after receiving all of them. Simply add the stream keyword before the corresponding data structure to indicate that the request or response is a stream.

// The response is a stream
rpc ListFeatures(Rectangle) returns (stream Feature) {}

// The request is a stream
rpc RecordRoute(stream Point) returns (RouteSummary) {}

Besides making either the request or the response a stream as above, both can be streams at the same time by adding the stream keyword to both data structures. The request stream and the response stream are independent of each other, and both client and server can process the received messages in any combination. For example, the server can wait until it has received all messages and then return data to the client, or it can return a message immediately after each message it receives, or any combination of the two. The order in which messages are received at each end is the same as the order in which they were sent.

rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}

A data structure is declared with the message keyword followed by its name, and every field must be preceded by its type.

Each field has a number, usually assigned incrementally starting from 1. The largest allowed field number is 2^29 − 1, but field numbers 1 through 15 are encoded in a single byte while 16 and above take extra space in the binary encoding, so it is recommended to keep frequently used fields within 1 to 15.
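
For instance (hypothetical fields, purely to illustrate the encoding cost):

message Example {
  int32 a = 1;    // field numbers 1-15: the field tag is encoded in one byte
  int32 b = 16;   // field numbers 16 and above: the tag needs at least two bytes
}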

Messages are compiled into a file separate from the service interface, and the service interface source depends on the compiled message source.

message Point {
  int32 latitude = 1;   // 1 is the field number
  int32 longitude = 2;  // same as above
}

Fields inside a message support nesting, similar to mapping JSON onto a model. For example, a Rectangle is represented by two coordinates; since Point has already been defined, Point can be used as the type of the coordinate fields when defining Rectangle.

message Point {
  int32 latitude = 1;
  int32 longitude = 2;
}

message Rectangle {
  Point lo = 1;
  Point hi = 2;
}

When compiling to an Objective-C interface library, you can specify a class prefix that is applied to all service and data-structure class names.

option objc_class_prefix = "RTG";

Compiling the proto file

You can download the protoc compiler and the grpc_objective_c_plugin plugin to compile the Objective-C client code by hand, but it is easier to compile through CocoaPods and integrate the result into an iOS project. A sample podspec file is provided; just replace the proto file path section with your own.

Looking at the podspec, it does two main things:

First, it downloads the protoc compiler tool and the Objective-C compiler plugin via two source-free pod dependencies. These pods contain no source code and simply use CocoaPods' dependency-management download chain as a tool, so they are not packaged into the project and have no impact on it.

Second, before the pod project is generated, it runs a compilation command using the compiler downloaded in the first step, generates the source files into a specified directory, and integrates them into the project as a dependency. The process is the same as managing any third-party library with CocoaPods, and the generated source code and its required dependencies can be viewed directly in the Pods project.

A complete gRPC service interface generates two pairs of Objective-C files, pbobjc and pbrpc. The pbobjc files contain the messages from the proto file, i.e. the models, while the pbrpc files contain the interface API; a sketch of the generated interface follows the file list below.

  • *.pbobjc.h
  • *.pbobjc.m
  • *.pbrpc.h
  • *.pbrpc.m
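
For the RouteGuide service, the generated interface in *.pbrpc.h looks roughly like the sketch below. This is a simplified illustration based on the calls used later in this article, not the exact generated code: superclasses, nullability annotations, and the deprecated version 1 methods are omitted.

// Simplified sketch of the generated service interface in RouteGuide.pbrpc.h
@interface RTGRouteGuide : NSObject

// Single request -> single response
- (GRPCUnaryProtoCall *)getFeatureWithMessage:(RTGPoint *)message
                              responseHandler:(id<GRPCProtoResponseHandler>)handler
                                  callOptions:(GRPCCallOptions *)callOptions;

// Single request -> streaming response
- (GRPCUnaryProtoCall *)listFeaturesWithMessage:(RTGRectangle *)message
                                responseHandler:(id<GRPCProtoResponseHandler>)handler
                                    callOptions:(GRPCCallOptions *)callOptions;

// Streaming request -> single response
- (GRPCStreamingProtoCall *)recordRouteWithResponseHandler:(id<GRPCProtoResponseHandler>)handler
                                                callOptions:(GRPCCallOptions *)callOptions;

// Streaming request -> streaming response
- (GRPCStreamingProtoCall *)routeChatWithResponseHandler:(id<GRPCProtoResponseHandler>)handler
                                              callOptions:(GRPCCallOptions *)callOptions;

@end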

Calling the API

Calling the API is very simple and similar to calling any common networking library. gRPC provides two callback styles: delegate and block. In fact, gRPC exposes two versions of the interface in the .pbrpc.h file; only version 1 provides the block callback, but the version 1 interface is marked as not recommended in the official comments, so the recommended callback style is delegate.

Delegate call mode

- (void)execRequest {
  RTGRectangle *rectangle = [RTGRectangle message];
  rectangle.lo.latitude = 405E6;
  rectangle.lo.longitude = -750E6;
  rectangle.hi.latitude = 410E6;
  rectangle.hi.longitude = -745E6;

  GRPCUnaryProtoCall *call = [_service listFeaturesWithMessage:rectangle
                                               responseHandler:self  // <- the designated delegate
                                                   callOptions:nil];
  [call start];
}

// Delegate methods; dispatchQueue specifies the queue on which callbacks are delivered
- (dispatch_queue_t)dispatchQueue {
  return dispatch_get_main_queue();
}

- (void)didReceiveProtoMessage:(GPBMessage *)message {
  RTGFeature *response = (RTGFeature *)message;
  if (response) {
    ....
  }
}

- (void)didCloseWithTrailingMetadata:(NSDictionary *)trailingMetadata error:(NSError *)error {
  if (error) {
    ...
  }
}

Block-style interface call (not recommended)

RTGPoint *point = [RTGPoint message];
point.latitude = 40E7;
point.longitude = -74E7;

[service getFeatureWithRequest:point handler:^(RTGFeature *response, NSError *error) {
  if (response) {
    // Successful response received
  } else {
    // RPC error
  }
}];
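
Both call styles above assume that a _service instance (the generated RTGRouteGuide client) already exists. A minimal sketch of how it might be created is shown below; the host address is a made-up local test address, and the exact initializer and option names can vary between gRPC Objective-C versions.

// Hypothetical local test server address
static NSString * const kHostAddress = @"localhost:50051";

- (void)setupService {
  // Use a plain-text (non-TLS) connection for local testing
  GRPCMutableCallOptions *options = [[GRPCMutableCallOptions alloc] init];
  options.transportType = GRPCTransportTypeInsecure;

  // _service is the generated RTGRouteGuide client used in the examples above
  _service = [[RTGRouteGuide alloc] initWithHost:kHostAddress callOptions:options];
}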

Continuous sending and receiving

Continuous (streaming) receiving is called in the same way as a single request/response; the only difference is that the didReceiveProtoMessage callback is invoked multiple times for a streaming response.

For continuous sending, call the service object's interface method, which returns a call object; call start on the returned object and then keep writing request messages to it.

GRPCStreamingProtoCall *call = [_service recordRouteWithResponseHandler:self
                                                              callOptions:nil];
[call start];
for (id feature in features) {
  RTGPoint *location = [RTGPoint message];
  ...
  [call writeMessage:location];
}
[call finish];

Finally, call finish to tell the server that all the request data has been sent.

When both sending and receiving are streams, the call pattern is simply a combination of continuous receiving and continuous sending.

- (void)execRequest {
  NSArray *notes = @[[RTGRouteNote noteWithMessage:@"First message" latitude:0 longitude:0],
                     [RTGRouteNote noteWithMessage:@"Second message"latitude:0] ... ] ; GRPCStreamingProtoCall * call = [_service routeChatWithResponseHandler: self / / agent callOptions: nil]; [call start];for (RTGRouteNote *note innotes) { [call writeMessage:note]; } [call Finish]; // After the notification is sent, - (void)didReceiveProtoMessage:(GPBMessage *)message {RTGRouteNote *note = (RTGRouteNote) *)message;if(note) { ... }} / / errors or receive complete is called - (void) didCloseWithTrailingMetadata: (NSDictionary *) trailingMetadata error error: (NSError *) {if(! error) { ... }else{... }}Copy the code

Interface compatibility

No matter how an interface is built on top of network communication, interface compatibility problems are unavoidable, and gRPC, as a general-purpose service invocation framework, naturally needs to solve them. Interface compatibility can be divided into "forward compatibility" and "backward compatibility". If the receiver upgrades its interface fields while the sender still uses the old version of the fields, and the receiver can still parse the old data, that is backward compatibility. If the sender upgrades its interface fields while the receiver still uses the old version, and the receiver can still parse the new data, that is forward compatibility.

gRPC uses Protobuf as its message encoding, so interface compatibility is mainly about the compatible handling of message fields, which in practice means how Protobuf parses fields. Backward compatibility means that when the receiver uses a new version and parses a data stream from the old version, fields that were added or changed in the new version do not exist in the old data stream, so they are given their default values during parsing. Forward compatibility means that the received data stream contains fields the receiver does not recognize; that field data is discarded directly.
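
For example, suppose the Feature message gains a new field in a later version (the description field below is hypothetical):

// Old version
message Feature {
  string name = 1;
  Point location = 2;
}

// New version adds field 3. An old receiver parsing new data simply ignores the
// unknown field 3 (forward compatibility); a new receiver parsing old data gets
// the default value "" for description (backward compatibility).
message Feature {
  string name = 1;
  Point location = 2;
  string description = 3;
}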

Both forward and backward compatibility require that field names and field numbers are never reused over the lifetime of the interface. This means that when you need to change a field name or number, you should mark the old one as reserved so that it cannot be used again in the future. Old fields may still exist in some older clients, so if another developer reuses an old field name or number, the meaning of the data received by those older clients would change.

enum Foo {
  reserved 2, 15, 9 to 11, 40 to max;
  reserved "FOO"."BAR";
}

Field types can also be changed, but only within the same group. For example, int32, uint32, int64, uint64, and bool are all compatible with one another, and sint32 and sint64 are compatible with each other; types within the same group can be converted back and forth.
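
As an illustration, widening the point_count field of RouteSummary (purely illustrative):

message RouteSummary {
  // int32 -> int64 is wire compatible: int32, uint32, int64, uint64 and bool
  // share the same varint encoding.
  // int32 -> sint64 would NOT be compatible: sint32/sint64 use ZigZag encoding
  // and are only compatible with each other.
  int64 point_count = 1;
}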

Summary

On the client side, gRPC is effectively a combination of networking framework, API interface, and model layer. gRPC handles network request management, interface definition, and model parsing for us, and it completely hides the data conversion and connection management, so users do not need to care about those details. Invoking a gRPC interface is like invoking a local asynchronous service, and gRPC can completely replace the client's network layer.

Advantages

  • An interface description shared by the client and the server greatly reduces the cost of multi-platform communication
  • Excellent transmission and data-compression performance, effectively reducing bandwidth requirements
  • Good ease of use and easy integration
  • Compressed binary data, which offers better security than plain text
  • An active community and a stable project

Disadvantages

  • There are certain learning costs
  • Third-party library dependencies, which increase package size
  • Not compatible with existing communication interfaces, so demand for migration may not be high
  • Binary compressed data adds extra cost to packet-capture analysis