Moment For Technology

Define "Communication protocol" for Go language cloud application development

Posted on Aug. 9, 2023, 4:15 a.m. by Hazel Sims
Category: The back-end Tag: The back-end huawei Go

I shared this talk at Gopher 2020 and have written down the core content for your reference.

Huawei established its Cloud BU in 2016 and introduced CNCF projects such as Kubernetes and Prometheus. Most of this software was written in Go, and the R&D team naturally began using Go to build cloud services. However, the Go ecosystem was not mature at the time, so they had to write basic capability modules from scratch for each cloud application.

Let's start with a simple cloud application and look at its implementation.

Service Center is a simple service registration and discovery component, similar to Eureka; its responsibilities will not be detailed here.

Dynamic and static separation

To reduce data volume, the common parts are managed centrally as static information, while instances carry only their own dynamic data. A microservice therefore maps to its instances 1:n. Fields such as service name, version, and description live in the static record; removing this redundancy reduces network overhead — a field like "description" is never transmitted on the wire — while also standardizing the microservice model and keeping microservices manageable. For example, a microservice must have version information. The instance data actually delivered to a microservice's cache is much smaller, since many fields exist for human viewing rather than machine execution.
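A minimal sketch of this separation, with hypothetical field names (not the actual Service Center data model): the static record is stored once, and each instance references it by ID instead of repeating the human-facing metadata.

```go
package main

import "fmt"

// Microservice holds the static, human-facing metadata that is stored
// once per service and kept out of the discovery hot path.
type Microservice struct {
	ServiceID   string
	ServiceName string
	Version     string // every microservice must declare a version
	Description string // for humans; never sent to instance caches
}

// Instance holds only the dynamic data a caller needs to route traffic.
// Many instances map to one Microservice (1:n).
type Instance struct {
	InstanceID string
	ServiceID  string // reference to the shared static record
	Endpoints  []string
	Status     string
}

func main() {
	svc := Microservice{ServiceID: "svc-1", ServiceName: "order", Version: "1.0.0", Description: "order service"}
	insts := []Instance{
		{InstanceID: "i-1", ServiceID: svc.ServiceID, Endpoints: []string{"rest://10.0.0.1:8080"}, Status: "UP"},
		{InstanceID: "i-2", ServiceID: svc.ServiceID, Endpoints: []string{"rest://10.0.0.2:8080"}, Status: "UP"},
	}
	fmt.Println(len(insts), "instances share one static record for", svc.ServiceName)
}
```

However many instances a service has, the description and version travel over the network zero times on the discovery path.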

As the figure shows, a microservice's static information contains a schema, which associates contract documents with the microservice — again a 1:n mapping. Microservice documents can be uploaded manually or generated automatically from code. They can be viewed in the registry and are bound to the microservice's version number; once uploaded, they cannot be changed. So why document first?

Wrong way: The client development team waits for back-end services to be written before starting integration development.

The right way: Both the client and the server are developed simultaneously, using documentation as a benchmark, and the client mocks to remove the dependency on the server.

In addition, if documents are not reviewed promptly, very bad things can happen: inconsistent naming conventions, near-duplicate APIs, poor extensibility — any of these can add significantly to development costs. Reviewing early avoids them. This is why the registry added document upload and query capabilities.

Inter-service dependency management

A call chain that is too deep makes fault localization difficult and degrades performance. The logical hierarchy here is that one request completes via three calls across services A → B → C.

When two services depend on each other, more time must be spent analyzing the impact of a feature upgrade or change. For example, if A and B are mutually dependent and a new feature requires changes to both, how can they be released together?

In addition, simple dependencies facilitate system testing and analysis. This gives the architect a good way to look at the dependencies between microservices and adjust the architecture in a timely manner.

Caching mechanisms

The Service Center does not persist data itself. If the etcd network fails, the Service Center becomes unavailable. However, etcd requires three separate equipment rooms to achieve high availability — for example, nodes distributed 2, 2, and 1 across three data centers — which is obviously costly.

We have to use software means to work around this and gain some reliability. Service Center therefore introduces an asynchronous caching mechanism. At startup, it establishes a long connection to etcd, known as a watch. To guard against changes falling into the time window before the watch takes effect, a full query is performed first as a layer of protection. Resource changes observed at runtime are cached locally in the Service Center and applied asynchronously in a loop, which improves both availability and performance at low cost.
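The "full query first, then watch" pattern can be sketched as below. This is an illustrative model, not Service Center's actual code: the `Store` interface and `memStore` stand in for the etcd client, and `Apply` represents the handler that the watch channel would drive asynchronously.

```go
package main

import "fmt"

// KV is one record in the store (etcd, in Service Center's case).
type KV struct{ Key, Value string }

// Store abstracts the full query the cache runs before watching.
type Store interface {
	List() []KV
}

// Cache mirrors the store locally, so reads keep working even when
// the connection to etcd is lost.
type Cache struct{ data map[string]string }

// LoadFull performs the full query at startup; running it before the
// watch starts closes the time window in which a change could be lost.
func (c *Cache) LoadFull(s Store) {
	c.data = map[string]string{}
	for _, kv := range s.List() {
		c.data[kv.Key] = kv.Value
	}
}

// Apply handles one watch event; in the real service this is driven
// asynchronously by the etcd watch channel.
func (c *Cache) Apply(kv KV) { c.data[kv.Key] = kv.Value }

// Get serves reads from the local cache, not from etcd.
func (c *Cache) Get(key string) string { return c.data[key] }

// memStore is a stand-in back end for this sketch.
type memStore struct{ kvs []KV }

func (m memStore) List() []KV { return m.kvs }

func main() {
	c := &Cache{}
	c.LoadFull(memStore{kvs: []KV{{Key: "svc/order", Value: "UP"}}})
	c.Apply(KV{Key: "svc/order", Value: "DOWN"}) // a watch event arrives
	fmt.Println("svc/order =", c.Get("svc/order"))
}
```

The same list-then-watch idea appears in Kubernetes informers; the point is that reads never touch etcd directly, so an etcd outage degrades freshness rather than availability.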

Consul's implementation actually integrates compute and storage rather than separating them. I think such an architecture is a good choice for low operation and maintenance costs before business scale grows. However, it is questionable whether Consul can support a very large volume of microservices, so I stuck with separating compute and storage and put that into practice. Service Center also supports embedded etcd, so an integrated deployment remains available to reduce administrative costs early on.

Summary: we have taken several steps to improve microservice development efficiency and reduce network overhead, and improved performance and reliability through asynchronous caching — the capabilities we have accumulated in this service. Making the registration and discovery component friendlier to technical managers helps architects continuously improve and govern the architecture. Cloud applications, however, involve much more than delivering business functionality.

What we've just seen is the tip of the iceberg; underneath there are many base capabilities that still need to be written. Let's first expand the Service Center architecture.

Cloud service Service Center

This component is responsible for microservice registration and discovery and provides RESTful APIs.

It has four main modules:

  • Service registration discovery: Realize service topology awareness through registration discovery

  • Contract discovery: Each service has a contract record, supporting various formats such as OpenAPI and gRPC proto

  • RBAC: Role-based access control. Administrators can manage accounts and distribute accounts to microservices or different people

  • Service governance: Delivers governance rules to microservices, such as retry, rate limiting, circuit breaking, and routing policies

Delivering a cloud service involves far more than delivering business functions: security, resilience, privacy, and operations and maintenance capabilities must all be considered. Of course, some of these capabilities can be delegated to middleware such as a gateway. However, a large amount of functionality still needs to be written into, and reused by, every microservice — which is why the base capability library was written.

  • Quota management: Cloud resources are managed by tenant quota, and the resources that tenants can use are strictly limited

  • Alarm: When a critical fault occurs in a microservice, it is reported directly to the alarm system, instead of configuring alarm policies such as thresholds through the cloud service

  • Security: Encryption and decryption of certificates and passwords

  • ID generation: ID generation algorithms, used to generate microservice IDs, instance IDs, etc.

  • Various middleware: The call process needs auditing, call-chain tracing, metrics generation and monitoring, etc.

Project address

For these capabilities, simply extracting common library functions is not nearly enough — hybrid cloud, public cloud, and open-source projects all have different needs — so the following capabilities were built:

  • Pluggable: An implementation — of a quota system, for example — is introduced at compile time on demand (limited by Go language capabilities) and simply not included in the community build

  • Heterogeneous systems: A function needs multiple implementations. For example, auditing: the public cloud has an auditing system that must be connected, while the community edition prints to local logs.

  • Different algorithms: Decryption tools and ID generators need interchangeable implementations for different delivery scenarios or security requirements. For example, ID generation could be snowflake or UUID, and encryption could use AES or other public algorithms. You can also keep a core algorithm closed-source, write a simple algorithm as a plug-in and open-source it with the project, letting the community improve the peripheral capabilities while protecting the competitiveness of your commercial version.
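One common way to make algorithms swappable in Go is a named plug-in registry. The sketch below is illustrative (the names `Install`/`New` and the trivial sequential generator are assumptions, not the project's actual API): implementations register themselves, typically from `init` in packages imported at compile time, and configuration selects one by name.

```go
package main

import (
	"fmt"
	"strconv"
	"sync/atomic"
)

// IDGenerator is the plug-in point: different delivery scenarios can
// swap the algorithm (snowflake, UUID, ...) without touching callers.
type IDGenerator func() string

var generators = map[string]IDGenerator{}

// Install registers a named implementation. Importing a plug-in
// package at compile time decides which names are available.
func Install(name string, g IDGenerator) { generators[name] = g }

// New returns the generator selected by configuration.
func New(name string) (IDGenerator, bool) {
	g, ok := generators[name]
	return g, ok
}

var counter int64

func init() {
	// A trivial sequential generator stands in here for a real
	// snowflake or UUID implementation.
	Install("sequence", func() string {
		return strconv.FormatInt(atomic.AddInt64(&counter, 1), 10)
	})
}

func main() {
	g, _ := New("sequence")
	fmt.Println("generated id:", g())
}
```

A closed-source build would `Install` its proprietary algorithm in a private package; the open-source build ships only the simple one, and callers never change.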

Go Chassis development framework

How do we deal with these problems?

  • Pluggable: introduced at compile time on demand

  • Heterogeneous services: A backend service may have multiple implementations.

  • Different algorithms: decryption tools, ID generators, different implementations to replace algorithms for different delivery scenarios or security requirements.

  • Distributed systems are hard to govern: How do we build a framework that helps satisfy the needs of cloud-native applications?

To face this diversity of requirements and benefit all newly planned components, we needed a unified framework and standard to speed up development — and that is how the Go Chassis development framework was born.

As the figure shows, the business logic is the code written by the user; the framework itself consists of the protocol layer, the middle layer, and the plug-in suite, while the management part is the cloud service. Applications developed with the framework can transparently connect to and use these cloud capabilities. For example:

  • The registration discovery plug-in connects to Service Center or Kubernetes

  • The quota management plug-in connects to the quota management service of the cloud service

  • Middleware such as metrics monitoring connects to Prometheus

So how do we use this framework to speed up development?

Approach 1: Use the back-end service as a plug-in

Common back-end services:

  • Quota management

  • Authentication service

  • Object Storage Service

One element of cloud native is treating back-end services as attached resources. A back-end service here is anything not developed and maintained by your own organization — from an application runtime to an infrastructure-invisible, serverless API server — a black box to you.

When we call back-end services, they are actually outside our governance system. For testability (e.g., mock testing) and replaceability (business continuity, switching to a better service at any time, meeting transformation needs, etc.), we need to turn them into plug-ins so they can be flexibly replaced or removed.

Develop your plug-in and install it into the framework so it can be pulled up and called via configuration.
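A sketch of the pattern, using quota management as the back end. The interface, the `InstallQuota`/`NewQuota` names, and the `unlimited` community default are all illustrative assumptions, not the framework's real API — the point is that callers depend only on the interface, so the cloud implementation, a mock, or nothing at all can sit behind it.

```go
package main

import (
	"errors"
	"fmt"
)

// QuotaManager abstracts a back-end quota service so it can be mocked
// in tests or omitted entirely from the open-source build.
type QuotaManager interface {
	Remaining(tenant string) (int, error)
}

var quotaPlugins = map[string]func() QuotaManager{}

// InstallQuota registers an implementation under a name that
// configuration can later select.
func InstallQuota(name string, f func() QuotaManager) { quotaPlugins[name] = f }

// NewQuota pulls up the configured implementation.
func NewQuota(name string) (QuotaManager, error) {
	f, ok := quotaPlugins[name]
	if !ok {
		return nil, errors.New("unknown quota plugin: " + name)
	}
	return f(), nil
}

// unlimited is the community default: no external back end at all.
type unlimited struct{}

func (unlimited) Remaining(string) (int, error) { return 1 << 30, nil }

func init() { InstallQuota("unlimited", func() QuotaManager { return unlimited{} }) }

func main() {
	q, _ := NewQuota("unlimited")
	n, _ := q.Remaining("tenant-a")
	fmt.Println("remaining quota:", n)
}
```

A public-cloud build would register a plug-in that calls the real quota service under another name, and switching between the two becomes a one-line configuration change.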

Approach 2: Establish a requirements baseline

Before we can provide any service, we need to meet some basic requirements, such as:

  • The request body size must be limited

  • APIs must be rate limited

  • Passwords cannot be stored in plain text

  • Access must be authenticated

  • No single point of failure

  • Access must be audited

  • Operations and maintenance capabilities

The first step is to standardize the runtime invocation model, since different departments may insist on proprietary protocols. Service governance is left to the core framework, while business units independently develop or integrate existing protocols. When different departments in a company each develop their own protocols and do their own service governance, it becomes very difficult to unify the business onto one architecture and one tool chain.

We generalize a protocol-independent Invocation model so that requests can be handled in a unified processing chain.

The design of the processing chain is similar to AOP, that is, adding code logic before and after the business process for special processing, such as auditing user actions and collecting request metric data.

Each handler can define its own ResponseCallBack, which receives the return value of the subsequent handlers — so a handler can retrieve the result of later handlers, or even of the business logic itself. This helps fully decouple generic logic (i.e., middleware) from business logic.
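The chain-plus-callback idea can be sketched as below. This is a simplified model, not Go Chassis's actual handler API: here the `next()` closure plays the role of the ResponseCallBack, letting a handler run code both before the rest of the chain and after the business result is available.

```go
package main

import "fmt"

// Invocation is the protocol-independent request model.
type Invocation struct {
	Operation string
	Result    string
}

// Handler is one link of the chain. Calling next() runs the rest of
// the chain, so a handler can wrap logic before and after it (AOP
// style) and observe the final result — the ResponseCallBack role.
type Handler func(inv *Invocation, next func())

// Chain composes handlers in order, ending in the business logic.
func Chain(handlers []Handler, business func(*Invocation)) func(*Invocation) {
	return func(inv *Invocation) {
		var run func(i int)
		run = func(i int) {
			if i == len(handlers) {
				business(inv)
				return
			}
			handlers[i](inv, func() { run(i + 1) })
		}
		run(0)
	}
}

func main() {
	audit := func(inv *Invocation, next func()) {
		fmt.Println("audit: before", inv.Operation)
		next() // everything after this line sees the business result
		fmt.Println("audit: after, result =", inv.Result)
	}
	call := Chain([]Handler{audit}, func(inv *Invocation) { inv.Result = "ok" })
	call(&Invocation{Operation: "CreateService"})
}
```

Rate limiting, auditing, and metrics each become one more `Handler` in the slice; the business function never changes.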

We can look at the currently supported middleware: rate limiting, circuit breaking, load balancing, authentication and authorization, auditing — all are implemented with this mechanism. Putting all of the company's tool chains, service governance means, and security compliance into the processing chain speeds up R&D, unifies standards, and reduces the management burden.

The framework provides both imperative invocation capabilities, such as metrics collection, and declarative usage, such as traffic management. Declarative rate limiting based on traffic characteristics also supports canary releases.

Overview of plug-in capabilities:

As you can see, it already supports a number of ecosystems and provides abstract interfaces to a variety of back-end systems to facilitate rapid application development.

With such a framework, the business team can focus on business code without understanding the complexity of the back ends and other non-functional requirements. The returns are as follows:

  • Mock tests can be performed on large systems to improve delivery speed

  • Address different delivery scenarios

  • Back ends are guaranteed to be replaceable

  • Interfaces separate R&D responsibilities

From the perspective of architecture and business evolution, back-end technology evolves rapidly, and we need to ensure the timely evolution of the system and products through rapid replacement of back-end services. Interface design must therefore be considered: substitutability is more important than reusability. This also satisfies the dependency inversion principle. When we develop a new microservice, we only need to implement its business logic.

Approach 3: Configuration governance

Please refer to

Approach 4: Graceful startup and shutdown

This means services can be started and stopped in an instant. We won't cover fast startup here, since the Go + Docker runtime platform handles that scenario; instead we'll cover fault-oriented shutdown handling.

A Protocol Server usually represents a protocol, but it can also be a programming model — HTTP, for example, has programming models such as Beego and Gin. Here is a configuration example for the framework, meaning a single microservice process pulls up two HTTP ports and one gRPC port.

Upon receiving a system signal, each server is iterated over and stopped.

In addition, a custom graceful-shutdown feature contributed by community developers allows users to hijack the signal and shutdown handling, as well as customize pre- and post-shutdown processing.
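A minimal sketch of signal-driven shutdown across multiple protocol servers. The `Server` interface and function names are assumptions for illustration, not the framework's API; only the stdlib `os/signal` usage is standard.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// Server is a minimal stand-in for a protocol server (HTTP, gRPC, ...).
type Server interface {
	Name() string
	Stop() error
}

// GracefulStop blocks until a termination signal arrives, then stops
// every registered server.
func GracefulStop(servers []Server) {
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
	<-sig
	StopAll(servers)
}

// StopAll iterates over every server and stops it; it is split out of
// the signal loop so the shutdown logic can be tested directly.
func StopAll(servers []Server) {
	for _, s := range servers {
		if err := s.Stop(); err != nil {
			fmt.Println("failed to stop", s.Name(), ":", err)
		}
	}
}

// fakeServer records whether it was stopped, standing in for a real
// HTTP or gRPC listener.
type fakeServer struct {
	name    string
	stopped bool
}

func (f *fakeServer) Name() string { return f.name }
func (f *fakeServer) Stop() error  { f.stopped = true; return nil }

func main() {
	fmt.Println("a real process would block in GracefulStop, waiting for SIGINT/SIGTERM")
	StopAll([]Server{&fakeServer{name: "http"}, &fakeServer{name: "grpc"}})
}
```

Hijacking the shutdown then amounts to substituting your own function for `GracefulStop`, with custom logic before and after the `StopAll` call.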

Approach 5: Lightweight kernel

The basic dependencies are only the necessary libraries: Prometheus, OpenTracing, JWT, the Kubernetes client, and go-restful. Registration discovery is also pluggable.

A variety of capabilities are provided as a series of extensions in a separate repository — the gRPC protocol, the Kubernetes registry, etc. — which can be introduced on demand. The naming references Chrome extensions.

We were able to build such a lightweight core thanks to three key mechanisms, which are the core concepts of Go Chassis:

  • Middleware: Handler chain processes requests

  • Bootstrap: Replace the default implementation and execution logic arbitrarily

  • Plug-in: Arbitrarily replace back ends and implementations

Rebuild your own wheel

This is the idea behind the Go Chassis development framework and its logo. "Chassis" means a vehicle chassis, and the logo is shaped like a tire.

We often hear of people reinventing the wheel, and of others criticizing them for it. I believe this happens because existing solutions still fall short in open design. During development, each team embeds too much of its own project or business into the code, so everyone ends up having to reinvent a wheel that fits their own needs.

We should instead describe and implement a more abstract, well-defined paradigm and build capabilities on top of this chassis. You take the original wheel and enhance it to suit you — off-road or snow tires — making and adjusting it yourself. We abstracted our R&D team's accumulated capabilities into a variety of interfaces and plug-ins, not to remanufacture wheels but to rebuild on existing ones, so that projects can run faster and all extended functions remain compatible with each other. This is the biggest efficiency improvement for a Go team. I prefer to call this wheel the cloud service development wheel; it is very well suited to our cloud service teams, as you can see in the following examples.


Video-call back end for Huawei Honor phones and smart screens

It has launched on Huawei public cloud, supporting the terminal company's communication with hundreds of millions of registered users, with service governance implemented on Go Chassis and Service Center.

Edge computing: KubeEdge

§ Developed its service governance base on Go Chassis

§ Manages nearly 100,000 edge nodes across 29 provinces and autonomous regions, with more than 500,000 edge applications deployed. It has supported continuous adjustment and updates of the toll-gantry information collection service at more than 10,000 toll stations, meeting the requirement of collecting more than 300 million records every day.

§ Provides good platform support for future innovative businesses such as vehicle-road collaboration and autonomous driving


The Gopher conference also featured a talk by Shopee on their use of Go Chassis in their supply chain platform. Check out their share.


1. Define your app development communication protocol. Two things are very important in a company: culture and a code of conduct. These are what every leader must define first, so that leaders don't have to be hands-on everywhere. Clearly, defining a set of communication protocols is just as important. Go Chassis is the communication protocol of our Go team. Each microservice is developed by a small team — possibly the same team, possibly different ones. The framework we built defines this set of communication protocols so as to reduce R&D cost while preserving scalability, without over-restricting development. We formalized API First to review API design, used dependency management to review the rationality of service relationships, and stipulated that all capabilities be deposited into plug-ins and middleware — thereby defining "communication protocols" for teams that develop and govern cloud services.

2. Go's role in new infrastructure. As we all know, the 5G era allows far more devices to connect. The evolution of the Internet was the PC era first, mobile phones second, and the interconnection of everything third. Small devices will inevitably spawn new semiconductors and new operating systems (such as Huawei's HongMeng/HarmonyOS), and layer by layer there will be a need for new languages and new frameworks — the Go language itself fits that position. I won't go into it here, because you know where Go fits. Distributed devices also need a framework for governance, and Go Chassis will play an important role here. It is likely to become a development base in the infrastructure space, as the adoption of Go Chassis by projects such as KubeEdge and Video Cloud shows.

Welcome to join the community; the open-source project address:

Feel free to leave a comment on which part you'd like me to expand with in-depth analysis.

If you find this content helpful, please give our projects Service Center and Go Chassis a star. Thank you.

Please scan the QR code to follow my personal subscription account.
