By Sandy Sandy

Tars is the unified application framework for the backend logic layer that Tencent has been using since 2008. It currently supports the C++, Java, PHP, Node.js, and Golang languages. The framework provides users with a complete set of solutions covering development, operations, and testing, helping a product or service to be developed, deployed, tested, and launched quickly. It integrates an extensible protocol codec, a high-performance RPC communication framework, name routing and discovery, release monitoring, log statistics, configuration management, and more. With it, teams can quickly build stable and reliable distributed applications in a microservices style and achieve complete and effective service governance. The framework is used in Tencent's core businesses and is quite popular: the number of service nodes deployed and operated on it has reached tens of thousands. Tars was open-sourced in April 2017 and joined the Linux Foundation in June 2018. TarsGo is the Go language version of Tars, open-sourced in September 2018; the project lives at github.com/TarsCloud/T… Stars are welcome.

A new version of TarsGo has been released

Since the initial open-source release, users have reported a number of requirements. Based on that feedback, we implemented and released version 1.1.0, which adds support for PB (Protocol Buffers), Zipkin distributed tracing, filters (custom plug-in writing), and context, along with a number of optimizations and bug fixes.

New feature: PB support

Protocol Buffers (PB) is Google's language-neutral, platform-neutral data interchange format, first published in July 2008. With the rise of microservice architectures, and thanks to its excellent performance, protobuf is used in many areas such as network transport, configuration files, and data storage, and is widely deployed across the Internet. Users who already use gRPC with proto files and want to switch to the TARS protocol would otherwise have to translate those proto files into Tars files, which is cumbersome and error-prone. To avoid this, we decided to write a plug-in that generates TARS RPC logic directly from proto files. protoc-gen-go reserves a mechanism in its code for writing plug-ins; following the gRPC plug-in as a reference (mainly grpc/grpc.go and the import stub link_grpc.go), we wrote tarsrpc/tarsrpc.go and link_tarsrpc.go. To use them:

  • Place these two files under protoc-gen-go and run go install to regenerate the protoc-gen-go binary

  • Define the proto file

  • Generate the serialization and RPC-related interface code using the recompiled protoc-gen-go:

protoc --go_out=plugins=tarsrpc:. helloworld.proto
  • Write the TARS client and server code. Parameters use the PB-generated structs; the rest of the code follows the same logic as a normal TARS service.

  • For detailed principles and usage documentation, see the Tencent Cloud community articles
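As a sketch, the helloworld.proto file fed to the command above might look like this (the service and message names here are illustrative, not taken from the TarsGo repository):

```protobuf
syntax = "proto3";

package helloworld;

// A minimal service definition. Running protoc with the tarsrpc plug-in
// against this file generates TARS RPC stubs instead of gRPC ones.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```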

New feature: Filter mechanism, with support for Zipkin distributed tracing

To let users write their own plug-ins, we added a filter mechanism, divided into server-side filters and client-side filters. Users can implement their own TarsGo plug-ins on top of this mechanism.

// Server-side filter: dispatch d and f are used to invoke the user code;
// req and resp are the incoming request packet and the server response packet.
type ServerFilter func(ctx context.Context, d Dispatch, f interface{}, req *requestf.RequestPacket, resp *requestf.ResponsePacket, withContext bool) (err error)

// Client-side filter: msg carries the obj information, adapter information,
// and the req and resp packets; timeout is the user-configured call timeout.
type ClientFilter func(ctx context.Context, msg *Message, invoke Invoke, timeout time.Duration) (err error)

// Register a server-side filter.
func RegisterServerFilter(f ServerFilter)

// Register a client-side filter.
func RegisterClientFilter(f ClientFilter)


With filters, we can hook into server and client requests, for example to create OpenTracing spans for distributed tracing. Let's look at a client filter example:

func ZipkinClientFilter() tars.ClientFilter {
	return func(ctx context.Context, msg *tars.Message, invoke tars.Invoke, timeout time.Duration) (err error) {
		var pCtx opentracing.SpanContext
		req := msg.Req
		// If the context already carries a span, use it as the parent span;
		// otherwise create a new span named after the RPC request function.
		if parent := opentracing.SpanFromContext(ctx); parent != nil {
			pCtx = parent.Context()
		}
		cSpan := opentracing.GlobalTracer().StartSpan(
			req.SFuncName,
			opentracing.ChildOf(pCtx),
			ext.SpanKindRPCClient,
		)
		defer cSpan.Finish()
		cfg := tars.GetServerConfig()
		// Set the span information: the IP of the calling client, the
		// requested interface, method, protocol, client version, etc.
		cSpan.SetTag("client.ipv4", cfg.LocalIP)
		cSpan.SetTag("tars.interface", req.SServantName)
		cSpan.SetTag("tars.method", req.SFuncName)
		cSpan.SetTag("tars.protocol", "tars")
		cSpan.SetTag("tars.client.version", tars.TarsVersion)
		// Inject the span into the Status field of the request packet,
		// which is a map[string]string.
		if req.Status != nil {
			err = opentracing.GlobalTracer().Inject(cSpan.Context(), opentracing.TextMap, opentracing.TextMapCarrier(req.Status))
			if err != nil {
				logger.Error("inject span to status error:", err)
			}
		} else {
			s := make(map[string]string)
			err = opentracing.GlobalTracer().Inject(cSpan.Context(), opentracing.TextMap, opentracing.TextMapCarrier(s))
			if err != nil {
				logger.Error("inject span to status error:", err)
			} else {
				req.Status = s
			}
		}
		err = invoke(ctx, msg, timeout)
		if err != nil {
			ext.Error.Set(cSpan, true)
			cSpan.LogFields(oplog.String("event", "error"), oplog.String("message", err.Error()))
		}
		return err
	}
}

The server side also registers a filter, whose main job is to extract the call-chain context from the Status field of the request packet and use it as the parent span when recording the call information.
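As a simplified, self-contained illustration of that server-side step (this is not the real opentracing API; the key names "trace-id" and "span-id" and the helper extractParent are invented for this sketch):

```go
package main

import "fmt"

// statusCarrier stands in for the request packet's Status field,
// a map[string]string that the client filter injected its span into.
type statusCarrier map[string]string

// extractParent mimics what tracer.Extract does: it recovers the ids the
// client wrote into Status; ok reports whether a parent context was present.
func extractParent(status statusCarrier) (traceID, spanID string, ok bool) {
	traceID, ok1 := status["trace-id"]
	spanID, ok2 := status["span-id"]
	return traceID, spanID, ok1 && ok2
}

func main() {
	// Status as it might arrive from a client that injected its span.
	status := statusCarrier{"trace-id": "abc123", "span-id": "42"}
	if traceID, spanID, ok := extractParent(status); ok {
		// The server span would record these ids as its parent linkage,
		// stitching the client and server spans into one trace.
		fmt.Println(traceID, spanID)
	}
}
```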

For the detailed code, see TarsGo/tars/plugin/zipkintracing. For complete client and server examples of Zipkin tracing, see ZipkinTraceClient and ZipkinTraceServer under TarsGo/examples.

New feature: Context support

Previously, TarsGo used no context, either in the generated client code or in the implementation code supplied by the user. This made it difficult for the framework to pass information such as the client IP and port to user code, or for users to pass call-chain information to the framework. Context support was added through an interface refactoring, and this kind of information is now carried by the context. The refactoring uses a fully compatible design that preserves existing user behavior.

The server uses the context

type ContextTestImp struct {
}

// Just add a ctx context.Context parameter to the interface method.
func (imp *ContextTestImp) Add(ctx context.Context, a int32, b int32, c *int32) (int32, error) {
	// Through the context we can get information passed by the framework,
	// such as the client IP below, and even return information to the
	// framework. See the interfaces under tars/util/current.
	ip, ok := current.GetClientIPFromContext(ctx)
	if !ok {
		logger.Error("Error getting ip from context")
	}
	_ = ip // use ip as needed
	return 0, nil
}

// Where we previously called AddServant, we now call AddServantWithContext.
app.AddServantWithContext(imp, cfg.App+"."+cfg.Server+".ContextTestObj")

The client uses the context


    ctx := context.Background()
    c := make(map[string]string)
    c["a"] = "b"
    // To set the tars request context, pass a map[string]string such as c.
    // This argument is optional.
    ret, err := app.AddWithContext(ctx, i, i*2, &out, c)

See TarsGo/examples for complete server and client examples.

Other optimizations and fixes

  • Changed the element type of the request packet's SBuffer vector to fix interoperability with other languages
  • Fixed a stat monitoring reporting problem
  • The log level is now updated from the remote configuration
  • Fixed a deadlock in the routing-refresh goroutine in an extreme case
  • Optimized the invocation scheme and added an optional goroutine pool scheme
  • Fixed a panic caused by goroutine startup order
  • Applied golint to most of the code