Background

At GrowingIO, our server-side micro-services communicate with each other over gRPC. Each project that provides a service defines its own Protobuf message format and then uses protoc to generate the corresponding language bindings. The projects we maintain mostly implement services in Scala, and each service defines its own domain model (usually some case classes), while protoc by default generates Java code for the JVM platform. The Protobuf message formats and the domain models defined in the Scala projects correspond closely, and their attributes tend to be highly consistent. As a result, whenever we need to convert between the two, a lot of Protobuf-Java and Scala case class conversion code appears.

In general, Protobuf-Java and Scala types have natural counterparts, such as java.util.List and Seq/List/Array, or Timestamp and ZonedDateTime. Some Scala Option types map to Protobuf wrapper types; for example, Option[String] can be represented by StringValue. Since each type has its own characteristics, and type nesting adds a lot of complexity, we have been looking for a generic conversion scheme that is type-safe to use while eliminating as much bloated code as possible.
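
For instance, here is a minimal hand-written sketch of two of these correspondences, using the protobuf StringValue wrapper and the Timestamp well-known type:

import com.google.protobuf.{StringValue, Timestamp}
import java.time.{Instant, ZoneId, ZonedDateTime}

// Option[String] maps to the StringValue wrapper type
val phone: Option[String] = Some("1234567890")
val wrapped: Option[StringValue] = phone.map(StringValue.of)

// ZonedDateTime maps to Timestamp
val zdt: ZonedDateTime = ZonedDateTime.now()
val ts: Timestamp =
  Timestamp.newBuilder().setSeconds(zdt.toEpochSecond).setNanos(zdt.getNano).build()
val back: ZonedDateTime =
  Instant.ofEpochSecond(ts.getSeconds, ts.getNanos).atZone(ZoneId.systemDefault())

Each conversion is simple on its own; the boilerplate comes from repeating this for every field and every nesting level.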

We drew on the Reader/Writer design concept from https://github.com/playframew… and on the Scala macro approach to type conversion from https://github.com/scalalandi… Combining a Scala DSL, macros, and implicit parameters, we ended up with a set of solutions at https://github.com/changvvb/s…

Effect of the solution

Let's define a case class User and a Protobuf message UserPB here to compare the effect of using this scheme before and after.

case class User(
  id: Long,
  name: String,
  phoneNumber: Option[String],
  hobbies: Seq[String]
)
message UserPB {
    int64 id = 1;
    string name = 2;
    google.protobuf.StringValue phone_number = 3;
    repeated string hobbies = 4;
}

If we were to write the Scala case class to ProtoBuf-Java conversion by hand, it would look something like this:

val user = User(1,"Jack",Some("1234567890"),Seq("ping pong", "Reading ")) val Builder = UserPb. newBuilder // Build Builder. SetId (User. id). SetName (User. Name) // Set ID, Name(User. Name) If (user. PhoneNumber. IsDefined) {/ / here can also be abbreviated as the user directly. The phoneNumber. The map (StringValue. Of.) foreach (builder. SetPhoneNumber) Builder.setPhoneNumber (stringValue.of (User.phoneNumber.get))} Builder.setAllhobbies (User.holiday.asjava) // Set hobbies Field val UserPB = Builder. build // Build UserPB object

Converting a ProtoBuf-Java object back to a Scala case class takes a similar amount of code, except that the fields are read with get methods and the results passed to the User constructor.
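
A hedged sketch of that reverse direction, using the accessors protoc generates for UserPB:

import scala.jdk.CollectionConverters._

val userFromPb = User(
  id = userPB.getId,
  name = userPB.getName,
  phoneNumber = if (userPB.hasPhoneNumber) Some(userPB.getPhoneNumber.getValue) else None,
  hobbies = userPB.getHobbiesList.asScala.toSeq
)

With our solution, the code looks like this: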

val user = User(1,"Jack",Some("1234567890"),Seq("ping pong", "reading")) val userPB = Protoable[User, UserPB].toproto (user) // This is done in one line of code

As you can see, the code becomes an order of magnitude simpler, and the conversion is checked for type safety, genuinely achieving simplicity, safety, and ease of use. Below we introduce the design and thinking behind this tool in detail, as well as the philosophy behind it.

DSL design

The two most fundamental traits in the DSL are Protoable[-S, +P] and Scalable[+S, -P], where S stands for a Scala type and P for a Protobuf-Java type. A Protoable is something that can convert a Scala type to a ProtoBuf-Java type, and a Scalable does the opposite. Note the use of covariance and contravariance here.

trait Protoable[-S, +P] {
  def toProto(entity: S): P
}

trait Scalable[+S, -P] {
  def toScala(proto: P): S
}

Next, we write some constructors and some default converters in their companion objects, giving us the basic building blocks.

object Scalable {
  def apply[S, P](convert: P => S): Scalable[S, P] = x => convert(x)

  implicit val javaIntegerScalable = Scalable[Int, java.lang.Integer](_.toInt)
  // Protobuf wrapper type conversion
  implicit val stringValueScalable = Scalable[String, StringValue](_.getValue)
  implicit val zonedDateTimeScalable = Scalable[ZonedDateTime, Timestamp] { proto =>
    Instant.ofEpochSecond(proto.getSeconds, proto.getNanos).atZone(ZoneId.systemDefault())
  }
}

object Protoable {
  def apply[S, P](convert: S => P): Protoable[S, P] = x => convert(x)

  implicit val javaDoubleProtoable = Protoable[Double, java.lang.Double](_.toDouble)
  implicit val stringValueProtoable = Protoable[String, StringValue](StringValue.of)
  implicit val zonedDateTimeProtoable = Protoable[ZonedDateTime, Timestamp] { entity =>
    Timestamp.newBuilder().setSeconds(entity.toEpochSecond).setNanos(entity.getNano).build()
  }
}
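
With these defaults in scope, a converter can be summoned implicitly wherever one is needed. A small usage sketch:

val n: Int = implicitly[Scalable[Int, java.lang.Integer]].toScala(java.lang.Integer.valueOf(42))
val sv: StringValue = implicitly[Protoable[String, StringValue]].toProto("hello")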

Use macros to generate code automatically

Consider again the User and UserPB types defined earlier, and think about how to write the conversion between them with the DSL above. Jumping straight to the result, we could write:

new Protoable[User, UserPB] {   
  override def toProto(entity: User): UserPB = {
    val builder = UserPB.newBuilder()
    builder.setId(entity.id)
    builder.setName(entity.name)
    if(entity.phoneNumber.isDefined) {
      builder.setPhoneNumber(implicitly[Protoable[String, StringValue]].toProto(entity.phoneNumber.get))
    }
    builder.addAllHobbies(implicitly[Protoable[Seq[String], java.util.List[String]]].toProto(entity.hobbies))
    builder.build
  }
}

new Scalable[User, UserPB] {
  override def toScala(proto: UserPB): User = {
    new User(
      id = proto.getId,
      name = proto.getName,
      phoneNumber = if(proto.hasPhoneNumber) {
        Some(implicitly[Scalable[String,StringValue]].toScala(proto.getPhoneNumber))
      } else {
        None
      },
      hobbies = implicitly[Scalable[Seq[String], java.util.List[String]]].toScala(proto.getHobbiesList)
    )
  }
}

This is the code we want Scala macros to generate for us. It takes full advantage of the Protoable and Scalable traits we defined above, as well as Scala's implicit parameters, and it is deliberately shaped to make the abstract syntax tree easy to construct. The benefits include:

  1. Data transformation and processing all stay within the DSL framework, so every problem can be addressed through the Protoable and Scalable traits.
  2. It takes advantage of the compiler's implicit parameter lookup: for any field that needs a conversion between two different types, the compiler searches the current context for a suitable converter. implicitly[T] is a Scala standard library method that finds an implicit value of type T in scope; for example, here we need it to find an implicit parameter of type Scalable[String, StringValue] (as defined in trait Scalable).
  3. Combined with point 2, when converting between two objects we don't have to recurse into nested child objects ourselves. The generated code only has to focus on the relationships between the current object's fields, including some simple handling of Option and collection types; everything else is delegated to the compiler through implicit parameters, which greatly reduces the design cost.
  4. Types are easy to extend. A system-level converter is added to the Protoable and Scalable companion objects; a business-specific converter just needs to be defined in a scope your code can reach, and Scala's implicit lookup rules will find it, as sketched below.
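
For example, a business-level extension might look like the following sketch, where Money, MoneyPB, and their accessors are hypothetical:

implicit val moneyProtoable = Protoable[Money, MoneyPB] { m =>
  MoneyPB.newBuilder().setAmount(m.amount).setCurrency(m.currency).build()  // hypothetical accessors
}
implicit val moneyScalable = Scalable[Money, MoneyPB] { p =>
  Money(p.getAmount, p.getCurrency)
}

Any field of type Money nested in a larger object would then be converted automatically through these implicits.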

We seem to have found a general rule for handling the conversion between fields, so we can now use Scala macros to reflect over the types at compile time and generate the code we need. The macro constructors are defined in the respective companion objects.

object Scalable {
  def apply[S <: Product, P]: Scalable[S, P] = macro ProtoScalableMacro.scalasImpl[S, P]
}

object Protoable {
  def apply[S <: Product, P]: Protoable[S, P] = macro ProtoScalableMacro.protosImpl[S, P]
}

As you can see, each method needs only two type parameters. I won't go into the implementation details here; the idea is to use Scala macros to reflect over the types at compile time and build the abstract syntax tree (AST) of the code we need.
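
To give a flavor of the technique without reproducing the real ProtoScalableMacro, here is a heavily simplified, hypothetical Scala 2 macro bundle (CaseClassFields and its method are invented for illustration) that reflects over a case class at compile time and lifts its accessor names into the generated tree:

import scala.language.experimental.macros
import scala.reflect.macros.whitebox

object CaseClassFields {
  def fieldNames[T]: List[String] = macro CaseClassFieldsMacro.impl[T]
}

class CaseClassFieldsMacro(val c: whitebox.Context) {
  import c.universe._

  def impl[T: WeakTypeTag]: Tree = {
    // Reflect over T at compile time and collect its case-accessor names.
    val names = weakTypeOf[T].decls.collect {
      case m: MethodSymbol if m.isCaseAccessor => m.name.decodedName.toString
    }.toList
    // Lift the List[String] into an AST node that becomes the expansion result.
    q"$names"
  }
}

// CaseClassFields.fieldNames[User]  // expands at compile time to List("id", "name", ...)

The real macro walks the same reflection data but, for each field, emits a builder call or constructor argument plus the implicitly[...] lookup shown in the generated code above.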

With it, a two-way conversion can be done with very little code.

val user = User(1,"Jack", Some("1234567890"), Seq("ping pong","coding") val userPB = Protoable[User, userPB]. ToProto (User) // Convert a Scala case class object to ProtoBuf -Java Val user2 = Scalable[User, userPB].toscala (User) // Convert ProtoBuf toScala case class object assert(User == user2)

As you can see, no matter how many fields are involved, a conversion takes just one line of code, which removes a whole layer of boilerplate. It is also type-safe: if a type doesn't match or a parameter is missing, you get an error at compile time.
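
To make that concrete, consider a hypothetical BadUser (invented here) whose id has the wrong type; derivation then fails at compile time rather than at runtime:

case class BadUser(id: String, name: String, phoneNumber: Option[String], hobbies: Seq[String])

// Protoable[BadUser, UserPB]  // does not compile: no converter from String to the int64 id field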

For nested types, we just need to define a converter for the inner type and make it available in a scope the compiler can find. Suppose the type Outer has a field of type Inner; then we can write:

implicit val innerProtoable = Protoable[Inner, InnerPB]
Protoable[Outer, OuterPB].toProto(outerObj)
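
Concretely, with hypothetical Inner/Outer case classes and matching InnerPB/OuterPB messages, the reverse direction reads the same way:

// Hypothetical types: case class Inner(value: Int); case class Outer(inner: Inner, label: String)
// with matching messages InnerPB and OuterPB, where OuterPB has an InnerPB field.
implicit val innerScalable = Scalable[Inner, InnerPB]     // converter for the nested type
val outer = Scalable[Outer, OuterPB].toScala(outerPbObj)  // the macro finds innerScalable implicitly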

Going further: use the Builder pattern to customize internal conversion logic

Suppose we have a business rule that, when converting UserPB to User, the id must never be less than 0. The machinery above can't express even this simple requirement cleanly, and any workaround would be ugly. So we introduce Builder constructors, which inject custom rules into the macro-generated code.

val scalable = ScalableBuilder[User, UserPB]
  .setField(_.id, userPB => if (userPB.getId < 0) 0 else userPB.getId)
  .setField(_.name, userPB => userPB.getName /* or any custom logic for the name field */)
  .build
scalable.toScala(...)

setField takes two arguments: the first is a field selector, and the second is a lambda expression that takes the source object and returns the value the field should get. Finally, calling the build method produces a Scalable object.

In addition to ScalableBuilder, there is also a ProtoableBuilder for the reverse direction. Both Builders can be used to apply field-level logic or to fill in missing fields, which is very useful in many cases.
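
A hedged sketch of ProtoableBuilder, assuming its setField mirrors ScalableBuilder's selector-plus-lambda shape (the fallback value here is invented):

val protoable = ProtoableBuilder[User, UserPB]
  .setField(_.getPhoneNumber, user => StringValue.of(user.phoneNumber.getOrElse("")))  // supply a default when the field is missing
  .build
val userPB = protoable.toProto(user)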

Scala3 support

Scala3 was released just two months ago, and it brings some useful features to this tool:

  • More concise implicit parameter definitions (see the sketch below)
  • A new macro design based on inline and Quotes, which makes it easier to design the DSL, though the macro implementation had to be completely rewritten
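
For example, a default converter that is an implicit val in Scala 2 becomes a given in Scala3; a minimal sketch:

import com.google.protobuf.StringValue

// Scala3: an anonymous given instance replaces the Scala 2 implicit val.
given Scalable[String, StringValue] with
  def toScala(proto: StringValue): String = proto.getValue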

Scala3 still has some problems:

  • It cannot compile protoc-generated Java files (https://github.com/lampepfl/d…), but I have a PR that fixes this (https://github.com/lampepfl/d…)
  • Compile-time reflection exposes a smaller API, so type derivation runs into problems for some types; this is something we will have to overcome as we continue Scala3 support.