Preface

It is well known that there are four things programmers hate most: writing comments, writing documentation, others not writing comments, and others not writing documentation. It is therefore necessary to find ways to reduce the cost of writing and maintaining documentation. The current workflow for writing technical documents is shown below:

The pain points are summarized in the following three aspects:

For the above problems, our solutions are as follows:

  • Local editing and browsing converge in the IDE, providing an immersive experience;
  • A strong association is established between documents and code, reducing copy-paste, improving linkage, and increasing how often documents actually reach readers;
  • Code and documents live in the same Git repository, so version management prevents the document/code mismatches caused by business iteration;
  • Tooling exports the documents online, so they remain accessible through a browser;

Solution overview

Compared with the original model, the new solution is completely decoupled from the browser/document editor, and synchronization of the online pages is left entirely to automatic deployment triggered on a schedule.

The orange part of the figure is the focus of the solution. It is split into offline and online parts according to the division of labor, with responsibilities as follows:

  • Offline: IDEA Plugin
    • Implements parsing and analysis of the custom language;
    • Provides document content preview and editing;
    • Provides a series of utility functions that associate code with documentation;
  • Online: Gradle/Dokka Plugin
    • Bridges and reuses the IDE plugin's semantic analysis and preview-content generation capabilities;
    • Extends the Dokka Renderer to export HTML and Feishu (Lark) documents;

Many interesting techniques were used in building the solution; they are described in more detail below.

Offline results

The IDEA Plugin provides a sidebar and a powerful editor. It is introduced below from two perspectives: editing and browsing.

Editing experience

Suppose there is source code as follows:

public class ClassA {
    public static final String TAG = "tag";

    ClassB b;

    /**
     * method document here.
     *
     * @param params input string
     */
    public static void invoke(@NotNull String params) {
        System.out.println("invoke method!");
        System.out.println("this is method body: " + params);
    }

    public ClassA() {
        System.out.println("create new instance!");
    }

    private static final class ChildClass {
        /**
         * This is a method from inner class.
         */
        void innerInvoke() {
            System.out.println("invoke method from child!");
        }
    }
}

Adding a reference to the class in the document looks like this:

Unlike copy-pasted code, the new approach has the following advantages:

  • More relevant: the preview changes as the referenced code changes;
  • Refactoring-friendly: when a referenced class, method, or field is renamed, the document content changes automatically, preventing stale references;
  • More intuitive: while editing or browsing, the source of the code can be found more quickly;
  • Smoother input, with full completion support;

Browsing experience

Compared to regular Markdown, the new scheme is friendlier to use:

  • Immersive use: the interface is embedded in the IDE, with no need to jump to other applications;
  • A line marker appears next to source code that is referenced by a document, and one click opens that document;
  • The document "browser" supports IDE-consistent code highlighting and reference navigation;

Online results

Documents in the codebase are periodically and automatically deployed to the remote side. Taking a real business document as an example, the HTML deployed to a light service looks like this:

The corresponding Feishu document looks like this:

These online pages are aimed at readers outside the team; they are synchronized regularly by CI and do not offer jumps back into the IDE.

Technical implementation

The architecture of the project is shown below:

Since the user-facing experience lives mainly in IDEA (Android Studio), our technology stack is built on IntelliJ. By module, it divides into three parts:

  • The infrastructure layer
  • IDEA Plugin
  • Gradle / Dokka Plugin

General-purpose logic (chiefly the language implementation) is encapsulated in the infrastructure layer, which relies solely on IntelliJ Core. Compared with the full IntelliJ Platform, IntelliJ Core retains only the language-related capabilities, stripping out code such as codeInsight and UI components, and is widely used across IntelliJ products (including Kotlin and Dokka in the figure).

The three main modules are described below.

Infrastructure

Throughout the solution, the infrastructure layer is the cornerstone of all functionality; at its core is the ability to establish associations between code and documentation. Here we designed and implemented a markup language, CodeRef, to meet the following requirements:

  • Simple syntax, with a one-to-one correspondence between its structure and the source code;
  • Accurate pointing, i.e. the one-to-one relationship must always hold;
  • Support for keeping only declarations (dropping bodies) to improve the signal-to-noise ratio;
  • Extensibility, so that new features can be added in later iterations;

The CodeRef language is not complex. It uses a Kotlin/Java style of keywords, strings, and parentheses to form statements and blocks of code in which each node has a corresponding source node. Here is a simple example, with the correspondence marked with colored text:

Note: even if the document content itself is unchanged, any change to the "source" side shown in the image changes the rendered result in real time, producing a "dynamic binding" effect. So how is "dynamic binding" implemented? Roughly, it breaks down into three steps:

  1. Design the syntax and write the language implementation;
  2. Combine existing capabilities (IntelliJ Core, Kotlin Plugin) to obtain the two syntax trees and establish a one-way correspondence from document nodes to source nodes;
  3. Combine existing capabilities (Markdown Parser) to generate the document text for rendering;

Language implementation

Based on the IntelliJ Platform, implementing a custom language requires at least the following:

  1. Write the BNF definition describing the grammar;
  2. Use Grammar-Kit to generate the Parser, the PsiElement interfaces, the Flex definition, etc.;
  3. Generate the Lexer from the Flex file with JFlex;
  4. Write Mixin classes and PsiTreeUtil-style helpers to implement the custom methods declared in the PSI;

BNF is the basis for everything that follows, and the choice of each definition and value is crucial. A short example:

{
  /* ... token definitions (IElementType), e.g. AT='@', CLASS='class' ... */
  /* some of the "extends" rules */
  extends("class_ref_block|direct_ref|empty_ref") = ref
  extends("package_location|class_location") = ref_location
  extends("class_ref|method_ref|field_ref") = direct_ref
}

ref_location ::= package_location | class_location

package_location ::= AT package_def {
  pin=2  // only when '@' and package_def appear together is the whole element taken as package_location
}

class_location ::= AT class_def {
  pin=2  // only when '@' and class_def appear together is the whole element taken as class_location
}

direct_ref ::= class_ref | method_ref | field_ref | empty_ref {
  // some custom methods; getNameStringLiteral, getReferencedElement and getOptionalArgs
  // need to be implemented in the mixin class specified below
  methods = [getNameStringLiteral getReferencedElement getOptionalArgs]
  mixin="com.bytedance.lang.codeRef.psi.impl.CodeRefDirectRefMixin"
}

class_ref ::= CLASS L_PAREN string_literal [COMMA ref_args_element*] R_PAREN {
  /* ... */
  pin=1
}

The small fragment above defines the syntax for @class(""), @package(""), class(""), and so on. The keys in practice are pin, which affects how "unfinished" code is typed, and recoverWhile, which controls where a rule ends. See the Grammar-Kit documentation for details.

Once the BNF is written, we can use Grammar-Kit (and JFlex) to generate the Parser and the Lexer, which together give us basic syntax highlighting and the ability to parse files into PSI trees. Register both in a custom ParserDefinition, combine it with a custom LanguageFileType, and files of that type will be parsed by the IDE into a tree of PsiElements. Schematic diagram:

It is worth mentioning that the later implementation of the Formatter, CompletionContributor, and other components is heavily affected by the process above, and poor choices here inevitably lead to rework. The BNF definition of Fortran, a language with "relatively simple" features, makes for interesting reading.
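To make the registration step concrete, here is a minimal sketch of what a ParserDefinition for such a language typically looks like. The class names (CodeRefLexerAdapter, CodeRefParser, CodeRefTypes, CodeRefFile, CodeRefLanguage) stand for the Grammar-Kit/JFlex-generated and hand-written classes and are assumptions for illustration, not necessarily the plugin's actual names; the definition is then registered through the lang.parserDefinition extension point in plugin.xml together with the LanguageFileType.

import com.intellij.lang.ASTNode
import com.intellij.lang.ParserDefinition
import com.intellij.lang.PsiParser
import com.intellij.lexer.Lexer
import com.intellij.openapi.project.Project
import com.intellij.psi.FileViewProvider
import com.intellij.psi.PsiElement
import com.intellij.psi.PsiFile
import com.intellij.psi.TokenType
import com.intellij.psi.tree.IFileElementType
import com.intellij.psi.tree.TokenSet

class CodeRefParserDefinition : ParserDefinition {
    override fun createLexer(project: Project): Lexer = CodeRefLexerAdapter()   // JFlex-generated lexer
    override fun createParser(project: Project): PsiParser = CodeRefParser()    // Grammar-Kit-generated parser
    override fun getFileNodeType(): IFileElementType = FILE
    override fun getWhitespaceTokens(): TokenSet = TokenSet.create(TokenType.WHITE_SPACE)
    override fun getCommentTokens(): TokenSet = TokenSet.create(CodeRefTypes.COMMENT)
    override fun getStringLiteralElements(): TokenSet = TokenSet.create(CodeRefTypes.STRING_LITERAL)
    // Turns AST nodes produced by the parser into the generated PsiElement implementations
    override fun createElement(node: ASTNode): PsiElement = CodeRefTypes.Factory.createElement(node)
    override fun createFile(viewProvider: FileViewProvider): PsiFile = CodeRefFile(viewProvider)

    companion object {
        val FILE = IFileElementType(CodeRefLanguage.INSTANCE)
    }
}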

One-way correspondence between syntax trees

Since the IDE has built-in support for the Java and Kotlin languages, after the previous step we have two syntax trees, and it is time to associate their nodes:

Here we use a PsiReferenceContributor to register, for each CrElement, a reference to the source PsiElement based on the contents of its double-quoted string. How do we find the element each string points to? In three steps:

  1. Except for the root node, each node recursively walks up through its parents until it reaches the root;

  2. The root node names a package or class by fully qualified name, and the position of the element within that package or class is determined by the result of the previous step;

  3. The corresponding PsiElement in the source code is located through JavaPsiFacade and a series of lookup methods (a sketch follows the note below).

Note: the Kotlin Plugin provides "Light" PsiElement implementations compatible with Java's, so only the Java case is considered here.
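As a rough illustration of step 3, a lookup through JavaPsiFacade might look like the sketch below; the function and parameter names are illustrative, not the plugin's actual code, and a real implementation would also disambiguate overloads and handle fields and inner classes.

import com.intellij.openapi.project.Project
import com.intellij.psi.JavaPsiFacade
import com.intellij.psi.PsiMethod
import com.intellij.psi.search.GlobalSearchScope

// Resolve a method named methodName declared in the class named by classFqn (simplified)
fun resolveReferencedMethod(project: Project, classFqn: String, methodName: String): PsiMethod? {
    val scope = GlobalSearchScope.allScope(project)
    val psiClass = JavaPsiFacade.getInstance(project).findClass(classFqn, scope) ?: return null
    // findMethodsByName returns every overload; a real implementation would match the arguments too
    return psiClass.findMethodsByName(methodName, /* checkBases = */ false).firstOrNull()
}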

Generate document text

With the syntax-tree correspondence in place, we can generate the text for preview. This part is fairly routine as long as you keep an eye on the read/write action context, and it follows these steps:

  1. Create a copy of the source file pointed to by each CodeRef syntax-tree root node;

  2. Iterate over each Ref or Location in the CodeRef tree, create or locate the corresponding location in the copy, and copy the (suitably adjusted) elements from the source file into it;

  3. Export the copy as a string;

Since PSI and files are mapped to each other in real time in the IDE, all additions, deletions, and modifications to the syntax tree must happen in the copy so that the original file's content is never affected.
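A heavily simplified sketch of the copy-and-export idea for the Java case is shown below. The helper keepSelector stands in for the CodeRef-driven logic that decides which members survive in the copy; it is an assumption for illustration, and the real implementation additionally handles Kotlin, comments, and formatting.

import com.intellij.psi.PsiElement
import com.intellij.psi.PsiJavaFile
import com.intellij.psi.PsiMember

fun exportSnippet(sourceFile: PsiJavaFile, keepSelector: (PsiJavaFile) -> Set<PsiElement>): String {
    // Work on a copy so the real file, which is mapped to the editor in real time, is never touched
    val copy = sourceFile.copy() as PsiJavaFile
    val keep = keepSelector(copy)
    copy.classes.forEach { cls ->
        cls.children.filterIsInstance<PsiMember>()
            .filter { member -> member !in keep }
            .forEach { member -> member.delete() }   // drop everything the CodeRef tree did not reference
    }
    return copy.text
}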

Although this part is not difficult, it is the most tedious. On the one hand, the Kotlin Light PSI mentioned earlier no longer applies at this level of detail, so separate implementations have to be written for Java and Kotlin. On the other hand, keeping the copied code correctly formatted is a real problem, especially when comments are interspersed between elements. In the end, text generation was hammered into shape through repeated cycles of breakpoints and debugging.

At this point, the infrastructure layer's task of turning CodeRef back into code snippets is complete.

IDEA Plugin

On top of that foundation, the IDEA Plugin is mainly responsible for making the local experience usable and pleasant. Specifically, the plugin's features fall into two categories:

  1. CodeRef-oriented: enriching the language's features;
  2. Markdown-oriented: improving the editing and reading experience;

The next sections cover each in turn.

Language optimization

For a "new language", completing the PSI is only the first step from an experience perspective; features such as auto-completion, keyword highlighting, and formatting have a decisive impact on usability. With CodeRef's syntax in particular, expecting users to type the correct package, class, and method names with no prompting is simply too hardcore. Let's pick out a few interesting parts.

Code completion

In IDEA, most (less complex) code completions use Pattern registration. The Pattern is equivalent to a Filter, and the corresponding CompletionContributor is triggered when the current cursor position satisfies this Pattern.

We can describe a Pattern using the built-in methods of PlatformPatterns. For example, the CodeRef code method("helloWorld") has a PSI tree that looks like this:

- CrMethodRef          // text: method("helloWorld")
  - CrStringLiteral    // text: "helloWorld"
    - LeafPsiElement   // text: helloWorld

The Pattern is then:

val pattern = PlatformPatterns.psiElement()
    .withParent(CrStringLiteral::class.java)
    .withSuperParent(2, CrMethodRef::class.java)

For each Pattern, we implement a CompletionProvider that supplies the completion items, for example a Provider that returns a fixed set of keyword completions:

val keywords = setOf("package", "class", "lang")

class KeywordCompletionProvider : CompletionProvider<CompletionParameters>() {
    override fun addCompletions(
        parameters: CompletionParameters,
        context: ProcessingContext,
        result: CompletionResultSet
    ) {
        keywords.forEach { keyword ->
            if (result.prefixMatcher.prefixMatches(keyword)) {
                // Add a LookupElementBuilder; a simple style can be specified
                result.addElement(LookupElementBuilder.create(keyword).bold())
            }
        }
    }
}
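For completeness, here is a sketch of how a Pattern and a Provider are wired together; the contributor class name is illustrative, and the contributor itself is registered as a completion.contributor extension in plugin.xml.

import com.intellij.codeInsight.completion.CompletionContributor
import com.intellij.codeInsight.completion.CompletionType
import com.intellij.patterns.PlatformPatterns

class CodeRefCompletionContributor : CompletionContributor() {
    init {
        // Keywords may complete almost anywhere, so this pattern is deliberately loose;
        // stricter patterns (like the CrMethodRef one above) get their own, more specific providers
        extend(CompletionType.BASIC, PlatformPatterns.psiElement(), KeywordCompletionProvider())
    }
}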

With these skills, it is easy to complete keywords such as class, package, method, and even method and field names.

The trickier cases are completing package names and fully qualified class names of the form a.b.c.Def. The difference is that each '.' typed triggers a new completion, and typing "De" at the start of the string must still be matched and completed correctly. Space does not allow a full introduction here; see the implementation of com.intellij.codeInsight.completion.JavaClassNameCompletionContributor.

Formatting

For formatting, IDEA does not operate on PSI or ASTNode directly, but builds a "Block" system on top of both. All indentation and spacing adjustments are made at Block granularity (for complex languages the PSI is far too fine-grained, so this is a great way to reduce implementation complexity).

There are not many concepts here, as follows:

  • ASTBlock: we build Blocks from the existing ASTNode tree, so we inherit from this base class;
  • Indent: controls the indentation of each line;
  • Spacing: controls the spacing policy between Blocks (minimum/maximum spaces, whether to force or forbid line breaks, etc.);
  • Wrap: the wrapping strategy when a single line gets too long;
  • Alignment: how the Block aligns within its parent Block;

When actually writing the code, most of the time is spent in the getSpacing method, which ends up looking something like this:

override fun getSpacing(child1: Block?, child2: Block): Spacing? {
    /* ... */
    return when {
        // between ',' and ref
        node1?.elementType == CodeRefElementTypes.COMMA && psi2 is CrRef ->
            Spacing.createSpacing(/*minSpaces*/ 0, /*maxSpaces*/ 0, /*minLineFeeds*/ 1, /*keepLineBreaks*/ true, /*keepBlankLines*/ 1)
        // between '[', literal and ']'
        node1?.elementType == CodeRefElementTypes.L_BRACKET && psi2 is CrStringLiteral
            || psi1 is CrStringLiteral && node2?.elementType == CodeRefElementTypes.R_BRACKET ->
            Spacing.createSpacing(/*minSpaces*/ 0, /*maxSpaces*/ 0, /*minLineFeeds*/ 0, /*keepLineBreaks*/ false, /*keepBlankLines*/ 0)
        // other cases elided
        else -> null
    }
}

Formatting is one of those things that is easy to describe but painful to implement. In practice I was forced to go back and adjust the BNF written earlier before reaching the desired effect. Fortunately our language is humbly simple, so we only stepped into a few holes; for a more complex language the workload would grow exponentially (see the amount of code in the com.intellij.psi.formatter.java package for reference).

MarkdownX

Everything described above ultimately lands as an enhancement to Markdown's code blocks, and it is here that CodeRef and Markdown finally come together.

Markdown is in fact already officially supported (built into IDEA, optional in Android Studio), with a full language implementation plus an editor and preview. The focus here is its preview generation process, shown in the figure:

The underlying parsing library is org.jetbrains:markdown.

  1. Use MarkdownParser to parse the text into ASTNodes;
  2. Use HtmlGenerator's built-in visitor to visit each ASTNode and generate HTML text;
  3. Set the generated HTML document on the built-in browser (if any) and render it on screen;

Some background: when this project started, IDEA was in the middle of the transition from JavaFX WebView to JCEF (a direct consequence being that Android Studio around version 4.0 had no built-in WebView implementation available).

To summarize, the scheme above has the following problems:

  1. Poor compatibility: some IDE versions cannot show the preview at all;
  2. Every MD change triggers a full generateHtml, which becomes a performance bottleneck for complex documents;
  3. Setting the HTML text into the browser without diff logic triggers a page reload, which also causes performance problems (diff capability was later added for IDEs with JCEF, but not every IDE has JCEF built in);

After weighing the options, we decided not to use the native plugin directly but to build a new language, "MarkdownX", on top of it: reuse the original capabilities as much as possible, add support for CodeRef, and improve preview performance with a RecyclerView-like mechanism built on Swing.

The optimized flow looks roughly like this:

The homemade solution has several advantages:

  1. Lower memory footprint (browser vs. JComponent);
  2. Better performance (partial refresh, control reuse, etc.);
  3. Better experience (the built-in browser's support for the <code> tag is too basic for code highlighting, reference jumps, and the like; native controls have no such limitations);
  4. Better compatibility (needs no explanation);

CodeRef support

MarkdownX is presented as a "new language" and is implemented using MarkdownParser and HtmlGenerator. The main differences are the file extension and the handling of code fences.

A code fence is a block of code wrapped in triple backticks in Markdown. Unlike the native implementation, we need to replace the content of the code block when generating the preview so that the content changes as the code changes.

Specifically, we need to implement an org.intellij.markdown.html.GeneratingProvider, abbreviated as follows:

class MarkDownXCodeFenceGeneratingProvider : GeneratingProvider {
    override fun processNode(visitor: HtmlGenerator.HtmlGeneratingVisitor, text: String, node: ASTNode) {
        visitor.consumeHtml("<pre>")
        var state = 0
        /* ... some variable definitions */
        for (child in childrenToConsider) {
            if (state == 1 && child.type in listOf(MarkdownTokenTypes.CODE_FENCE_CONTENT, MarkdownTokenTypes.EOL)) {
                /* ... */
            }
            if (state == 0 && child.type == MarkdownTokenTypes.FENCE_LANG) {
                /* ... */
                applicablePlugin = firstApplicablePlugin(language)
            }
            if (state == 0 && child.type == MarkdownTokenTypes.EOL) {
                /* ... entering the code snippet */
                visitor.consumeTagOpen(node, "code", *attributes.toTypedArray())
                if (language != null && applicablePlugin != null) {
                    /* ... hit the custom processing logic (i.e. CodeRef) */
                    visitor.consumeHtml(content)           // custom HTML
                } else {
                    visitor.consumeHtml(codeFenceContent)  // default content
                }
            }
        }
        /* ... some finishing up */
    }
}

As you can see, after traversing node’s children, you can determine the language of the current code snippet. If the language is CodeRef, you go to the “preview text generation” logic mentioned earlier and end up stitching the custom content into the HTML through a visitor, which acts as an HTML Builder.

Preview performance optimization

Since JList has no "item recycling" capability, we chose to build the list implementation directly on Box. The process is as follows:

The mechanics are divided into two big steps:

  1. The Data layer splits the HTML body into several parts and, after a diff, notifies the View layer of the changes;
  2. The View layer applies the changed data to the corresponding positions in the list, reusing existing ViewHolders as much as possible; this may involve creating and deleting ViewHolders;

So far we have created three kinds of ViewHolder, for text, images, and code (a minimal sketch follows the list):

  1. Text: uses JTextPane, restoring text styles with HTML + CSS;
  2. Images: a custom JComponent that scales and draws the image so it stays centered and fully visible;
  3. Code: based on the Editor provided by the IDE, with the necessary setup and logic simplifications;
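The sketch below illustrates the ViewHolder idea on Swing; MdxBlockData, MdxViewHolder, and TextViewHolder are invented names used only for illustration, not the plugin's actual classes.

import javax.swing.JComponent
import javax.swing.JTextPane

// One rendered fragment of the preview
sealed class MdxBlockData {
    data class Text(val html: String) : MdxBlockData()
    data class Image(val path: String) : MdxBlockData()
    data class Code(val language: String, val content: String) : MdxBlockData()
}

// A reusable Swing component bound to changing data, analogous to RecyclerView.ViewHolder
interface MdxViewHolder<T : MdxBlockData> {
    val component: JComponent
    fun bind(data: T)
}

class TextViewHolder : MdxViewHolder<MdxBlockData.Text> {
    override val component = JTextPane().apply {
        contentType = "text/html"   // text styles restored via HTML + CSS
        isEditable = false
    }
    override fun bind(data: MdxBlockData.Text) {
        component.text = data.html
    }
}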

A lot of effort went into the Editor here (a rough sketch follows the list):

  1. Create a PsiCodeFragment with the source file as its context and fill it into the Editor as content, to ensure that classes, methods, and fields imported from the original file resolve properly (this matters: with a mocked Document as content, most code highlighting and navigation would be unavailable);
  2. Set up an appropriate HighlightingFilter to ensure nothing is reported "in red" (the price of using the original file as context is that classes in the current fragment are likely to be flagged as duplicates and the code structure may not even be legal, so error-level code analysis has to be disabled);
  3. Disable Intentions and make the editor read-only (better performance, less interference);
  4. Disable Inspections and ExternalAnnotators (both are big performance hogs; the latter includes the Android Lint-related logic);
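A rough sketch of point 1 under stated assumptions (error handling, highlighting filters, and editor disposal omitted; createSnippetViewer is an illustrative name, not the plugin's actual code):

import com.intellij.openapi.editor.EditorFactory
import com.intellij.openapi.editor.ex.EditorEx
import com.intellij.openapi.project.Project
import com.intellij.psi.JavaCodeFragmentFactory
import com.intellij.psi.PsiDocumentManager
import com.intellij.psi.PsiFile

fun createSnippetViewer(project: Project, contextFile: PsiFile, snippet: String): EditorEx {
    // The original source file is the resolve context, so classes/methods used by the snippet resolve properly
    val fragment = JavaCodeFragmentFactory.getInstance(project)
        .createCodeBlockCodeFragment(snippet, /* context = */ contextFile, /* isPhysical = */ true)
    val document = PsiDocumentManager.getInstance(project).getDocument(fragment)!!
    // createViewer yields a read-only editor; settings are trimmed to reduce visual noise
    return (EditorFactory.getInstance().createViewer(document, project) as EditorEx).apply {
        settings.isLineNumbersShown = false
        settings.isLineMarkerAreaShown = false
    }
}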

After these optimizations, in testing the preview displays and refreshes smoothly in most cases. But with several documents open at once, or when "working at incredible speed", long delays still appear from time to time. A first round of analysis showed that the main cost was in HTML generation.

The overhead of regular MD-to-HTML conversion is limited, thanks to Markdown's constrained syntax (shallow node depth). Our CodeRef handling, however, involves a large number of PSI resolves, so the complexity spikes, and frequent full regeneration is not appropriate. A natural idea is to add a cache per CodeRef segment and reuse the cached content whenever it has not changed. Then editing a text paragraph requires no parsing of other files at all, and a CodeRef paragraph only refreshes the content of its own code block.
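A minimal illustration of the cache idea (class and method names are hypothetical, not the plugin's). Note that the key is only the CodeRef block text, which is exactly what leads to the staleness problem discussed next: editing the referenced source file does not change the key.

import java.util.concurrent.ConcurrentHashMap

class CodeRefPreviewCache {
    private val cache = ConcurrentHashMap<String, String>()

    // Reuse the generated HTML as long as the CodeRef block text itself is unchanged
    fun htmlFor(codeRefText: String, generate: () -> String): String =
        cache.getOrPut(codeRefText) { generate() }

    // To be called when a referenced source file changes (e.g. from the file listeners discussed below)
    fun invalidate(codeRefText: String) {
        cache.remove(codeRefText)
    }
}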

Here's the catch: if the user modifies the referenced code rather than the document file, the preview no longer updates immediately, precisely because of the cache. What if we go further and register listeners on all referenced files, flushing the cache when they change? That does solve the problem, but it introduces a new one: when do we release the file listeners?

Some context: the intervention in the code-fence content happens inside Visitor-pattern callbacks, so the generator itself cannot tell whether the block being processed belongs to the same change as the previous or the next callback. For example, if a document contains CodeRef blocks A, B, and C, the generator receives three callbacks during one HTML generation and has no way of knowing that they are related.

For now, we notify the generator before and after each HTML generation and maintain a queue plus a counter inside it, solving the leak problem in a not-so-elegant way.

At this point, the overall performance of the plug-in is finally within acceptable limits.

Gradle / Dokka Plugin

To reach a wider audience and make the content easy to read, the documentation must be exportable and automatically deployable. For this we chose Dokka, also produced by JetBrains, as the base framework, making use of its well-designed data-flow transformations to adapt efficiently to multiple output formats.

Dokka process extension

As a documentation framework compatible with both Kotlin and Java, Dokka is characterized by a "data pipeline" design and strong extensibility. Code flows into a document page as follows:

Each node exposes at least one Extension Point, which makes extension very flexible.

The main roles in the figure are listed below:

  • Env: includes components such as the code analyzers (which output the Document Models) built on the Kotlin Compiler and IntelliJ Core, developer-defined plugins, and so on;
  • Document Models: the abstraction of modules, packages, classes, functions, fields, and other elements, organized as a tree; essentially a set of data classes;
  • Page Models: created by the PageCreator with the Document Models as input; a set of objects that encapsulate "pages" and describe their structure;
  • Renderer: renders the Page Models into a product in some format (HTML, Markdown, etc.);

As the above shows, Dokka's original purpose is to convert code into documentation pages; it has no native support for converting document files (and does not need it). In our scenario, however, MarkdownX rendering relies on source-code information, which is exactly what Dokka can provide.

By rewriting the PageCreator, we turn a project containing MarkdownX documents into a node tree like this:

  • MdxDirNode corresponds to a folder node; its page content is the directory listing of the current folder, with links that jump one level down;
  • MdxPageNode corresponds to the content of a MarkdownX document and contains several types of children representing different kinds of content fragments;

When creating MdxPageNode, we follow a practice similar to the IDEA Plugin's: rewrite an org.jetbrains.dokka.base.parsers.Parser, modify the code-fence handling, and instead call the CodeRef preview-text generation from the "Infrastructure" section, ending up with the desired document text.

Feishu adaptation

Once the page content is available, Dokka's built-in HtmlRenderer makes it a snap to produce deployable HTML. However, we prefer to converge the documents on Feishu, which requires writing a custom Renderer for it.

Since processing the page node tree by hand would be too complex, we actually extend the built-in DefaultRenderer base class:

abstract class DefaultRenderer<T>(
    protected val context: DokkaContext
) : Renderer {
    abstract fun T.buildHeader(level: Int, node: ContentHeader, content: T.() -> Unit)
    abstract fun T.buildLink(address: String, content: T.() -> Unit)
    abstract fun T.buildList(
        node: ContentList,
        pageContext: ContentPage,
        sourceSetRestriction: Set<DisplaySourceSet>? = null
    )
    abstract fun T.buildNewLine()
    abstract fun T.buildResource(node: ContentEmbeddedResource, pageContext: ContentPage)
    abstract fun T.buildTable(
        node: ContentTable,
        pageContext: ContentPage,
        sourceSetRestriction: Set<DisplaySourceSet>? = null
    )
    abstract fun T.buildText(textNode: ContentText)
    abstract fun T.buildNavigation(page: PageNode)

    abstract fun buildPage(page: ContentPage, content: (T, ContentPage) -> Unit): String
    abstract fun buildError(node: ContentNode)
}

Only some of the callback methods are listed above.

As you can see, this class takes a novel approach to its interface: a visitor traverses the page node tree and hands the developer a list of Builder/DSL-style methods to implement. The built-in HtmlRenderer implements these abstract functions with kotlinx.html (a DSL-style HTML builder), which means we likewise implement a DSL-style Feishu document builder.

Feishu Open Platform documentation: open.feishu.cn/document/ho…

I won't go into the DSL itself here; instead, the focus is the Feishu document structure. Markdown, as we all know, was designed for the Web, so it maps naturally onto HTML. The data structure of Feishu documents, however, is more like PDF or DOCX files: limited hierarchy and relatively flat. For example, the same document content as an MdxPageNode looks like this:

While the Feishu structure looks like this:

So the difference is huge. This gap is smoothed over by the custom FeishuRenderer; the specifics can only be handled case by case and space does not allow expanding on them here, but the general idea is to expand or merge incompatible nodes, interspersed with the necessary subtree traversals.

Two points deserve special mention: images and links.

Document links

When writing Markdown documents, you often need to insert links to other Markdown documents (typically as relative paths). We therefore need to map each relative path to a Feishu link, and this has to happen after the Render step, because the mapping requires knowing the Feishu link of the target document.

The obvious first idea is to topologically sort the documents and upload them one by one in dependency order. But that requires the documents to have no circular dependencies, which clearly cannot be guaranteed (two documents referring to each other is not uncommon). Fortunately, Feishu provides an API to modify an existing document, so we can create a batch of empty documents in advance, obtain their links, and then replace the relative paths. In other words, the upload process is: create empty documents -> replace relative paths with the corresponding document links -> fill in the document content.
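A sketch of this create-empty-then-fill flow is shown below; RenderedDoc, FeishuApi, and their methods are placeholders invented for illustration, not the real Feishu open-platform API.

data class RenderedDoc(val path: String, val title: String, val content: String)

// Placeholder for the Feishu open-platform client used by the pipeline
interface FeishuApi {
    fun createEmptyDocument(title: String): String   // returns the new document's URL
    fun updateDocument(url: String, content: String)
}

fun uploadDocs(docs: List<RenderedDoc>, api: FeishuApi) {
    // 1. Create an empty document per file so every document has a known URL up front,
    //    which sidesteps circular references between documents
    val urlByPath = docs.associate { it.path to api.createEmptyDocument(it.title) }
    // 2. Rewrite relative links to the corresponding Feishu URLs, then fill in the real content
    for (doc in docs) {
        var content = doc.content
        urlByPath.forEach { (path, url) -> content = content.replace(path, url) }
        api.updateDocument(urlByPath.getValue(doc.path), content)
    }
}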

Images

In Markdown, an image can sit inline as part of a Paragraph. In the Feishu document structure, an image belongs to a Gallery block and must occupy its own line; it cannot flow with text. The two formats are not compatible at the implementation level. The current preliminary approach is to DFS down from the Paragraph's Group entry, collect all images, and place them before the text. Not ideal, but we live with it.

By the way, images also need upload-and-replace logic, which is similar to document links.

Conclusion

That is the whole of the documentation suite: based on the IntelliJ technology stack, we built a complete documentation-assistance solution by designing a new language and writing IDE and Gradle/Dokka plugins, effectively establishing the association between documents and code and greatly improving the writing and reading experience.

In the future, we will introduce more practical improvements to the framework, including:

  • Add a graphical code-element selector to reduce the cost of learning and using the language;
  • Optimize preview rendering to match the WebView;
  • Explore automatic documentation generation for certain frameworks (Dagger, Retrofit, etc.);

The framework is currently in internal testing and is gradually being promoted more widely. Once the solution is mature and its features are stable, we will open-source the whole thing to serve more users and absorb ideas from the community. Stay tuned!

Join us

We are the ByteDance live-streaming revenue client team, focusing on gifts, PK, live-streaming benefits, and other businesses, while exploring technical directions such as rendering, architecture, cross-platform development, and efficiency. We are hiring in Beijing and Shenzhen; you are welcome to send your resume to [email protected] and join us!