Hello everyone, I’m Xu Zhenwen. Today’s topic is “Tencent Game Big Data Service Application Practice Based on Flink+ServiceMesh”, which is mainly divided into the following four parts:

  1. Background and solution framework introduction
  2. Real-time big data computing OneData
  3. Data interface service OneFun
  4. Microservices & ServiceMesh

I. Background and solution framework introduction

1. Offline data operation and real-time data operation

First, some background. We have been doing game data operations for a long time: since 2013 we have been doing offline data analysis for games and applying that data to game operation activities.

The drawback back then was that most of the data was offline: data generated today was computed and only put online the next day. That makes real-time intervention with game users impossible, and the effect of real-time operations is poor. If I win a lottery today but only get the gift pack tomorrow, it is very annoying.

What is advocated now is "what I see is what I get" and "what I want, I get right away". So starting in 2016 we gradually moved game data from offline operation to real-time operation. At the same time, offline data is by no means abandoned: some calculations, such as cumulative values and offline calibration, are still valuable.

Real-time data complements the game experience: for example, after finishing a match or a task, the player immediately receives a reward or guidance for the next step. For users, this kind of timely stimulus and intervention makes the playing experience much better.

This is true not only for games but for other domains as well. So when we built this system, we combined offline and real-time, with the emphasis on real-time; the future direction of big data is also to move toward real time as much as possible.

2. Application scenarios

■ 1) In-game mission system

The first scenario is the in-game task system, which everyone has seen. For example: how many chicken-dinner wins today, how many matches played each day, did you share with friends? There are other kinds of activities as well, which we now also do in real time, especially when results need to be counted or shared to other communities. In the past, when we did data operations, all tasks were settled after the fact and rewards were sent the next day; now every task can be intervened on in real time.

The task system is a particularly important part of a game. It is not just about letting players complete tasks and collect rewards; it guides players so that they get a better experience of the game.

■ 2) Real-time leaderboard

Another very important scenario is the game leaderboard. For example, in King of Glory, climbing to the Star Glory or King tier is essentially a leaderboard. Our leaderboards may be more specific, such as today's battle leaderboard or today's match-up leaderboard, which are real-time leaderboards computed globally. We also have a snapshot feature: for example, you take a snapshot at 00:00 and can immediately send rewards to the players in that snapshot.

These are typical examples of real-time computing: one is the task system, the other is the leaderboard; more on them later.

3. The game's need for data

Let me explain why we built such a platform. When we first started doing data operations, development was siloed, workshop-style. When a requirement came in, we would do a resource assessment, data access, and data coding; after development we also had to apply for online resources, release, and validate; then, after the big data computation was done, develop the service interface, and then develop the pages and the online system; once all of that was finished it was deployed and monitored online, and at the very end the resources were reclaimed.

This way of working was fine in the early days, so why not now? Mainly because the process is too long, and our requirements for game operations are now much higher. For example, we will bring in data mining capabilities: after the real-time big data computation is done, we combine the real-time user profile with the offline profile, then recommend the tasks that suit each player and guide them to complete them.

Under those requirements the original approach has a high barrier to entry: everything is done separately, cost efficiency is low, data reusability is poor, it is error-prone, and nothing accumulates. Every project is torn down at the end; at most, the next time around I remember I have some code I can partly reuse, but it is still basically manual work.

Therefore, we wanted a platform-based approach that turns project creation, resource allocation, service development, online testing, independent deployment, service launch, online monitoring, effect analysis, resource reclamation, and project closure into a one-stop service.

We borrowed the idea from DevOps: one person should be able to develop and operate independently, with a system like this to support it. Once a service lives on the platform, its computed data can be reused; for example, real-time login or kill counts can be shared across subsequent services.

With such a platform, developers only need to focus on their development logic; the other two parts, release and online operations, are taken care of by the platform. So we want to unify data computing and interface services on one platform; through data standardization and a unified data dictionary, different data applications can be built on top. This is our first goal.

In fact, this is how we work now, and it is the guiding idea throughout this talk. The volume of data services at Tencent is very large: last year alone we ran a total of fifty to sixty thousand marketing services. Without a platform to support, govern, and manage these services, doing it with people alone would be extremely expensive.

4. Approach

Three "-izations", plus DevOps for big data applications.

Our idea is exactly that: achieve the three "-izations" and apply the DevOps mindset to big data applications:

  • Standardization: process specification, data development specification and development framework;
  • Automation: resource allocation, launch, deployment, and monitoring (a DevOps imperative);
  • Integration: data development, data interface development, test release, operation and maintenance monitoring.

Therefore we split a big data application system into three parts: big data development, data service interface development, and of course the pages and clients behind the interfaces; and all of this development must be supported by a complete development process.

In this way we can provide, for all kinds of data application scenarios, a one-stop data development and application solution: unified activity management, data indicator computation and development management, and automated production management of the various data application interfaces.

Such a system guarantees all of the above. One key point is a reasonable separation: big data computing and interface services must not be mixed together; they must be decoupled.

5. Overall architecture of data service platform

■ 1) Computing and storage

I think this architecture can serve as a reference: if you want to build a data service platform internally, the basic idea is the same. The underlying IaaS can be ignored; Tencent Cloud, Alibaba Cloud, or other cloud services can be used directly.

We mainly work on the upper layers; our internal system does not concern itself with the compute and storage at the bottom, which is better outsourced. IaaS has reached the point where things like Kafka, Pulsar, Flink, and Storm can be purchased in the cloud just like MySQL or Redis databases.

For storage we use TRedis and TSpider, which are essentially enhanced versions of Redis and MySQL. If you build this yourself, I recommend not paying too much attention to this base layer.

■ 2) Service scheduling

The core of the system is the service scheduling part in the middle. It exposes a unified scheduling API: upper-layer services are submitted down through it and scheduled in a unified way. The other piece is process orchestration. A scheduling system is indispensable here, and we use a DAG scheduling engine, so that offline tasks, real-time tasks, real-time + offline, and offline + function interface services can be combined to support more complex real-time data application scenarios.

For example, for the real-time leaderboard we are doing now, after the real-time computation task is submitted to Flink, Flink is also given a URL. Flink then sends all the qualifying data to that URL, which is actually a function service; the function service does the sorting in Redis and finally produces the leaderboard.
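
As a concrete illustration (my own sketch, not the actual service), the function service behind that URL could maintain the ranking in a Redis sorted set; the class name, key layout, and the Jedis 4.x client are assumptions:

```java
import java.util.List;
import redis.clients.jedis.Jedis;

public class LeaderboardFunction {
    private final Jedis redis = new Jedis("127.0.0.1", 6379);   // TRedis in production

    /** Called for each qualified record Flink pushes to the configured URL. */
    public void onRecord(String uid, double score) {
        redis.zincrby("leaderboard:battle:today", score, uid);  // sorted set keeps the order
    }

    /** Top-N query used by the interface layer (Jedis 4.x returns a List). */
    public List<String> topN(int n) {
        return redis.zrevrange("leaderboard:battle:today", 0, n - 1);
    }

    /** 00:00 snapshot: freeze the ranking so rewards can be sent from that view. */
    public void snapshotAt(String date) {
        redis.zunionstore("leaderboard:snapshot:" + date, "leaderboard:battle:today");
    }
}
```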

The schedulers underneath can be extended horizontally: a Storm scheduler, a Flink scheduler, a Spark scheduler, and so on. In this layer you can also build up your own algorithm library according to your scenarios. For example, Flink SQL is split out here: the SQL is passed in, and a generic Jar package executes it. Simple data processing and rule evaluation can also be placed directly in this algorithm library.

This scheduling part itself is not tied to any business scenario, but the algorithm library necessarily is. Below that we also have file distribution channels, for example for distributing Jar packages; Tencent uses COS here, which handles data transfer and the submission of Jar packages.

There is also a command pipeline, which targets specific machines. For example, a Flink task has to be submitted through the command pipeline: the Jar package is pulled down onto a machine, and the task is then submitted from there to the Flink cluster. Data pipelines serve a similar purpose.

■ 3) Various management

In addition there is another very important set of components: the operation monitoring, cluster management, system management (user permission management, business management, scenario management, menu configuration management, and so on), message center, and help documentation in the green section on the right. They are all supporting pieces, and indispensable to the system as a whole.

Another part is component management: big data components, functions, and service binaries are all managed in a unified way here.

Data assets are managed here too, such as the data indicators we generate through Flink or Storm and their computation logic, including how the indicators are calculated, labeled, and tagged; these are all data assets.

The most important thing is data table management: whatever Flink or Storm computes and wherever it finally lands, it must be described by a data table. The rest is straightforward. Data reports, such as how much data is computed every day, how much is computed successfully, how many tasks are running every day, and how many new tasks were added, can all be produced here, as can the release and change management of our versions.

There is also the external management console, which can be organized around business scenarios; you will see when we demonstrate it that our menus are actually quite simple. It follows our data flow: first data access, for example bringing data from the source into Kafka or Pulsar; then data indicators computed on top of the data tables, for example Jar packages for certain features, mixed computation across multiple tables, computation over appended tables, and so on. A series of harder scenarios are handled by separating them out like this.

Once all of this is in place, all the big data is exposed through external service APIs. For example, to check whether a game task is completed, the user ID comes in and we look up whether that user's task is done. Application scenarios like this can be driven directly through the API.

So this is the whole process, and it’s going to be a little bit more detailed and a little bit clearer later on.

II. Real-time big data computing OneData

1. Data development process

This is our overall data application process:

Our Game Server uploads the data to the log Server (data access section), which then forwards the data to Kafka or Pulsar, the message queue.

Then comes the data table: the data table is the description of the data, and indicators are developed against that description. We support three categories in total. One is SQL. Another is a framework we have already built, where you just fill in your own custom code and can complete the Flink program online.

The third is a completely custom Jar package written locally and then submitted to the system for testing. As mentioned before, big data computing and the data interfaces must be decoupled, and we decouple them through storage; Redis is used for storage. Specifically it combines Redis with SSD plus RocksDB: Redis holds the hot data and flushes it to SSD through RocksDB, so read and write performance is very good. In effect the whole disk is used as the database, unlike ordinary Redis, which keeps everything in memory, something that does not scale for big data.

Once the big data computation results are stored, the rest is simple. We provide two kinds of query services. One queries the computed indicators directly. The other is for feature data already landed in storage: I can define its SQL or query method, do some processing on the data, and generate an interface from it.

Another way is to route data from Flink or Storm directly to one of our function interfaces. In the leaderboard example I just mentioned, the data is processed in Flink and then pushed straight to a function interface, which does the secondary processing.

This is the whole process. What it amounts to is building a comprehensive, hosted, configurable big data processing service on top of Flink and Storm. It mainly consumes Kafka data; Pulsar is currently used on a small scale.

In this way we lower the barrier to data development: many people do not need to know Flink or Storm at all; as long as they can write SQL or some simple logic functions, they can complete big data development.

2. Unified data calculation

We went through some optimization to get here. In the past, every computing task was written as a Jar package; after writing it was edited, packaged, and released. We then split the work into three scenarios. The first is SQL-ification: anything that can be expressed in SQL we try to express in SQL, and we have a generic Jar package that executes the submitted SQL.
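
A minimal sketch of such a generic SQL-runner Jar using Flink's public Table API (the SQL, table DDL, and connectors here are illustrative; the real platform would generate the source and sink definitions from its data dictionary):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class GenericSqlRunner {
    public static void main(String[] args) throws Exception {
        // The platform would pass the user's SQL in as an argument.
        String userSql = args.length > 0 ? args[0]
                : "SELECT uid, COUNT(*) AS login_cnt FROM login_events GROUP BY uid";

        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // A datagen source and print sink stand in for Kafka and TRedis.
        tEnv.executeSql("CREATE TABLE login_events (uid STRING, plat STRING)"
                + " WITH ('connector' = 'datagen', 'rows-per-second' = '5')");
        tEnv.executeSql("CREATE TABLE result_sink (uid STRING, login_cnt BIGINT)"
                + " WITH ('connector' = 'print')");

        // Submit the user's SQL and keep the streaming job running.
        tEnv.executeSql("INSERT INTO result_sink " + userSql).await();
    }
}
```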

The second is an online WebIDE for writing processing-function logic. For example, Storm can expose the bolt and spout; after writing those two functions you specify the parallelism and submit them to run. Here, though, we implement this on top of Flink.

The third is scenario-based configuration: our custom Jar packages can be scheduled and executed uniformly according to the scheduling logic.

3. Data computing service system

This is the flow of our entire OneData computing system, which supports three modes: self-developed SQL, Flink SQL, and Jar packages.

As for the self-developed SQL: we used Storm at first, but StormSQL was very inefficient, so we parse the SQL ourselves with a SQL parser and generate the functions ourselves. When the SQL is submitted, we compile it directly into Java bytecode and hand that bytecode to Storm for computation.

On Flink we carried this approach over, and I will talk about the difference between the two later; in fact our homegrown SQL is a little more flexible than Flink SQL.

Since we are building a platform, we cannot simply drop a Flink SQL in and run it, because we want statistics on how the whole business logic executes: how much data the SQL processes, how much is correct or wrong, including some attenuation metrics; all of this has to be collected.

That is the basic flow. On top of it we built some basic scenarios, such as real-time statistics: PV and UV are calculated with a standalone Jar package; just configure the table and they can be computed. There are also real-time service indicators such as kill count, accumulated gold, number of games played, or the number of towers pushed in King of Glory; data like this can be used as real-time indicators.

Another is a rule-triggered service, where an interface is triggered when some data in a table meets some criteria. There are also real-time leaderboards and some customized services.

■ 1) Self-developed SQL

Next, our self-developed SQL process. Early on, in order to avoid Hive-style deep function-stack calls, we did our own syntax abstraction with a SQL parser: it generates a single function, so there is no need for so many nested calls.

This is the function generation process: what is finally generated is a piece of code like this, with the whole calculation logic completed in a single function, no function-stack calls needed, so efficiency improves dramatically. We originally processed eighty thousand records per second on a single core; now we can do more than two hundred thousand.

Throughout the process we compile the SQL into bytecode; Flink consumes the data and converts each record into input the generated SQL function can execute, rolling the whole data set through that class, and finally producing the output.
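
The talk does not name the compiler used for this step; as a purely illustrative sketch, Janino (which Flink itself also uses for code generation) can turn such a generated single-function class into bytecode at runtime. The generated source and class names below are my own stand-ins:

```java
import org.codehaus.janino.SimpleCompiler;
import java.util.Map;

public class SqlCodegenDemo {
    // The kind of single flat function a SQL parser might emit for
    // "SELECT uid, COUNT(1) FROM login WHERE plat = 'ios' GROUP BY uid":
    private static final String GENERATED =
        "import java.util.Map;\n" +
        "public class GeneratedCalc {\n" +
        "    // filter + aggregate in one function, no nested operator stack\n" +
        "    public long eval(Map<String, String> row, long acc) {\n" +
        "        if (!\"ios\".equals(row.get(\"plat\"))) return acc;\n" +
        "        return acc + 1;\n" +
        "    }\n" +
        "}\n";

    public static void main(String[] args) throws Exception {
        SimpleCompiler compiler = new SimpleCompiler();
        compiler.cook(GENERATED);                        // compile straight to bytecode
        Class<?> clazz = compiler.getClassLoader().loadClass("GeneratedCalc");
        Object calc = clazz.getDeclaredConstructor().newInstance();
        long acc = 0L;
        acc = (Long) clazz.getMethod("eval", Map.class, long.class)
                          .invoke(calc, Map.of("plat", "ios", "uid", "42"), acc);
        System.out.println("count = " + acc);            // -> count = 1
    }
}
```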

Here is one difference between the two. Flink SQL keeps state: if we want to track a maximum value, Flink SQL keeps holding the user's maximum value in memory. The SQL we developed instead puts the data in third-party storage such as TRedis; each record only needs a read and a write, with no need to hold much in memory.

So our SQL can do real-time calculations over tens or even hundreds of GB of state, whereas Flink SQL would have to keep that state held in memory the whole time, and can only recover it from its checkpoints.

So this is the scenario for these two types of SQL.

■ 2) SQL

There are other things we can do with SQL. Our data eventually lands in storage. If two indicators land in the same table and on the same dimension, for example the QQ number, and both indicators are configured on that dimension, can we complete the calculation in one pass? Consume once, calculate both, and store once.

This kind of situation is very common in big data, and since we run the platform we can exploit it. For example, one indicator counts logins and another computes the highest level reached; the calculation logic differs, but they consume the same data table, aggregate on the same dimension, and use the same aggregation key. So the data can be consumed once and both indicators calculated and landed together, which greatly reduces storage and computing costs.

Across all our games we now compute more than eleven thousand indicators over more than two thousand six hundred storage dimensions, and the actual savings in computation and storage are above 60%.

Two SQLs, or even more: it is normal for us to compute a dozen indicators on one table, and instead of consuming the table a dozen times we now consume it once. This is completely transparent to users: user A configures an indicator on dimension A of this table, user B configures another indicator on the same dimension A, and the data is consumed, calculated, and stored once, with both users ultimately getting their data.
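
As a rough illustration of the one-pass idea (my own example, not the platform's code), the following Flink job computes the two indicators from the text, login count and highest level, in a single consumption of the stream, keyed by the shared aggregation key:

```java
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MergedIndicatorsJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // In the real platform the source would be Kafka/Pulsar; here a few sample
        // login events: (uid, loginCount = 1, level).
        env.fromElements(
                Tuple3.of("uid_1", 1L, 30L),
                Tuple3.of("uid_1", 1L, 32L),
                Tuple3.of("uid_2", 1L, 5L))
           .keyBy(e -> e.f0)                                        // same aggregation key: uid
           .reduce((ReduceFunction<Tuple3<String, Long, Long>>) (a, b) ->
                Tuple3.of(a.f0, a.f1 + b.f1, Math.max(a.f2, b.f2))) // count + max in one pass
           .print();                                                // real job would sink to TRedis

        env.execute("merged-indicators-demo");
    }
}
```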

■ 3) Online real-time programming

  • No need to set up a local development environment;
  • Online development testing;
  • Strict input and output management;
  • Standardized input and output;
  • One-stop development test release monitoring.

For most developers it is very troublesome to set up a local Flink cluster just for development and debugging, so we provide a test environment in which the surrounding code is fixed and cannot be modified: the data is already consumed and pre-processed for you, and the result is eventually written to storage.

This covers the middle ground: logic that needs actual function code, more complex than SQL but simpler than a fully custom Jar package. You write the code online, submit it, and test it directly to see the output. The nice part is that the data reporting logic and the statistics logic are already packaged in; you only have to take care of the business logic, as in the sketch below.
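
A toy sketch of this "fixed harness, user fills in the logic" pattern; the interface, field names, and harness are hypothetical stand-ins for the platform's real framework (in production the harness would be the Flink consumption plus the TRedis sink):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class OnlineProgrammingDemo {

    /** The only piece the developer writes online: one record in, an optional indicator out. */
    interface UserLogic {
        Optional<Map.Entry<String, Long>> apply(Map<String, String> row);
    }

    public static void main(String[] args) {
        // Example user code: count kills per uid, ignoring bot accounts.
        UserLogic logic = row -> "1".equals(row.get("is_bot"))
                ? Optional.empty()
                : Optional.of(Map.entry(row.get("uid"), Long.parseLong(row.get("kills"))));

        // Fixed harness: consumption and storage are simulated here.
        List<Map<String, String>> consumed = List.of(
                Map.of("uid", "u1", "kills", "3", "is_bot", "0"),
                Map.of("uid", "u1", "kills", "2", "is_bot", "0"),
                Map.of("uid", "u2", "kills", "9", "is_bot", "1"));

        Map<String, Long> sink = new HashMap<>();
        for (Map<String, String> row : consumed) {
            logic.apply(row).ifPresent(e -> sink.merge(e.getKey(), e.getValue(), Long::sum));
        }
        System.out.println(sink);   // {u1=5}
    }
}
```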

4. Flink feature application

  • Time attributes: watermarks based on event time reduce computation and improve accuracy;
  • Asynchronous I/O: improves throughput while keeping ordering and consistency.

When we first did this in Storm, handling the gap between the time data was generated and the time it entered the message queue meant carrying a timestamp in every message and comparing it on every message. With Flink's watermarks, that part of the computation is eliminated.
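
For reference, a minimal sketch of how event time and watermarks are declared with Flink's newer WatermarkStrategy API; the event class and the 5-second out-of-orderness bound are my assumptions, not the platform's actual settings:

```java
import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;

public class WatermarkSetup {
    /** A login event carrying the time it was generated on the game server. */
    public static class LoginEvent {
        public String uid;
        public long eventTimeMs;
    }

    /** Attach event-time semantics: allow events to arrive up to 5 s out of order. */
    public static DataStream<LoginEvent> withEventTime(DataStream<LoginEvent> events) {
        return events.assignTimestampsAndWatermarks(
                WatermarkStrategy
                        .<LoginEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        .withTimestampAssigner((e, recordTs) -> e.eventTimeMs));
    }
}
```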

The actual test results are quite good as well. In Storm, single-core computation plus the read/write and processing gave roughly five thousand per second; with Flink we can reach ten thousand, even including the cost of the Redis storage I/O.

The other feature is async I/O. We need to read the existing value from Redis, compute the new maximum or minimum, and write it back to Redis. Doing this synchronously is simple, but synchronous I/O has low throughput, so we are switching to asynchronous I/O. Async I/O has one requirement here, though: the processing of each record must stay ordered; we must first read the data from Redis, compute the value, write it back, and only then move on to the next record, so that the results stay consistent.
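
Below is a hedged sketch of that ordered async read-compute-write pattern with Flink's AsyncDataStream; the Lettuce Redis client, key layout, and timeout values are my own assumptions. Note that orderedWait guarantees results are emitted in input order; strict per-key serialization would still need extra care (for example keying the stream or limiting in-flight requests):

```java
import java.util.Collections;
import java.util.concurrent.TimeUnit;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

/** Reads the stored max level for a uid, merges in the new value, writes it back. */
public class MaxLevelAsyncFunction extends RichAsyncFunction<Tuple2<String, Long>, Tuple2<String, Long>> {
    private transient RedisClient client;
    private transient StatefulRedisConnection<String, String> conn;

    @Override
    public void open(Configuration parameters) {
        client = RedisClient.create("redis://127.0.0.1:6379");   // assumed address
        conn = client.connect();
    }

    @Override
    public void asyncInvoke(Tuple2<String, Long> in, ResultFuture<Tuple2<String, Long>> out) {
        RedisAsyncCommands<String, String> redis = conn.async();
        String key = "max_level:" + in.f0;
        redis.get(key)
             .thenCompose(old -> {
                 long merged = Math.max(old == null ? 0L : Long.parseLong(old), in.f1);
                 return redis.set(key, Long.toString(merged))     // write back before completing
                             .thenApply(ok -> merged);
             })
             .thenAccept(merged -> out.complete(Collections.singleton(Tuple2.of(in.f0, merged))));
    }

    @Override
    public void close() {
        conn.close();
        client.shutdown();
    }

    /** orderedWait emits results in the original input order. */
    public static DataStream<Tuple2<String, Long>> apply(DataStream<Tuple2<String, Long>> levels) {
        return AsyncDataStream.orderedWait(levels, new MaxLevelAsyncFunction(), 5, TimeUnit.SECONDS, 100);
    }
}
```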

We keep making optimizations like these; these Flink features help ensure the consistency of our data and improve efficiency.

5. Unified Big data Development Service — Service case

Some more examples. If you play League of Legends, we designed its mission system, so you can think of us next time you complete a mission there. There are also Tianlongbatu, CF, the King of Glory LBS Glory Zone (real-time ranking computed from big data plus LBS data), and King of Glory daily activities (real-time data + interfaces + rule calculation), such as showing which friends are online right now and matching them with you.

III. Data interface service OneFun

1. The outlet for data applications

Now for the function part. We originally had some problems here. Once data is in storage, if the storage is opened up directly for anyone to use as they please, the pressure on and the management of the storage become very problematic. So we adopted a FaaS-like solution: we manage the metadata of what is stored, and when an interface is configured against my DB, I check the maximum QPS that DB can take and only grant a quota within it.

For example, if my DB can only handle 100,000 QPS and you apply for 110,000, I cannot approve it; I can only ask the DB side to scale out the Redis and grant the quota after the expansion. So this involves metadata management of our indicator data and the linkage between interfaces.
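
A toy sketch of that quota check (class and method names are hypothetical; the real platform keeps this metadata alongside its data dictionary and metadata service):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QpsQuotaRegistry {
    private final Map<String, Long> capacityByDb = new ConcurrentHashMap<>();  // max QPS per DB
    private final Map<String, Long> grantedByDb = new ConcurrentHashMap<>();   // already granted

    public void registerDb(String db, long maxQps) {
        capacityByDb.put(db, maxQps);
    }

    /** Approve the interface only if the DB still has headroom; otherwise it needs expansion. */
    public synchronized boolean requestQuota(String db, long requestedQps) {
        long capacity = capacityByDb.getOrDefault(db, 0L);
        long granted = grantedByDb.getOrDefault(db, 0L);
        if (granted + requestedQps > capacity) {
            System.out.printf("reject: %s needs expansion (capacity=%d, granted=%d, asked=%d)%n",
                    db, capacity, granted, requestedQps);
            return false;
        }
        grantedByDb.put(db, granted + requestedQps);
        return true;
    }

    public static void main(String[] args) {
        QpsQuotaRegistry registry = new QpsQuotaRegistry();
        registry.registerDb("tredis_game", 100_000);
        System.out.println(registry.requestQuota("tredis_game", 110_000)); // false: over capacity
        System.out.println(registry.requestQuota("tredis_game", 60_000));  // true
    }
}
```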

2. Integrated function execution engine — OneFun

This is analogous to OneData. The engine provides fast functions and online function programming: you write a little JavaScript or Golang code, generate an interface from it, and that interface can serve external traffic directly and be wrapped into a product. On top of these interfaces a number of other applications are derived.

3. SSA-based online Golang function execution engine

We build on Golang's own SSA capability. We have a pre-written executor whose job is to take the Golang code you submit and load it into the executor for execution.

Code we have written before can be accumulated into a function library and loaded in, and your code can call that library at execution time; the code is written in exactly the same syntax as Golang.

When executing, we assign each invocation a goroutine and limit the scope of each goroutine, sandboxing it; the first thing we implemented was the external context. This gives us web-based Golang development, a bit like a Lua scripting language: you write the code online and submit it for execution directly.

4. Online function service engine based on V8 engine

This is our JavaScript execution engine. We mainly maintain a pool of V8 engines; everything written in JavaScript is thrown into a V8 engine to execute. This should be easy to understand if you have worked with JS: the V8 engine executes it directly.

5. Integrated function execution engine — function as a service

Here is our online function writing process:

The lower right corner is the function code editing area; after you finish writing, the black box on the left is the test area: click test and the output appears there. This greatly expands the development capability of our data platform. Previously you finished the Golang code locally, debugged it, and then moved to the online environment to test; now a lot of this can be standardized. For example, data source imports can be configured right here: you can only import a data source you have applied for, not whatever you like, and when you import it you declare the QPS amplification it will put on that source.

  • Reduce start-up costs;
  • Faster deployment pipeline;
  • Faster development speed;
  • Higher system security;
  • Adapt to microservice architecture;
  • Automatic expansion capabilities.

So this is our one-stop shop, where you develop a function, you submit it, and we use Prometheus + Grafana to see real-time reports.

6. Case introduction

This is a typical application: the data is filtered inside the Flink computation, then a remote call is made, and the remote call executes the function code. In most cases like this, one developer can complete both the big data development and the function interface development. The bar for activity development becomes much lower, and what really makes DevOps possible is that a single developer can handle the whole process.

IV. Microservices & ServiceMesh

1. The road data applications must take – microservices

The above describes the implementation principle and mechanism of OneData and OneFun, and how we apply this system internally. We have been promoting this system internally.

As for microservices, on the big data side there is not much more to do: YARN or K8s already controls the resources and tasks. The real service governance is on the interface side. Right now we have roughly 3,500 interfaces, and we add about 50 new interfaces every week.

So we took this into account. In the past our services were developed one by one with no governance at all. Now we still develop them one by one, and sometimes even merge several into one service, but every service is brought under governance.

Many teams do microservices, but microservices without a platform to govern them are a disaster. So while moving to microservices brings convenience, it also brings problems. In our scenario microservices fit very well: every interface can be treated as a service, which makes them natural microservices.

2. Integrated service governance design

Governing these microservices is a big problem for us, so we put a lot of effort into building a microservice governance system. When a project is created it is registered in the project's microservice center, and its APIs are registered. Then, when the service is released and deployed to the cluster, it actively registers itself in Consul, which serves as our service registry.

However, the registration in Consul is not used for service routing; rather, once a service is registered we do its health checks and status collection with Prometheus. As soon as it registers I can immediately detect it and start collecting its status, mainly for real-time reports and alerting.

That covers the stability and health of the service. The other part is that once the service information is registered in Consul, the service gets a gateway; we use Envoy for this, and internally we also use it as the sidecar, which I will come back to later.

Once registered, Envoy loads all of this configuration and does the routing for the services, and the full logs, including the gateway logs, are reported to the log center, where offline reports and some real-time alert monitoring are produced.

We also added Consul-based configuration: real-time control, including of the servers, can be configured through Consul. Once something is configured, the watch picks it up immediately and it takes effect.
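
A rough sketch of that watch mechanism using Consul's standard KV blocking-query HTTP API; the key name, address, and the action taken on change are assumptions for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulConfigWatcher {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        String base = "http://127.0.0.1:8500/v1/kv/onefun/route-config"; // assumed key
        long index = 0;                                                  // last seen X-Consul-Index

        while (true) {
            // Blocking query: the request hangs until the key changes or the wait expires.
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create(base + "?index=" + index + "&wait=5m")).GET().build();
            HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());

            long newIndex = Long.parseLong(
                    resp.headers().firstValue("X-Consul-Index").orElse("0"));
            if (newIndex != index) {
                index = newIndex;
                System.out.println("config changed, applying: " + resp.body());
                // In the real system this is where the agent would push the new
                // routing rules down to Envoy (e.g. via Istio / xDS).
            }
        }
    }
}
```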

That is the basic service governance; we have since upgraded it to something a bit better, but the basic principle is the same.

3. Unified control of north-south and east-west traffic

We implemented service governance on top of Envoy here, which is essentially traffic governance: load balancing policies, routing policies, circuit breaking, timeout control, fault injection, and so on.

We use Consul for configuration management; it delivers the data to our agent, and the agent pushes it to Envoy through the Istio interface and the K8s API. Inside, both the API gateway and the sidecar are Envoy, so we write to Envoy's xDS interface via Istio and can push all of our configuration down this way.

This system can control the whole cluster, with unified control of both north-south and east-west traffic. It is mainly for internal use now and we plan to open-source it later. We also built graphical configuration: all the Envoy and Istio configuration that used to be YAML pushed to Istio is now exposed through a UI, so everything can be controlled in one place.

Once the agent development is finished we will support multiple clusters: any number of K8s clusters can join and be supported without problems. We also manage the API gateway.

Then there is sidecar management, that is, sidecar management in the ServiceMesh. The function interfaces or rule interfaces mentioned earlier are the servers behind it.

There is also a Chaos Mesh capability we are working on right now, which will be implemented in this system.

4. Whole-link traffic analysis based on ServiceMesh

This is an analysis we do through the ServiceMesh. At a macro level we can see how much pressure an interface puts on the DB, but when we direct new traffic in, that level of monitoring is not enough. So we use the ServiceMesh to control traffic at both the inlet and the outlet, collect detailed traffic statistics, and then take snapshots: by comparing one snapshot of the incoming traffic with the next, we can determine how much extra pressure is being put on the downstream.

This is the overall display. We have multiple test cases; between two of them we can compute a traffic analysis of the downstream pressure, and this analysis is of great value for scaling downstream resources up or down.
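
A toy illustration of the snapshot comparison (the snapshot format, a per-route request counter map, is my own simplification):

```java
import java.util.HashMap;
import java.util.Map;

public class TrafficSnapshotDiff {
    /** Returns the per-route increase in requests between two snapshots. */
    static Map<String, Long> pressureDelta(Map<String, Long> before, Map<String, Long> after) {
        Map<String, Long> delta = new HashMap<>();
        after.forEach((route, count) ->
                delta.put(route, count - before.getOrDefault(route, 0L)));
        return delta;
    }

    public static void main(String[] args) {
        Map<String, Long> before = Map.of("tredis.get", 80_000L, "task.check", 12_000L);
        Map<String, Long> after  = Map.of("tredis.get", 95_000L, "task.check", 30_000L);
        System.out.println(pressureDelta(before, after));
        // {tredis.get=15000, task.check=18000} -> which downstreams need scaling out
    }
}
```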

5. Case introduction

Finally, some cases we currently implement with this system: big-data-driven player return campaigns, in-game data reviews (career reviews), task systems, and leaderboards.

Q & A

Q1: How is ServiceMesh deployed? What problem is it mainly used to solve?

The ServiceMesh implementation we are currently using is Istio, version 1.3.6. This version does not support physical machine deployment, so we deploy it in K8s. There are two deployment modes: direct installation with the istioctl command, or generating the YAML files and installing them with kubectl.

The ServiceMesh architecture addresses governance of east-west traffic within the cluster. At the same time, the ServiceMesh sidecar acts as a protocol proxy and can hide the technology stack of the services behind it: those services can be developed in any language, while traffic management and routing are controlled uniformly.

Q2: Can you introduce the microservices governance architecture?

In my opinion, microservices governance architecture can be divided into two categories:

  • Governance of service instances: in a K8s architecture this is basically handled by K8s, including the release of service instances, upgrades, scaling, service registration and discovery, and so on;
  • Governance of service traffic, which is what is usually meant by service governance, is mainly realized with two technologies: the microservice gateway and the service mesh. The gateway governs traffic entering and leaving the cluster, and the service mesh governs traffic within the cluster.

Q3: What are the main technical backgrounds of the developers?

For big data developers, using our system only requires knowing SQL and basic statistics.

For application interface developers, using our system only requires knowing JavaScript or Golang, basic regular expressions, an understanding of the HTTP protocol, and how to debug an HTTP API.

Q4: Real-time computing, any suggestions on Flink and Spark selection?

We also used Spark at scale in 2015 and 2016, including for real-time computation. But the versions at that time were still weak at real-time work: with 500 ms micro-batches, data still piled up, so latency was a problem. Spark's strength is iterative data processing and algorithmic computation, so it is a good choice for scenarios with low latency requirements but algorithmic needs.

Flink was designed from the start as a stream processing model, so it suits scenarios with high real-time requirements. Our internal tests found that Flink's streaming throughput is indeed much better than Storm's and Spark's, and Flink's windowing mechanism is very useful for window calculations in real-time computing. So for general real-time computing, or scenarios demanding low latency, Flink is recommended.

Q5: Are there scenarios where game replay data is stored on the server?

Yes, we have such scenarios. Generally there are two ways to do game replay; one is recording and transmitting the screen, which is costly but simple and timely.

Q6: What protocol does the client use to send data to the server in the replay scenario?

Usually the game's own proprietary protocol.