This article was shared by “Virwu”, an engineer on the WeChat development team.

1. Introduction

Recently, WeChat mini-games added support for one-tap live streaming to WeChat Channels. Upgrade WeChat to the latest version, open a Tencent mini-game (such as Jump Jump or Happy Doudizhu), and the menu in the upper-right corner shows a button to start a live broadcast and become a game streamer with one tap (as shown in the figure below).

However, for performance and security reasons, WeChat mini-games run in a separate process, in which the modules related to Channels live streaming are not initialized. This means that a mini-game's audio and video data must be transferred across processes to the main process for stream pushing, which poses a series of challenges for implementing mini-game live streaming.

(This article is published simultaneously at: http://www.52im.net/thread-35.)

2. A series of articles

This is the fifth article in a series:

"Chat Technology of Live Broadcast Systems (Part 1): The Practice of Real-time Barrage Push Technology for Meipai Live's Million-user Online System"

"Chat Technology of Live Broadcast Systems (Part 2): Technical Practice of Alibaba's E-commerce IM Messaging Platform in Group Chat and Live Broadcast Scenarios"

"Chat Technology of Live Broadcast Systems (Part 3): The Evolution of the Message Architecture Behind 15 Million Online Users in a Single WeChat Live Chat Room"

"Chat Technology of Live Broadcast Systems (Part 4): Practice of Real-time Messaging System Architecture Evolution for Baidu Live's Massive Users"

"Chat Technology of Live Broadcast Systems (Part 5): Cross-process Rendering and Stream-Pushing Practice of WeChat Mini-game Live Streaming on Android" (this article)

3. Video Capture and Stream Pushing

3.1 Screen Recording?

In essence, live streaming a mini-game means showing the content on the streamer's phone to the audience, so we naturally thought of using MediaProjection, the system's screen-recording interface, to capture the video data.
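For reference, a minimal sketch of this system-capture approach, using the standard MediaProjection APIs (names such as CaptureActivity and encoderInputSurface are illustrative, not WeChat's actual code):

```kotlin
import android.app.Activity
import android.content.Context
import android.content.Intent
import android.hardware.display.DisplayManager
import android.media.projection.MediaProjectionManager
import android.view.Surface

class CaptureActivity : Activity() {
    // Hypothetical: the encoder's input Surface would come from the push-stream module.
    private lateinit var encoderInputSurface: Surface
    private val requestScreenCapture = 1

    fun startCapture() {
        val mpm = getSystemService(Context.MEDIA_PROJECTION_SERVICE) as MediaProjectionManager
        // This is the system authorization dialog mentioned below.
        startActivityForResult(mpm.createScreenCaptureIntent(), requestScreenCapture)
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == requestScreenCapture && resultCode == RESULT_OK && data != null) {
            val mpm = getSystemService(Context.MEDIA_PROJECTION_SERVICE) as MediaProjectionManager
            val projection = mpm.getMediaProjection(resultCode, data)
            // Mirror the screen into the encoder's Surface; size and dpi are placeholders.
            projection.createVirtualDisplay(
                "live-capture", 1080, 1920, 440,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                encoderInputSurface, null, null
            )
        }
    }
}
```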

This scheme has the following advantages:

1) It is a system interface, simple to implement, with guaranteed compatibility and stability;
2) It can later be extended into general-purpose screen-recording live streaming;
3) It has little impact on game performance: in our tests the impact on frame rate was below 10%;
4) Data processing and stream pushing can be done directly in the main process, with no need to deal with the mini-game's cross-process problem.

But in the end this plan was rejected, mainly for the following reasons:

1) The system authorization dialog has to be shown;
2) We would have to carefully pause the capture when the streamer switches away from the mini-game, otherwise other screens on the streamer's phone might be recorded, which is a privacy risk;
3) Most critically, by product design a comment widget (as shown in the figure below) is displayed over the mini-game so that streamers can read live comments and interact. With screen recording, the audience would also see this widget, which hurts the viewing experience and exposes data meant only for the streamer.

On second thought, since the mini-game's rendering is completely under our control, could we transfer the mini-game's rendered content across processes to the main process for stream pushing, and get a better live-streaming experience?

In order to better describe the scheme we adopted, here is a brief introduction to the rendering architecture of the mini-game:

The left half of the figure shows the mini-game process in the foreground, where MagicBrush is the mini-game's rendering engine. It receives render commands from the mini-game code and renders the frames onto the Surface provided by the on-screen SurfaceView. The main process, in the background, is not involved.

Mini-games already supported recording of game content, which is similar in principle to live streaming: both need to capture the game's current frames.

When screen recording is enabled, the mini-game switches to the following rendering mode:

As you can see, MagicBrush's output target is no longer the on-screen SurfaceView, but a SurfaceTexture generated by a Renderer.

Here’s what the Renderer does:

The Renderer is a standalone rendering module representing an independent GL environment. It can create a SurfaceTexture as input; upon receiving the SurfaceTexture's onFrameAvailable callback, it converts the image data into a texture of type GL_TEXTURE_EXTERNAL_OES via updateTexImage, and it can render the result onto another Surface as output.
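A minimal sketch of what the input side of such a Renderer might look like (the article does not show MagicBrush's or the Renderer's actual code, so class and method names here are assumptions):

```kotlin
import android.graphics.SurfaceTexture
import android.opengl.GLES11Ext
import android.opengl.GLES20
import android.os.Handler

// Input side of a Renderer; must be used with the thread that owns the GL context.
class RendererInput(private val glHandler: Handler) {
    private val oesTexId: Int = IntArray(1).also { ids ->
        GLES20.glGenTextures(1, ids, 0)
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, ids[0])
    }[0]

    // The SurfaceTexture wraps the OES texture; its Surface is handed to the producer.
    val surfaceTexture = SurfaceTexture(oesTexId).apply {
        setOnFrameAvailableListener({ st ->
            st.updateTexImage()      // latch the newest frame into the OES texture
            onFrameReady(oesTexId)   // pass the GL_TEXTURE_EXTERNAL_OES texture downstream
        }, glHandler)                // callbacks dispatched on the GL thread's handler
    }

    private fun onFrameReady(texId: Int) {
        // render texId onto an output Surface, hand it to another Renderer, etc.
    }
}
```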

The process in the figure is explained step by step as follows:

1) MagicBrush receives render commands from the mini-game code and renders the mini-game content onto the SurfaceTexture created by the first Renderer;

2) The Renderer then does two things:

2.1) It renders the mini-game texture again onto the on-screen Surface;
2.2) It provides the texture ID to the second Renderer (the two Renderers share textures through a shared GLContext; a sketch of this shared-context idea follows this list).

3) The second Renderer renders the texture provided by the first Renderer onto the input SurfaceTexture provided by the MP4 encoder, and the encoder finally produces an MP4 recording file.
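The texture sharing in step 2.2 relies on a shared GLContext. A minimal sketch of how such a shared EGL context could be created (an assumption about the Renderer's internals, not code from the article):

```kotlin
import android.opengl.EGL14
import android.opengl.EGLConfig
import android.opengl.EGLContext
import android.opengl.EGLDisplay

// Create the second Renderer's context with the first one as share_context, so
// texture ids generated in either GL environment are visible to both.
fun createSharedContext(display: EGLDisplay, config: EGLConfig, shareWith: EGLContext): EGLContext {
    val attribs = intArrayOf(EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE)
    return EGL14.eglCreateContext(display, config, shareWith, attribs, 0)
}
```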

3.4 Reworking the Screen-Recording Scheme

As can be seen, in the screen-recording scheme one Renderer is responsible for putting the game content on screen, and another Renderer renders the same texture to the encoder to record the game. Live streaming is actually similar, so can we simply replace the encoder with the live stream-pushing module?

Yes, but one key piece is missing: the stream-pushing module runs in the main process, so we need to transfer the image data across processes. How do we cross processes?

Speaking of crossing processes, the first things that come to mind are probably Binder, sockets, shared memory and other traditional IPC methods. But think again: the system-provided SurfaceView is a very special View component. It is not drawn through the traditional View tree; instead, its content is composited directly onto the screen by SurfaceFlinger, which runs in a system process. The Surface content we draw through the SurfaceView must therefore already be transferable across processes, and the way a Surface crosses processes is simple: it implements the Parcelable interface itself, which means we can pass a Surface object directly across processes via Binder.
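A minimal sketch of handing a Surface across processes, assuming a Messenger-based channel (the actual IPC interface WeChat uses is not disclosed; MSG_SET_SURFACE is a hypothetical message id):

```kotlin
import android.os.Bundle
import android.os.Message
import android.os.Messenger
import android.view.Surface

const val MSG_SET_SURFACE = 1  // hypothetical message id

// Main process: wrap the Surface in a Bundle and send it through Binder.
fun sendSurface(messenger: Messenger, surface: Surface) {
    val msg = Message.obtain(null, MSG_SET_SURFACE)
    msg.data = Bundle().apply { putParcelable("surface", surface) }  // Surface is Parcelable
    messenger.send(msg)
}

// Mini-game process: unwrap the Surface and use it as a render target.
fun onMessage(msg: Message) {
    if (msg.what == MSG_SET_SURFACE) {
        val surface: Surface? = msg.data.getParcelable("surface")
        // hand `surface` to MagicBrush / the Renderer as its output target
    }
}
```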

So we came up with this rudimentary solution:

As you can see, in step 3, instead of rendering to the MP4 encoder, we render to the Surface passed across processes from the main process. That Surface wraps a SurfaceTexture created by the main process's Renderer, and the mini-game process now renders onto it as the producer. When a frame is done, the main process's SurfaceTexture receives an onFrameAvailable callback notifying it that the image data is ready, and it then obtains the corresponding texture data via updateTexImage. Since the live stream-pushing module only accepts GL_TEXTURE_2D textures, the main-process Renderer converts the GL_TEXTURE_EXTERNAL_OES texture into a GL_TEXTURE_2D texture and hands it to the live-stream encoder, completing the push pipeline.
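A minimal sketch of the OES-to-2D conversion step, assuming the usual FBO draw with a samplerExternalOES shader (illustrative of the technique, not the actual push-stream glue code):

```kotlin
import android.opengl.GLES20

// Fragment shader sampling the external (OES) texture; requires the
// GL_OES_EGL_image_external extension.
const val OES_FRAGMENT_SHADER = """
    #extension GL_OES_EGL_image_external : require
    precision mediump float;
    varying vec2 vTexCoord;
    uniform samplerExternalOES uTexture;
    void main() { gl_FragColor = texture2D(uTexture, vTexCoord); }
"""

// Draw the OES texture into `tex2d` through an FBO; `drawFullScreenQuad` is a
// placeholder for the usual program/VBO setup and draw call.
fun convertOesTo2D(fbo: Int, tex2d: Int, width: Int, height: Int, drawFullScreenQuad: () -> Unit) {
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo)
    GLES20.glFramebufferTexture2D(
        GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex2d, 0
    )
    GLES20.glViewport(0, 0, width, height)
    drawFullScreenQuad()  // after this, tex2d holds a plain GL_TEXTURE_2D copy
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0)
}
```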

After this makeover, the solution above successfully rendered the mini-game on screen while passing the frames to the main process for pushing. But is it really the best solution?

Thinking about it, there are two Renderers in the mini-game process, one rendering on screen and one rendering to the Surface passed across processes, plus one more in the main process for converting the texture and feeding the stream-pushing module. To support screen recording at the same time, yet another Renderer would be needed in the mini-game process to render to the MP4 encoder. So many Renderers mean so much extra rendering overhead, which hurts mini-game performance.

Across the whole pipeline, only the main process's Renderer is really necessary; the extra Renderers in the mini-game process exist only to satisfy on-screen rendering and cross-process transfer at the same time. Let's stretch our imagination: since a Surface itself is not bound to a process, why not pass the mini-game's on-screen Surface to the main process too, and render on screen from there!

In the end we drastically cut the two redundant Renderers out of the mini-game process: MagicBrush now renders directly onto the Surface passed across processes, and the main process's Renderer, besides converting the texture type, also renders the texture onto the mini-game process's on-screen Surface (likewise transferred across processes), thereby implementing on-screen rendering.

The number of Renderers required dropped from the original three to the necessary one, improving performance while making the architecture clearer.

If screen recording needs to be supported again in the future, only a small change is needed: pass the MP4 encoder's input SurfaceTexture across to the main process and add a Renderer there to render the texture onto it (as shown in the figure below).

3.6 Compatibility and Performance

At this point, one cannot help but worry about the compatibility of transferring and rendering a Surface across processes.

In fact, although doing so is uncommon, the official documentation states that cross-process drawing is possible:

SurfaceView combines a surface and a view. SurfaceView’s view components are composited by SurfaceFlinger (and not the app), enabling rendering from a separate thread/process and isolation from app UI rendering.

Chrome, as well as the Android system WebView, also adopted cross-process rendering schemes.

In our compatibility tests, which covered all major Android versions and models from Android 5.1 onward, on-screen rendering and stream pushing both worked normally, except for black screens when rendering across processes on Android 5.x models.

As for performance, we ran performance tests with the WebGL Aquarium demo. The average frame rate dropped by about 15%; the main process's CPU usage rose because of rendering and stream pushing, but surprisingly the mini-game process's CPU overhead dropped somewhat. The cause of that drop has not been confirmed; we suspect it is related to on-screen rendering having moved to the main process, and an artifact of the measurement method cannot be ruled out either.

To sum up this part: in order to keep the comment widget on the streamer's side out of the broadcast, we started from the mini-game's rendering pipeline. Using Surface's ability to render and to transfer images across processes, we moved the mini-game's on-screen rendering into the main process while producing a texture there for stream pushing, and the scheme meets the requirements in both compatibility and performance.

4. Audio Capture and Stream Pushing

For audio capture, we noticed that Android 10 and above provides the AudioPlaybackCapture API, which allows capturing system audio within certain limits. Some conclusions from our preliminary study were as follows.

Capture side (the conditions under which capture can be performed):

1) Android 10 (API 29) and above;
2) the RECORD_AUDIO permission has been granted;
3) MediaProjection permission has been obtained via MediaProjectionManager.createScreenCaptureIntent() (the same MediaProjection used for screen sharing);
4) the media types to capture are added/excluded via AudioPlaybackCaptureConfiguration.addMatchingUsage() / AudioPlaybackCaptureConfiguration.excludeUsage();
5) the UIDs of the apps that may be captured are added/excluded via AudioPlaybackCaptureConfiguration.addMatchingUid() / AudioPlaybackCaptureConfiguration.excludeUid().

Captured side (the conditions under which an app can be captured):

1) the player's AudioAttributes usage is set to USAGE_UNKNOWN, USAGE_GAME or USAGE_MEDIA (these are mostly the defaults, so the audio can be captured);
2) the app's capture policy is set to AudioAttributes#ALLOW_CAPTURE_BY_ALL, which can be configured in three ways (the strictest setting wins; WeChat currently configures nothing, so it can be captured by default);
3) via the manifest: android:allowAudioPlaybackCapture="true" in AndroidManifest.xml; for apps targeting API 29 and above it defaults to true, otherwise to false;
4) on API 29 and above it can also be set at runtime via the setAllowedCapturePolicy method;
5) on API 29 and above it can be set per player via AudioAttributes.

In summary: Android 10 and above can use the AudioPlaybackCapture scheme for audio capture, but considering that requiring Android 10 is too limiting, we chose to capture and mix all the audio played by the mini-game ourselves (for comparison, a sketch of the API follows).
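A minimal sketch of setting up AudioPlaybackCapture on API 29+ with the standard Android APIs (the chosen usages and audio format are illustrative):

```kotlin
import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioPlaybackCaptureConfiguration
import android.media.AudioRecord
import android.media.projection.MediaProjection

// Requires API 29+; `mediaProjection` comes from the createScreenCaptureIntent() flow.
fun buildPlaybackCapture(mediaProjection: MediaProjection): AudioRecord {
    val config = AudioPlaybackCaptureConfiguration.Builder(mediaProjection)
        .addMatchingUsage(AudioAttributes.USAGE_MEDIA)  // capture media audio...
        .addMatchingUsage(AudioAttributes.USAGE_GAME)   // ...and game audio
        .build()
    val format = AudioFormat.Builder()
        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
        .setSampleRate(44100)
        .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
        .build()
    return AudioRecord.Builder()
        .setAudioFormat(format)
        .setAudioPlaybackCaptureConfig(config)
        .build()
}
```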

4.2 Cross-Process Audio Data Transfer

Now we face an old problem again: the mixed audio data lives in the mini-game process, and we need to transfer it to the main process for stream pushing.

Unlike ordinary IPC for method calls, in this scenario we need to transfer large blocks of data frequently (roughly 8 KB every 16 milliseconds).

Moreover, due to the nature of live streaming, the latency of this cross-process transfer must be kept as low as possible, otherwise audio and video will fall out of sync.

To meet these goals, we tested several IPC options: Binder, LocalSocket, MMKV, SharedMemory and pipes. In a test environment we simulated the mini-game process's real audio transfer, sending a serialized data object every 16 milliseconds, with object sizes in three ranges, 3K / 4M / 10M, and a timestamp stored in the object before sending. The end time was taken when the main process received the data and deserialized it back into a data object, from which the transfer delay was computed.
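A minimal sketch of the measurement in such a harness (the AudioChunk type and names are illustrative, not the actual test code):

```kotlin
import java.io.Serializable

// Each test object carries its send timestamp; the receiver computes the delay
// after deserialization, matching the methodology described above.
class AudioChunk(val payload: ByteArray) : Serializable {
    val sentAtMs: Long = System.currentTimeMillis()
}

fun measureDelayMs(received: AudioChunk): Long =
    System.currentTimeMillis() - received.sentAtMs  // cross-process transfer delay
```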

The final result is as follows:

Note: XIPCInvoker (Binder) and MMKV took too long when transferring large amounts of data, so they are not shown in the results.

The analysis of each option is as follows (the stall rate is the proportion of transfers whose delay exceeds both twice the average delay and 10 milliseconds):

As you can see, the LocalSocket scheme has excellent transfer latency in every case. The main reason for the difference is that with the other schemes, after the raw binary data crosses to the main process, an extra data copy is still needed to deserialize it into a data object, whereas with LocalSocket an ObjectStream over Serializable objects copies the data in a streaming fashion, saving considerable time compared to receiving all the data first and then copying it. (The other schemes could also be designed to stream in chunks and copy concurrently, but that has an implementation cost and is not as stable and easy to use as ObjectStream.) We also tested LocalSocket for compatibility and performance and saw no transmission failures or disconnections; the average latency was about 10 milliseconds on the Samsung S6 and around 1 millisecond on the other models, meeting our expectations.
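A minimal sketch of the LocalSocket transport with streaming (de)serialization, reusing the illustrative AudioChunk type from above (the socket-name hardening is described below):

```kotlin
import android.net.LocalServerSocket
import android.net.LocalSocket
import android.net.LocalSocketAddress
import java.io.ObjectInputStream
import java.io.ObjectOutputStream

// Mini-game process: connect and stream serialized chunks.
fun startSender(name: String, chunks: Sequence<AudioChunk>) {
    val socket = LocalSocket().apply { connect(LocalSocketAddress(name)) }
    ObjectOutputStream(socket.outputStream).use { out ->
        for (chunk in chunks) {
            out.writeObject(chunk)
            out.flush()
            out.reset()  // drop the reference cache so sent chunks can be GC'd
        }
    }
}

// Main process: deserialize objects as the bytes arrive (streaming copy).
fun startReceiver(name: String, onChunk: (AudioChunk) -> Unit) {
    val server = LocalServerSocket(name)  // abstract-namespace UNIX domain socket
    val socket = server.accept()
    ObjectInputStream(socket.inputStream).use { input ->
        while (true) onChunk(input.readObject() as AudioChunk)
    }
}
```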

Ordinary Binder IPC has its security guaranteed by the system's authentication mechanism. Since LocalSocket is a wrapper around UNIX domain sockets, we had to consider its security ourselves.

The paper "The Misuse of Android Unix Domain Sockets and Security Implications" analyzes in detail the security risks of using LocalSocket on Android.

PS: The original paper can be downloaded as an attachment (see section 4.3 at this link: http://www.52im.net/thread-35.)

Summary of the paper: because LocalSocket itself has no authentication mechanism, any application can connect to it, intercept the data, or feed illegal data to the receiver to trigger exceptions.

Against these risks, we applied two defenses:

1) Randomize the LocalSocket name; for example, compute an MD5 over the APPID and user UIN of the current live mini-game and use it as the LocalSocket name, so that an attacker cannot connect by guessing a fixed or derivable name;
2) Introduce an authentication mechanism: after a connection is established, send specific random information to verify the peer's identity before starting the real data transfer (see the sketch below).
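A minimal sketch of the two defenses, assuming appId/uin are available to both sides and that the challenge bytes are exchanged beforehand over an already-authenticated Binder channel; the MD5-echo response scheme is purely illustrative:

```kotlin
import android.net.LocalSocket
import java.security.MessageDigest

// Defense 1: derive the socket name from live-session identity so it cannot be guessed.
fun socketName(appId: String, uin: String): String =
    MessageDigest.getInstance("MD5")
        .digest((appId + uin).toByteArray())
        .joinToString("") { "%02x".format(it) }

// Defense 2: challenge/response after connecting.
fun verifyPeer(socket: LocalSocket, challenge: ByteArray): Boolean {
    socket.outputStream.write(challenge)  // send random challenge
    socket.outputStream.flush()
    val expected = MessageDigest.getInstance("MD5").digest(challenge)
    val reply = ByteArray(expected.size)
    var off = 0
    while (off < reply.size) {            // read the peer's full response
        val n = socket.inputStream.read(reply, off, reply.size - off)
        if (n < 0) return false
        off += n
    }
    return reply.contentEquals(expected)  // only then start real data transfer
}
```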

4.4 Summary

To allow devices below Android 10 to broadcast as well, we chose to handle the mini-game's audio capture ourselves, and after comparison and evaluation we chose LocalSocket as the cross-process audio transport, which meets live streaming's latency requirements. At the same time, the countermeasures above effectively mitigate LocalSocket's security risks.

5. Problems with Multiple Processes

Looking back, although the whole scheme appears smooth, we still stepped into quite a few pits during implementation because of the multi-process design. Two of the more important ones are described below.

5.1 glFinish Causing a Severe Frame-Rate Drop in Rendering and Stream Pushing

After implementing the cross-process rendering and stream-pushing scheme, we ran a round of performance and compatibility tests and found that on some mid- and low-end models the frame rate dropped very badly (as shown in the figure below).

After reproducing the issue, we looked at the frame rate at which the mini-game process rendered (that is, the rate at which it drew onto the Surface obtained across processes) and found that it could still reach the frame rate seen when not live streaming.

However, PerfDog, the testing tool we used, records the frame rate on the on-screen Surface, which means the drop was not caused by live streaming's overall overhead being too high, but by the main process's on-screen Renderer being inefficient.

So we profiled the main process's execution during live streaming and found the time-consuming function: glFinish.

It was called twice:

1) the first call, made when the Renderer converts the external texture to 2D, took more than 100 milliseconds;
2) the second call, inside the Tencent Cloud live-streaming SDK, took less than 10 milliseconds; but if the first call was removed, the one inside the SDK took more than 100 milliseconds instead.

To understand why this GL command takes so long, let's look at its description:

glFinish does not return until the effects of all previously called GL commands are complete.

The description is simple: it blocks until all previously issued GL commands have completed.

So were there too many GL commands queued beforehand? But the GL command queue is isolated per thread: in the main process's Renderer thread, only a very small number of GL commands run before glFinish, and the Tencent Cloud engineers confirmed that the stream-pushing interface does not execute many GL commands on this thread either. How could so few GL commands keep glFinish blocked for so long? Wait, a lot of GL commands? Isn't the mini-game process executing a large number of GL commands at that very moment? Could the mini-game process's heavy GL load be what makes the main process's glFinish take so long?

That would make sense: although GL command queues are isolated per thread, there is only one GPU processing the commands, and too many GL commands from one process can force another process to block for a long time in glFinish. A round of searching turned up no documentation on this, so we had to verify the guess ourselves.

Re-examining the test data above, we noticed that scenes that reached 60 frames before going live could still reach about 60 frames after going live. Does that mean glFinish takes less time when the mini-game's GPU load is low?

On a model with a severe drop, we ran a low-load mini-game with all other variables unchanged, and glFinish indeed dropped to around 10 milliseconds, confirming the hypothesis: the large number of GL commands being executed by the mini-game process was blocking the main process's glFinish.

What about a solution? The mini-game process's high load cannot be changed. Could the mini-game pause after rendering one frame and wait for the main process's glFinish to complete before rendering the next?

We made various attempts: OpenGL's fence synchronization mechanism cannot take effect across processes; and because GL commands execute asynchronously, locking across processes with a thread lock cannot guarantee that the mini-game's commands have finished by the time the main process calls glFinish. The only way to guarantee that is to call glFinish in the mini-game process as well, but that defeats the double-buffering mechanism and makes the mini-game's rendering frame rate drop sharply.

Since the blocking caused by glFinish was unavoidable, we went back to the original question: why is glFinish needed at all? Thanks to double buffering, there is generally no need for glFinish to wait for the previous draw to complete; otherwise double buffering loses its meaning. Of the two glFinish calls, the first one, in the texture-conversion step, could simply be removed. After some discussion, the second one, inside the Tencent Cloud SDK, turned out to have been introduced to work around a historical issue, so it could be removed on trial. With the help of the Tencent Cloud engineers, after removing both glFinish calls the rendering frame rate finally matched the game's output frame rate, and compatibility and performance tests revealed no problems caused by the removal.

The final fix was very simple, but analyzing the cause involved a great deal of experimentation: a process with a high GPU load affecting another process's glFinish within the same app is genuinely rare, and there is little reference material. The episode also taught me that because glFinish defeats the double-buffering mechanism, its performance impact is significant, and it should be used with great caution in OpenGL rendering.

5.2 Background Process Priority

During testing we found that no matter what frame rate we fed into the live-streaming SDK, the audience always saw only about 16 frames. After ruling out backend causes, we found that the encoder's output frame rate was insufficient. Tencent Cloud's tests showed that the same encoding inside a single process could reach the configured 30 frames, so the problem was caused by multiple processes. Since encoding is a heavy operation requiring plenty of CPU, we first suspected the background process's priority.

To confirm the problem:

1) on a rooted phone we raised the encoding thread's priority with the chrt command, and the audience frame rate immediately rose to 25 frames;
2) conversely, when a floating window from the main process was shown over the mini-game process (giving the main process foreground priority), the frame rate reached 30 frames.

In summary: the frame-rate drop was confirmed to be caused by the low priority of the background main process (and the threads it owns).

Raising thread priority is common practice in WeChat; for example, the JS thread of mini-programs and the render thread of mini-games have their priority raised at runtime via android.os.Process.setThreadPriority(). The Tencent Cloud SDK engineers quickly provided an interface for us to set the thread priority, but when we actually ran it, the encoding frame rate only rose from 16 to about 18 frames. What went wrong?

As mentioned earlier, setting the thread priority with the chrt command was effective, but android.os.Process.setThreadPriority() sets the thread's nice value, which corresponds to the renice command. The chrt command does more than that: it also changes the thread's scheduling policy from the Linux default SCHED_OTHER to its default SCHED_RR, a real-time policy, which gives the thread a very high scheduling priority.

In fact, the priority set via renice (that is, android.os.Process.setThreadPriority()) does not help much for the threads of a background process.
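Concretely, a nice-based boost amounts to something like the following sketch; what the SDK interface does internally is an assumption on our part:

```kotlin
import android.os.Process

// THREAD_PRIORITY_URGENT_AUDIO (-19) is the highest nice-based priority, yet a
// thread in the background cgroup still competes only for that cgroup's small
// CPU share, which is why this helped so little here.
fun boostEncodingThread() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO)
}
```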

Someone has explained this before:

To address this, Android also uses Linux cgroups in a simple way to create more strict foreground vs. background scheduling. The foreground/default cgroup allows thread scheduling as normal. The background cgroup however applies a limit of only some small percent of the total CPU time being available to all threads in that cgroup. Thus if that percentage is 5% and you have 10 background threads all wanting to run and one foreground thread, the 10 background threads together can only take at most 5% of the available CPU cycles from the foreground. (Of course if no foreground thread wants to run, the background threads can use all of the available CPU cycles.)

For those of you who are interested in setting thread priorities, check out another great article: “Android’s bizarre trap: the WeChat Kadon tragedy caused by setting thread priorities”.

Finally, to raise the encoding frame rate and to keep the background main process from being killed, we decided to create a foreground Service in the main process for the duration of the live stream.
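A minimal sketch of such a foreground Service (channel id, notification text and icon are placeholders):

```kotlin
import android.app.Notification
import android.app.NotificationChannel
import android.app.NotificationManager
import android.app.Service
import android.content.Intent
import android.os.IBinder

class LiveForegroundService : Service() {
    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        val nm = getSystemService(NotificationManager::class.java)
        // NotificationChannel requires API 26+.
        nm.createNotificationChannel(
            NotificationChannel("live", "Live streaming", NotificationManager.IMPORTANCE_LOW)
        )
        val notification = Notification.Builder(this, "live")
            .setContentTitle("Live streaming in progress")
            .setSmallIcon(android.R.drawable.ic_media_play)
            .build()
        // Promotes the process to foreground priority: it leaves the background
        // cgroup and is protected from low-memory kills.
        startForeground(1, notification)
        return START_STICKY
    }

    override fun onBind(intent: Intent?): IBinder? = null
}
```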

6. Summary and Outlook

Multi-process is a double-edged sword: it brings us isolation and performance benefits, but it also brings the problems of cross-process communication. Fortunately, with the system's Surface capabilities and a variety of cross-process schemes, the problems encountered in mini-game live streaming could all be solved well.

Of course, the best solution to a cross-process problem is to avoid crossing processes altogether. We also considered running the Channels live stream-pushing module inside the mini-game process, but decided against it given the cost of the required refactoring.

At the same time, the practice of rendering a SurfaceView across processes may be valuable for other businesses too. For scenarios where what is rendered on a SurfaceView carries high memory pressure or security risks, the logic can run in a separate process and be rendered across processes onto the main process's View, gaining the advantages of an independent process while avoiding the experience fragmentation of jumping between processes.



(This article is published simultaneously at: http://www.52im.net/thread-35…)