In live video, how quickly the first frame renders directly affects the user experience. Imagine entering a live broadcast room with great interest, only to be left staring at the room's background image for a long time before the live picture ever appears. That is an experience most users will not accept.

To improve the viewing experience of iQiyi's live streaming service, in the middle of last year we optimized instant start (how quickly the first frame appears after entering a room) on the iOS side of the Qixiu live streaming app, mainly in terms of business logic and visual presentation.

01 What's the Problem?

The playback process, from entering the live broadcast room to rendering the first frame, generally involves several stages.

To analyze where the time goes, we first need to locate the time-consuming logic. Two approaches were tried:

**(1) Print logs.** Log a timestamp before a method is entered and after it finishes, and output the method's execution time.

**(2) Report analytics events (buried points).** Set several key nodes along the instant-start path; when each node is triggered, report the elapsed-time data to the analytics backend.

In practice, log printing has obvious disadvantages:

(1) The number of test runs is usually small, so the data is not representative.

(2) The results can be skewed by differences between broadcast rooms and network conditions.

To ensure the accuracy of the analysis, we ultimately combined analytics reporting with the Time Profiler.

First, the instant-start process is divided into several key stages: entering the live broadcast room, receiving the room data response, becoming ready to play, and successfully rendering the first frame.

The time spent in each stage can then be calculated from the reported events. Taking version 5.5.0, before optimization, as an example, as shown in the figure:
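As a concrete illustration, here is a minimal Swift sketch of this kind of stage-based instrumentation. The stage names mirror the ones above; `StartupTracker` and the print-based reporting are hypothetical stand-ins for the real buried-point delivery.

```swift
import Foundation
import QuartzCore

// Stages of the instant-start path, mirroring the article.
enum StartupStage: String, CaseIterable {
    case enterRoom          // user taps into the live broadcast room
    case roomDataReturned   // room data response received and parsed
    case readyToPlay        // player is ready to start
    case firstFrameRendered // first video frame rendered on screen
}

final class StartupTracker {
    static let shared = StartupTracker()
    private var timestamps: [StartupStage: CFTimeInterval] = [:]

    /// Record the moment a stage is reached.
    func mark(_ stage: StartupStage) {
        timestamps[stage] = CACurrentMediaTime()
        if stage == .firstFrameRendered { report() }
    }

    /// Compute per-stage durations and hand them to the analytics layer.
    private func report() {
        let ordered = StartupStage.allCases
        for (prev, next) in zip(ordered, ordered.dropFirst()) {
            guard let t0 = timestamps[prev], let t1 = timestamps[next] else { continue }
            let ms = Int((t1 - t0) * 1000)
            // Placeholder for the real buried-point delivery.
            print("stage \(prev.rawValue) -> \(next.rawValue): \(ms) ms")
        }
        timestamps.removeAll()
    }
}

// Usage at each key node, e.g.:
// StartupTracker.shared.mark(.enterRoom)
// StartupTracker.shared.mark(.firstFrameRendered)
```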

As the figure shows, the room data request/parsing stage and the playback stage take the longest.

To narrow the problem down further, we used the Time Profiler in Xcode's Instruments for a more detailed analysis.

The Time Profiler works by sampling the call stack of each thread at a fixed interval. By statistically comparing the stacks across samples, it can estimate approximately how long each method has been executing. As shown in the figure:

Comparing data from multiple test runs, we found two main sources of the time cost:

(1) The player's SetFrame operation blocks the main thread for a long time.

(2) Processing the data returned by the room API consumes a lot of time.

02 How Do We Solve These Problems?

The first problem to tackle was the stall caused by the player's SetFrame call. After discussing it with the playback kernel team, we speculated that it was related to the system's transition animations (suspected resource contention: for a short period the player cannot obtain the rendering context). While the system is running the animation, the player competes with it for those resources, which can ultimately cause the first frame to fail to render. We therefore introduced a state machine to solve this problem:

All playback operations are now performed only after the system notifies us that the UI layout and transition have finished. After this adjustment, the instant-start rate returned to the normal range (about 56%), and the stall no longer occurred.
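Below is a minimal Swift sketch of this gating idea: playback starts only once both the room data has arrived and the transition has finished (signalled here by `viewDidAppear`). `LiveRoomViewController` and `LivePlayer` are hypothetical names, not the actual iQiyi implementation.

```swift
import UIKit

final class LiveRoomViewController: UIViewController {
    private enum StartState {
        case idle
        case waitingForTransition(streamURL: URL) // data ready, UI not yet settled
        case transitionFinished                    // UI settled, data not yet ready
        case playing
    }

    private var state: StartState = .idle
    private let player = LivePlayer() // hypothetical player wrapper

    /// Called when the room API returns the stream URL.
    func roomDataDidArrive(streamURL: URL) {
        switch state {
        case .transitionFinished:
            startPlayback(streamURL)
        default:
            state = .waitingForTransition(streamURL: streamURL)
        }
    }

    /// viewDidAppear fires after the push/present transition animation ends,
    /// so the player no longer competes with the system animation.
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        if case let .waitingForTransition(url) = state {
            startPlayback(url)
        } else if case .idle = state {
            state = .transitionFinished
        }
    }

    private func startPlayback(_ url: URL) {
        state = .playing
        player.setFrame(view.bounds) // the call that previously stalled mid-transition
        player.play(url)
    }
}

// Hypothetical player wrapper used only for this sketch.
final class LivePlayer {
    func setFrame(_ rect: CGRect) { /* configure the render layer */ }
    func play(_ url: URL) { /* start streaming */ }
}
```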

Second, when processing the data returned by the API, a series of initialization operations take a long time. This breaks down into two points:

(1) Initializing every feature module up front costs a lot of time, for example the chat area's message-history feature.

(2) Because iOS requires UI-related operations to run on the main thread, the code dispatched asynchronously to the main thread many separate times after the API response returned, which added considerable time.

To address the cost of initializing feature modules, we abstracted a **feature management module** in the broadcast room. On the one hand it controls when each module is loaded, and on the other hand it decouples the feature modules from the broadcast room, as shown in the figure:
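A minimal sketch of what such a feature manager might look like, using hypothetical `RoomModule`/`RoomModuleManager` names: modules register with a desired load timing, and anything not needed for the first frame is deferred off the instant-start path.

```swift
import Foundation

struct RoomInfo { let roomID: String }

/// A feature module that can be created and set up on demand.
protocol RoomModule: AnyObject {
    init()
    func setUp(with roomInfo: RoomInfo)   // heavy initialization goes here
}

enum LoadTiming {
    case onRoomData      // needed as soon as room data arrives
    case afterFirstFrame // can wait until the first frame has rendered
}

final class RoomModuleManager {
    private var registrations: [(type: RoomModule.Type, timing: LoadTiming)] = []
    private var liveModules: [RoomModule] = []

    func register(_ type: RoomModule.Type, timing: LoadTiming) {
        registrations.append((type, timing))
    }

    /// Instantiate only the modules whose load timing has been reached.
    func loadModules(for timing: LoadTiming, roomInfo: RoomInfo) {
        for entry in registrations where entry.timing == timing {
            let module = entry.type.init()
            module.setUp(with: roomInfo)
            liveModules.append(module)
        }
    }
}

// Example: defer chat history until after the first frame has rendered.
final class ChatHistoryModule: RoomModule {
    init() {}
    func setUp(with roomInfo: RoomInfo) { /* fetch historical messages */ }
}

// manager.register(ChatHistoryModule.self, timing: .afterFirstFrame)
// manager.loadModules(for: .onRoomData, roomInfo: info)        // critical path
// manager.loadModules(for: .afterFirstFrame, roomInfo: info)   // deferred
```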

For the problem of repeated dispatching to the main thread, we moved the main-thread hop into the bottom of the network layer, so every request's callback already arrives on the main thread and business-layer developers no longer dispatch there themselves.
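A minimal sketch of that idea, using a hypothetical `APIClient`: the single `DispatchQueue.main.async` lives in the network layer, so business code never repeats it.

```swift
import Foundation

final class APIClient {
    static let shared = APIClient()

    func request(_ url: URL, completion: @escaping (Result<Data, Error>) -> Void) {
        URLSession.shared.dataTask(with: url) { data, _, error in
            // Single main-thread dispatch for every request in the app.
            DispatchQueue.main.async {
                if let error = error {
                    completion(.failure(error))
                } else {
                    completion(.success(data ?? Data()))
                }
            }
        }.resume()
    }
}

// Business layer: the completion already runs on the main thread,
// so the UI can be updated directly.
// APIClient.shared.request(roomInfoURL) { result in
//     updateRoomUI(with: result)   // no extra DispatchQueue.main.async needed
// }
```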

After these changes, the instant-start rate improved significantly and stabilized at about 78%.

03 Some Non-Technical Adjustments

The discussion above focuses on optimizing instant start when entering the live broadcast room from outside. There is another, even more important way to switch rooms: swiping up and down.

The technical optimization of up-and-down switching is covered in the next section. Here we discuss how some visual tricks can make instant start during the swipe feel "faster".

Compared with the previous approach of building the room UI first, the new approach delays creating the room UI until the room data has been received. Because the audience sees the live picture first, playback feels "faster".
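The ordering change might look roughly like the following sketch (all names hypothetical): request the room data first, attach the live picture as soon as it returns, and only then build the rest of the room UI.

```swift
import UIKit

struct RoomData { let streamURL: URL }

final class SwitchableRoomViewController: UIViewController {
    private let playerLayerView = UIView()   // stand-in for the real player view
    private var roomOverlay: UIView?         // chat, gifts, rank list, etc.

    func enterRoom(roomID: String) {
        fetchRoomData(roomID: roomID) { [weak self] info in
            guard let self = self else { return }
            // 1. Show the live picture first.
            self.playerLayerView.frame = self.view.bounds
            self.view.addSubview(self.playerLayerView)
            self.startPlayback(url: info.streamURL)
            // 2. Only then create the remaining room UI on top of it.
            self.buildRoomOverlay(with: info)
        }
    }

    private func startPlayback(url: URL) { /* hand the URL to the player */ }

    private func buildRoomOverlay(with info: RoomData) {
        let overlay = UIView(frame: view.bounds)
        // ... add chat area, gift panel, etc. ...
        view.addSubview(overlay)
        roomOverlay = overlay
    }

    private func fetchRoomData(roomID: String,
                               completion: @escaping (RoomData) -> Void) {
        // Placeholder for the real room API request.
    }
}
```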

04 Directions for Further Improvement

On the technical side:

Some optimization has already been done for the swipe-up gesture: at the moment the finger leaves the screen, we predict whether the deceleration animation will end on the next broadcast room, and if so we start loading its playback data in advance.
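If each room is one page of a vertically paging scroll view, this prediction can be made in `scrollViewWillEndDragging`, where UIKit reports where the deceleration will stop. The sketch below assumes that setup; `RoomFeedController` and `preloadPlaybackData` are hypothetical.

```swift
import UIKit

final class RoomFeedController: NSObject, UIScrollViewDelegate {
    private var currentPage = 0

    func scrollViewWillEndDragging(_ scrollView: UIScrollView,
                                   withVelocity velocity: CGPoint,
                                   targetContentOffset: UnsafeMutablePointer<CGPoint>) {
        let pageHeight = scrollView.bounds.height
        guard pageHeight > 0 else { return }
        // The system tells us where the deceleration animation will stop.
        let targetPage = Int(round(targetContentOffset.pointee.y / pageHeight))
        if targetPage != currentPage {
            // The animation will land on another room: preload its stream now,
            // instead of waiting for the animation to finish.
            preloadPlaybackData(forPage: targetPage)
            currentPage = targetPage
        }
    }

    private func preloadPlaybackData(forPage page: Int) {
        // Placeholder: request the next room's stream URL and warm up the player.
    }
}
```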

Compared with the previous behavior of waiting until the next room had fully slid into view before loading its playback data, this is an improvement. However, it still has some drawbacks:

(1) We may have already decided to switch to the next room, but before the slide animation ends the user touches the screen and stops it. If the user then abandons the switch and returns to the previous room, its playback data has to be reloaded, which is a poor experience.

(2) The time between the start of the swipe and the finger leaving the screen is wasted.

For switching rooms by swiping, a better approach at present is to use dual players: the next room's data can be loaded as soon as the swipe starts, which avoids complex handling of the special cases above and maximizes the time available for loading.

However, because our playback kernel is large, a dual-player approach would add complexity to the room architecture and carry a high marginal cost, so we have not pursued it further yet.

05 Epilogue

Through these optimizations of business logic and visual presentation, the final instant-start rate increased to about 78%.

Besides the new mechanisms introduced above (such as lazy loading and module management), which keep the instant-start rate from regressing in subsequent versions, we also plan to try the dual-player approach to further improve the instant-start rate and give users a better playback experience.