Existing problems:

  • 1. Images are rendered with the third-party GPUImage framework, whose underlying implementation is based on OpenGL.
  • 2. Chained rendering: when a pass depends on multiple input images or intermediate results, we have to write large amounts of shader code ourselves; GPUImage is only suited to a single image or a small number of render operands.
  • 3. GPUImage cannot support upcoming complex features such as special effects and short videos, so it no longer meets our business requirements.
  • 4. The phone heats up and drains the battery during use (both GPU and CPU are overloaded).

Explore reconstruction schemes:

Should we build the replacement on OpenGL or on Metal? Which is better?

OpenGL is deprecated as of macOS 10.14, and OpenGL ES is deprecated on iOS. The APIs remain accessible for now, but deprecated APIs are likely to be removed at some point in the future. OpenGL does not support multithreaded operation or asynchronous processing, and its design has come to limit what the GPU can deliver. Metal, by contrast, simplifies the CPU’s part in rendering and hands control of resources to the GPU as much as possible, and its more modern design keeps operations in a controlled, predictable state. Despite the cleaner design, it is still a framework with direct access to the hardware; it sits closer to the GPU than OpenGL does and therefore performs better. In the macOS 10.14 documentation, Apple says that apps built with OpenGL and OpenCL will continue to run on macOS 10.14 (although even where macOS supports OpenGL, the built-in version is still OpenGL 3.3, released eight years ago, not 4.6, released last year). Apple calls these “legacy technologies” and does not recommend them; it is clearly promoting Metal as the replacement for the “old” OpenGL and OpenCL interfaces.

A review also found that our team’s knowledge reserve in image processing was essentially zero, so there was no existing OpenGL expertise to protect. On that basis, and given Metal’s advantages over OpenGL, we chose Metal.

Given the drawbacks and limitations of GPUImage, we built a Metal-based alternative with the following goals:

1. Metal implements basic image rendering and processing
2. Metal implements complex dynamic effects and special-effects processing
3. For better extensibility, we defined a set of rendering rules and parameters described in JSON files, so new rendering modes are easy to add; the configuration can be delivered by the server or configured locally (a hypothetical example is sketched below)
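The original schema is not shown here, so the following is a purely hypothetical sketch of what such a JSON file could look like; every key, effect name, and shader function name below is invented for illustration:

```json
{
  "version": 1,
  "effects": [
    {
      "name": "sweepGlow",
      "shader": "sweep_glow_fragment",
      "loop": true,
      "duration": 1.5,
      "params": {
        "radius": 8.0,
        "intensity": 0.6
      }
    }
  ]
}
```

A schema along these lines lets the server push a new rendering mode by shipping only a new JSON entry, without changing client code for each effect.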

On the power-consumption and heating problem, a code review turned up two causes:

1. The effect of the rendering process has to be saved locally. The old implementation used CoreGraphics to snapshot the rendered view over and over, then composited the stack of images into an MP4 file (a sketch of this loop follows below)
2. Rendered effects were not cached, so the whole render had to be repeated on every tap
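To make the cost in point 1 concrete, here is a minimal sketch of that kind of CPU-side snapshot loop (the function name is illustrative, not from our codebase):

```swift
import UIKit

// Old approach: grab a CoreGraphics snapshot of the effect view for every
// frame, then composite the resulting images into an MP4.
func snapshotFrame(of view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { _ in
        // Forces a CPU-side re-render of the already-rendered view contents.
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: false)
    }
}
// At 30 fps this means 30 full CPU snapshots per second on top of the GPU
// work, which is where much of the heat and power drain came from.
```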

Solution:

1. Use the rendered textures themselves instead of screenshots
2. Cache the rendered effects

This raises two questions: how do we turn the textures produced during rendering into an MP4 of the process, and how do we cache the result?

Writing the texture to a local file via CVPixelBuffer:

ReplayKit:

Problems:

1. When the start-recording method is called, the user gets a system permission alert, and the alert reappears every time recording starts; once the user picks an option, though, the system remembers that choice for the next eight minutes.
2. The whole screen is recorded, while our effect occupies only a small view in the middle of the screen, so a lot of redundant content would be captured.
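For reference, this is roughly the ReplayKit route we evaluated; the API calls are real, but the surrounding structure is just a sketch:

```swift
import ReplayKit

// startRecording() triggers the system permission alert described above
// and always captures the full screen, not just our effect view.
func startScreenRecording() {
    let recorder = RPScreenRecorder.shared()
    guard recorder.isAvailable else { return }
    recorder.startRecording { error in
        if let error = error {
            print("ReplayKit failed to start: \(error)")
        }
    }
}
```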

So what should we do instead? Could we take some inspiration from video playback?

When I built a player before, I used MPMoviePlayerController, but because it is a heavily packaged Apple class, it is convenient to use exactly in proportion to how little it can be customized, so I built our own player instead. AVAssetReader can pull decoded audio and video data out of the raw file and, combined with AVAssetReaderTrackOutput, read the decoded frames one by one as CMSampleBufferRefs; a CMSampleBufferRef can then be converted to a CGImageRef. That is the read path for a video, so shouldn’t there be a class that runs the same process in reverse?
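A minimal sketch of that read path (error handling trimmed; the BGRA pixel format choice is an assumption):

```swift
import AVFoundation
import CoreImage

// AVAssetReader + AVAssetReaderTrackOutput deliver decoded video frames
// one CMSampleBuffer at a time; each can be turned into a CGImage.
func readFrames(from url: URL) throws {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                            kCVPixelFormatType_32BGRA])
    reader.add(output)
    reader.startReading()

    let ciContext = CIContext()
    while let sample = output.copyNextSampleBuffer() {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        // One decoded frame, now usable as a CGImage.
        let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent)
        _ = cgImage
    }
}
```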

With that in mind, I went back to the Apple APIs and found AVAssetWriter, a class that writes CVPixelBuffer data directly to a specified file. That leaves one step: how do we convert Metal texture data (MTLTexture) to CVPixelBuffer? Apple provides a way to bridge the two.
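One workable bridge is sketched below; it is not necessarily the exact approach used here. It assumes a bgra8Unorm texture that is not framebuffer-only, and a production version would reuse a CVPixelBufferPool and check every status code:

```swift
import AVFoundation
import Metal

// Set up the writer once per recording.
func makeWriter(outputURL: URL, width: Int, height: Int) throws
    -> (AVAssetWriter, AVAssetWriterInputPixelBufferAdaptor) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input, sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    return (writer, adaptor)
}

// Copy one rendered texture into a CVPixelBuffer and append it.
func append(texture: MTLTexture, at time: CMTime,
            to adaptor: AVAssetWriterInputPixelBufferAdaptor) {
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, texture.width, texture.height,
                        kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
    guard let buffer = pixelBuffer else { return }

    CVPixelBufferLockBaseAddress(buffer, [])
    // getBytes copies the texture's BGRA pixels into the buffer, honoring
    // the buffer's row stride.
    texture.getBytes(CVPixelBufferGetBaseAddress(buffer)!,
                     bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                     from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                     mipmapLevel: 0)
    CVPixelBufferUnlockBaseAddress(buffer, [])

    if adaptor.assetWriterInput.isReadyForMoreMediaData {
        adaptor.append(buffer, withPresentationTime: time)
    }
}
```

Once all frames are appended, calling markAsFinished() on the input and finishWriting(completionHandler:) on the writer produces the final MP4.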

Display mode:

Should we output only after rendering completes, or display while rendering?

1. Render first, then display: the whole rendering pass completes before the result is shown

Advantages:

  • The rendering pass runs only once; when the result is shown, the locally saved process video is played back directly
  • It reduces GPU resource consumption, saves power, and alleviates the heating problem
  • There is no need to block the save button while rendering; the user can tap save immediately and the video goes straight to the album

Disadvantages: the user has to wait a short time for rendering to complete before anything is displayed

2. Render and write simultaneously (display while rendering):

1. Users don’t have to wait

Disadvantages: if the user stays on the same page, rendering repeats indefinitely, because some of the effects loop

Both schemes are supported. An online gray-release test showed that, for more than 95% of users, the render-then-display scheme required a wait of only 0.5–1.5 s. We therefore chose render-then-display, which also has the advantage of saving the most power. A minimal sketch of the resulting flow follows.
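In this sketch the cache location and the renderEffectAndWriteMP4 helper are hypothetical stand-ins for the pipeline described above:

```swift
import AVFoundation

// Hypothetical stand-in for the Metal render pass plus the AVAssetWriter
// pipeline sketched earlier.
func renderEffectAndWriteMP4(id: String, to url: URL) {
    // Render every frame of the effect offline and append it to the writer.
}

// Render-then-display: render the effect offline exactly once, cache the
// MP4, and play the cached file on every subsequent visit.
func showEffect(id: String, in playerLayer: AVPlayerLayer) {
    let cacheURL = FileManager.default
        .urls(for: .cachesDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("\(id).mp4")

    if !FileManager.default.fileExists(atPath: cacheURL.path) {
        // First visit: pay the 0.5-1.5 s render cost once.
        renderEffectAndWriteMP4(id: id, to: cacheURL)
    }
    // Every visit after that is pure video playback: no GPU render work.
    playerLayer.player = AVPlayer(url: cacheURL)
    playerLayer.player?.play()
}
```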