“As we all know, videos can be ‘Photoshopped’ too.” Today we’re going to learn how to add filters to videos.

In iOS, there are generally two ways to process video images: GPUImage and AVFoundation.

I. GPUImage

In previous articles we already gained some familiarity with GPUImage. So far it has mostly been used to process image data captured by the camera, but it is just as convenient for processing local video files.

Let's go straight to the code:

// movie
NSString *path = [[NSBundle mainBundle] pathForResource:@"sample" ofType:@"mp4"];
NSURL *url = [NSURL fileURLWithPath:path];
GPUImageMovie *movie = [[GPUImageMovie alloc] initWithURL:url];

// filter
GPUImageSmoothToonFilter *filter = [[GPUImageSmoothToonFilter alloc] init];

// view
GPUImageView *imageView = [[GPUImageView alloc] initWithFrame:CGRectMake(0, 80, self.view.frame.size.width, self.view.frame.size.width)];
[self.view addSubview:imageView];

// chain
[movie addTarget:filter];
[filter addTarget:imageView];

// processing
[movie startProcessing];

There are only a few lines of core code: GPUImageMovie is responsible for reading the video file, GPUImageSmoothToonFilter for the filter effect, and GPUImageView for displaying the final image.

The three are strung together by a filter chain, and the startProcessing method of GPUImageMovie is called to begin processing.

Although GPUImage is simple to use, it has many drawbacks: no sound, UI calls made off the main thread, cumbersome file export, and no playback controls.

Summary: Although GPUImage is convenient to use, its drawbacks mean it cannot meet the needs of a production environment.

II. AVFoundation

1. Use of AVPlayer

First, let’s review the simplest way to use AVPlayer:

NSURL *url = [[NSBundle mainBundle] URLForResource:@"sample" withExtension:@"mp4"];
AVURLAsset *asset = [AVURLAsset assetWithURL:url];
AVPlayerItem *playerItem = [[AVPlayerItem alloc] initWithAsset:asset];
    
AVPlayer *player = [[AVPlayer alloc] initWithPlayerItem:playerItem];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];

The first step is to build AVPlayerItem, then create AVPlayer with AVPlayerItem, and finally create AVPlayerLayer with AVPlayer.

AVPlayerLayer is a subclass of CALayer and can be added to any layer. When AVPlayer's play method is called, the frames are rendered onto the AVPlayerLayer.
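For completeness, here is a minimal sketch of hooking it up (assuming this code runs in a view controller, so self.view is available):

playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:playerLayer];
[player play];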

AVPlayer is very simple to use. However, used this way, only the original frames are rendered on the AVPlayerLayer. If we want to process the frames while they are being played, we need to modify AVPlayer's rendering process.

2. Modify the AVPlayer rendering process

To modify the rendering process of AVPlayer, start with AVPlayerItem. There are four main steps:

Step 1: Customize the AVVideoCompositing class

AVVideoCompositing is a protocol that our custom class implements. In this custom class, you can take the raw image of each frame, process it and output it.
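Besides startVideoCompositionRequest:, the AVVideoCompositing protocol also requires the pixel buffer attribute properties and renderContextChanged: to be implemented. A minimal sketch of the conformance might look like this (the concrete pixel format values here are an assumption, not taken from the original project):

// CustomVideoCompositing.m
- (NSDictionary *)sourcePixelBufferAttributes {
    // Ask for the source frames to be delivered as YUV pixel buffers.
    return @{(NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange),
             (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : @(YES)};
}

- (NSDictionary *)requiredPixelBufferAttributesForRenderContext {
    // Pixel format of the buffers this compositor will output.
    return @{(NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
             (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : @(YES)};
}

- (void)renderContextChanged:(AVVideoCompositionRenderContext *)newRenderContext {
    // Store the new render context if the render size or transform is needed later.
}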

In this protocol, the key is the implementation of the startVideoCompositionRequest: method:

// CustomVideoCompositing.m
- (void)startVideoCompositionRequest:(AVAsynchronousVideoCompositionRequest *)asyncVideoCompositionRequest {
    dispatch_async(self.renderingQueue, ^{
        @autoreleasepool {
            if (self.shouldCancelAllRequests) {
                [asyncVideoCompositionRequest finishCancelledRequest];
            } else {
                CVPixelBufferRef resultPixels = [self newRenderdPixelBufferForRequest:asyncVideoCompositionRequest];
                if (resultPixels) {
                    [asyncVideoCompositionRequest finishWithComposedVideoFrame:resultPixels];
                    CVPixelBufferRelease(resultPixels);
                } else {
                    // print error
                }
            }
        }
    });
}

The newRenderdPixelBufferForRequest: method derives the processed CVPixelBufferRef from the AVAsynchronousVideoCompositionRequest for output. Let's look at its implementation:

// CustomVideoCompositing.m
- (CVPixelBufferRef)newRenderdPixelBufferForRequest:(AVAsynchronousVideoCompositionRequest *)request {
    CustomVideoCompositionInstruction *videoCompositionInstruction = (CustomVideoCompositionInstruction *)request.videoCompositionInstruction;
    NSArray<AVVideoCompositionLayerInstruction *> *layerInstructions = videoCompositionInstruction.layerInstructions;
    CMPersistentTrackID trackID = layerInstructions.firstObject.trackID;
    
    CVPixelBufferRef sourcePixelBuffer = [request sourceFrameByTrackID:trackID];
    CVPixelBufferRef resultPixelBuffer = [videoCompositionInstruction applyPixelBuffer:sourcePixelBuffer];
        
    if (!resultPixelBuffer) {
        CVPixelBufferRef emptyPixelBuffer = [self createEmptyPixelBuffer];
        return emptyPixelBuffer;
    } else {
        return resultPixelBuffer;
    }
}

In this method, we obtain sourcePixelBuffer, the original image of the current frame, from the AVAsynchronousVideoCompositionRequest by trackID.

Then the applyPixelBuffer: method of videoCompositionInstruction is called with sourcePixelBuffer as input, and resultPixelBuffer is obtained after processing. In other words, everything we do to the image happens inside the applyPixelBuffer: method.

Since we already have sourcePixelBuffer, the original image of the current frame, inside newRenderdPixelBufferForRequest:, we could in fact process the image directly in this method.

So why do we need to put the processing operations in CustomVideoCompositionInstruction?

Because the instance of the custom AVVideoCompositing class is created internally by the system at actual rendering time. That means we cannot access the final AVVideoCompositing object, so we cannot dynamically modify its rendering parameters. However, the AVVideoCompositionInstruction object can be obtained from the AVAsynchronousVideoCompositionRequest. So we use a custom AVVideoCompositionInstruction and indirectly modify the rendering parameters at runtime by modifying its properties.

Step 2: Customize the AVVideoCompositionInstruction class

The key to this class is the implementation of the applyPixelBuffer method:

// CustomVideoCompositionInstruction.m
- (CVPixelBufferRef)applyPixelBuffer:(CVPixelBufferRef)pixelBuffer {
    self.filter.pixelBuffer = pixelBuffer;
    CVPixelBufferRef outputPixelBuffer = self.filter.outputPixelBuffer;
    CVPixelBufferRetain(outputPixelBuffer);
    return outputPixelBuffer;
}

Here the OpenGL ES processing details are encapsulated into a filter. The implementation details of this class can be ignored for now; all we need to know is that it accepts the original CVPixelBufferRef and returns the processed CVPixelBufferRef.
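For reference, the interface of this class might look roughly like the sketch below. A custom instruction also has to conform to the AVVideoCompositionInstruction protocol, which requires the timeRange, enablePostProcessing, containsTweening, requiredSourceTrackIDs and passthroughTrackID properties (the exact property attributes shown here are an assumption):

// CustomVideoCompositionInstruction.h
@interface CustomVideoCompositionInstruction : NSObject <AVVideoCompositionInstruction>

// Required by the AVVideoCompositionInstruction protocol.
@property (nonatomic, assign, readonly) CMTimeRange timeRange;
@property (nonatomic, assign, readonly) BOOL enablePostProcessing;
@property (nonatomic, assign, readonly) BOOL containsTweening;
@property (nonatomic, strong, readonly, nullable) NSArray *requiredSourceTrackIDs;
@property (nonatomic, assign, readonly) CMPersistentTrackID passthroughTrackID;

// Used when building the composition (see Step 3).
@property (nonatomic, copy) NSArray<AVVideoCompositionLayerInstruction *> *layerInstructions;

- (instancetype)initWithSourceTrackIDs:(NSArray *)sourceTrackIDs timeRange:(CMTimeRange)timeRange;

// Takes the original pixel buffer and returns the filtered one.
- (CVPixelBufferRef)applyPixelBuffer:(CVPixelBufferRef)pixelBuffer;

@end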

Step 3: Build the AVMutableVideoComposition

The construction code is as follows:

self.videoComposition = [self createVideoCompositionWithAsset:self.asset];
self.videoComposition.customVideoCompositorClass = [CustomVideoCompositing class];

- (AVMutableVideoComposition *)createVideoCompositionWithAsset:(AVAsset *)asset {
    AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset];
    NSArray *instructions = videoComposition.instructions;
    NSMutableArray *newInstructions = [NSMutableArray array];
    for (AVVideoCompositionInstruction *instruction in instructions) {
        NSArray *layerInstructions = instruction.layerInstructions;
        // TrackIDs
        NSMutableArray *trackIDs = [NSMutableArray array];
        for (AVVideoCompositionLayerInstruction *layerInstruction in layerInstructions) {
            [trackIDs addObject:@(layerInstruction.trackID)];
        }
        CustomVideoCompositionInstruction *newInstruction = [[CustomVideoCompositionInstruction alloc] initWithSourceTrackIDs:trackIDs timeRange:instruction.timeRange];
        newInstruction.layerInstructions = instruction.layerInstructions;
        [newInstructions addObject:newInstruction];
    }
    videoComposition.instructions = newInstructions;
    return videoComposition;
}

Constructing the AVMutableVideoComposition mainly does two things.

First, it sets the customVideoCompositorClass property of videoComposition to our custom CustomVideoCompositing class.

Second, it constructs an AVMutableVideoComposition object through the system-provided videoCompositionWithPropertiesOfAsset: method, then replaces its instructions with instances of our custom CustomVideoCompositionInstruction type. (As mentioned in Step 1, the CustomVideoCompositionInstruction object can later be obtained inside CustomVideoCompositing.)

Note: a reference to the CustomVideoCompositionInstruction can be kept, so that rendering parameters can later be changed by modifying its properties.
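For example (a sketch only; pendingInstructions and filterIntensity are hypothetical names used for illustration, not part of the code above):

// While building the composition (inside createVideoCompositionWithAsset:),
// keep each instruction around; pendingInstructions is a hypothetical NSMutableArray property.
[self.pendingInstructions addObject:newInstruction];

// Later, e.g. when a slider value changes; filterIntensity is a hypothetical
// property added to CustomVideoCompositionInstruction.
for (CustomVideoCompositionInstruction *instruction in self.pendingInstructions) {
    instruction.filterIntensity = 0.8;
}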

Step 4: Build AVPlayerItem

With the AVMutableVideoComposition in place, the rest is much simpler.

Just assign one more property, videoComposition, when creating the AVPlayerItem:

self.playerItem = [[AVPlayerItem alloc] initWithAsset:self.asset];
self.playerItem.videoComposition = self.videoComposition;

This strings the whole pipeline together: while AVPlayer is playing, the applyPixelBuffer: method of CustomVideoCompositionInstruction receives the original CVPixelBufferRef of each frame.
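Playback then works exactly as in the AVPlayer review earlier, just with this playerItem; a minimal sketch:

AVPlayer *player = [[AVPlayer alloc] initWithPlayerItem:self.playerItem];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:playerLayer];

// Every frame now passes through CustomVideoCompositing and the filter
// before it reaches the screen.
[player play];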

3. Apply the filter effect

This step is to add the filter effect to CVPixelBufferRef and output the processed CVPixelBufferRef.

There are many ways to do this. Options include: OpenGL ES, CIImage, Metal, GPUImage, etc.

To reuse the GPUImageSmoothToonFilter from earlier, the GPUImage approach is shown here.

The key codes are as follows:

- (CVPixelBufferRef)renderByGPUImage:(CVPixelBufferRef)pixelBuffer {
    CVPixelBufferRetain(pixelBuffer);
    
    __block CVPixelBufferRef output = nil;
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];
        
        // (1)
        GLuint textureID = [self.pixelBufferHelper convertYUVPixelBufferToTexture:pixelBuffer];
        CGSize size = CGSizeMake(CVPixelBufferGetWidth(pixelBuffer),
                                 CVPixelBufferGetHeight(pixelBuffer));
        
        [GPUImageContext setActiveShaderProgram:nil];
        // (2)
        GPUImageTextureInput *textureInput = [[GPUImageTextureInput alloc] initWithTexture:textureID size:size];
        GPUImageSmoothToonFilter *filter = [[GPUImageSmoothToonFilter alloc] init];
        [textureInput addTarget:filter];
        GPUImageTextureOutput *textureOutput = [[GPUImageTextureOutput alloc] init];
        [filter addTarget:textureOutput];
        [textureInput processTextureWithFrameTime:kCMTimeZero];
        
        // (3)
        output = [self.pixelBufferHelper convertTextureToPixelBuffer:textureOutput.texture
                                                         textureSize:size];
        
        [textureOutput doneWithTexture];
        
        glDeleteTextures(1, &textureID);
    });
    CVPixelBufferRelease(pixelBuffer);
    
    return output;
}

(1) The video frames read here are in YUV format, so the YUV CVPixelBufferRef is first converted into an OpenGL texture.

(2) GPUImageTextureInput constructs the start of the filter chain, GPUImageSmoothToonFilter adds the filter effect, and GPUImageTextureOutput constructs the end of the chain, finally outputting an OpenGL texture.

(3) The processed OpenGL texture is converted back into a CVPixelBufferRef.

In addition, since CIImage is easy to use, its usage is also shown briefly.

The key codes are as follows:

- (CVPixelBufferRef)renderByCIImage:(CVPixelBufferRef)pixelBuffer {
    CVPixelBufferRetain(pixelBuffer);
    
    CGSize size = CGSizeMake(CVPixelBufferGetWidth(pixelBuffer),
                             CVPixelBufferGetHeight(pixelBuffer));
    // (1)
    CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
    // (2)
    CIImage *filterImage = [CIImage imageWithColor:[CIColor colorWithRed:255.0 / 255
                                                                   green:245.0 / 255
                                                                    blue:215.0 / 255
                                                                   alpha:0.1]];
    // (3)
    image = [filterImage imageByCompositingOverImage:image];
    
    // (4)
    CVPixelBufferRef output = [self.pixelBufferHelper createPixelBufferWithSize:size];
    [self.context render:image toCVPixelBuffer:output];
    
    CVPixelBufferRelease(pixelBuffer);
    return output;
}

(1) CVPixelBufferRef is converted into CIImage.

(2) Create a CIImage with transparency.

(3) The filter CIImage is composited over the original image using the system method.

(4) The composited CIImage is rendered into a CVPixelBufferRef.

4. Export the processed video

After the video is processed, you eventually want to export it and save it.

The export code is also simple:

self.exportSession = [[AVAssetExportSession alloc] initWithAsset:self.asset presetName:AVAssetExportPresetHighestQuality];
self.exportSession.videoComposition = self.videoComposition;
self.exportSession.outputFileType = AVFileTypeMPEG4;
self.exportSession.outputURL = [NSURL fileURLWithPath:self.exportPath];

[self.exportSession exportAsynchronouslyWithCompletionHandler:^{
    // Save to album
    // ...
}];

The key here is to set videoComposition to the AVMutableVideoComposition object built earlier, then set the output path and file type before starting the export. After the video file has been exported successfully, it can be saved to the photo album.
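Saving to the album is not shown in the snippet above; a minimal sketch using the Photos framework could look like this (assuming photo library permission has already been granted):

#import <Photos/Photos.h>

// Inside the export completion handler:
if (self.exportSession.status == AVAssetExportSessionStatusCompleted) {
    NSURL *outputURL = [NSURL fileURLWithPath:self.exportPath];
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:outputURL];
    } completionHandler:^(BOOL success, NSError * _Nullable error) {
        NSLog(@"Save to album %@", success ? @"succeeded" : @"failed");
    }];
}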

Summary: Although AVFoundation is more cumbersome to use, it is powerful and makes it easy to export the processed video, which makes it the best choice for video processing.

The source code

Check out the full code on GitHub.

For a better reading experience, read the original post on adding filters to videos on iOS on Lyman's Blog.