Part One: Introduction and basic use of the GPUImage framework

1. Introduction to GPUImage

GPUImage is an open-source framework for image and video processing built on OpenGL ES. It provides a large number of filters that can be combined to achieve good effects, and it also makes it easy to implement custom filters on top of the existing ones. For massively parallel operations such as processing images or real-time video frames, the GPU has a significant performance advantage over the CPU. All of GPUImage's filters are implemented as OpenGL shaders, so the filter effects and image processing run on the GPU with relatively high efficiency; on the iPhone 4 and later, real-time, smooth effects can be achieved. It also hides the complexity of Objective-C interaction with the OpenGL ES API. At present, more than 95% of image and video processing apps on the market use GPUImage, so it is necessary to learn its use and principles. GPUImage supports both iOS and Android (there are iOS and Android repositories, and a Swift version is also available). This article mainly introduces the Objective-C version; the functions and principles of the core classes are similar in the Android version. iOS developers can integrate it directly with CocoaPods:

pod 'GPUImage'

Let’s take a look at its basic structure:

From this figure we can see several core classes of GPUImage: GPUImageOutput, GPUImageFilter, the GPUImageInput protocol, and GPUImageFramebuffer. Next we will focus on these classes.

2. Description of core functions

GPUImageOutput

GPUImageOutput is the base class of all filter input sources, which is the starting point of the filter chain.



Explain each of these types:

  • GPUImagePicture

Initialized from an image; essentially converts the UIImage to a CGImageRef and then converts the CGImageRef to a texture.

  • GPUImageVideoCamera: initialized from the camera; essentially wraps AVCaptureVideoDataOutput to obtain a continuous video stream, receiving each frame in the delegate method captureOutput:didOutputSampleBuffer:fromConnection: and converting the CMSampleBufferRef into a texture. GPUImageStillCamera is a subclass of GPUImageVideoCamera that adds still-photo capture.

  • GPUImageUIElement: can be initialized via UIView or CALayer. This class can be used to add text watermarks to videos.

  • GPUImageTextureInput: initialized from an existing texture.

  • GPUImageRawDataInput: initialized from binary data, which is then converted to a texture.

  • GPUImageMovie: initialized from a local video. AVAssetReader reads the video frame by frame, and each frame's data is then converted into a texture.
  • GPUImageFilter: a special case. It inherits from GPUImageOutput and also conforms to GPUImageInput, so it can act as the source of a filter chain and output rendered textures to classes that conform to GPUImageInput. It is the core of the filter system and will be explained separately later.

Core functions and methods:

Imagine what a filter chain source could do:

  1. We need to produce an object to render into, and that object is the GPUImageFramebuffer. Several methods deal with the frameBuffer:
- (GPUImageFramebuffer *)framebufferForOutput;

This method gets the frameBuffer currently being rendered

- (void)removeOutputFramebuffer;

This method is used to remove the currently rendered frameBuffer

- (void)setInputFramebufferForTarget:(id<GPUImageInput>)target atIndex:(NSInteger)inputTextureIndex;

This method is called when the current output has finished rendering and the next receiver needs to be notified that it can begin rendering. The FrameBuffer of the current output is passed to the next input, which then renders using its contents.
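Its implementation is essentially a one-line forwarding call; the sketch below matches the call that is walked through again in the source-code analysis in Part Two:

// Hand this output's framebuffer to one downstream target (a sketch of the forwarding described above).
- (void)setInputFramebufferForTarget:(id<GPUImageInput>)target atIndex:(NSInteger)inputTextureIndex
{
    [target setInputFramebuffer:[self framebufferForOutput] atIndex:inputTextureIndex];
}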

  2. Targets are added and managed to build the entire filter chain.

Since GPUImageOutput is the source of a filter, the corresponding FrameBuffer must have receivers that receive its output. These receivers are targets, and there may be multiple receivers. The main ways to manage these targets:

- (void)addTarget:(id<GPUImageInput>)newTarget;
- (void)addTarget:(id<GPUImageInput>)newTarget atTextureLocation:(NSInteger)textureLocation;

Both addTarget methods add the next object implementing the GPUImageInput protocol to the filter chain. Once added, it is notified whenever the current Output has finished rendering so that it can continue processing.

- (NSArray*)targets;

We can add multiple targets to each Output. This method gets all the targets for the current Output.

- (void)removeTarget:(id<GPUImageInput>)targetToRemove;
- (void)removeAllTargets;

The purpose of these methods is to remove one or all of the targets from the FilterChain. When a target is removed from the FilterChain, it will no longer receive any notification that the current Output has been rendered.
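For reference, a typical chain is assembled with nothing more than these addTarget calls; the variable names below are illustrative:

// Illustrative chain: camera -> filter -> on-screen view.
GPUImageView *filterView = [[GPUImageView alloc] initWithFrame:self.view.bounds];   // display target
GPUImageVideoCamera *camera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
GPUImageSketchFilter *sketchFilter = [[GPUImageSketchFilter alloc] init];

[camera addTarget:sketchFilter];      // camera output feeds the filter
[sketchFilter addTarget:filterView];  // filter output feeds a GPUImageView for display
[camera startCameraCapture];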

  3. Get the result of the current GPUImageOutput's processing from the FrameBuffer
- (CGImageRef)newCGImageFromCurrentlyProcessedOutput;
- (CGImageRef)newCGImageByFilteringCGImage:(CGImageRef)imageToFilter;
- (UIImage *)imageFromCurrentFramebuffer;
- (UIImage *)imageFromCurrentFramebufferWithOrientation:(UIImageOrientation)imageOrientation;
- (UIImage *)imageByFilteringImage:(UIImage *)imageToFilter;
- (CGImageRef)newCGImageByFilteringImage:(UIImage *)imageToFilter;

The most important of these is newCGImageFromCurrentlyProcessedOutput; almost all of the other methods eventually call it. GPUImageOutput only declares this method, however, and does not provide a default implementation. The concrete implementations live in its two important subclasses, GPUImageFilter and GPUImageFilterGroup; in practice, the method that actually runs is the one in GPUImageFilter.
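For a one-off still image, the convenience wrappers above are usually enough. A minimal usage sketch:

// Apply a filter to a UIImage in one call; imageByFilteringImage: builds the
// intermediate GPUImagePicture, renders, and reads the result back for you.
GPUImageGaussianBlurFilter *blur = [[GPUImageGaussianBlurFilter alloc] init];
blur.blurRadiusInPixels = 10;
UIImage *filtered = [blur imageByFilteringImage:[UIImage imageNamed:@"picone.jpg"]];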

The GPUImageInput protocol

GPUImageInput is a protocol that defines the basic functions a receiver must implement in order to receive a FrameBuffer. Classes that implement this protocol can serve as endpoints of the rendering chain. The classes that implement the GPUImageInput protocol:

Explain these classes:

  • GPUImageMovieWriter: wraps AVAssetWriter; it reads the rendered result from the framebuffer frame by frame and finally saves the video file to the specified path via AVAssetWriter.

  • GPUImageView: inherits from UIView and executes the rendering process through the input texture. We usually use it to render the results.

  • GPUImageTextureOutput: fetches the texture object from the input Framebuffer.

  • GPUImageRawDataOutput: its rawBytesForImage property retrieves the binary data of the current input texture.
Core functions and methods:

It can be the endpoint of the filter chain. Its basic responsibilities include:

  • Receive the output information of GPUImageOutput;
  • Receive notification that the previous GPUImageOutput has finished rendering and complete its own processing.
  1. Receiving the output information of GPUImageOutput
- (void)setInputFramebuffer:(GPUImageFramebuffer *)newInputFramebuffer atIndex:(NSInteger)textureIndex;
- (NSInteger)nextAvailableTextureIndex;
- (void)setInputSize:(CGSize)newSize atIndex:(NSInteger)textureIndex;
- (void)setInputRotation:(GPUImageRotationMode)newInputRotation atIndex:(NSInteger)textureIndex;

As you can see from these methods, GPUImageInput can receive the FrameBuffer output by the previous Output, along with the FrameBuffer's size and rotation. The textureIndex parameters exist to support filters that require multiple inputs.

  2. Receiving notification that the GPUImageOutput has finished rendering

- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;

When the previous GPUImageOutput finishes rendering, it notifies all of its targets through this method. Its implementation can be found in GPUImageFilter.

GPUImageFramebuffer

GPUImageFramebuffer provides the medium for data transfer between GPUImageOutput and GPUImageInput; it acts as the link between the different elements throughout the rendering process. Each GPUImageFramebuffer owns its own OpenGL texture. Every GPUImageOutput outputs a GPUImageFramebuffer object, and every GPUImageInput implements setInputFramebuffer:atIndex: to receive the texture produced by the previous Output.

  • Framebuffer fetching is handled by GPUImageFramebufferCache: when a framebuffer is needed it is retrieved from the cache, and when it is no longer in use it is returned to the cache. Creating and destroying framebuffers is expensive, so GPUImage keeps finished framebuffers in the cache to minimize resource consumption. Each lookup uses the input texture size and the TextureOptions as the key into the hash map.
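In code, the fetch/return cycle looks roughly like this inside an output (a sketch; the size here is an example value, and fetchFramebufferForSize:onlyTexture: is the cache method used again later in this article):

// Sketch: fetch a framebuffer of the required size from the shared cache, use it, then release it.
CGSize inputTextureSize = CGSizeMake(1080.0, 1920.0);   // example size of the incoming texture
GPUImageFramebuffer *outputFramebuffer =
    [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:inputTextureSize onlyTexture:NO];
[outputFramebuffer lock];     // hold a reference while rendering into it and while targets read from it
// ... render into outputFramebuffer ...
[outputFramebuffer unlock];   // when the reference count drops to zero, the buffer returns to the cache for reuse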
GPUImageFilter

GPUImageFilter is the core of the entire GPUImage framework, and the more than 100 filter effects built into GPUImage all inherit from this class. Some commonly used filters:

  • GPUImageBrightnessFilter: Brightness adjustment filter
  • GPUImageExposureFilter: Exposure adjustment filter
  • GPUImageContrastFilter: Contrast adjustment filter
  • GPUImageSaturationFilter: Saturation adjustment filter
  • GPUImageWhiteBalanceFilter: White balance adjustment filter
  • GPUImageColorInvertFilter: Reverses the color of the image
  • GPUImageCropFilter: Crop the image to a specific area
  • GPUImageGaussianBlurFilter: Variable radius Gaussian blur
  • GPUImageSketchFilter: Sketch filter
  • GPUImageToonFilter: Cartoon effect
  • GPUImageDissolveBlendFilter: a blend of two images
  • GPUImageFilterPipeline: Chain combination filter

.

Core functions and methods:
  1. GPUImageFilter is a subclass of GPUImageOutput and also implements the GPUImageInput protocol, so it has all the capabilities of both an Input and an Output. It can receive a framebuffer, render it, and pass the result on to the next receiver implementing the GPUImageInput protocol. The specific method calls are explained in the source-code analysis in the next section.

  2. It provides methods to initialize the GLProgram from different vertex shaders and fragment shaders; the overall rendering process is the same for all filters, so it is encapsulated in the base class:

- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
- (id)initWithFragmentShaderFromFile:(NSString *)fragmentShaderFilename;

Here is a quick overview of some OpenGL terms:

  • VertexShader: the vertex shader. OpenGL receives geometry data (vertex attributes and primitives) from the user; after the vertex shader runs, the shape and position of the geometry are determined. The vertex shader is the first programmable stage in the OpenGL rendering pipeline.
  • Rasterization: the process of converting the geometry's positions into the pixel fragments displayed on the screen;
  • FragmentShader: the fragment shader colors the rasterized fragments. It is the last programmable shader stage in the OpenGL rendering pipeline.
  • GLProgram: an object-oriented wrapper around the OpenGL ES program object, covering vertex/fragment shader loading, program linking, and attribute/uniform access and management.

Here are some methods for creating programs based on different shaders.

  3. As a base class, it provides methods that subclasses can override.

The purpose of GPUImageFilter is to receive the source image (FrameBuffer), render a new image with custom vertex and fragment shaders, and notify the next object in the chain when drawing is complete.
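For reference, the default shader pair GPUImageFilter falls back to is essentially a passthrough: the vertex shader forwards the position and texture coordinate, and the fragment shader samples the input texture unchanged. The version below is a sketch of that pair written in the same SHADER_STRING style, not a verbatim copy of the library's constants:

NSString *const kPassthroughVertexShaderString = SHADER_STRING
(
 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;
 varying vec2 textureCoordinate;

 void main()
 {
     gl_Position = position;                         // geometry passes through untouched
     textureCoordinate = inputTextureCoordinate.xy;  // hand the texture coordinate to the fragment shader
 }
);

NSString *const kPassthroughFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;

 void main()
 {
     gl_FragColor = texture2D(inputImageTexture, textureCoordinate);  // sample the input texture unchanged
 }
);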

3. Use of GPUImage filter

Let’s see how it works

(1) Add a filter to the image

Straight to the code:

/** Initialize the image source */
GPUImagePicture *imagePic = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"picone.jpg"]];
/** Create the filter */
GPUImageGaussianBlurFilter *gaussianBlur = [[GPUImageGaussianBlurFilter alloc] init];
gaussianBlur.blurRadiusInPixels = 10;
/** Add the receiver, that is, the target */
[imagePic addTarget:gaussianBlur];
/** Increase the frameBuffer reference count to prevent it from being removed */
[gaussianBlur useNextFrameForImageCapture];
/** Start processing the image */
[imagePic processImage];
/** Get the picture from the frameBuffer */
self.showImageView.image = [gaussianBlur imageFromCurrentFramebuffer];
Process description:
  • Initialize the filter source from the image: GPUImagePicture
  • Initialize the filter effect: GPUImageGaussianBlurFilter
  • Add the receiver (target) to the current filter source: addTarget
  • Call useNextFrameForImageCapture to keep the Framebuffer from being removed. If this method is not called, the Framebuffer will be released, causing a crash when the image is read
  • Export the image from the FrameBuffer rendered by the filter: [gaussianBlur imageFromCurrentFramebuffer]
(2) Adding a filter to a captured video stream

Core code:

- (void)setupCamera {
    // videoCamera
    self.gpuVideoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
    self.gpuVideoCamera.outputImageOrientation = [[UIApplication sharedApplication] statusBarOrientation];
    // GPUImageView fill mode
    self.gpuImageView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;

    GPUImageFilter *clearFilter = [[GPUImageFilter alloc] init];
    [self.gpuVideoCamera addTarget:clearFilter];
    [clearFilter addTarget:self.gpuImageView];

    // Start camera capturing, an encapsulation of AVFoundation's session startRunning
    [self.gpuVideoCamera startCameraCapture];
}

#pragma mark - Action && Notification
- (IBAction)originalBtnDown:(id)sender {
    /** Remove all targets */
    [self.gpuVideoCamera removeAllTargets];

    GPUImageFilter *clearFilter = [[GPUImageFilter alloc] init];
    [self.gpuVideoCamera addTarget:clearFilter];
    [clearFilter addTarget:self.gpuImageView];
}
(3) Using blend filters

Core code:

GPUImageView *filterView = [[GPUImageView alloc] initWithFrame:self.view.frame];
filterView.center = self.view.center;
filterView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;
[self.view addSubview:filterView];

/* Initialize the blend filter */
filter = [[GPUImageDissolveBlendFilter alloc] init];
/* Set the blend ratio */
[(GPUImageDissolveBlendFilter *)filter setMix:0.5];

/* Initialize the video output source */
NSURL *sampleURL = [[NSBundle mainBundle] URLForResource:@"IMG_4278" withExtension:@"MOV"];
movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];
movieFile.runBenchmark = YES;
movieFile.playAtActualSpeed = YES;

/* Initialize the camera output source */
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
unlink([pathToMovie UTF8String]);
NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
// Initialize the receiver
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];

GPUImageFilter *progressFilter = [[GPUImageFilter alloc] init];
[movieFile addTarget:progressFilter];
// Set the output rotation
[progressFilter setInputRotation:kGPUImageRotateRight atIndex:0];
[progressFilter addTarget:filter];
[videoCamera addTarget:filter];

// Set the audio source
movieWriter.shouldPassthroughAudio = YES;
movieFile.audioEncodingTarget = movieWriter;
[movieFile enableSynchronizedEncodingUsingMovieWriter:movieWriter];

[filter addTarget:filterView];
// Add the writer as a receiver
[filter addTarget:movieWriter];

[videoCamera startCameraCapture];
[movieWriter startRecording];
[movieFile startProcessing];

/* Save the video after writing completes */
__weak typeof(self) weakSelf = self;
[movieWriter setCompletionBlock:^{
    __strong typeof(self) strongSelf = weakSelf;
    [strongSelf->filter removeTarget:strongSelf->movieWriter];
    [strongSelf->movieWriter finishRecording];
    /* Save the video from movieURL to the local library */
    // ...
}];
Process description:
  • The core of the blend is GPUImageDissolveBlendFilter; it inherits from GPUImageTwoInputFilter and requires two input sources
  • Initialize the two input sources, GPUImageVideoCamera and GPUImageMovie
  • Add both input sources to the DissolveBlendFilter
  • Add the output data source GPUImageMovieWriter as a target of the filter
(4) Adding a watermark to a video

Core code:

GPUImageView *filterView = [[GPUImageView alloc] initWithFrame:self.view.frame];
self.view = filterView;

// Initialize the blend filter
filter = [[GPUImageDissolveBlendFilter alloc] init];
// Blend ratio
[(GPUImageDissolveBlendFilter *)filter setMix:0.5];

// Movie source
NSURL *sampleURL = [[NSBundle mainBundle] URLForResource:@"IMG_4278" withExtension:@"MOV"];
AVAsset *asset = [AVAsset assetWithURL:sampleURL];
CGSize size = self.view.bounds.size;
movieFile = [[GPUImageMovie alloc] initWithAsset:asset];
movieFile.runBenchmark = YES;
movieFile.playAtActualSpeed = YES;

// Watermark contents: a label and an image inside a transparent container view
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(100, 100, 100, 100)];
label.text = @"I am a watermark";
label.font = [UIFont systemFontOfSize:30];
label.textColor = [UIColor redColor];
[label sizeToFit];
UIImage *image = [UIImage imageNamed:@"watermark.png"];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
UIView *subView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, size.width, size.height)];
subView.backgroundColor = [UIColor clearColor];
imageView.center = CGPointMake(subView.bounds.size.width / 2, subView.bounds.size.height / 2);
[subView addSubview:imageView];
[subView addSubview:label];
GPUImageUIElement *uielement = [[GPUImageUIElement alloc] initWithView:subView];
// A GPUImageTransformFilter could be used here to animate the watermark

NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
unlink([pathToMovie UTF8String]);
NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
// Initialize the receiver
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];

GPUImageFilter *progressFilter = [[GPUImageFilter alloc] init];
[movieFile addTarget:progressFilter];
// Set the rotation
[progressFilter setInputRotation:kGPUImageRotateRight atIndex:0];
[progressFilter addTarget:filter];
[uielement addTarget:filter];

movieWriter.shouldPassthroughAudio = YES;
movieFile.audioEncodingTarget = movieWriter;
[movieFile enableSynchronizedEncodingUsingMovieWriter:movieWriter];

[filter addTarget:filterView];
[filter addTarget:movieWriter];

// Start recording
[movieWriter startRecording];
[movieFile startProcessing];

__weak typeof(self) weakSelf = self;
// Called after each frame is processed, roughly 30 times per second
[progressFilter setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime time){
    CGRect frame = imageView.frame;
    frame.origin.x += 1;
    frame.origin.y += 1;
    imageView.frame = frame;
    // Re-render the UIElement with the new timestamp
    [uielement updateWithTimestamp:time];
}];
[movieWriter setCompletionBlock:^{
    __strong typeof(self) strongSelf = weakSelf;
    [strongSelf->filter removeTarget:strongSelf->movieWriter];
    [strongSelf->movieWriter finishRecording];
    /* Save the video from movieURL to the local library */
    // ...
}];
Process description:
  • The core of the blend is GPUImageDissolveBlendFilter; it inherits from GPUImageTwoInputFilter and requires two input sources
  • Initialize the two input sources, GPUImageMovie and GPUImageUIElement
  • Everything else is the same as above
(5) Using a filter group

Core code:

// Create the camera preview view
GPUImageView *filterView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
// Fill mode: fill the whole frame
filterView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;
[self.view addSubview:filterView];

// Initialize the filter source
self.stillCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPresetPhoto cameraPosition:AVCaptureDevicePositionBack];
// Output image rotation
self.stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

// Color invert filter
GPUImageColorInvertFilter *filter1 = [[GPUImageColorInvertFilter alloc] init];
// Emboss filter
GPUImageEmbossFilter *filter2 = [[GPUImageEmbossFilter alloc] init];
//GPUImageToonFilter *filter3 = [[GPUImageToonFilter alloc] init];

GPUImageFilterGroup *groupFilter = [[GPUImageFilterGroup alloc] init];
[groupFilter addFilter:filter1];
[groupFilter addFilter:filter2];
//[groupFilter addFilter:filter3];
[filter1 addTarget:filter2];
//[filter2 addTarget:filter3];

// Keep a reference to the last filter in the chain; it is used later when saving the photo.
self.lastFilter = filter2;
// Set the first filter of the group
groupFilter.initialFilters = @[filter1];
// Set the terminal filter of the group
groupFilter.terminalFilter = filter2;

[self.stillCamera addTarget:groupFilter];
[groupFilter addTarget:filterView];

// To resolve the first-frame black screen caused by the audio buffer being written before the video buffer
[self.stillCamera addAudioInputsAndOutputs];
[self.view bringSubviewToFront:self.catchBtn];

dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.1 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    // Start capturing
    [self.stillCamera startCameraCapture];
});
Process description:
  • The core here is the use of GPUImageFilterGroup
  • Initialize multiple filters and add them to the filter group
  • Set the first and last filters of the group
  • Output the result

Part Two: GPUImage underlying source code analysis

1. Analysis of filter chain loading process

Through the above Demo example, we can analyze the flow of the filter chain:

Next, we take the example of adding a filter to an image to analyze the filter method call process of GPUImage:

  • Initialize the filter source GPUImagePicture from the image by calling:
- (id)initWithImage:(UIImage *)newImageSource;

Inside this method, the following is called:

outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:pixelSizeToUseForTexture onlyTexture:YES];

The main function of this method is to get a FrameBuffer from GPUImageFramebufferCache based on the size of the image

  • Initialize the filter: the filter is initialized with its own vertex shader and fragment shader, and the OpenGL ES program object GLProgram is created
  • Add a target to the filter source: - (void)addTarget:(id<GPUImageInput>)newTarget;. The following is called inside this method:

[self setInputFramebufferForTarget:newTarget atIndex:textureLocation]; which in turn calls [target setInputFramebuffer:[self framebufferForOutput] atIndex:inputTextureIndex];. The purpose of this call is to pass the current Output's Framebuffer to the receiver.

  • - (void)useNextFrameForImageCapture; sets the member variable usingNextFrameForImageCapture = YES, indicating that the output will be used to capture an image. As a result, in the core rendering method
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;

the outputFramebuffer is locked, because by default the FrameBuffer is released once the next input finishes rendering. If you want to capture the current filter's output as an image, the FrameBuffer must be kept alive.
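The lock/unlock calls implement simple reference counting on the framebuffer. A hedged sketch of the idea (not the exact GPUImageFramebuffer source; framebufferReferenceCount stands in for its internal counter):

- (void)lock
{
    framebufferReferenceCount++;   // one more consumer is using this framebuffer
}

- (void)unlock
{
    NSAssert(framebufferReferenceCount > 0, @"Tried to overrelease a framebuffer");
    framebufferReferenceCount--;
    if (framebufferReferenceCount < 1)
    {
        // Nobody needs it any more: hand it back to the cache so the next filter can reuse it.
        [[GPUImageContext sharedFramebufferCache] returnFramebufferToCache:self];
    }
}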

  • Next, call [imagePic processImage];, which starts the filter processing flow and in turn calls - (BOOL)processImageWithCompletionHandler:(void (^)(void))completion;. Inside this method, two methods are called on each target to render and pass down the OutputFrameBuffer:
 [currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget];
 [currentTarget newFrameReadyAtTime:kCMTimeIndefinite atIndex:textureIndexOfTarget];

The first method takes the Framebuffer passed from the previous Output and locks it. The second method renders with the target's own GLProgram and then calls - (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime; to pass the render result to the next receiver implementing the GPUImageInput protocol.
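A condensed sketch of that notification step (simplified; the real implementation also records a per-target texture index when the target is added, and handles a few special cases):

- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime
{
    for (id<GPUImageInput> currentTarget in self.targets)
    {
        NSInteger textureIndex = [currentTarget nextAvailableTextureIndex];
        // 1. Hand the freshly rendered framebuffer and its size to the target ...
        [self setInputFramebufferForTarget:currentTarget atIndex:textureIndex];
        [currentTarget setInputSize:[self outputFrameSize] atIndex:textureIndex];
        // 2. ... then tell it a new frame is ready so it can render itself.
        [currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndex];
    }

    // If useNextFrameForImageCapture was called, release the semaphore that
    // newCGImageFromCurrentlyProcessedOutput is waiting on (see the next step).
    if (usingNextFrameForImageCapture)
    {
        dispatch_semaphore_signal(imageCaptureSemaphore);
    }
}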

  • [gaussianBlur imageFromCurrentFramebuffer];: get the picture from the Framebuffer. The - (CGImageRef)newCGImageFromCurrentlyProcessedOutput method completes the image acquisition and releases a GCD semaphore:
if (dispatch_semaphore_wait(imageCaptureSemaphore, convertedTimeout) != 0)
{
    return NULL;
}

The role of the semaphore here is to wait for the rendering to complete. After that, go through the following process to get the image. The entire method call flow can be seen in the following image:

2. Analysis of filter rendering process

Rendering is the core of the entire GPUImageFilter. After the OpenGL ES program is created and successfully linked in the initialization method, we can use this program to render. The whole rendering process happens in - (void)renderToTextureWithVertices:textureCoordinates:. We walk through this method to understand the OpenGL ES rendering process:

  • [GPUImageContext setActiveShaderProgram:filterProgram]; sets the program's context to the shared context and activates it. This calls the GPUImageContext method:
+ (void)setActiveShaderProgram:(GLProgram *)shaderProgram;
{
    GPUImageContext *sharedContext = [GPUImageContext sharedImageProcessingContext];
    [sharedContext setContextShaderProgram:shaderProgram];
}
  • Get a GPUImageFramebuffer to render into. The framebuffer is fetched from GPUImageFramebufferCache based on the input texture size (inputTextureSize) and the texture options (outputTextureOptions). If a matching framebuffer already exists in the cache it is returned; otherwise a new one is created.
  • Based on usingNextFrameForImageCapture, determine whether the current Framebuffer will be used to capture an image, and lock it if so:
if (usingNextFrameForImageCapture)
{
    // Lock this outputFramebuffer so it is not released.
    [outputFramebuffer lock];
}
  • Clear the entire FrameBuffer with the backgroundColor:
glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
glClear(GL_COLOR_BUFFER_BIT);
  • Bind the FrameBuffer passed from the previous Output as the input texture:
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, [firstInputFramebuffer texture]);
glUniform1i(filterInputTextureUniform, 2);
  • Pass the vertex positions and vertex texture coordinates to the GPU as attributes:
glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);
  • Render:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
  • Finally, the FrameBuffer passed from the previous GPUImageOutput has served its purpose, so unlock it:
[firstInputFramebuffer unlock];

The entire rendering process is complete.
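Putting these steps together, the body of the render method reads roughly like the sketch below (simplified; the image-capture lock and texture-options handling are omitted):

- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates
{
    [GPUImageContext setActiveShaderProgram:filterProgram];           // activate this filter's program

    outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:[self sizeOfFBO] onlyTexture:NO];
    [outputFramebuffer activateFramebuffer];                          // render into our own framebuffer

    glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
    glClear(GL_COLOR_BUFFER_BIT);                                     // clear with the background color

    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, [firstInputFramebuffer texture]);    // the previous output becomes our input texture
    glUniform1i(filterInputTextureUniform, 2);

    glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
    glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);                            // draw the full-screen quad

    [firstInputFramebuffer unlock];                                   // the upstream framebuffer is no longer needed
}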

3. Custom filters

1. How to load a custom filter

As we learned above, a filter's effect is really determined by its vertex shader and fragment shader, so a custom filter is essentially a custom shader. There are two ways to load custom filters:

  • Create a custom filter class that inherits from GPUImageFilter, then load the shader code as a string constant. For example:
NSString *const kGPUImageBrightnessFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;
 uniform lowp float brightness;
 
 void main()
 {
     lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     gl_FragColor = vec4((textureColor.rgb + vec3(brightness)), textureColor.w);
 }
);

It is then loaded using one of the initialization methods provided by GPUImageFilter (a complete sketch of this approach is given after the two loading options below):

- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
- (id)initWithFragmentShaderFromFile:(NSString *)fragmentShaderFilename;
  • Another way: if only the FragmentShader is customized, the shader code can be put into a file with the .fsh extension and then loaded by calling the following method:
- (id)initWithFragmentShaderFromFile:(NSString *)fragmentShaderFilename;
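Putting the first approach together end to end, a minimal custom brightness filter built on the fragment shader above might look like the sketch below; it uses GPUImageFilter's setFloat:forUniformName: convenience to push the uniform, and should be treated as a sketch rather than the library's own GPUImageBrightnessFilter source:

@interface MyBrightnessFilter : GPUImageFilter
@property (nonatomic, assign) CGFloat brightness;   // 0.0 leaves the image unchanged
@end

@implementation MyBrightnessFilter

- (id)init
{
    // Load the custom fragment shader shown above; the default passthrough vertex shader is reused.
    if (!(self = [super initWithFragmentShaderFromString:kGPUImageBrightnessFragmentShaderString]))
    {
        return nil;
    }
    self.brightness = 0.0;
    return self;
}

- (void)setBrightness:(CGFloat)brightness
{
    _brightness = brightness;
    // Push the new value to the "brightness" uniform declared in the fragment shader.
    [self setFloat:(GLfloat)brightness forUniformName:@"brightness"];
}

@end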

2. Some special custom filters



Some special filter effects, such as TikTok-style effects (flash white, soul out-of-body, shake, zoom, glitch, vertigo, and so on), can be found on my GitHub. Writing custom filters requires a basic knowledge of OpenGL ES, some familiarity with linear algebra and algorithms, and the GLSL shading language; for further study, please refer to the official OpenGL ES GLSL quick reference. We do not cover that in this article.
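As a hedged illustration (this is not taken from the repository mentioned above), a "flash white" style effect can be written as a fragment shader that mixes the input color toward white by a time-driven amount; the uniform name and mixing curve here are assumptions:

NSString *const kFlashWhiteFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;
 uniform lowp float whiteAmount;   // 0.0 = original frame, 1.0 = fully white; updated per frame by the filter

 void main()
 {
     lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     gl_FragColor = vec4(mix(textureColor.rgb, vec3(1.0), whiteAmount), textureColor.a);
 }
);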

4. Summary

This article mainly introduced the use of GPUImage, the filter-chain loading process, and the rendering logic. Some modules were not covered, such as the creation and linking of GLProgram, the GPUImageMovieComposition video editing module, and the details of the custom filter process; interested readers will need to explore these on their own.

1. Further reading

The OpenGL Shading Language: an introduction to GLSL and its built-in functions.

2. Some references

Github.com/BradLarson/…
www.khronos.org/opengles/sd…
www.jianshu.com/u/8367278ff…