AVAssetWriter introduction

Media samples can be re-encoded with AVAssetWriter. An AVAssetWriter writes to exactly one output file, so each new file requires a new AVAssetWriter instance.

AVAssetWriter initialization

Initialize AVAssetWriter with the output file URL and specify the file type.

NSError *error = nil;
_mAssetWriter = [[AVAssetWriter alloc] initWithURL:videoUrl fileType:AVFileTypeAppleM4V error:&error];
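
For context, here is a minimal sketch of the setup around this call; the output path is an assumption for illustration. Note that AVAssetWriter cannot overwrite an existing file, so any stale output should be removed first:

// Hypothetical output location, for illustration only
NSString *outputPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"output.m4v"];
NSURL *videoUrl = [NSURL fileURLWithPath:outputPath];
// AVAssetWriter cannot overwrite an existing file, so remove any stale output first
[[NSFileManager defaultManager] removeItemAtURL:videoUrl error:nil];

NSError *error = nil;
_mAssetWriter = [[AVAssetWriter alloc] initWithURL:videoUrl fileType:AVFileTypeAppleM4V error:&error];
if (!_mAssetWriter || error) {
    NSLog(@"Failed to create AVAssetWriter: %@", error);
}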

AVAssetWriter input setup

Before writing, you need to configure an input. Just as AVAssetReader has outputs, an AVAssetWriterInput can be created for AVMediaTypeAudio or AVMediaTypeVideo; the following uses AVMediaTypeVideo as an example. When creating the input you can specify output settings, which mainly contain video parameters. The values under AVVideoCompressionPropertiesKey control the encoder, for example:

  1. AVVideoAverageBitRateKey: the average bit rate, expressed as video width × height × a quality ratio; a ratio of 10.1 is roughly equivalent to AVCaptureSessionPresetHigh. The larger the value, the finer the picture (H.264 only).
  2. AVVideoMaxKeyFrameIntervalKey: the maximum key-frame interval. A value of 1 makes every frame a key frame; the larger the value, the higher the compression ratio (H.264 only).
  3. AVVideoProfileLevelKey: the picture quality level, whose availability depends on the device:
  • BP (Baseline Profile): basic quality. Supports I/P frames, and only progressive scan and CAVLC;
  • EP (Extended Profile): advanced quality. Supports I/P/B/SP/SI frames, and only progressive scan and CAVLC;
  • MP (Main Profile): mainstream quality. Provides I/P/B frames, supports both progressive and interlaced scan, and both CAVLC and CABAC;
  • HP (High Profile): advanced quality. Adds 8×8 intra prediction, custom quantization, lossless video coding, and more YUV formats on top of Main Profile.

AVVideoCodecKey: the video codec, set here to H.264. AVVideoWidthKey and AVVideoHeightKey: the video width and height. More settings are described in the documentation: Video Settings | Apple Developer Documentation.

NSDictionary *codec_settings = @{AVVideoAverageBitRateKey: @(_bitRate)};
NSDictionary *video_settings = @{AVVideoCodecKey: AVVideoCodecH264,
                                 AVVideoCompressionPropertiesKey: codec_settings,
                                 AVVideoWidthKey: @(1920),
                                 AVVideoHeightKey: @(1080)};
_mAssetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:video_settings];
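
To exercise the compression keys described above, codec_settings can carry additional entries. A hedged sketch; the profile level and key-frame interval values here are illustrative assumptions, not part of the original sample:

// Illustrative values only: tune the profile and key-frame interval for your use case
NSDictionary *codec_settings = @{AVVideoAverageBitRateKey: @(_bitRate),
                                 AVVideoProfileLevelKey: AVVideoProfileLevelH264HighAutoLevel,
                                 AVVideoMaxKeyFrameIntervalKey: @(30)};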

An AVAssetWriterInputPixelBufferAdaptor can also be attached to the AVAssetWriterInput to receive CVPixelBuffers. The adaptor provides a CVPixelBufferPoolRef that you can use to allocate pixel buffers for writing to the output file. As the documentation notes, allocating buffers from this supplied pool is generally more efficient than appending pixel buffers allocated with a separate pool. Related attributes can be set at initialization, such as the CVPixelBuffer color format and the CPU/GPU memory-sharing mode. A CVPixelBuffer can then be created from the adaptor's buffer pool, and a CVOpenGLESTextureCacheRef provides a dedicated cache of texture buffers, so each time pixel data is uploaded to a GPU texture the cached memory is reused directly, avoiding repeated allocation and improving efficiency.

NSMutableDictionary *attributes = [NSMutableDictionary dictionary];
attributes[(NSString *)kCVPixelBufferPixelFormatTypeKey] = @(kCVPixelFormatType_32BGRA);
NSDictionary *IOSurface_properties = @{@"IOSurfaceOpenGLESFBOCompatibility": @YES,
                                       @"IOSurfaceOpenGLESTextureCompatibility": @YES};
attributes[(NSString *)kCVPixelBufferIOSurfacePropertiesKey] = IOSurface_properties;
_mAssetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:_mAssetWriterInput
                                                                                                 sourcePixelBufferAttributes:attributes];

CVPixelBufferRef renderTarget;
CVOpenGLESTextureCacheRef videoTextureCache = NULL;
CVReturn err;
if (videoTextureCache == NULL) {
    err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, [EAGLContext currentContext], NULL, &videoTextureCache);
    if (err) {
        // error handling
    }
}
err = CVPixelBufferPoolCreatePixelBuffer(NULL, [_mAssetWriterPixelBufferInput pixelBufferPool], &renderTarget);
if (err) {
    // error handling
}
// Attach additional information to the CVPixelBuffer for the color-format conversion
CVBufferSetAttachment(renderTarget, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, kCVAttachmentMode_ShouldPropagate);
CVBufferSetAttachment(renderTarget, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_601_4, kCVAttachmentMode_ShouldPropagate);
CVBufferSetAttachment(renderTarget, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, kCVAttachmentMode_ShouldPropagate);

Next, create an OpenGL texture from the CVPixelBuffer. The pixel memory of renderTarget is shared with OpenGL, so whatever is drawn into that texture can then be encoded into the file.

  CVOpenGLESTextureRef renderTexture;
  err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                            videoTextureCache,
                            renderTarget,
                            NULL,
                            GL_TEXTURE_2D,
                            GL_RGBA,
                            1920,
                            1080,
                            GL_BGRA,
                            GL_UNSIGNED_BYTE,
                            0,
                            &renderTexture);
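
Once the texture wrapper exists, it is typically bound and attached to a framebuffer object so that OpenGL renders straight into renderTarget's memory. A minimal sketch, assuming a framebuffer named framebufferID has already been generated (an assumption, not shown in the original code):

// Bind the pixel-buffer-backed texture and set its sampling parameters
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Attach the texture to the FBO so subsequent draws land in renderTarget
glBindFramebuffer(GL_FRAMEBUFFER, framebufferID);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);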

Before writing begins, add the input to the writer, then call the startWriting method and start a session.

if ([_mAssetWriter canAddInput:_mAssetWriterInput]) {
    [_mAssetWriter addInput:_mAssetWriterInput];
}
[_mAssetWriter startWriting];
[_mAssetWriter startSessionAtSourceTime:kCMTimeZero];
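
Note that startWriting returns a BOOL; when it fails, the writer transitions to AVAssetWriterStatusFailed and its error property explains why. A hedged variant of the call above with basic checking:

if (![_mAssetWriter startWriting]) {
    // The writer is now in AVAssetWriterStatusFailed
    NSLog(@"startWriting failed: %@", _mAssetWriter.error);
} else {
    [_mAssetWriter startSessionAtSourceTime:kCMTimeZero];
}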

Writing data

The sample buffers read by AVAssetReader serve as the input source for writing. There are many error cases to handle, so pay close attention to the reader's and writer's status.

Code sample

// Check whether the input is ready to accept more data
while (_mAssetWriterInput.isReadyForMoreMediaData) {
    CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
    if (sampleBuffer) {
        BOOL error = NO;
        if (_reader.status != AVAssetReaderStatusReading || _writer.status != AVAssetWriterStatusWriting) {
            error = YES;
        }
        if (_videoOutput == output) {
            // Update the video progress
            _lastSamplePresentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
            if (![_mAssetWriterPixelBufferInput appendPixelBuffer:pixelBuffer
                                             withPresentationTime:_lastSamplePresentationTime]) {
                error = YES;
            }
            dispatch_async(dispatch_get_main_queue(), ^{
                _progress(CMTimeGetSeconds(_lastSamplePresentationTime) / _duration * 0.8);
            });
        }
        // copyNextSampleBuffer follows the Copy rule, so release the sample buffer
        CFRelease(sampleBuffer);
        if (error) {
            return NO;
        }
    } else {
        // No more samples to read: mark the input as finished
        [_mAssetWriterInput markAsFinished];
        return NO;
    }
}
return YES;
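
In practice this loop is usually driven by requestMediaDataWhenReadyOnQueue:usingBlock:, and the file must be closed with finishWritingWithCompletionHandler: once every input is finished. A minimal sketch of that scaffolding; the queue label and the appendSamplesFromOutput: wrapper (standing in for the loop above) are assumptions, not part of the original code:

dispatch_queue_t writer_queue = dispatch_queue_create("writer.queue", DISPATCH_QUEUE_SERIAL);
[_mAssetWriterInput requestMediaDataWhenReadyOnQueue:writer_queue usingBlock:^{
    // Hypothetical wrapper around the sample-copying loop shown above;
    // it returns NO once the reader runs dry or an error occurs.
    if (![self appendSamplesFromOutput:_videoOutput]) {
        [_mAssetWriter finishWritingWithCompletionHandler:^{
            if (_mAssetWriter.status == AVAssetWriterStatusCompleted) {
                // The output file is fully written
            } else {
                NSLog(@"Writing failed: %@", _mAssetWriter.error);
            }
        }];
    }
}];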