CDN mixed stream: cropping a CMSampleBufferRef

A current business scenario requires cropping the TRTC CDN mixed video stream and rendering it ourselves with OpenGL.

First, a look at the mixed stream

In online education there is a "raise your hand to come on stage" scenario. For rendering the video streams of this on-stage interaction there are two options:

  1. Pull the TRTC real-time audio/video stream directly: lower latency and better picture quality, but higher cost

  2. Use the CDN mixed stream: students who interact on stage still connect to the TRTC stream, while everyone else pulls the CDN mixed stream, crops it, and renders it with a custom renderer

    CDN mixed-stream playback latency is slightly higher than TRTC, picture quality remains high, and the cost is comparatively low

Capture session

In what follows, a camera capture source stands in for the CDN mixed-stream source.

Preparation

  1. Import AVFoundation

    #import <AVFoundation/AVFoundation.h>
  2. Configure device permissions.

        Privacy - Camera Usage Description  
        Privacy - Microphone Usage Description
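
    Besides the two Info.plist keys above, it is worth checking the camera authorization state at runtime before building the session. A minimal sketch (an assumption, not part of the original post) using the standard AVCaptureDevice authorization API:

        // Hedged sketch: check, and if necessary request, camera access before configuring the session.
        AVAuthorizationStatus authStatus = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
        if (authStatus == AVAuthorizationStatusNotDetermined) {
            [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
                if (granted) {
                    // Safe to build and start the AVCaptureSession here (on your own queue).
                }
            }];
        } else if (authStatus != AVAuthorizationStatusAuthorized) {
            NSLog(@"Camera access denied or restricted; capture will not start");
        }
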
  1. Initialize the session management object AVCaptureSession

    self.session = [[AVCaptureSession alloc] init];
    // Set the capture resolution
    if ([self.session canSetSessionPreset:AVCaptureSessionPresetHigh]) {
        [self.session setSessionPreset:AVCaptureSessionPresetHigh];
    } else {
        [self.session setSessionPreset:AVCaptureSessionPreset1280x720];
    }
  2. Configure audio and video input and output objects

    [self.session beginConfiguration];
    // Configure the video input/output objects and add them to the session
    [self videoInputAndOutput];
    // Configure the audio input/output objects and add them to the session
    [self audioInputAndOutput];
    [self.session commitConfiguration];

    Note: when configuring an AVCaptureSession, wrap the changes between beginConfiguration and commitConfiguration; otherwise the configuration does not take effect. The same rule applies to later reconfiguration, as sketched below.
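
    A small sketch (an assumption, not from the post) of a later runtime change wrapped in the same begin/commit pair, here lowering the preset while the session is running:

    [self.session beginConfiguration];
    if ([self.session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        [self.session setSessionPreset:AVCaptureSessionPreset1280x720];
    }
    [self.session commitConfiguration];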

  3. Video Object Configuration

    - (void)videoInputAndOutput {
        // Reset the video device object
        self.videoDevice = nil;

        // Get the video capture devices (front and rear cameras, hence an array)
        AVCaptureDeviceDiscoverySession *disSession =
            [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                                   mediaType:AVMediaTypeVideo
                                                                    position:AVCaptureDevicePositionUnspecified];
        NSArray *videoDevices = disSession.devices;
        for (AVCaptureDevice *device in videoDevices) {
            // Use the front camera by default
            if (device.position == AVCaptureDevicePositionFront) {
                self.videoDevice = device;
            }
        }

        // Video input: initialize the input object with the selected device
        NSError *error;
        self.videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.videoDevice error:&error];
        if (error) {
            NSLog(@"== Camera error %@", error);
            return;
        }
        // Add the input object to the AVCaptureSession; always check whether it can be added first
        if ([self.session canAddInput:self.videoInput]) {
            [self.session addInput:self.videoInput];
        }

        // Video output
        self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
        // Whether frames that arrive late may be discarded
        self.videoOutput.alwaysDiscardsLateVideoFrames = NO;

        if ([self supportsFastTextureUpload]) {
            // YUV (YCbCr) is the color space most video uses; it separates luma and chroma.
            // Check whether the output supports full-range YUV.
            BOOL supportFullYUVRange = NO;
            NSArray *supportedPixelFormats = self.videoOutput.availableVideoCVPixelFormatTypes;
            for (NSNumber *currentPixelFormat in supportedPixelFormats) {
                if ([currentPixelFormat integerValue] == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
                    supportFullYUVRange = YES;
                }
            }
            if (supportFullYUVRange) {
                [self.videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
                                                                               forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
            } else {
                [self.videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
                                                                               forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
            }
        } else {
            [self.videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                                           forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
        }

        dispatch_queue_t videoQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
        // Set the sample buffer delegate
        [self.videoOutput setSampleBufferDelegate:self queue:videoQueue];
        // Check whether the video output object can be added to the session
        if ([self.session canAddOutput:self.videoOutput]) {
            [self.session addOutput:self.videoOutput];
            // Link the video input and output objects
            [self connectionVideoInputVideoOutput];
        }
    }

    Note: a capture device cannot be added to the session directly; it must first be wrapped in an AVCaptureDeviceInput, as the camera-switching sketch below also shows.
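
    For example, switching cameras means wrapping the newly selected device in a fresh AVCaptureDeviceInput before adding it. A hedged sketch (the helper name and reuse of the videoInput property are assumptions based on the code above):

    // Hypothetical helper: swap the active camera. The AVCaptureDevice itself is
    // never added to the session; it must be wrapped in an AVCaptureDeviceInput.
    - (void)switchToDevice:(AVCaptureDevice *)newDevice {
        NSError *error = nil;
        AVCaptureDeviceInput *newInput = [AVCaptureDeviceInput deviceInputWithDevice:newDevice error:&error];
        if (!newInput) {
            NSLog(@"Failed to create device input: %@", error);
            return;
        }
        [self.session beginConfiguration];
        [self.session removeInput:self.videoInput];
        if ([self.session canAddInput:newInput]) {
            [self.session addInput:newInput];
            self.videoInput = newInput;
        } else {
            [self.session addInput:self.videoInput]; // roll back to the previous input
        }
        [self.session commitConfiguration];
    }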

  4. Audio Object Configuration

    - (void)audioInputAndOutput {
        // Get the default audio capture device
        self.audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];

        // Audio input object
        NSError *error;
        self.audioInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.audioDevice error:&error];
        if (error) {
            NSLog(@"== Recording device error %@", error);
        }
        // Check whether the audio input can be added to the session
        if ([self.session canAddInput:self.audioInput]) {
            [self.session addInput:self.audioInput];
        }

        // Audio output object
        self.audioOutput = [[AVCaptureAudioDataOutput alloc] init];
        // Check whether the audio output can be added to the session
        if ([self.session canAddOutput:self.audioOutput]) {
            [self.session addOutput:self.audioOutput];
        }
    }
  5. Set up the connection management object AVCaptureConnection

    AVCaptureConnection *captureConnection = [self.videoOutput connectionWithMediaType:AVMediaTypeVideo];
    self.captureConnection = captureConnection;
    // Set the video orientation if needed
    // [captureConnection setVideoOrientation:AVCaptureVideoOrientationPortraitUpsideDown];
    captureConnection.videoScaleAndCropFactor = captureConnection.videoMaxScaleAndCropFactor;
    // Enable video stabilization if it is supported
    if ([captureConnection isVideoStabilizationSupported]) {
        captureConnection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
    }
  6. Add a preview layer AVCaptureVideoPreviewLayer

        AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
        previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.width/2, self.view.frame.size.height);
        [self.view.layer  addSublayer:previewLayer];
  7. Start the capture session

     [self.session startRunning];
  8. Receive the capture output

    #pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate
    // Delivers each captured frame. Note: the AVCaptureSession must be strongly
    // referenced, otherwise this callback is never invoked.
    - (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
        // Get the pixel buffer of the frame from the sample buffer
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        size_t width  = CVPixelBufferGetWidth(pixelBuffer);  // frame size, if needed
        size_t height = CVPixelBufferGetHeight(pixelBuffer);

        // Crop the sample buffer
        CMSampleBufferRef cropSampleBuffer;
        CFRetain(sampleBuffer);
        cropSampleBuffer = [self cropSampleBufferByHardware:sampleBuffer];

        dispatch_async(dispatch_get_main_queue(), ^{
            CVPixelBufferRef cropPixelBuffer = CMSampleBufferGetImageBuffer(cropSampleBuffer);
            [self.mGLView displayPixelBuffer:cropPixelBuffer];
            // Release only after rendering has used the buffers
            CFRelease(cropSampleBuffer);
            CFRelease(sampleBuffer);
        });
    }

CMSampleBuffer cropping

  1. Define the crop rect

    int _cropX = 0;
    int _cropY = 0;
    CGFloat g_width_size  = 1080 / 4; // 1280;
    CGFloat g_height_size = 1920 / 4; // 720;
    CGRect cropRect = CGRectMake(_cropX, _cropY, g_width_size, g_height_size);
  2. Create CVPixelBuffer

        OSStatus status;
        
        /* pixbuffer and videoInfo only need to be recreated when the resolution changes, which keeps the per-frame work down */
        static CVPixelBufferRef            pixbuffer = NULL;
        static CMVideoFormatDescriptionRef videoInfo = NULL;
        
        if (pixbuffer == NULL) {
            
            CFDictionaryRef empty; // empty value for attr value.
            CFMutableDictionaryRef attrs;
            empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                                       NULL,
                                       NULL,
                                       0,
                                       &kCFTypeDictionaryKeyCallBacks,
                                       &kCFTypeDictionaryValueCallBacks);
            attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                              1,
                                              &kCFTypeDictionaryKeyCallBacks,
                                              &kCFTypeDictionaryValueCallBacks);

            CFDictionarySetValue(attrs,
                                 kCVPixelBufferIOSurfacePropertiesKey,
                                 empty);
            status = CVPixelBufferCreate(kCFAllocatorSystemDefault, g_width_size, g_height_size, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, attrs/*(__bridge CFDictionaryRef)options*/, &pixbuffer);
            // ensures that the CVPixelBuffer is accessible in system memory. This should only be called if the base address is going to be used and the pixel data will be accessed by the CPU
            if (status != noErr) {
                NSLog(@"Crop CVPixelBufferCreate error %d",(int)status);
                return NULL;
            }
        }

    First, to render to a texture you need an image that is compatible with the OpenGL texture cache. Images created with the camera API are already compatible, and you can map them immediately as inputs. But if you create an image to render into and later read back for other processing, it has to be created with a special property: the attributes dictionary must contain kCVPixelBufferIOSurfacePropertiesKey as one of its keys, which is exactly what the attrs dictionary above does.

CVReturn CVPixelBufferCreate(CFAllocatorRef allocator,
                                size_t width,
                                size_t height,
                                OSType pixelFormatType, 
                                CFDictionaryRef pixelBufferAttributes, 
                                CVPixelBufferRef  _Nullable *pixelBufferOut);

3. Obtain CVImageBufferRef

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(buffer);
  1. Crop the image

    CIImage *ciImage = [CIImage imageWithCVImageBuffer:imageBuffer];
    ciImage = [ciImage imageByCroppingToRect:cropRect];
    // After cropping, the image's extent is no longer at the origin, so translate it back to (0, 0)
    ciImage = [ciImage imageByApplyingTransform:CGAffineTransformMakeTranslation(-_cropX, -_cropY)];
  2. Render the image into pixbuffer

        static CIContext *ciContext = nil;
        if (ciContext == nil) {
            NSMutableDictionary *options = [[NSMutableDictionary alloc] init];
            [options setObject:[NSNull null] forKey:kCIContextWorkingColorSpace];
            [options setObject:@0            forKey:kCIContextUseSoftwareRenderer];
            EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
            ciContext = [CIContext contextWithEAGLContext:eaglContext options:options];
        }
        [ciContext render:ciImage toCVPixelBuffer:pixbuffer bounds:cropRect colorSpace:nil];

    Keep a reference to CIContext, which provides a bridge between our Core Image object and the OpenGL context. We create it once and we use it forever. This context allows Core Image to do backend optimizations, such as caching and reusing resources like textures. The important thing is that we use this context over and over again.

    1. Obtain the video timing info: duration, PTS, and DTS

          CMSampleTimingInfo sampleTime = {
              .duration               = CMSampleBufferGetDuration(buffer),
              .presentationTimeStamp  = CMSampleBufferGetPresentationTimeStamp(buffer),
              .decodeTimeStamp        = CMSampleBufferGetDecodeTimeStamp(buffer)
          };
    2. Video Format Description

      if (videoInfo == NULL) {
          status = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixbuffer, &videoInfo);
          if (status != 0) {
              NSLog(@"Crop CMVideoFormatDescriptionCreateForImageBuffer error %d", (int)status);
          }
      }
    3. Create CMSampleBuffer

      CMSampleBufferRef cropBuffer;
      status = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixbuffer, true, NULL, NULL, videoInfo, &sampleTime, &cropBuffer);
      if (status != 0) {
          NSLog(@"Crop CMSampleBufferCreateForImageBuffer error %d", (int)status);
      }
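
    The capture delegate in step 8 above calls cropSampleBufferByHardware:, but its body is never shown in one piece. Below is a minimal sketch of how steps 1-3 of this section might be assembled into that method; the method name, the origin-based render bounds, and the ES2 context for the CIContext are assumptions:

    - (CMSampleBufferRef)cropSampleBufferByHardware:(CMSampleBufferRef)buffer {
        // Reused across frames, as recommended above; reset them if the resolution changes.
        static CVPixelBufferRef            pixbuffer = NULL;
        static CMVideoFormatDescriptionRef videoInfo = NULL;
        static CIContext                  *ciContext = nil;

        OSStatus status;
        CGRect cropRect = CGRectMake(_cropX, _cropY, g_width_size, g_height_size);

        // Step 2: reusable, IOSurface-backed destination pixel buffer.
        if (pixbuffer == NULL) {
            CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
                                                       &kCFTypeDictionaryKeyCallBacks,
                                                       &kCFTypeDictionaryValueCallBacks);
            CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                                                     &kCFTypeDictionaryKeyCallBacks,
                                                                     &kCFTypeDictionaryValueCallBacks);
            CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);
            status = CVPixelBufferCreate(kCFAllocatorSystemDefault, g_width_size, g_height_size,
                                         kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, attrs, &pixbuffer);
            CFRelease(attrs);
            CFRelease(empty);
            if (status != noErr) {
                NSLog(@"Crop CVPixelBufferCreate error %d", (int)status);
                return NULL;
            }
        }

        // Crop with Core Image and translate the result back to the origin.
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(buffer);
        CIImage *ciImage = [CIImage imageWithCVImageBuffer:imageBuffer];
        ciImage = [ciImage imageByCroppingToRect:cropRect];
        ciImage = [ciImage imageByApplyingTransform:CGAffineTransformMakeTranslation(-_cropX, -_cropY)];

        // Render into pixbuffer with a long-lived CIContext.
        if (ciContext == nil) {
            EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
            ciContext = [CIContext contextWithEAGLContext:eaglContext
                                                  options:@{kCIContextWorkingColorSpace : [NSNull null]}];
        }
        [ciContext render:ciImage
          toCVPixelBuffer:pixbuffer
                   bounds:CGRectMake(0, 0, g_width_size, g_height_size) // the image now sits at the origin
               colorSpace:nil];

        // Wrap pixbuffer in a new CMSampleBuffer carrying the source timing.
        CMSampleTimingInfo sampleTime = {
            .duration              = CMSampleBufferGetDuration(buffer),
            .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(buffer),
            .decodeTimeStamp       = CMSampleBufferGetDecodeTimeStamp(buffer)
        };
        if (videoInfo == NULL) {
            CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixbuffer, &videoInfo);
        }
        CMSampleBufferRef cropBuffer = NULL;
        CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixbuffer, true, NULL, NULL,
                                           videoInfo, &sampleTime, &cropBuffer);
        return cropBuffer; // the caller CFRelease()s it after rendering
    }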

OpenGL rendering of the CMSampleBuffer

Preparation

Import the headers

#import <QuartzCore/QuartzCore.h>
#import <AVFoundation/AVUtilities.h>
#import <mach/mach_time.h>
#import <GLKit/GLKit.h>

Variable definitions

// Uniform index.
enum {
    UNIFORM_Y,
    UNIFORM_UV,
    UNIFORM_COLOR_CONVERSION_MATRIX,
    NUM_UNIFORMS
};
GLint uniforms[NUM_UNIFORMS];

// Attribute index.
enum {
    ATTRIB_VERTEX,
    ATTRIB_TEXCOORD,
    NUM_ATTRIBUTES
};

// Color conversion constants (YUV to RGB), including adjustment from 16-235/16-240 (video range).

// BT.601, which is the standard for SDTV.
static const GLfloat kColorConversion601[] = {
    1.164,  1.164, 1.164,
    0.0,   -0.392, 2.017,
    1.596, -0.813, 0.0,
};

// BT.709, which is the standard for HDTV.
static const GLfloat kColorConversion709[] = {
    1.164,  1.164, 1.164,
    0.0,   -0.213, 2.112,
    1.793, -0.533, 0.0,
};

// BT.601 full range (ref: http://www.equasys.de/colorconversion.html)
const GLfloat kColorConversion601FullRange[] = {
    1.0,    1.0,   1.0,
    0.0,   -0.343, 1.765,
    1.4,   -0.711, 0.0,
};
  1. Initialize EAGLContext

    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
    eaglLayer.opaque = TRUE;
    eaglLayer.drawableProperties = @{ kEAGLDrawablePropertyRetainedBacking : [NSNumber numberWithBool:NO],
                                      kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8 };

    _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!_context || ![EAGLContext setCurrentContext:_context] || ![self loadShaders]) {
        return nil;
    }
    _preferredConversion = kColorConversion709;
  2. Frame buffer, render buffer

    - (void)setupBuffers
    {
      glDisable(GL_DEPTH_TEST);
      
      glEnableVertexAttribArray(ATTRIB_VERTEX);
      glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), 0);
      
      glEnableVertexAttribArray(ATTRIB_TEXCOORD);
      glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), 0);
      
      glGenFramebuffers(1, &_frameBufferHandle);
      glBindFramebuffer(GL_FRAMEBUFFER, _frameBufferHandle);
      
      glGenRenderbuffers(1, &_colorBufferHandle);
      glBindRenderbuffer(GL_RENDERBUFFER, _colorBufferHandle);
      
      [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
      glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &_backingWidth);
      glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &_backingHeight);

      glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorBufferHandle);
      if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
      }
    }
  3. Load and link the shaders

    - (BOOL)loadShaders {
        GLuint vertShader, fragShader;
        NSURL *vertShaderURL, *fragShaderURL;

        self.program = glCreateProgram();

        // Create and compile the vertex shader.
        vertShaderURL = [[NSBundle mainBundle] URLForResource:@"Shader" withExtension:@"vsh"];
        if (![self compileShader:&vertShader type:GL_VERTEX_SHADER URL:vertShaderURL]) {
            NSLog(@"Failed to compile vertex shader");
            return NO;
        }

        // Create and compile the fragment shader.
        fragShaderURL = [[NSBundle mainBundle] URLForResource:@"Shader" withExtension:@"fsh"];
        if (![self compileShader:&fragShader type:GL_FRAGMENT_SHADER URL:fragShaderURL]) {
            NSLog(@"Failed to compile fragment shader");
            return NO;
        }

        // Attach the vertex and fragment shaders to the program.
        glAttachShader(self.program, vertShader);
        glAttachShader(self.program, fragShader);

        // Bind attribute locations. This needs to be done prior to linking.
        glBindAttribLocation(self.program, ATTRIB_VERTEX, "position");
        glBindAttribLocation(self.program, ATTRIB_TEXCOORD, "texCoord");

        // Link the program.
        if (![self linkProgram:self.program]) {
            NSLog(@"Failed to link program: %d", self.program);
            if (vertShader) {
                glDeleteShader(vertShader);
                vertShader = 0;
            }
            if (fragShader) {
                glDeleteShader(fragShader);
                fragShader = 0;
            }
            if (self.program) {
                glDeleteProgram(self.program);
                self.program = 0;
            }
            return NO;
        }

        // Get uniform locations.
        uniforms[UNIFORM_Y]  = glGetUniformLocation(self.program, "SamplerY");
        uniforms[UNIFORM_UV] = glGetUniformLocation(self.program, "SamplerUV");
        uniforms[UNIFORM_COLOR_CONVERSION_MATRIX] = glGetUniformLocation(self.program, "colorConversionMatrix");

        // Release the vertex and fragment shaders.
        if (vertShader) {
            glDetachShader(self.program, vertShader);
            glDeleteShader(vertShader);
        }
        if (fragShader) {
            glDetachShader(self.program, fragShader);
            glDeleteShader(fragShader);
        }

        return YES;
    }

    - (BOOL)compileShader:(GLuint *)shader type:(GLenum)type URL:(NSURL *)URL {
        NSError *error;
        NSString *sourceString = [[NSString alloc] initWithContentsOfURL:URL encoding:NSUTF8StringEncoding error:&error];
        if (sourceString == nil) {
            NSLog(@"Failed to load vertex shader: %@", [error localizedDescription]);
            return NO;
        }

        GLint status;
        const GLchar *source = (GLchar *)[sourceString UTF8String];

        *shader = glCreateShader(type);
        glShaderSource(*shader, 1, &source, NULL);
        glCompileShader(*shader);

    #if defined(DEBUG)
        GLint logLength;
        glGetShaderiv(*shader, GL_INFO_LOG_LENGTH, &logLength);
        if (logLength > 0) {
            GLchar *log = (GLchar *)malloc(logLength);
            glGetShaderInfoLog(*shader, logLength, &logLength, log);
            NSLog(@"Shader compile log:\n%s", log);
            free(log);
        }
    #endif

        glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
        if (status == 0) {
            glDeleteShader(*shader);
            return NO;
        }
        return YES;
    }

    - (BOOL)linkProgram:(GLuint)prog {
        GLint status;
        glLinkProgram(prog);

    #if defined(DEBUG)
        GLint logLength;
        glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
        if (logLength > 0) {
            GLchar *log = (GLchar *)malloc(logLength);
            glGetProgramInfoLog(prog, logLength, &logLength, log);
            NSLog(@"Program link log:\n%s", log);
            free(log);
        }
    #endif

        glGetProgramiv(prog, GL_LINK_STATUS, &status);
        if (status == 0) {
            return NO;
        }
        return YES;
    }

    - (BOOL)validateProgram:(GLuint)prog {
        GLint logLength, status;

        glValidateProgram(prog);
        glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
        if (logLength > 0) {
            GLchar *log = (GLchar *)malloc(logLength);
            glGetProgramInfoLog(prog, logLength, &logLength, log);
            NSLog(@"Program validate log:\n%s", log);
            free(log);
        }

        glGetProgramiv(prog, GL_VALIDATE_STATUS, &status);
        if (status == 0) {
            return NO;
        }
        return YES;
    }
  4. UseProgram

    glUseProgram(self.program);
    glUniform1i(uniforms[UNIFORM_Y], 0);
    glUniform1i(uniforms[UNIFORM_UV], 1);
    glUniformMatrix3fv(uniforms[UNIFORM_COLOR_CONVERSION_MATRIX], 1, GL_FALSE, _preferredConversion);

    // Create a CVOpenGLESTextureCacheRef for optimal CVPixelBufferRef-to-GLES texture conversion.
    if (!_videoTextureCache) {
        CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _context, NULL, &_videoTextureCache);
        if (err != noErr) {
            NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
            return;
        }
    }
  5. Render

 - (void)displayPixelBuffer:(CVPixelBufferRef)pixelBuffer
{
	CVReturn err;
	if (pixelBuffer != NULL) {
		int frameWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
		int frameHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
		
		if (!_videoTextureCache) {
			NSLog(@"No video texture cache");
			return;
		}
        if ([EAGLContext currentContext] != _context) {
            [EAGLContext setCurrentContext:_context]; // essential: make our GL context current before touching textures
        }
		[self cleanUpTextures];
		
		
		/*
		 Use the color attachment of the pixel buffer to determine the appropriate color conversion matrix.
		 */
		CFTypeRef colorAttachments = CVBufferGetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey, NULL);
		
		if (colorAttachments == kCVImageBufferYCbCrMatrix_ITU_R_601_4) {
            if (self.isFullYUVRange) {
                _preferredConversion = kColorConversion601FullRange;
            }
            else {
                _preferredConversion = kColorConversion601;
            }
		}
		else {
			_preferredConversion = kColorConversion709;
		}
		
		/*
         CVOpenGLESTextureCacheCreateTextureFromImage will create GLES texture optimally from CVPixelBufferRef.
         */
		
		/*
         Create Y and UV textures from the pixel buffer. These textures will be drawn on the frame buffer Y-plane.
         */
		glActiveTexture(GL_TEXTURE0);
		err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
														   _videoTextureCache,
														   pixelBuffer,
														   NULL,
														   GL_TEXTURE_2D,
                                                           GL_LUMINANCE, // sample the Y plane as a single-channel luminance texture
														   frameWidth,
														   frameHeight,
														   GL_LUMINANCE,
														   GL_UNSIGNED_BYTE,
                                                           0, // plane index (0 = Y plane; use 0 for non-planar formats)
														   &_lumaTexture);
		if (err) {
			NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
		}
		
        glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
		
		// UV-plane.
		glActiveTexture(GL_TEXTURE1);
        
        /** This call does two things:
         *  1. It hands the renderTarget's pixel data to OpenGL ES (roughly the equivalent of glTexImage2D()); the data
         *     can be the all-zero default contents of a buffer made with CVPixelBufferCreate(), or real pixel data.
         *  2. It produces a CVOpenGLESTextureRef of the requested format, a wrapper around the texture id that
         *     glGenTextures() would otherwise generate.
         */
		err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
														   _videoTextureCache,
														   pixelBuffer,
														   NULL,
                                                           GL_TEXTURE_2D,
                                                           GL_LUMINANCE_ALPHA, // sample the interleaved CbCr plane as luminance + alpha
                                                           frameWidth / 2,
                                                           frameHeight / 2,
                                                           GL_LUMINANCE_ALPHA,
														   GL_UNSIGNED_BYTE,
														   1,
														   &_chromaTexture);
		if (err) {
			NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
		}
		
		glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
//        NSLog(@"id %d", CVOpenGLESTextureGetName(_chromaTexture));
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
		
		glBindFramebuffer(GL_FRAMEBUFFER, _frameBufferHandle);
		
		// Set the view port to the entire view.
		glViewport(0, 0, _backingWidth, _backingHeight);
	}
	
	glClearColor(0.1f, 0.0f, 0.0f, 1.0f);
	glClear(GL_COLOR_BUFFER_BIT);
	
	// Use shader program.
	glUseProgram(self.program);
	glUniformMatrix3fv(uniforms[UNIFORM_COLOR_CONVERSION_MATRIX], 1, GL_FALSE, _preferredConversion);
	
	// Set up the quad vertices with respect to the orientation and aspect ratio of the video.
	CGRect vertexSamplingRect = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(_backingWidth, _backingHeight), self.layer.bounds);
	
	// Compute normalized quad coordinates to draw the frame into.
	CGSize normalizedSamplingSize = CGSizeMake(0.0, 0.0);
	CGSize cropScaleAmount = CGSizeMake(vertexSamplingRect.size.width/self.layer.bounds.size.width, vertexSamplingRect.size.height/self.layer.bounds.size.height);
	
	// Normalize the quad vertices.
	if (cropScaleAmount.width > cropScaleAmount.height) {
		normalizedSamplingSize.width = 1.0;
		normalizedSamplingSize.height = cropScaleAmount.height/cropScaleAmount.width;
	}
	else {
		normalizedSamplingSize.width = 1.0;
		normalizedSamplingSize.height = cropScaleAmount.width/cropScaleAmount.height;
	}
	
	/*
     The quad vertex data defines the region of 2D plane onto which we draw our pixel buffers.
     Vertex data formed using (-1,-1) and (1,1) as the bottom left and top right coordinates respectively, covers the entire screen.
     */
	GLfloat quadVertexData [] = {
		-1 * normalizedSamplingSize.width, -1 * normalizedSamplingSize.height,
			 normalizedSamplingSize.width, -1 * normalizedSamplingSize.height,
		-1 * normalizedSamplingSize.width, normalizedSamplingSize.height,
			 normalizedSamplingSize.width, normalizedSamplingSize.height,
	};
	
	// Update the vertex data
	glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, quadVertexData);
	glEnableVertexAttribArray(ATTRIB_VERTEX);
    
    GLfloat quadTextureData[] =  { // standard (un-rotated) texture coordinates
        0, 0,
        1, 0,
        0, 1,
        1, 1
    };
	
	glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, 0, 0, quadTextureData);
	glEnableVertexAttribArray(ATTRIB_TEXCOORD);
	
	glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

	glBindRenderbuffer(GL_RENDERBUFFER, _colorBufferHandle);
    
    if ([EAGLContext currentContext] == _context) {
        [_context presentRenderbuffer:GL_RENDERBUFFER];
    }
    
}
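
displayPixelBuffer: calls cleanUpTextures, which the post does not show. A minimal sketch, assuming _lumaTexture and _chromaTexture are CVOpenGLESTextureRef ivars and _videoTextureCache is the cache created in step 4:

- (void)cleanUpTextures
{
    // Release the per-frame texture wrappers so the cache can recycle them.
    if (_lumaTexture) {
        CFRelease(_lumaTexture);
        _lumaTexture = NULL;
    }
    if (_chromaTexture) {
        CFRelease(_chromaTexture);
        _chromaTexture = NULL;
    }
    // Periodic texture cache flush every frame.
    CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
}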

demo