Preface

From the earlier Advanced Image Processing 1 and Advanced Image Processing 2, we know that for performance reasons a project should avoid stacking too many UIView/CALayer layers for display. In many cases, however, image composition, pixel manipulation, and filter processing are still necessary. This article works through these common image-processing tasks with different graphics frameworks; the project code is in the accompanying DEMO.

  • Large-image compression and manual decoding.
    • Efficient local display of large images.
    • Storing or uploading images with size/quality requirements.
    • Manual image decoding.
  • Pixel-level image modification.
    • Converting an image to grayscale.
    • Modifying the RGB values of an image.
    • Image mosaic (pixelation).
  • Using different graphics frameworks to composite images, add filters and watermarks, and so on. (The theory is the same as the pixel modification above, but different solutions are used here; each has strengths and weaknesses depending on the use scenario.)
    • Direct pixel drawing and compositing.
    • Compositing images with the CoreGraphics framework.
    • Compositing images with filters using the CoreImage framework.
    • Compositing images with filters using the GPUImage framework.

1. Large-image compression and manual decoding

1. Efficient local display of large images.

Project scenarios: 1. Display a large image on screen after downloading it; 2. Read a large image from local storage and display it on screen.

Best solution: Apple gave the solution at WWDC 2018 (downsampling). To avoid repetition, please see part four of Advanced Image Processing 2.
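
For convenience, here is a minimal sketch of that WWDC 2018 downsampling approach using ImageIO, assuming a local file URL and a caller-chosen maximum pixel size (the method name is hypothetical, not the article's own code):

// Requires <ImageIO/ImageIO.h> and <UIKit/UIKit.h>.
// Hypothetical helper sketching the WWDC 2018 downsampling technique.
- (UIImage *)downsampledImageAtURL:(NSURL *)imageURL maxPixelSize:(CGFloat)maxPixelSize {
    // Do not decode the full image when creating the source.
    NSDictionary *sourceOptions = @{(id)kCGImageSourceShouldCache : @NO};
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL,
                                                         (__bridge CFDictionaryRef)sourceOptions);
    if (!source) return nil;
    NSDictionary *thumbnailOptions = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceShouldCacheImmediately : @YES,       // decode while creating the thumbnail
        (id)kCGImageSourceCreateThumbnailWithTransform : @YES,
        (id)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize)
    };
    CGImageRef downsampled = CGImageSourceCreateThumbnailAtIndex(source, 0,
                                                                 (__bridge CFDictionaryRef)thumbnailOptions);
    CFRelease(source);
    if (!downsampled) return nil;
    UIImage *result = [UIImage imageWithCGImage:downsampled];
    CGImageRelease(downsampled);
    return result;
}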

2. Storing or uploading images with size/quality requirements.

First, the two simplest and most common compression methods are introduced. The more complex compression methods that follow are extensions of these two and can be adjusted to the actual situation.

Apple provides an API for compressing image quality:

UIImageJPEGRepresentation(image, compression);

For this method, the smaller the compression value, the lower the image quality and the smaller the resulting file. The value ranges from 0 to 1: 0 means maximum compression (lowest quality) and 1 means closest to the original. Note that for a large image, even with compression = 0.0001 or lower, the data cannot be compressed further once it reaches a certain size.
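
To see this behavior, a small illustrative check (the asset name is hypothetical):

// Illustrative only: comparing output sizes at different quality values.
UIImage *image = [UIImage imageNamed:@"photo"];   // hypothetical asset name, assumed to be a large photo
NSData *full = UIImageJPEGRepresentation(image, 1.0);
NSData *half = UIImageJPEGRepresentation(image, 0.5);
NSData *min  = UIImageJPEGRepresentation(image, 0.0);
NSLog(@"1.0 -> %lu bytes, 0.5 -> %lu bytes, 0.0 -> %lu bytes",
      (unsigned long)full.length, (unsigned long)half.length, (unsigned long)min.length);
// Even at 0.0 the output stops shrinking once it hits the floor for this image.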

1. Compress the image with a specified quality factor:

// Compress by quality.
// Main disadvantage: for a big image compressed this way, the resulting data may still be large.
- (UIImage *)compressWithQuality:(CGFloat)rate {
    NSData *data = UIImageJPEGRepresentation(self, rate);
    UIImage *resultImage = [UIImage imageWithData:data];
    return resultImage;
}

2. Compress the image to a specified size:

// Compress by size.
// Main disadvantages: the image may be distorted, and quality cannot be guaranteed.
- (UIImage *)compressWithSize:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultImage;
}

Project scenario:

  • 1. Upload or store images with a size requirement.
  • 2. Upload or store images with a quality requirement.
  • 3. Keep quality as high as possible while the size has an upper limit.

1. Solution for the first case: shrink the image in a loop until the data is slightly smaller than the specified size. The advantage is that, under the size limit, the image dimensions stay as large as possible. The drawback is that the loop may run many times, which is inefficient and slow. Binary search can be used to improve efficiency (a sketch follows after the code below):

// Loop to gradually shrink the image until the data is slightly smaller than the specified size.
// Same problem: many iterations, low efficiency, long time. Binary search could improve efficiency (code omitted here).
// The method below is better in that it compresses the data to just under the specified size
// (not merely < maxLength, but also > maxLength * 0.9).
- (UIImage *)compressWithCycleSize:(NSInteger)maxLength {
    UIImage *resultImage = self;
    NSData *data = UIImageJPEGRepresentation(resultImage, 1);
    NSUInteger lastDataLength = 0;
    while (data.length > maxLength && data.length != lastDataLength) {
        lastDataLength = data.length;
        CGFloat ratio = (CGFloat)maxLength / data.length;
        CGSize size = CGSizeMake((NSUInteger)(resultImage.size.width * sqrtf(ratio)),
                                 (NSUInteger)(resultImage.size.height * sqrtf(ratio))); // use NSUInteger to prevent white edges
        UIGraphicsBeginImageContext(size);
        // Drawing self keeps more detail but compresses more slowly; drawing resultImage is smaller but faster.
        [resultImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
        resultImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        data = UIImageJPEGRepresentation(resultImage, 1);
    }
    return resultImage;
}
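The article mentions that binary search could cut down the iterations but omits that code; below is a minimal sketch of the idea, binary-searching the scale factor for a fixed number of rounds (the method name is hypothetical, and the 6-round limit mirrors the quality version that follows):

// Hypothetical sketch: binary search on the scale factor instead of looping on the encoded length.
- (UIImage *)compressBySearchingScaleToLength:(NSInteger)maxLength {
    UIImage *resultImage = self;
    NSData *data = UIImageJPEGRepresentation(self, 1);
    if (data.length <= maxLength) return self;
    CGFloat minScale = 0.0;
    CGFloat maxScale = 1.0;
    for (int i = 0; i < 6; i++) {
        CGFloat scale = (minScale + maxScale) / 2;
        CGSize size = CGSizeMake((NSUInteger)(self.size.width * scale),
                                 (NSUInteger)(self.size.height * scale));
        UIGraphicsBeginImageContext(size);
        [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
        resultImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        data = UIImageJPEGRepresentation(resultImage, 1);
        if (data.length > maxLength) {
            maxScale = scale;                       // still too big: shrink further
        } else if (data.length < maxLength * 0.9) {
            minScale = scale;                       // unnecessarily small: try a larger scale
        } else {
            break;                                  // within (maxLength * 0.9, maxLength]
        }
    }
    return resultImage;
}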

2. Solution for the second case: repeatedly compress the image quality until the data is slightly smaller than the specified size. By default it loops at most 6 times, since beyond that further compression barely reduces the size (the count can of course be configured). The advantage is that the image quality stays as high as possible. Binary search is used to improve efficiency.

// Repeatedly compress the image quality until the data is slightly smaller than the specified size.
// ⚠️ Note: once the quality drops below a certain level, further compression has no effect.
// Compress at most 6 times by default; binary search is used to reduce the number of iterations.
// Advantage of compressing quality: sharpness is preserved as much as possible and the image does not become
// noticeably blurred. Disadvantage: there is no guarantee the data ends up smaller than the specified size.
- (UIImage *)compressWithCycleQulity:(NSInteger)maxLength {
    CGFloat compression = 1;
    NSData *data = UIImageJPEGRepresentation(self, compression);
    if (data.length < maxLength) return self;
    CGFloat max = 1;
    CGFloat min = 0;
    for (int i = 0; i < 6; ++i) {
        compression = (max + min) / 2;
        data = UIImageJPEGRepresentation(self, compression);
        if (data.length < maxLength * 0.9) {
            min = compression;
        } else if (data.length > maxLength) {
            max = compression;
        } else {
            break;
       }
    }
    UIImage *resultImage = [UIImage imageWithData:data];
    return resultImage;
}

3. Solution for the third case: combine the two compression methods above to balance quality and size as far as possible while keeping the data within the size limit. The advantage is that, given the size limit, both quality and dimensions are kept as high as possible.

- (UIImage *)compressWithQulitySize:(NSInteger)maxLength {
    // Compress by quality
    CGFloat compression = 1;
    NSData *data = UIImageJPEGRepresentation(self, compression);
    if (data.length < maxLength) return self;
    
    CGFloat max = 1;
    CGFloat min = 0;
    for (int i = 0; i < 6; ++i) {
        compression = (max + min) / 2;
        data = UIImageJPEGRepresentation(self, compression);
        if (data.length < maxLength * 0.9) {
            min = compression;
        } else if (data.length > maxLength) {
            max = compression;
        } else {
            break;
        }
    }
    UIImage *resultImage = [UIImage imageWithData:data];
    if (data.length < maxLength) return resultImage;
    
    // Compress by size
    NSUInteger lastDataLength = 0;
    while (data.length > maxLength && data.length != lastDataLength) {
        lastDataLength = data.length;
        CGFloat ratio = (CGFloat)maxLength / data.length;
        CGSize size = CGSizeMake((NSUInteger)(resultImage.size.width * sqrtf(ratio)),
                                 (NSUInteger)(resultImage.size.height * sqrtf(ratio))); // use NSUInteger to prevent white edges
        UIGraphicsBeginImageContext(size);
        [resultImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
        resultImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        data = UIImageJPEGRepresentation(resultImage, compression);
    }
    return resultImage;
}
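For reference, a hypothetical call site for the three helpers above, assuming they are declared in a UIImage category and using a 500 KB limit purely as an example:

// Hypothetical usage; method names as defined above, values chosen only for illustration.
NSInteger maxLength = 500 * 1024;
UIImage *photo = [UIImage imageNamed:@"photo"];                    // hypothetical asset name
UIImage *byQuality = [photo compressWithCycleQulity:maxLength];    // keeps sharpness, size not guaranteed
UIImage *bySize    = [photo compressWithCycleSize:maxLength];      // meets the size, may lose detail
UIImage *balanced  = [photo compressWithQulitySize:maxLength];     // quality first, then size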

3. Manual image decoding.

Note: for the theory of image encoding and decoding, please refer to Advanced Image Processing 2.

Scenario: suitable for displaying images quickly, for example in table view cells, where images are decoded into bitmaps and cached ahead of time. If the image is large, this can also be combined with the compression methods above.

Solution: redraw the image with CGBitmapContextCreate. Redrawing is effectively a manual decode, which speeds up displaying the image.

// Redraw (and thereby decode) the image; the original method signature was omitted in the source,
// so the method name and scale parameter below are assumed.
- (UIImage *)decodeWithScale:(CGFloat)scale {
    // Get the current image data source
    CGImageRef imageRef = self.CGImage;
    NSUInteger width = CGImageGetWidth(imageRef) * scale;
    NSUInteger height = CGImageGetHeight(imageRef) * scale;
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
    /* CGBitmapContextCreate(void * __nullable data, size_t width, size_t height,
                             size_t bitsPerComponent, size_t bytesPerRow,
                             CGColorSpaceRef space, uint32_t bitmapInfo)
       data:             the pixel buffer, or NULL to let the system allocate one
       bitsPerComponent: bits per color component; each RGBA component is 1 byte, so 8
       bytesPerRow:      bytes used per row, 4 * width for RGBA
       bitmapInfo:       alpha and byte-order information */
    CGContextRef contextRef = CGBitmapContextCreate(nil, width, height, 8, 4 * width, colorSpace, kCGImageAlphaNoneSkipLast);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef decodedImageRef = CGBitmapContextCreateImage(contextRef);
    CGContextRelease(contextRef);
    UIImage *result = [UIImage imageWithCGImage:decodedImageRef scale:self.scale orientation:UIImageOrientationUp];
    CGImageRelease(decodedImageRef); // release the CGImage created above
    return result;
}
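A sketch of how the decode helper might be used for fast cell display, with the method name assumed as above and a hypothetical image-view property on the cell:

// Hypothetical usage: force-decode off the main thread, then hand the bitmap-backed image to the cell.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    UIImage *decoded = [downloadedImage decodeWithScale:1.0];   // decode once, off the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        cell.thumbnailImageView.image = decoded;                // the main thread only assigns the ready bitmap
    });
});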

2. Pixel-level image modification

Note: the idea in this part is to redraw the image into a bitmap and modify the pixel values in that bitmap, thereby modifying the image.

1. Converting an image to grayscale.

Three conversion formulas for grayscale images:

  • 1. Weighted (floating-point) method: R = G = B = 0.3*R + 0.59*G + 0.11*B
  • 2. Average method: R = G = B = (R + G + B) / 3
  • 3. Single-component method: R = G = B = R (or G, or B)
- (UIImage *)imageToGray:(NSInteger)type {
    // 1. Get the image data source
    CGImageRef imageRef = self.CGImage;
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    // 2. Create a color space
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // 3. Allocate a buffer based on the number of pixels
    UInt32 *imagePiexl = (UInt32 *)calloc(width * height, sizeof(UInt32));
    CGContextRef contextRef = CGBitmapContextCreate(imagePiexl, width, height, 8, 4 * width, colorSpaceRef, kCGImageAlphaNoneSkipLast);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), imageRef);
    // 4. Walk every pixel and apply the chosen grayscale formula
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbPiexl = (uint8_t *)&imagePiexl[y * width + x];
            uint32_t gray = rgbPiexl[0] * 0.3 + rgbPiexl[1] * 0.59 + rgbPiexl[2] * 0.11;
            if (type == 0) {
                gray = rgbPiexl[1];                                                    // single component
            } else if (type == 1) {
                gray = (rgbPiexl[0] + rgbPiexl[1] + rgbPiexl[2]) / 3;                  // average
            } else if (type == 2) {
                gray = rgbPiexl[0] * 0.3 + rgbPiexl[1] * 0.59 + rgbPiexl[2] * 0.11;    // weighted
            }
            rgbPiexl[0] = gray;
            rgbPiexl[1] = gray;
            rgbPiexl[2] = gray;
        }
    }
    // 5. Create the image from the context
    CGImageRef finalRef = CGBitmapContextCreateImage(contextRef);
    // 6. Release used memory
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpaceRef);
    free(imagePiexl);
    return [UIImage imageWithCGImage:finalRef scale:self.scale orientation:UIImageOrientationUp];
}

2. Modifying the RGB values of an image.

Control how the image's colors are displayed by modifying its RGB values, or change specific colors.

- (UIImage *)imageToRGB:(CGFloat)rk g:(CGFloat)gk b:(CGFloat)bk {
    // 1. Get the image data source
    CGImageRef imageRef = self.CGImage;
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    // 2. Create a color space
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // 3. Allocate a buffer based on the number of pixels
    UInt32 *imagePiexl = (UInt32 *)calloc(width * height, sizeof(UInt32));
    CGContextRef contextRef = CGBitmapContextCreate(imagePiexl, width, height, 8, 4 * width, colorSpaceRef, kCGImageAlphaNoneSkipLast);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), imageRef);
    // 4. Scale each pixel's R/G/B by the given factors
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbPiexl = (uint8_t *)&imagePiexl[y * width + x];
            // Skip pixels that are close to white
            if (rgbPiexl[0] > 245 && rgbPiexl[1] > 245 && rgbPiexl[2] > 245) {
                NSLog(@"No processing at this color value.");
            } else {
                rgbPiexl[0] = rgbPiexl[0] * rk;
                rgbPiexl[1] = rgbPiexl[1] * gk;
                rgbPiexl[2] = rgbPiexl[2] * bk;
            }
        }
    }
    // 5. Create the image from the context
    CGImageRef finalRef = CGBitmapContextCreateImage(contextRef);
    // 6. Release used memory
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpaceRef);
    free(imagePiexl);
    return [UIImage imageWithCGImage:finalRef scale:self.scale orientation:UIImageOrientationUp];
}

3. Image mosaic (pixelation).

A mosaic blurs the image: if all pixels in a given block are set to the same color, the block becomes blurred as a whole. The larger the block, the more blurred the result; the smaller the block, the closer it stays to the original pixels.

// Apply a mosaic.
// 1. Decide the block size;
// 2. Take one pixel in the block (the first one) as the color of the whole block;
// 3. Fill the block with that color.
- (UIImage *)imageToMosaic:(NSInteger)size {
    CGImageRef imageRef = self.CGImage;
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    // Create a color space
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // Allocate a buffer based on the number of pixels
    UInt32 *imagePiexl = (UInt32 *)calloc(width * height, sizeof(UInt32));
    CGContextRef contextRef = CGBitmapContextCreate(imagePiexl, width, height, 8, 4 * width, colorSpaceRef, kCGImageAlphaNoneSkipLast);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), imageRef);
    UInt8 *bitmapPixels = (UInt8 *)CGBitmapContextGetData(contextRef);
    UInt8 pixels[4] = {0};              // color of the current mosaic block
    NSUInteger currentPixels = 0;
    NSUInteger preCurrentPiexls = 0;
    NSUInteger mosaicSize = size;       // mosaic block size
    if (size == 0) return self;
    for (NSUInteger i = 0; i < height - 1; i++) {
        for (NSUInteger j = 0; j < width - 1; j++) {
            currentPixels = i * width + j;
            if (i % mosaicSize == 0) {
                if (j % mosaicSize == 0) {
                    // First pixel of a block row: remember its color
                    memcpy(pixels, bitmapPixels + 4 * currentPixels, 4);
                } else {
                    // Fill the rest of the block row with that color
                    memcpy(bitmapPixels + 4 * currentPixels, pixels, 4);
                }
            } else {
                // Copy the color from the pixel directly above
                preCurrentPiexls = (i - 1) * width + j;
                memcpy(bitmapPixels + 4 * currentPixels, bitmapPixels + 4 * preCurrentPiexls, 4);
            }
        }
    }
    // Create the image from the context
    CGImageRef finalRef = CGBitmapContextCreateImage(contextRef);
    // Release used memory
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpaceRef);
    free(imagePiexl);
    return [UIImage imageWithCGImage:finalRef scale:self.scale orientation:UIImageOrientationUp];
}
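For reference, a hypothetical usage of the three category methods above (asset name and parameter values chosen purely for illustration):

// Hypothetical usage of the pixel-modification helpers defined in this section.
UIImage *original = [UIImage imageNamed:@"photo"];           // hypothetical asset name
UIImage *gray     = [original imageToGray:2];                // weighted grayscale formula
UIImage *tinted   = [original imageToRGB:1.0 g:0.6 b:0.6];   // dim the green and blue channels
UIImage *mosaic   = [original imageToMosaic:20];             // 20-pixel mosaic blocks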

3. Using different graphics frameworks to composite images and add filters and watermarks

Note: the theory is the same as the pixel modification above: the image is modified by manipulating pixels, but here we use the frameworks provided by the system as well as the third-party GPUImage, and the frameworks also differ in efficiency. Each code section adds the same pixel operation (black-and-white conversion) purely for learning purposes; you can later add or replace pixel operations in the corresponding code blocks as needed, or add parameters to encapsulate them.

1. Direct pixel drawing and compositing.

The idea of this approach is to draw the pixels of multiple images onto one image, laid out however you design.
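
The code below relies on helper macros R/G/B/A and RGBAMake to unpack and pack 32-bit RGBA pixels; they are defined in the DEMO. A plausible definition, assuming the kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big pixel layout used in the contexts below, is:

// Assumed helper macros (the DEMO defines its own): each UInt32 pixel is treated as R | G<<8 | B<<16 | A<<24.
#define Mask8(x) ( (x) & 0xFF )
#define R(x) ( Mask8(x) )
#define G(x) ( Mask8((x) >> 8) )
#define B(x) ( Mask8((x) >> 16) )
#define A(x) ( Mask8((x) >> 24) )
#define RGBAMake(r, g, b, a) ( Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24 )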

- (UIImage *)processUsingPixels:(UIImage *)backImage frontImage:(UIImage *)frontImage {
    // 1. Get the raw pixels of the background image
    UInt32 *backPixels;
    CGImageRef backCGImage = [backImage CGImage];
    NSUInteger backWidth = CGImageGetWidth(backCGImage);
    NSUInteger backHeight = CGImageGetHeight(backCGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bitsPerComponent = 8;
    NSUInteger backBytesPerRow = bytesPerPixel * backWidth;
    backPixels = (UInt32 *)calloc(backHeight * backWidth, sizeof(UInt32));
    CGContextRef context = CGBitmapContextCreate(backPixels, backWidth, backHeight, bitsPerComponent, backBytesPerRow,
                                                 colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, backWidth, backHeight), backCGImage);

    // 2. Blend the front image (pattern) onto the background
    CGImageRef frontCGImage = [frontImage CGImage];
    // 2.1 Calculate the size & position of the pattern
    CGFloat frontImageAspectRatio = frontImage.size.width / frontImage.size.height;
    NSInteger targetFrontWidth = backWidth * 0.25;
    CGSize frontSize = CGSizeMake(targetFrontWidth, targetFrontWidth / frontImageAspectRatio);
    // CGPoint frontOrigin = CGPointMake(backWidth * 0.5, backHeight * 0.2);
    CGPoint frontOrigin = CGPointMake(0, 0);
    // 2.2 Scale & get pixels of the pattern
    NSUInteger frontBytesPerRow = bytesPerPixel * frontSize.width;
    UInt32 *frontPixels = (UInt32 *)calloc(frontSize.width * frontSize.height, sizeof(UInt32));
    CGContextRef frontContext = CGBitmapContextCreate(frontPixels, frontSize.width, frontSize.height, bitsPerComponent, frontBytesPerRow,
                                                      colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(frontContext, CGRectMake(0, 0, frontSize.width, frontSize.height), frontCGImage);
    // 2.3 Blend each pixel
    NSUInteger offsetPixelCountForInput = frontOrigin.y * backWidth + frontOrigin.x;
    for (NSUInteger j = 0; j < frontSize.height; j++) {
       for (NSUInteger i = 0; i < frontSize.width; i++) {
           UInt32 *backPixel = backPixels + j * backWidth + i + offsetPixelCountForInput;
           UInt32 backColor = *backPixel;

           UInt32 * frontPixel = frontPixels + j * (int)frontSize.width + i;
           UInt32 frontColor = *frontPixel;

           // Blend the pattern with 50% alpha
//            CGFloat frontAlpha = 0.5f * (A(frontColor) / 255.0);
           CGFloat frontAlpha = 1.0f * (A(frontColor) / 255.0);
           UInt32 newR = R(backColor) * (1 - frontAlpha) + R(frontColor) * frontAlpha;
           UInt32 newG = G(backColor) * (1 - frontAlpha) + G(frontColor) * frontAlpha;
           UInt32 newB = B(backColor) * (1 - frontAlpha) + B(frontColor) * frontAlpha;

           //Clamp, not really useful here :p
           newR = MAX(0,MIN(255, newR));
           newG = MAX(0,MIN(255, newG));
           newB = MAX(0,MIN(255, newB));

           *backPixel = RGBAMake(newR, newG, newB, A(backColor));
       }
   }

   // 3. Convert the image to Black & White
   for (NSUInteger j = 0; j < backHeight; j++) {
        for (NSUInteger i = 0; i < backWidth; i++) {
            UInt32 *currentPixel = backPixels + (j * backWidth) + i;
            UInt32 color = *currentPixel;
            // Average of RGB = grayscale
            UInt32 averageColor = (R(color) + G(color) + B(color)) / 3.0;
            *currentPixel = RGBAMake(averageColor, averageColor, averageColor, A(color));
        }
    }

    // 4. Create a new UIImage
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage *processedImage = [UIImage imageWithCGImage:newCGImage];

    // 5. Cleanup!
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CGContextRelease(frontContext);
    free(backPixels);
    free(frontPixels);
    return processedImage;
}

2. Compositing images with the CoreGraphics framework.

- (UIImage *)processUsingCoreGraphics:(UIImage *)backImage frontImage:(UIImage *)frontImage {
    CGRect imageRect = {CGPointZero, backImage.size};
    NSInteger backWidth = CGRectGetWidth(imageRect);
    NSInteger backHeight = CGRectGetHeight(imageRect);

    // 1. Blend the pattern onto our image
    CGFloat frontImageAspectRatio = frontImage.size.width / frontImage.size.height;
    NSInteger targetFrontWidth = backWidth * 0.25;
    CGSize frontSize = CGSizeMake(targetFrontWidth, targetFrontWidth / frontImageAspectRatio);
    // CGPoint frontOrigin = CGPointMake(backWidth * 0.5, backHeight * 0.2);
    CGPoint frontOrigin = CGPointMake(0, 0);
    CGRect frontRect = {frontOrigin, frontSize};

    UIGraphicsBeginImageContext(backImage.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the drawing context (CGContextDrawImage uses a bottom-left origin)
    CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
    CGAffineTransform flipThenShift = CGAffineTransformTranslate(flip, 0, -backHeight);
    CGContextConcatCTM(context, flipThenShift);

    // 1.1 Draw our image into a new CGContext
    CGContextDrawImage(context, imageRect, [backImage CGImage]);
    // 1.2 Set alpha to 0.5 and draw the pattern on top
    CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
    CGContextSetAlpha(context, 0.5);
    CGRect transformedPatternRect = CGRectApplyAffineTransform(frontRect, flipThenShift);
    CGContextDrawImage(context, transformedPatternRect, [frontImage CGImage]);
    UIImage *imageWithFront = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 2. Convert the image to black and white
    // 2.1 Create a new context with a gray color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    context = CGBitmapContextCreate(nil, backWidth, backHeight, 8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaNone);
    // 2.2 Draw our image into the new context
    CGContextDrawImage(context, imageRect, [imageWithFront CGImage]);
    // 2.3 Get our new B&W image
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];

    // Cleanup
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);
    return finalImage;
}

3. Compositing images with filters using the CoreImage framework.

- (UIImage *)processUsingCoreImage:(UIImage *)backImage frontImage:(UIImage *)frontImage {
  CIImage * backCIImage = [[CIImage alloc] initWithImage:backImage];
  
  // 1. Create a grayscale filter
  CIFilter * grayFilter = [CIFilter filterWithName:@"CIColorControls"];
  [grayFilter setValue:@(0) forKeyPath:@"inputSaturation"];
  
  // 2. Create our pattern filter
  
  // Cheat: create a larger pattern image
  UIImage * patternFrontImage = [self createPaddedPatternImageWithSize:backImage.size pattern:frontImage];
  CIImage * frontCIImage = [[CIImage alloc] initWithImage:patternFrontImage];

  CIFilter * alphaFilter = [CIFilter filterWithName:@"CIColorMatrix"];
  // CIVector * alphaVector = [CIVector vectorWithX:0 Y:0 Z:0 W:0.5];
  CIVector * alphaVector = [CIVector vectorWithX:0 Y:0 Z:0 W:1.0];
  [alphaFilter setValue:alphaVector forKeyPath:@"inputAVector"];
  
  CIFilter * blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
  
  // 3. Apply our filters
  [alphaFilter setValue:frontCIImage forKeyPath:@"inputImage"];
  frontCIImage = [alphaFilter outputImage];

  [blendFilter setValue:frontCIImage forKeyPath:@"inputImage"];
  [blendFilter setValue:backCIImage forKeyPath:@"inputBackgroundImage"];
  CIImage * blendOutput = [blendFilter outputImage];
  
  [grayFilter setValue:blendOutput forKeyPath:@"inputImage"];
  CIImage * outputCIImage = [grayFilter outputImage];
  
  // 4. Render our output image
  CIContext * context = [CIContext contextWithOptions:nil];
  CGImageRef outputCGImage = [context createCGImage:outputCIImage fromRect:[outputCIImage extent]];
  UIImage * outputImage = [UIImage imageWithCGImage:outputCGImage];
  CGImageRelease(outputCGImage);
  
  return outputImage;
}

createPaddedPatternImageWithSize:pattern: generates the padded pattern image; see the DEMO for the specific filter design.
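
A minimal sketch of what such a helper might do, assuming it simply draws the pattern at a quarter of the target width onto a transparent canvas of the background's size (matching the layout used in the other solutions); the real filter design is in the DEMO:

// Hypothetical sketch of the padding helper; not the DEMO's actual implementation.
- (UIImage *)createPaddedPatternImageWithSize:(CGSize)size pattern:(UIImage *)pattern {
    CGFloat aspectRatio = pattern.size.width / pattern.size.height;
    CGFloat targetWidth = size.width * 0.25;                       // same 25% width as the other solutions
    CGRect patternRect  = CGRectMake(0, 0, targetWidth, targetWidth / aspectRatio);
    UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);         // transparent canvas at the background's size
    [pattern drawInRect:patternRect];
    UIImage *padded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return padded;
}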

4. Compositing images with filters using the GPUImage framework.

- (UIImage *)processUsingGPUImage:(UIImage *)backImage frontImage:(UIImage *)frontImage {
    // 1. Create our GPUImagePictures
    GPUImagePicture * backGPUImage = [[GPUImagePicture alloc] initWithImage:backImage];
    UIImage *fliterImage = [self createPaddedPatternImageWithSize:backImage.size pattern:frontImage];
    GPUImagePicture * frontGPUImage = [[GPUImagePicture alloc] initWithImage:fliterImage];

    // 2. Set up our filter chain
    GPUImageAlphaBlendFilter * alphaBlendFilter = [[GPUImageAlphaBlendFilter alloc] init];
    alphaBlendFilter.mix = 0.5;
    [backGPUImage addTarget:alphaBlendFilter atTextureLocation:0];
    [frontGPUImage addTarget:alphaBlendFilter atTextureLocation:1];
    GPUImageGrayscaleFilter * grayscaleFilter = [[GPUImageGrayscaleFilter alloc] init];
    [alphaBlendFilter addTarget:grayscaleFilter];

    // 3. Process & grab the output image
    [backGPUImage processImage];
    [frontGPUImage processImage];
    [grayscaleFilter useNextFrameForImageCapture];
    UIImage * output = [grayscaleFilter imageFromCurrentFramebuffer];
    return output;
}

Summary

Project code

  • Large-image compression and manual decoding

    Suitable for handling large images and image display in projects, especially when performance and image-quality requirements are high.

  • Pixel-level image modification

    Suitable for projects that need to adjust image colors or apply mosaics.

  • Using different graphics frameworks to composite images, add filters and watermarks, and so on

    In terms of code volume, option 1 (direct pixel drawing) clearly requires the most code. The CoreImage and GPUImage solutions also need to build the padded pattern image themselves, which is not a small amount of code either. Judged purely on compositing, the CoreGraphics solution is the most economical in code.

    In terms of performance, in local testing CoreGraphics and direct pixel drawing are the fastest, GPUImage is close behind, and CoreImage is the slowest at applying filters.

    In terms of flexibility and variety, GPUImage already provides many filters and is open source, which undoubtedly makes it the best choice at present; with the other approaches you have to encapsulate the corresponding functionality yourself.

    In general it depends on the project's needs. Personally, for ordinary watermarking and image compositing I think using CoreGraphics directly is a good choice, and its functionality can be wrapped up for reuse later.

Reference documentation

  • iOS Image Compression
  • Image-Processing