This article is based on SDWebImage 5.5.2. One reason for rereading the source was that the API keeps iterating and many structures differ from earlier versions, so I wanted to keep notes. For the overall analysis, see the previous article: Source code analysis SDWebImage 5.5.2.

This article is mainly about its plug-in system: how it easily supports a variety of image formats through plug-ins, along with system image loading, rich-text URL loading, and third-party integrations such as Lottie, YYImage, YYCache, and FLAnimatedImage.

For the full list of supported plugins and the details, see the official documentation (Document Address).

Coder Plugins

Let’s start with the WebP coder. Coder was only mentioned in passing in the last article, so this time I will introduce it in a little more detail. WebP is an image compression format proposed by Google. SD already supported it back in October 2013 (tag 3.5) through the subspec mechanism provided by CocoaPods. This lasted until the final 4.x version and was removed after the 5.x protocolization. Here’s how it worked:

s.subspec 'WebP' do |webp|
    webp.source_files = 'SDWebImage/UIImage+WebP.{h,m}'
    webp.xcconfig = { 'GCC_PREPROCESSOR_DEFINITIONS' => '$(inherited) SD_WEBP=1' }
    webp.dependency 'SDWebImage/Core'
    webp.dependency 'libwebp'
end

If you are not familiar with subspec: it is a subset of the enclosing podspec, with its own source_files, dependency, resource_bundle and so on; the supported configuration keys are much the same as a podspec's.

To enable WebP we use pod ‘SDWebImage/WebP’ or pod ‘SDWebImage’, subspecs: [‘WebP’]. As the xcconfig above shows, this adds SD_WEBP=1 to the GCC_PREPROCESSOR_DEFINITIONS preprocessor macros, and the SD_WEBP macro is then used internally to achieve conditional compilation; a rough sketch follows.
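
Purely as an illustration, the macro-guarded approach of the 4.x era looked roughly like this (the helper name below is made up for illustration; it is not the exact 4.x source):

#if SD_WEBP
#import "UIImage+WebP.h"
#endif

// Illustrative only: decode data, special-casing WebP when the macro is defined.
static UIImage * _Nullable DecodeImageData(NSData * _Nonnull data) {
#if SD_WEBP
    if ([NSData sd_imageFormatForImageData:data] == SDImageFormatWebP) {
        return [UIImage sd_imageWithWebPData:data];
    }
#endif
    return [[UIImage alloc] initWithData:data];
}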

This is easy when we only need to support one image format, and painful to control with macros as more formats need to be supported:

  • SD_WEBP-style macros scattered everywhere are hard to maintain;
  • When a new format needs to be supported, the core code has to change, hurting stability;
  • There is no global identifier for the image format currently being processed, so the API is vague and inconvenient.

At this point, the SDImageCoder protocol of 5.x was introduced.

The whole SDWebImageWebPCoder pod contains only two files, of which UIImage+WebP.h is just a thin wrapper over SDImageWebPCoder's decodedImageWithData. SDImageWebPCoder itself is declared as follows:

@interface SDImageWebPCoder : NSObject <SDProgressiveImageCoder, SDAnimatedImageCoder>

@property (nonatomic, class, readonly, nonnull) SDImageWebPCoder *sharedCoder;

@end

Simple enough. The main things to look at are the SDImageCoder and SDAnimatedImageCoder protocols:

@protocol SDImageCoder <NSObject>

@required
#pragma mark - Decoding
- (BOOL)canDecodeFromData:(nullable NSData *)data;
// If animated images (e.g. GIF) are supported, frames can be assembled via `+[SDImageCoderHelper animatedImageWithFrames:]`
- (nullable UIImage *)decodedImageWithData:(nullable NSData *)data options:(nullable SDImageCoderOptions *)options;

#pragma mark - Encoding
- (BOOL)canEncodeToFormat:(SDImageFormat)format NS_SWIFT_NAME(canEncode(to:));
// If animated images are supported, `+[SDImageCoderHelper framesFromAnimatedImage:]` can be used to pull out the frames before encoding
- (nullable NSData *)encodedDataWithImage:(nullable UIImage *)image format:(SDImageFormat)format options:(nullable SDImageCoderOptions *)options;

@end

@protocol SDAnimatedImageCoder <SDImageCoder, SDAnimatedImageProvider>

@required
- (nullable instancetype)initWithAnimatedImageData:(nullable NSData *)data options:(nullable SDImageCoderOptions *)options;

@end

The SDProgressiveImageCoder protocol is not listed here; it is much the same. The internal implementation revolves around these methods.
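
For reference, this is how such a coder plugin is wired in: you register it with SD's global coders manager, typically at app launch (this is the documented usage of SDWebImageWebPCoder; the imageView below is assumed to already exist):

// Register the WebP coder plugin with the global coders manager,
// e.g. in application:didFinishLaunchingWithOptions:.
SDImageWebPCoder *webPCoder = [SDImageWebPCoder sharedCoder];
[[SDImageCodersManager sharedManager] addCoder:webPCoder];

// A WebP URL can then be loaded like any other image:
[imageView sd_setImageWithURL:[NSURL URLWithString:@"https://www.example.com/image.webp"]];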

ImageContentType

Before we talk about the implementation, one question needs answering: how do we tell the image format from raw image data? Let's go back to SD's NSData+ImageContentType.h to see which image formats it supports:

typedef NSInteger SDImageFormat NS_TYPED_EXTENSIBLE_ENUM;
static const SDImageFormat SDImageFormatUndefined = -1;
static const SDImageFormat SDImageFormatJPEG      = 0;
static const SDImageFormat SDImageFormatPNG       = 1;
static const SDImageFormat SDImageFormatGIF       = 2;
static const SDImageFormat SDImageFormatTIFF      = 3;
static const SDImageFormat SDImageFormatWebP      = 4;
static const SDImageFormat SDImageFormatHEIC      = 5;
static const SDImageFormat SDImageFormatHEIF      = 6;
static const SDImageFormat SDImageFormatPDF       = 7;
static const SDImageFormat SDImageFormatSVG       = 8;

This is not quite the usual NS_ENUM(NSInteger, XXX) {}. NS_TYPED_EXTENSIBLE_ENUM may be new to you, but the system's UILayoutPriority is also an extensible enumeration:

typedef float UILayoutPriority NS_TYPED_EXTENSIBLE_ENUM;
static const UILayoutPriority UILayoutPriorityRequired API_AVAILABLE(ios(6.0)) = 1000;
...

Why NS_TYPED_ENUM is the future

The extensible enum is simple to use and yields a Swift-friendly API; extensibility is its defining feature. Where does the imagination come in? Combined with the protocol-based coders above, it gives you plenty of room: you can support any image format you want without changing SD's core code, truly non-invasively. Doesn't that feel a bit like protocol-oriented programming? A small illustration follows.
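
As an illustration, a hypothetical third-party coder could declare its own format value outside SD (the name and raw value below are made up for illustration; real format plugins follow the same pattern):

// Hypothetical: a third-party plugin extends SDImageFormat with its own value,
// without touching SD's core enum. Name and raw value are illustrative only.
static const SDImageFormat SDImageFormatBPG = 15;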

Another macro mentioned in that article, NS_STRING_ENUM, is also used in SD:

typedef NSString * SDImageCoderOption NS_STRING_ENUM;
FOUNDATION_EXPORT SDImageCoderOption _Nonnull const SDImageCoderDecodeFirstFrameOnly;
FOUNDATION_EXPORT SDImageCoderOption _Nonnull const SDImageCoderDecodeScaleFactor;
...

Going back to our NSData+ImageContentType.h, there are three methods:

@interface NSData (ImageContentType)

/// Obtain the image format from image data
+ (SDImageFormat)sd_imageFormatForImageData:(nullable NSData *)data;

/// Convert an image format to a UTType
+ (nonnull CFStringRef)sd_UTTypeFromImageFormat:(SDImageFormat)format CF_RETURNS_NOT_RETAINED NS_SWIFT_NAME(sd_UTType(from:));

/// Convert a UTType to an image format
+ (SDImageFormat)sd_imageFormatFromUTType:(nonnull CFStringRef)UTType;

@end

In Mac OS X 10.4, Apple introduced Uniform Type Identifiers (UTIs) to describe text, image, audio, video and other content types. The official site has a detailed list of the supported formats and the UTType functions, also summarized on the wiki. UTType currently has no built-in identifiers for WebP or SVG, but it is extensible; essentially a UTType is just a plain string. WebP and SVG are defined in SD as follows:

// Currently Image/IO does not support WebP
#define kSDUTTypeWebP ((__bridge CFStringRef)@"public.webp")
#define kSVGTagEnd @"</svg>"

Now we can answer the earlier question: how do you identify the image format from image data? The answer is the file signature (see the FILE SIGNATURES TABLE):

In computing, a file signature is data used to identify or verify the contents of a file. In particular, it may refer to:

  • File magic number: bytes within a file used to identify the format of the file; generally a short sequence of bytes (most are 2-4 bytes long) placed at the beginning of the file; see list of file signatures
  • File checksum or more generally the result of a hash function over the file contents: data used to verify the integrity of the file contents, generally against transmission errors or malicious attacks. The signature can be included at the end of the file or in a separate file.

We know an image can be described in two ways: vector graphics or raster graphics (bitmaps). The screen itself is a bitmap containing a great deal of pixel information; to improve transmission and storage efficiency, certain algorithms are used to compress that pixel information, and the formats listed above correspond to different compression algorithms. The file header of a JPEG file is FF D8 in hexadecimal, while a WebP file starts with a RIFF header and carries the 57 45 42 50 ("WEBP") marker at offset 8:

52 49 46 46 xx xx xx xx   RIFF ....   // xx xx xx xx is the file size
57 45 42 50               WEBP
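
Below is a minimal sketch of that signature-based sniffing, modeled on SD's +[NSData sd_imageFormatForImageData:] (simplified; the real method covers more formats and edge cases):

// Sketch only: identify a few formats from the file's magic number.
static SDImageFormat SniffImageFormat(NSData * _Nullable data) {
    if (data.length == 0) {
        return SDImageFormatUndefined;
    }
    uint8_t c;
    [data getBytes:&c length:1];
    switch (c) {
        case 0xFF: return SDImageFormatJPEG; // FF D8 ...
        case 0x89: return SDImageFormatPNG;  // 89 50 4E 47 ("\x89PNG")
        case 0x47: return SDImageFormatGIF;  // 47 49 46 ("GIF")
        case 0x52: {                         // "R" -- possibly a RIFF container
            if (data.length >= 12) {
                // Bytes 0-3 are "RIFF" and bytes 8-11 are "WEBP" for a WebP file.
                NSString *test = [[NSString alloc] initWithData:[data subdataWithRange:NSMakeRange(0, 12)]
                                                       encoding:NSASCIIStringEncoding];
                if ([test hasPrefix:@"RIFF"] && [test hasSuffix:@"WEBP"]) {
                    return SDImageFormatWebP;
                }
            }
            break;
        }
        default: break;
    }
    return SDImageFormatUndefined;
}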

The WebP structure needs to be introduced here:

WebP is an image format that uses either (i) the VP8 key frame encoding to compress image data in a lossy way, or (ii) the WebP lossless encoding (and possibly other encodings in the future). These encoding schemes should make it more efficient than currently used formats. It is optimized for fast image transfer over the network (e.g., for websites). The WebP format has feature parity (color profile, metadata, animation etc) with other formats as well. This document describes the structure of a WebP file.

WebP is an image container format that can combine several compression methods (lossless compression, lossy VP8 compression), an ICC color profile, metadata, and multiple frames (animation). The WebP format is based on the Resource Interchange File Format (RIFF), a generic container file format. I won't go into more detail here.

SDWebImageWebPCoder

Let’s take a look at the main private instance variables of SDImageWebPCoder:

@implementation SDImageWebPCoder {
    WebPIDecoder *_idec;                   // Incremental decoder (progressive decoding)
    WebPDemuxer *_demux;                   // Demuxer that splits the image data apart
    WebPData *_webpdata;                   // Copied data for the progressive animation demuxer
    NSData *_imageData;
    NSUInteger _loopCount;                 // Number of animation loops
    NSUInteger _frameCount;
    NSArray<SDWebPCoderFrame *> *_frames;  // Animation frame data
    CGContextRef _canvas;                  // Drawing canvas
    CGColorSpaceRef _colorSpace;           // ICC color space of the image
    BOOL _hasAlpha;
    CGFloat _canvasWidth;                  // Canvas width
    CGFloat _canvasHeight;                 // Canvas height
    NSUInteger _currentBlendIndex;         // Index of the frame currently blended onto the canvas
}

A quick note on the word demux: it is short for demultiplexer, the component that splits the container apart into its individual parts (frames, color profile, metadata and so on); otherwise the name is hard to make sense of.

All of the above is obtained by parsing the WebP data; some values also come from SDImageCoderOptions:

BOOL decodeFirstFrame = [options[SDImageCoderDecodeFirstFrameOnly] boolValue];
NSNumber *scaleFactor = options[SDImageCoderDecodeScaleFactor];
NSValue *thumbnailSizeValue = options[SDImageCoderDecodeThumbnailPixelSize];
NSNumber *preserveAspectRatioValue = options[SDImageCoderDecodePreserveAspectRatio];

DecodedImage

First, configuration information is gathered via WebPData, WebPDemuxer, WebPIterator, CGColorSpaceRef and the coder options:

WebPData webpData;
WebPDataInit(&webpData);
webpData.bytes = data.bytes;
webpData.size = data.length;
WebPDemuxer *demuxer = WebPDemux(&webpData);

uint32_t flags = WebPDemuxGetI(demuxer, WEBP_FF_FORMAT_FLAGS);
BOOL hasAnimation = flags & ANIMATION_FLAG;

// Decoder options: scale, thumbnailSize, preserveAspectRatio, decodeFirstFrame...

// For an animated WebP image; note that libwebp's frame index starts with 1
WebPIterator iter;
if (!WebPDemuxGetFrame(demuxer, 1, &iter)) {
    WebPDemuxReleaseIterator(&iter);
    WebPDemuxDelete(demuxer);
    return nil;
}

CGColorSpaceRef colorSpace = [self sd_createColorSpaceWithDemuxer:demuxer];

The colorSpace is created by reading the ICC color profile embedded in the WebP data; if there is none, [SDImageCoderHelper colorSpaceGetDeviceRGB] is used instead.
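
A rough sketch of that logic, assuming the demuxer chunk APIs behave as documented (the real sd_createColorSpaceWithDemuxer: performs extra checks, e.g. that the profile's color model is actually RGB):

// Sketch only: build a color space from the embedded ICC profile, if any.
CGColorSpaceRef colorSpace = NULL;
uint32_t flags = WebPDemuxGetI(demuxer, WEBP_FF_FORMAT_FLAGS);
if (flags & ICCP_FLAG) {
    WebPChunkIterator chunkIter;
    if (WebPDemuxGetChunkIterator(demuxer, "ICCP", 1, &chunkIter)) {
        NSData *profileData = [NSData dataWithBytes:chunkIter.chunk.bytes length:chunkIter.chunk.size];
        colorSpace = CGColorSpaceCreateWithICCProfile((__bridge CFDataRef)profileData);
        WebPDemuxReleaseChunkIterator(&chunkIter);
    }
}
if (!colorSpace) {
    // Fall back to the shared device RGB color space.
    colorSpace = CGColorSpaceRetain([SDImageCoderHelper colorSpaceGetDeviceRGB]);
}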

After obtaining the thumbnail size, it is compared with WebP's canvas size to decide whether a scaled-down thumbnail is needed:

int canvasWidth = WebPDemuxGetI(demuxer, WEBP_FF_CANVAS_WIDTH);
int canvasHeight = WebPDemuxGetI(demuxer, WEBP_FF_CANVAS_HEIGHT);
CGSize scaledSize = SDCalculateThumbnailSize(CGSizeMake(canvasWidth, canvasHeight), preserveAspectRatio, thumbnailSize);
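
As a rough worked example (assuming SDCalculateThumbnailSize scales down to fit inside the requested pixel size): a 2000×1000 canvas with a 400×400 thumbnailSize and preserveAspectRatio = YES comes out at about 400×200, whereas with preserveAspectRatio = NO the result is simply 400×400; a canvas already smaller than the thumbnail size is left untouched.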

AnimatedImage

SD introduced SDAnimatedImage (there is a protocol of the same name too) in 5.x, which is designed for animated images; since WebP supports animation, the decoding here distinguishes between animated and static images.

For a static image, [self sd_createWebpImageWithData:colorSpace:scaledSize:] generates a CGImageRef, from which the final image is created with sd_imageFormat = SDImageFormatWebP.

If it is an animated image, a CGBitmapInfo is first built to describe the bitmap layout:

BOOL hasAlpha = config.input.has_alpha;
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;

The first thing worth noting is the use of kCGBitmapByteOrder32Host. We know the iPhone is little-endian, so kCGBitmapByteOrder32Little seems like the obvious choice, but Host is a macro provided by the system precisely to shield us from endianness differences. It is defined as:

#ifdef __BIG_ENDIAN__
    #define kCGBitmapByteOrder16Host kCGBitmapByteOrder16Big
    #define kCGBitmapByteOrder32Host kCGBitmapByteOrder32Big
#else /* Little endian. */
    #define kCGBitmapByteOrder16Host kCGBitmapByteOrder16Little
    #define kCGBitmapByteOrder32Host kCGBitmapByteOrder32Little
#endif

To put it simply, Apple GPUs only support 32-bit color formats; anything else costs CPU time for color-format conversion. For details, see WWDC 2014 Session 419.

The remaining alpha information is controlled by CGImageAlphaInfo, which is defined as follows:

typedef CF_ENUM(uint32_t, CGImageAlphaInfo) {
    kCGImageAlphaNone,               /* For example, RGB. */
    kCGImageAlphaPremultipliedLast,  /* For example, premultiplied RGBA */
    kCGImageAlphaPremultipliedFirst, /* For example, premultiplied ARGB */
    kCGImageAlphaLast,               /* For example, non-premultiplied RGBA */
    kCGImageAlphaFirst,              /* For example, non-premultiplied ARGB */
    kCGImageAlphaNoneSkipLast,       /* For example, RBGX. */
    kCGImageAlphaNoneSkipFirst,      /* For example, XRGB. */
    kCGImageAlphaOnly                /* No color data, alpha data only */
};

Here is a pretty good explanation of what premultiplied alpha does. AlphaInfo provides information in three areas:

  • Whether there is an alpha value;
  • If there is an alpha value, whether it comes first or last (ARGB vs RGBA);
  • If there is an alpha value, whether each color component has already been multiplied by it, which saves the three multiplications at compositing time (a quick worked example follows this list).
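
A quick worked example: a half-transparent orange pixel (R, G, B, A) = (255, 128, 0, 0.5) is stored premultiplied as (128, 64, 0, 0.5), i.e. each color component has already been multiplied by the alpha, so compositing it over a background can skip those three per-pixel multiplications.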

There is a concrete discussion of the choice here; the conclusion is that when the image contains no alpha, kCGImageAlphaNoneSkipFirst is used, otherwise kCGImageAlphaPremultipliedFirst.

This is followed by creating the canvas and using the iterator to draw each frame:

CGContextRef canvas = CGBitmapContextCreate(NULL, canvasWidth, canvasHeight, 8, 0, [SDImageCoderHelper colorSpaceGetDeviceRGB], bitmapInfo);
if (!canvas) {
    WebPDemuxDelete(demuxer);
    CGColorSpaceRelease(colorSpace);
    return nil;
}

NSMutableArray<SDImageFrame *> *frames = [NSMutableArray array];

do {
    @autoreleasepool {
        CGImageRef imageRef = [self sd_drawnWebpImageWithCanvas:canvas iterator:iter colorSpace:colorSpace scaledSize:scaledSize];
        if (!imageRef) {
            continue;
        }
#if SD_UIKIT || SD_WATCH
        UIImage *image = [[UIImage alloc] initWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
#else
        UIImage *image = [[UIImage alloc] initWithCGImage:imageRef scale:scale orientation:kCGImagePropertyOrientationUp];
#endif
        CGImageRelease(imageRef);

        NSTimeInterval duration = [self sd_frameDurationWithIterator:iter];
        SDImageFrame *frame = [SDImageFrame frameWithImage:image duration:duration];
        [frames addObject:frame];
    }
} while (WebPDemuxNextFrame(&iter));

Finally, the iterator, demuxer, canvas and colorSpace are released, and the decoded animatedImage is produced:

UIImage *animatedImage = [SDImageCoderHelper animatedImageWithFrames:frames];
animatedImage.sd_imageLoopCount = loopCount;
animatedImage.sd_imageFormat = SDImageFormatWebP;

DrawnWebpImage

The drawnWebpImage method generates each frame of the animation. Internally, createWebpImage produces the image for the current frame, which is then blended onto the canvas (at canvas size) and finally scaled to the target size. The blend code:

BOOL shouldBlend = iter.blend_method == WEBP_MUX_BLEND;
// If not blend, cover the target image rect. (firstly clear then draw)
if (!shouldBlend) {
    CGContextClearRect(canvas, imageRect);
}
CGContextDrawImage(canvas, imageRect, imageRef);
CGImageRef newImageRef = CGBitmapContextCreateImage(canvas);

CGImageRelease(imageRef);

if (iter.dispose_method == WEBP_MUX_DISPOSE_BACKGROUND) {
    CGContextClearRect(canvas, imageRect);
}

blend_method is the blend mode specified for the current frame. If no blending is required, the frame's target rect on the canvas is cleared before the image is drawn.

// Blend operation (animation only). Indicates how transparent pixels of the
// current frame are blended with those of the previous canvas.
typedef enum WebPMuxAnimBlend {
  WEBP_MUX_BLEND,              // Blend.
  WEBP_MUX_NO_BLEND            // Do not blend.
} WebPMuxAnimBlend;

There is also dispose_method, which determines whether the area used by the current frame is cleared before the next frame is rendered:

// Dispose method (animation only). Indicates how the area used by the current
// frame is to be treated before rendering the next frame on the canvas.
typedef enum WebPMuxAnimDispose {
  WEBP_MUX_DISPOSE_NONE,       // Do not dispose.
  WEBP_MUX_DISPOSE_BACKGROUND  // Dispose to background color.
} WebPMuxAnimDispose;

CreateWebpImage

Creating the image first initializes a WebPDecoderConfig and checks the integrity of the WebP data:

WebPDecoderConfig config;
if (!WebPInitDecoderConfig(&config)) {
    return nil;
}

// Check WebP image integrity
if (WebPGetFeatures(webpData.bytes, webpData.size, &config.input) != VP8_STATUS_OK) {
    return nil;
}

WebPDecoderConfig is declared as follows:

// Main object storing the configuration for advanced decoding.
struct WebPDecoderConfig {
  WebPBitstreamFeatures input;  // Immutable bitstream features (optional)
  WebPDecBuffer output;         // Output buffer (can point to external mem)
  WebPDecoderOptions options;   // Decoding options
};

For WebPBitstreamFeatures, WebPDecBuffer, and WebPDecoderOptions, see the WebP documentation. SD makes the following settings on config:

config.options.use_threads = 1;        // Enable multi-threaded decoding
config.output.colorspace = MODE_bgrA;  // Output color space: premultiplied BGRA order

// Use scaling for thumbnail
if (scaledSize.width != 0 && scaledSize.height != 0) {
    config.options.use_scaling = 1;
    config.options.scaled_width = scaledSize.width;
    config.options.scaled_height = scaledSize.height;
}

MODE_bgrA is one of the color spaces defined by WEBP_CSP_MODE.
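
For reference, the relevant values look roughly like this (abridged from libwebp's decode.h; ordering and raw values are omitted here, and the lowercase color letters mark the premultiplied-alpha variants):

// Abridged; see libwebp's decode.h for the full, authoritative definition.
typedef enum WEBP_CSP_MODE {
  MODE_RGB, MODE_RGBA, MODE_BGR, MODE_BGRA, MODE_ARGB,
  // Premultiplied-alpha variants (lowercase rgb/bgr, uppercase A):
  MODE_rgbA, MODE_bgrA, MODE_Argb,
  // YUV modes:
  MODE_YUV, MODE_YUVA
  // ...
} WEBP_CSP_MODE;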

With the decode config and CGBitmapInfo set up, the data is decoded and a CGImageRef is created:

// Decode the WebP image data into a RGBA value array
if (WebPDecode(webpData.bytes, webpData.size, &config) != VP8_STATUS_OK) {
    return nil;
}

// Construct a UIImage from the decoded RGBA value array
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, config.output.u.RGBA.rgba, config.output.u.RGBA.size, FreeImageData);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = config.output.u.RGBA.stride;
size_t width = config.output.width;
size_t height = config.output.height;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

CGDataProviderRelease(provider);

FreeImageData is passed in as the release callback when the data provider is created in the first step, ensuring the pixel data is freed promptly:

static void FreeImageData(void *info, const void *data, size_t size) {
    free((void *)data);
}

The info parameter here is simply the NULL passed as the first argument to CGDataProviderCreateWithData; it can be any pointer you want forwarded to the callback.

At this point, the core WebP decode implementation is essentially complete; the decode logic in the progressive coder and SDAnimatedImageCoder is similar. A slight difference is that for animated images, the frame data obtained while stepping through the WebPIterator is stored in SDWebPCoderFrame objects. SDWebPCoderFrame is essentially a clone of WebPIterator:

@interface SDWebPCoderFrame : NSObject

@property (nonatomic, assign) NSUInteger index; // Frame index (zero based)
@property (nonatomic, assign) NSTimeInterval duration; // Frame duration in seconds
@property (nonatomic, assign) NSUInteger width; // Frame width
@property (nonatomic, assign) NSUInteger height; // Frame height
@property (nonatomic, assign) NSUInteger offsetX; // Frame origin.x in canvas (left-bottom based)
@property (nonatomic, assign) NSUInteger offsetY; // Frame origin.y in canvas (left-bottom based)
@property (nonatomic, assign) BOOL hasAlpha; // Whether frame contains alpha
@property (nonatomic, assign) BOOL isFullSize; // Whether frame size is equal to canvas size
@property (nonatomic, assign) BOOL shouldBlend; // Frame blend operation
@property (nonatomic, assign) BOOL shouldDispose; // Frame dispose method
@property (nonatomic, assign) NSUInteger blendFromIndex; // The nearest previous frame index which blend mode is WEBP_MUX_BLEND

@end

Refer to the WebP documentation for WebPIterator.

EncodedImage

Encoding is basically the reverse operation of decoding; I won't go into the details here.
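
For a flavor of the reverse direction, here is a heavily simplified sketch of single-frame WebP encoding with libwebp (this is not SD's exact encode path, which also handles animation, bitmap unpacking, stride and the various encode options):

#import "webp/encode.h"  // header path depends on how libwebp is integrated

// Sketch only: encode a non-premultiplied RGBA buffer (width*height*4 bytes) to WebP.
static NSData * _Nullable EncodeRGBAToWebP(const uint8_t *rgba, int width, int height, float quality) {
    WebPConfig config;
    if (!WebPConfigPreset(&config, WEBP_PRESET_DEFAULT, quality)) {
        return nil;
    }
    WebPPicture picture;
    if (!WebPPictureInit(&picture)) {
        return nil;
    }
    picture.use_argb = 1;
    picture.width = width;
    picture.height = height;
    if (!WebPPictureImportRGBA(&picture, rgba, width * 4)) {
        WebPPictureFree(&picture);
        return nil;
    }
    // Collect the encoded bytes in memory.
    WebPMemoryWriter writer;
    WebPMemoryWriterInit(&writer);
    picture.writer = WebPMemoryWrite;
    picture.custom_ptr = &writer;
    int ok = WebPEncode(&config, &picture);
    WebPPictureFree(&picture);
    if (!ok) {
        WebPMemoryWriterClear(&writer);
        return nil;
    }
    NSData *data = [NSData dataWithBytes:writer.mem length:writer.size];
    WebPMemoryWriterClear(&writer);
    return data;
}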

Conclusion

This article describes how SD's coder plug-ins work, the image formats SD currently supports, and how new image formats can be added and integrated into SD's overall processing flow. It focuses on the implementation of WebP decoding and the related APIs, without going too deep into WebP's internals.