Images play an important role in mobile development. The UI of early Android apps was relatively simple, but as the system has evolved, interface elements have become richer and users expect a more polished experience, which means designers produce more and larger images. At the same time, as phone hardware has improved (resolution, CPU frequency, memory, and so on), image files have grown too; a single photo is often 3 MB, 4 MB, or even 8 MB. As we all know, Android gives each application process a limited heap when it is created (and each manufacturer can tune that limit). If we load big images into memory without any processing, that memory is quickly exhausted and an OOM crash is not unusual. Careful image handling is therefore essential for a stable application and a friendly user experience. Today we will talk about Bitmap and how to optimize image handling so that our project runs stably and smoothly in production.

Bitmap basics

Bitmap is one of the most important classes in image processing. It can read image file information, perform color transformation, cropping, rotation, scaling and other operations, and save an image file in a specified format.




As shown in the figure, Bitmap is one of the elder statesmen of the SDK: it has been available since API 1, which shows its importance.

Its inheritance relationship needs little explanation; it implements Parcelable, so it can be passed around in memory.



Bitmap has two important internal classes, CompressFormat and Config;

Here’s a look at each of these classes

  • CompressFormat 

CompressFormat is an enum that provides three image compression formats.

  1. JPEG: the Bitmap is compressed with the JPEG algorithm; the compressed file is typically saved as .jpg or .jpeg.
  2. PNG: the Bitmap is compressed with the PNG algorithm; the compressed file is saved as .png, and the compression is lossless.
  3. WEBP: WebP is lossy compression by default. At the same quality, a WebP image is roughly 40% smaller than a JPEG, but encoding takes about 8 times longer than JPEG. Also note this description in the official documentation: "As of Build.VERSION_CODES.Q, a value of 100 results in a file in the lossless WEBP format. Otherwise the file will be in the lossy WEBP format." In other words, from Android 10 on, compressing with quality 100 is lossless and any other quality is lossy.

Bitmap provides many methods for developers to use. Let’s take a look at some of them:



The first method is the compress() method, which takes three parameters

  1. format: the compression format, as explained above 👆;
  2. quality: the compression quality, from 0 to 100; 0 means maximum compression (smallest file, lowest quality) and 100 means best quality. For PNG, which is lossless, this parameter is ignored. For WEBP, a value below 100 means lossy compression and directly affects image quality, while 100 (on Android 10 and above) means lossless compression, so the quality does not change but the file is still very compact;
  3. stream: the output stream the compressed image is written to.

Return value: a boolean; true if the bitmap was successfully compressed into the output stream, in which case the bitmap can be parsed back out of the corresponding input stream with BitmapFactory.



As the official documentation points out, this method may take a long time to compress the image, so it is recommended to call it on a background thread. If you wonder why, look at the source: compress() ends up calling a native method, nativeCompress(), meaning the actual compression work happens in native code.
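For example, a minimal sketch of calling compress() on a background thread might look like this (the thread handling, file name and quality value are my assumptions, not the article's exact code; bitmap is the Bitmap to compress inside an Activity):

new Thread(() -> {
    File file = new File(getCacheDir(), "compressed.webp");
    try (FileOutputStream out = new FileOutputStream(file)) {
        // quality 80 -> lossy WEBP; from Android 10, quality 100 would produce lossless WEBP
        boolean ok = bitmap.compress(Bitmap.CompressFormat.WEBP, 80, out);
        Log.d("Compress", "success=" + ok + ", file size=" + file.length() + " bytes");
    } catch (IOException e) {
        e.printStackTrace();
    }
}).start();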

  • Config

What does Config mean? It describes how each pixel of the bitmap is stored in memory, which affects the transparency and quality of the decoded image.

  1. Bitmap.Config.ALPHA_8: the color information is only the alpha (transparency) channel, 8 bits per pixel;
  2. Bitmap.Config.ARGB_4444: the color information consists of alpha plus R (red), G (green) and B (blue), 4 bits each, 16 bits per pixel in total;
  3. Bitmap.Config.ARGB_8888: the color information consists of alpha plus R (red), G (green) and B (blue), 8 bits each, 32 bits per pixel in total. It is Bitmap's default storage format and the one that takes the most space;
  4. Bitmap.Config.RGB_565: the color information consists of R (red), G (green) and B (blue), with R taking 5 bits, G 6 bits and B 5 bits, 16 bits per pixel in total.

ARGB_8888 is the default bitmap storage format on Android. It consists of 4 channels of 8 bits each, representing the alpha and the RGB color values respectively. In other words, one pixel occupies 4 bytes.

Similarly, with Bitmap.Config.RGB_565 a single pixel takes only 2 bytes, so an image stored in RGB_565 uses half the memory of the default ARGB_8888. We’ll come back to this later.
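To make the difference concrete, here is a rough back-of-the-envelope calculation (an illustration only, using a 1920*1200 image):

// Rough per-pixel cost of the common configs:
// ARGB_8888 -> 4 bytes, RGB_565 -> 2 bytes, ARGB_4444 -> 2 bytes, ALPHA_8 -> 1 byte
int width = 1920, height = 1200;
long argb8888 = (long) width * height * 4;  // 9,216,000 bytes, about 8.8 MB
long rgb565   = (long) width * height * 2;  // 4,608,000 bytes, about 4.4 MB, half of ARGB_8888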

BitmapFactory 

There are many ways to create bitmap objects, including specifying files, streams, and byte arrays.



Bitmaps can be created from byte arrays, file paths, resources, input streams, and so on. Some of these methods take extra parameters, such as the start offset and length when decoding a byte array, and most accept a BitmapFactory.Options configuration object, which is also an important tool for image optimization.

So what is BitmapFactory.Options? It is very important! It is the configuration class for bitmap loading; if you want to optimize bitmap memory you cannot avoid dealing with it. Its internal attributes are as follows.

Here we will only cover a few important attributes related to image optimization

  • inSampleSize: the sampling rate. The default value is 1, meaning no scaling; a value of 2 means the width and height are each halved, so the total size is reduced by a factor of 4;
  • inBitmap: an existing bitmap whose memory should be reused;
  • inJustDecodeBounds: if set to true, the image is not actually decoded and no pixel memory is allocated, but its width and height are returned (see the sketch after this list);
  • inMutable: whether the decoded bitmap is mutable. Set this to true if the bitmap is going to be reused;
  • inDensity: the pixel density the bitmap is decoded at (more on this later);
  • inTargetDensity: the pixel density the bitmap will actually be rendered at, i.e. the screen density of the device (more on this later);
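For instance, a minimal sketch of reading an image's dimensions without allocating its pixel memory (the resource name is an assumption):

BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;                 // decode only the header, no pixel allocation
BitmapFactory.decodeResource(getResources(), R.drawable.girl, options);
int rawWidth = options.outWidth;                   // original width of the encoded image
int rawHeight = options.outHeight;                 // original height of the encoded image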

Ok, Bitmap API let’s stop here, because today we are not mainly to explain its use, in order to lay a foundation for the following knowledge, a brief introduction to Bitmap knowledge, let’s return to the “main topic”.

Analyze Bitmap memory usage

Bitmaps are used to describe the length, width, color and other information of an image. In general, we can use the BitmapFactory to parse images in a path into Bitmap objects.

Once an image is loaded into memory, how much memory does it take up?

  • getAllocationByteCount 
  • getByteCount
  • getRowBytes

What do these three methods mean, and what do they have to do with memory usage? Let’s explain them one by one.

Take a look at this picture



The image above is a 1920*1200 image of about 270 KB saved in the res/drawable-mdpi directory.

Why this picture? Partly to rest your eyes 👀… but mainly, note the size and dimensions of the original image in the red box above; they are the baseline for the compression discussed later.

We get the Bitmap’s byte sizes through Bitmap.getAllocationByteCount(), Bitmap.getByteCount() and Bitmap.getRowBytes() respectively, as follows:
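The decoding and logging code is roughly as follows (a reconstruction of what the screenshot shows; the resource name is assumed, the rest mirrors the log fields below):

Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.girl);
DisplayMetrics metrics = getResources().getDisplayMetrics();
Log.d("AAAA", "density:" + metrics.density + " densityDpi:" + metrics.densityDpi);
Log.d("AAAAA", "height:" + bitmap.getHeight()
        + " width:" + bitmap.getWidth()
        + " allocationByteCount:" + bitmap.getAllocationByteCount()
        + " byteCount:" + bitmap.getByteCount()
        + " rowBytes:" + bitmap.getRowBytes()
        + " density:" + bitmap.getDensity()
        + " mutable:" + bitmap.isMutable());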



The print result is as follows

2020-05-23 10:20:10.926 7669-7669/com.example.practicerecycerviewitemdecoration D/AAAA: density:1.5 densityDpi:240
2020-05-23 10:21:52.422 7669-7669/com.example.practicerecycerviewitemdecoration D/AAAAA: height:1800 width:2880 allocationByteCount:20736000 byteCount:20736000 rowBytes:11520 density:240 mutable:false

You can see that allocationByteCount = byteCount = 20736000. Why is that, and what is the difference between the two?

Here’s what the official document says:



getAllocationByteCount() was added in API 19 and returns the size of the memory allocated to store the Bitmap’s pixels. What does it have to do with getByteCount()? They differ when a bitmap reuses the memory of another, larger bitmap: in that case getByteCount() reports the memory needed by the newly decoded image (not the memory actually held, which is that of the reused bitmap), while getAllocationByteCount() reports the memory actually occupied by the reused bitmap. For an ordinary, non-reused bitmap the two values are equal.

From API 19 on, the system also recommends using getAllocationByteCount(); have a look at the source:



What about getRowBytes()? Keep going; it will become clear in a flash once we open the source of getByteCount():



getByteCount() is the byte size of one row of pixels multiplied by the Bitmap’s height.

We can verify this:

11520 * 1800 = 20736000

The calculation matches exactly. You can think of it like the area of a rectangle, length times width: getRowBytes() is the memory occupied by one row of the bitmap’s pixels, and multiplying it by the height gives the memory occupied by the whole bitmap.

Where does getRowBytes() come from? It is the memory used by one row of pixels, which means: memory of one row = bitmap width * bytes per pixel. Calculated below:

2880 * 4 = 11520

Again the numbers match. At this point things should be starting to click: we have been analyzing memory usage from the inside out using the Bitmap memory APIs, and we arrive at the conclusion:

Memory occupied by Bitmap = Width x height x Bytes of memory occupied by a pixel, as shown below

2880 * 1800 * 4 = 20736000 

Some people may have noticed that the height and width of the bitmap in memory are different from the width and height of the original image. Why is this?

inDensity and inTargetDensity were also mentioned in the Bitmap API section. I will give you the conclusion first and then take you to find the answer in the source.

In fact, while decoding the image BitmapFactory compares the current screen density of the device with the density of the drawable directory the image lives in, and scales the image by that ratio. The formula is:

Scale = Screen density of the current device/screen density of the Drawable directory where the picture is located

Actual memory occupied by Bitmap = width * scale * height * scale * bytes of memory occupied by one pixel. For reference, the screen densities corresponding to the drawable directories in Android are: ldpi = 120, mdpi = 160, hdpi = 240, xhdpi = 320, xxhdpi = 480, xxxhdpi = 640.



Looking back at the question above, why is the original width and height of the image different from the bitmap’s? From the log we printed we know that density=1.5 and densityDpi=240, while the image was placed in drawable-mdpi, whose densityDpi is 160.

Bitmap height = 1.5 (device densityDpi 240 / drawable densityDpi 160) * 1200 = 1800

Bitmap width = 1.5 (device densityDpi 240 / drawable densityDpi 160) * 1920 = 2880

Again the result matches exactly. In other words, the Bitmap’s memory footprint depends on the image’s width and height, on Bitmap.Config, and on the scale factor, and the scale factor depends on the device’s screen density and on the density of the drawable directory the image is in.

If we put the image under drawable-hdpi, will the bitmap memory size change? Is it bigger or smaller?

Let’s print the log and try it out, then verify the result against the rule above. The output is:

2020-05-23 12:01:45.358 9182-9182/com.example.practicerecycerviewitemdecoration D/AAAA: density:1.5 densityDpi:240
2020-05-23 12:01:47.018 9182-9182/com.example.practicerecycerviewitemdecoration D/AAAAA: height:1200 width:1920 allocationByteCount:9216000 byteCount:9216000 rowBytes:7680 density:240 mutable:false

This time the Bitmap’s width and height equal the original’s, and the memory size is 9216000 bytes. The reason is that the drawable densityDpi of the image has changed; by the formula:

1920 * 240/240 * 1200 * 240/240 * 4 = 9216000

9216000 / 20736000 ≈ 0.44, so placing the image in drawable-hdpi uses only about 44% of the memory it used in drawable-mdpi, a saving of more than half.

Therefore, during resource adaptation we should prepare multiple copies of an image for the different drawable directories. On the one hand this keeps the image looking consistent across devices; on the other hand it lets the system load an appropriately sized bitmap and save a lot of memory. Imagine a 640 dpi device with only one copy of the image in mdpi or hdpi: that single image would consume an enormous amount of memory!

So where does this scaling happen? Let’s look at the BitmapFactory source, starting with decodeResource():

public static Bitmap decodeResource(Resources res, int id, Options opts) {
    validate(opts);
    Bitmap bm = null;
    InputStream is = null;
    try {
        final TypedValue value = new TypedValue();
        is = res.openRawResource(id, value);
        bm = decodeResourceStream(res, value, is, null, opts);
    } catch (Exception e) {
        /* do nothing. If the exception happened on open, bm will be null.
           If it happened on close, bm is still valid. */
    } finally {
        try {
            if (is != null) is.close();
        } catch (IOException e) {
            // Ignore
        }
    }
    if (bm == null && opts != null && opts.inBitmap != null) {
        throw new IllegalArgumentException("Problem decoding into existing bitmap");
    }
    return bm;
}

Let’s continue into decodeResourceStream() (screenshot below).



If opts is null, a new Options object is created. If the TypedValue is not null, its density field is used: the TypedValue is the result of resource resolution, so after the image is read from resources it carries the dpi of the drawable directory it came from, and that value is assigned to inDensity. If the value is 0, DENSITY_DEFAULT is used:

public static final int DENSITY_DEFAULT = DENSITY_MEDIUM;

public static final int DENSITY_MEDIUM = 160;

inTargetDensity is set to densityDpi, the screen density of the device.
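Put together, the density handling looks roughly like this (a simplified sketch of the framework method just described, not the verbatim AOSP source):

public static Bitmap decodeResourceStream(Resources res, TypedValue value,
        InputStream is, Rect pad, Options opts) {
    if (opts == null) {
        opts = new Options();
    }
    if (opts.inDensity == 0 && value != null) {
        final int density = value.density;                      // density of the drawable bucket the resource came from
        if (density == TypedValue.DENSITY_DEFAULT) {
            opts.inDensity = DisplayMetrics.DENSITY_DEFAULT;    // fall back to 160
        } else if (density != TypedValue.DENSITY_NONE) {
            opts.inDensity = density;
        }
    }
    if (opts.inTargetDensity == 0 && res != null) {
        opts.inTargetDensity = res.getDisplayMetrics().densityDpi;  // current device density
    }
    return decodeStream(is, pad, opts);
}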

With these two values we can calculate the size of the bitmap. Next look at decodeStream(), which eventually calls nativeDecodeStream(), obviously a native method. So the Bitmap memory calculation is actually done in the native layer; here is the relevant native code:

static jobject doDecode(JNIEnv* env, SkStreamRewindable* stream, jobject padding, jobject options) {
    // Initial scaling coefficient
    float scale = 1.0f;
    if (env->GetBooleanField(options, gOptions_scaledFieldID)) {
        const int density = env->GetIntField(options, gOptions_densityFieldID);
        const int targetDensity = env->GetIntField(options, gOptions_targetDensityFieldID);
        const int screenDensity = env->GetIntField(options, gOptions_screenDensityFieldID);
        if (density != 0 && targetDensity != 0 && density != screenDensity) {
            // The scaling coefficient is the target (device) density / the density of the directory the image is in
            scale = (float) targetDensity / density;
        }
    }

    // The originally decoded Bitmap
    SkBitmap decodingBitmap;
    if (decoder->decode(stream, &decodingBitmap, prefColorType, decodeMode) != SkImageDecoder::kSuccess) {
        return nullObjectReturn("decoder->decode returned false");
    }

    // Width and height of the originally decoded Bitmap
    int scaledWidth = decodingBitmap.width();
    int scaledHeight = decodingBitmap.height();

    // Apply the scaling coefficient to get the scaled width and height
    if (willScale && decodeMode != SkImageDecoder::kDecodeBounds_Mode) {
        scaledWidth = int(scaledWidth * scale + 0.5f);
        scaledHeight = int(scaledHeight * scale + 0.5f);
    }

    // For historical reasons; sx and sy are basically equal to scale
    const float sx = scaledWidth / float(decodingBitmap.width());
    const float sy = scaledHeight / float(decodingBitmap.height());
    canvas.scale(sx, sy);
    canvas.drawARGB(0x00, 0x00, 0x00, 0x00);
    canvas.drawBitmap(decodingBitmap, 0.0f, 0.0f, &paint);

    // now create the java bitmap
    return GraphicsJNI::createBitmap(env, javaAllocator.getStorageObjAndReset(),
            bitmapCreateFlags, ninePatchChunk, ninePatchInsets, -1);
}

The native layer does the processing above: it first gets the image’s original width and height, computes the scale factor according to decodeMode, then scales the canvas and finally draws the Bitmap, completing the load. If you have followed this far you now have a basic understanding of how a Bitmap is loaded into memory and of the underlying scaling strategy. Don’t stop! Moving on, we haven’t even started on bitmap optimization yet…

  • The size of an image within assets

As we know, images in Android can be saved not only in the drawable directories but also in assets, and then loaded via an input stream obtained from AssetManager. How big is a Bitmap loaded this way? Using the same girl.png as above, this time placed in assets, and loading it with the following code:
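A minimal sketch of that code (the asset name matches the example, the logging is assumed), run inside an Activity:

try (InputStream is = getAssets().open("girl.png")) {
    Bitmap bitmap = BitmapFactory.decodeStream(is);
    Log.d("AAAAA", "height:" + bitmap.getHeight()
            + " width:" + bitmap.getWidth()
            + " allocationByteCount:" + bitmap.getAllocationByteCount()
            + " byteCount:" + bitmap.getByteCount()
            + " rowBytes:" + bitmap.getRowBytes());
} catch (IOException e) {
    e.printStackTrace();
}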



The final output is as follows:

2020-05-23 14:32:33.799 11603-11603/com.example.practicerecycerviewitemdecoration D/AAAA: density:1.5 densityDpi:240
2020-05-23 14:32:35.335 11603-11603/com.example.practicerecycerviewitemdecoration D/AAAAA: height:1200 width:1920 allocationByteCount:9216000 byteCount:9216000 rowBytes:7680 density:240 mutable:false

It can be seen that the system does not scale the pictures in assets when they are loaded.

As the example shows, a 270 KB image takes up 9,216,000 bytes, roughly 9 MB, once loaded into memory. Therefore, where appropriate, we need to compress or downsample the images we load.

Modify image loading Config

Switching to a storage format that takes less space per pixel is a quick and effective way to reduce an image’s memory usage. For example, use the inPreferredConfig option of BitmapFactory.Options to set the format to Bitmap.Config.RGB_565. Each pixel then takes two bytes, so the total memory footprint is cut in half. As follows:
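A minimal sketch of the change (the resource name is an assumption):

BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.RGB_565;   // 2 bytes per pixel instead of 4
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.girl, options);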



The following logs are displayed:

2020-05-23 14:37:06.213 11949-11949/com.example.practicerecycerviewitemdecoration D/AAAA: density:1.5 densityDpi:240
2020-05-23 14:37:07.047 11949-11949/com.example.practicerecycerviewitemdecoration D/AAAAA: height:1200 width:1920 allocationByteCount:4608000 // half of the previous 9216000 byteCount:4608000 rowBytes:3840 density:240 mutable:false

This conclusion has already been covered in Bitmap Config, so I won’t go into it here;

In addition, Options has an inSampleSize parameter that performs sampled (downscaled) decoding. It means that along both the width and the height only every inSampleSize-th pixel is kept. For example:
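A minimal sketch (the resource name is an assumption):

BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 2;   // keep every 2nd pixel in each dimension: width and height halved, memory roughly 1/4
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.girl, options);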



Because both width and height are sampled, the final image’s memory shrinks to a quarter. The output is as follows:

2020-05-23 14:42:59.440 12182-12182/com.example.practicerecycerviewitemdecoration D/AAAA: density:1.5 densityDpi:240
2020-05-23 14:43:00.332 12182-12182/com.example.practicerecycerviewitemdecoration D/AAAAA: height:600 width:960 allocationByteCount:2304000 // a quarter of 9216000 byteCount:2304000 rowBytes:3840 density:240 mutable:false


Bitmap reuse

Scene description

Suppose an Android page creates multiple Bitmaps, for example two images A and B, and clicking a button switches the ImageView between them:



You can do this with the following code:
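Something along these lines (a sketch; the image ids and field names are assumptions, not the article's exact code):

private int index = 0;
private final int[] images = {R.drawable.image_a, R.drawable.image_b};

private void switchImage(ImageView imageView) {
    // A brand-new Bitmap is decoded on every click; the previous one becomes garbage for the GC.
    Bitmap bitmap = BitmapFactory.decodeResource(getResources(), images[index % images.length]);
    imageView.setImageBitmap(bitmap);
    index++;
}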



However, each time switchImage is called a new Bitmap object is created through BitmapFactory, and when the method returns the previous Bitmap becomes garbage for the GC. This constant creation and destruction of large memory objects leads to frequent GC (memory jitter). For an interactive product like an Android app, UI stutter caused by frequent GC hurts the user experience. You can watch the memory in the Android Studio Profiler; after switching the image several times it looks like this:



  • Optimize using Options.inBitmap

In fact, after the first image is displayed a Bitmap object already exists in memory, and we can reuse the memory it already occupies. This is the reuse concept described above, and it is done with the Options.inBitmap parameter. The code becomes:
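A sketch of the reuse version (again, names are assumptions and this is only an outline of the idea):

private Bitmap reusableBitmap;

// First red box (roughly): decode the first image once, as a mutable bitmap that can be reused.
private void initReusableBitmap() {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inMutable = true;
    reusableBitmap = BitmapFactory.decodeResource(getResources(), images[0], options);
}

// Second red box (roughly): point inBitmap at the existing bitmap so no new pixel memory is allocated.
private void switchImage(ImageView imageView) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inMutable = true;
    options.inBitmap = reusableBitmap;
    Bitmap bitmap = BitmapFactory.decodeResource(getResources(), images[index % images.length], options);
    imageView.setImageBitmap(bitmap);
    index++;
}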



Explanation:

The first red box creates a Bitmap object that can be reused. In the second red box, options.inBitmap is set to that previously created bitmap, so no new pixel memory needs to be allocated. Re-running the code and watching memory in the Profiler, you can see that no matter how many times we switch images, memory usage stays basically flat.



Let’s look at the log again:

2020-05-23 15:33:46.515 21739-21739/com.example.practicerecycerviewitemdecoration D/AAAAA: height:1200 width:1920 allocationByteCount:9216000 byteCount:9216000 rowBytes:7680 density:240 mutable:true
2020-05-23 15:34:09.031 21739-21739/com.example.practicerecycerviewitemdecoration D/AAAAA: height:322 width:640 allocationByteCount:9216000 byteCount:824320 rowBytes:2560 density:240 mutable:true

The second image clearly reuses the first Bitmap’s 9216000 bytes of memory: its byteCount is 824320, which is not equal to its allocationByteCount. At the beginning of the article we explained the difference between getAllocationByteCount and getByteCount, and this result illustrates it nicely.

As we mentioned earlier, if the first Bitmap is smaller than the second one its memory cannot be reused; trying to do so will crash, as follows:



By default, in onResume we call imageView?.setImageBitmap(bitmap), where this bitmap corresponds to image[1] above and its memory allocation, as printed, is 824320. Then when we click switch, we try to reuse that Bitmap’s memory and decode image[0] into it. We run it and find the result:



It throws an exception straight at us, and we can find the answer in the decodeResource source, as follows:



If bm is null while bitmap reuse (opts.inBitmap) is enabled, an IllegalArgumentException is thrown, hence the crash…

This is because there are limits to how bitmaps can be reused:

Prior to Android 4.4, you could only reuse a Bitmap of exactly the same size. From 4.4 on, you can reuse any Bitmap whose memory is at least as large as the Bitmap to be allocated.

So what we need to do is something along these lines (sketched below, then explained step by step):
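Here is a sketch of that flow (helper code only; the resource ids and the ARGB_8888 assumption match the steps explained below):

// Step 1: decode the small image (image[1]); this is the bitmap we would like to reuse later.
BitmapFactory.Options initOptions = new BitmapFactory.Options();
initOptions.inMutable = true;
Bitmap reusable = BitmapFactory.decodeResource(getResources(), images[1], initOptions);

// Step 2: probe the big image (image[0]) without allocating pixel memory.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeResource(getResources(), images[0], options);

// Step 3: only reuse if the existing bitmap is large enough to hold the new pixels (ARGB_8888 -> 4 bytes per pixel).
long neededBytes = (long) options.outWidth * options.outHeight * 4;
if (reusable != null && reusable.getAllocationByteCount() >= neededBytes) {
    options.inMutable = true;
    options.inBitmap = reusable;
}

// Last step: decode for real; if inBitmap was set, the existing pixel memory is reused.
options.inJustDecodeBounds = false;
Bitmap result = BitmapFactory.decodeResource(getResources(), images[0], options);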



Step 1: Initialize a bitmap from image[1]; once loaded into memory its size is 824320 bytes.

Step 2: Decode the first image (image[0]), whose actual in-memory size would be 9216000 bytes, with inJustDecodeBounds = true so that it is not really loaded into memory; we only want its configuration information (width, height, and so on);

Step 3: Determine whether bitmap can be reused as follows



We read the dimensions of the bitmap to be loaded from the Options, then calculate the memory it will need according to the bitmap storage format, and finally compare that with the candidate bitmap’s size. ARGB_8888 is used by default, hence the ✖️4.

If the memory needed by the bitmap to be loaded is not larger than that of the bitmap being reused, we set:

options.inMutable = true;
options.inBitmap = bitmap;

Last step: load the Bitmap again and reuse it;

In addition to the inBitmap parameter, I also set Options.inMutable to true before decoding. If this flag is not set to true, BitmapFactory will not reuse the Bitmap’s memory and will print a warning:

W/BitmapFactory: Unable to reuse an immutable bitmap as an image decoder target

Bitmap caching

When a large number of images need to be displayed on the interface at the same time, such as ListView, RecyclerView, etc., a Bitmap may be loaded and destroyed many times in a short time due to the user’s continuous sliding up and down. In this case, by using appropriate caching, the GC frequency can be effectively slowed down to ensure the efficiency of image loading and improve the responsiveness and fluency of the interface.

The most common cache method is LruCache, which can be used as follows:
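A minimal sketch of such a cache (the 20 MB limit matches the figure; the key scheme is an assumption):

int cacheSize = 20 * 1024 * 1024;   // 20 MB, in bytes
LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        // Tell the cache how many bytes each entry costs; otherwise it only counts entries.
        return value.getAllocationByteCount();
    }
};

// Store and retrieve:
bitmapCache.put("girl", bitmap);
Bitmap cached = bitmapCache.get("girl");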



Explanation:

In the figure, the maximum size of the LruCache is 20 MB. If the cached bitmaps exceed 20 MB, LruCache evicts the excess bitmaps according to its internal LRU policy.

The sizeOf() method in the figure specifies the size of each inserted Bitmap. When we insert data into an LruCache, the cache does not know how much memory each object occupies, so we need to specify this manually, and the calculation differs depending on the type of data being cached.

Finally, there are the put and get operations.

That’s all for today: we covered the relevant Bitmap basics and the main kinds of Bitmap optimization. There are more practical problems than the ones handled here, for example displaying long screenshots; if you load such a “big image” without special handling the app can easily crash, and you need to load it region by region. I won’t go into that here; you can refer to the official documentation to learn more.

If you found this article helpful, please give it a like, and see you in the next one!