This time, a guy with 5 years of experience came for an interview. Here is an excerpt of the conversation:

Interviewer: I see you mention Glide on your resume. Why use Glide rather than other image loading frameworks?
Candidate: Glide is simple to use, with chained calls; it's convenient, so I've always used it.
Interviewer: Have you read its source code? What advantages does it have over other image frameworks?
Interviewer: Suppose you are not allowed to use any open source library and have to write an image loading framework yourself. What aspects would you consider? Talk through the general idea.
Candidate: Uh... compress the images.
Interviewer: Anything else?
Candidate: Well, I've never written one.

When it comes to image loading frameworks, Glide is the one everyone knows best, but I do not recommend writing "familiar with Glide" on a resume unless you have read its source code or have been involved in Glide's development and maintenance.

In interviews, image loading questions come up fairly often, although the way they are asked varies, for example:

  • If Glide is on your resume, you will be asked about Glide's design and how it compares with similar frameworks;
  • You may be asked to design an image loading framework yourself and talk through the idea;
  • You may be given a scenario, such as loading one or many large images from the network, and asked what you would do.

Let's get into the article with these questions in mind ~

1. Talk about Glide

1.1 How easy is Glide to use?

Thanks to its good reputation, many developers use Glide directly in their projects, and the basic usage is quite simple.

Github.com/bumptech/gl…

1. Add dependencies:

implementation 'com.github.bumptech.glide:glide:4.10.0'
annotationProcessor 'com.github.bumptech.glide:compiler:4.10.0'

2. Add network permissions

<uses-permission android:name="android.permission.INTERNET" />

3. One line of code to load the image into an ImageView

Glide.with(this).load(imgUrl).into(mIv1);

Advanced usage: setting request options

RequestOptions options = new RequestOptions()
        .placeholder(R.drawable.ic_launcher_background)
        .error(R.mipmap.ic_launcher)
        .diskCacheStrategy(DiskCacheStrategy.NONE)
        .override(200, 100);

Glide.with(this)
        .load(imgUrl)
        .apply(options)
        .into(mIv2);

Loading images with Glide is so easy that many developers no longer spend time on image handling themselves: all of the loading work is handed over to Glide, and in the process it becomes easy to forget the underlying knowledge of image processing.

1.2 Why Glide?

From my interviews I have noticed a pattern: candidates who write "familiar with Glide" are usually only familiar with how to use it. Many with 3-6 years of experience cannot say anything beyond "Glide is easy to use", and do not know what advantages or disadvantages Glide has compared with other image frameworks such as Fresco.

First, among the popular image loading frameworks, you can compare Glide with Fresco on points like these:

Glide:

  • Caching of multiple image formats, suitable for more content representations (such as Gif, WebP, thumbnail, Video)
  • Lifecycle integration (manages image loading requests based on the lifecycle of the Activity or Fragment)
  • Efficient processing of Bitmap (Reuse and active recovery of Bitmap, reducing system recovery pressure)
  • Efficient cache strategy, flexible (Picasso only caches raw size images, Glide caches multiple sizes), fast loading and low memory overhead (the default Bitmap format is different, so the memory overhead is half that of Picasso)

Fresco:

  • Its biggest advantage is bitmap handling on systems below 5.0 (down to 2.3): on those systems, Fresco places image data in a special memory region (the Ashmem region)
  • This greatly reduces OOMs (image data is handled in the lower native layer and no longer consumes the app's Java heap)
  • Suitable for scenarios that need high-performance loading of large numbers of images

For ordinary apps, Glide is good enough. For image-heavy apps, Fresco is more suitable because it helps prevent OOM when large numbers of images are loaded. That is not to say Glide causes OOM: Glide's default memory cache is an LruCache, so memory does not grow without bound.

2. If you were to write your own image loading framework, what would you consider?

First, look at the essential requirements of an image loading framework:

  • Asynchronous loading: thread pools
  • Thread switching: Handler, no controversy there
  • Caching: LruCache, DiskLruCache
  • Preventing OOM: soft references, LruCache, image compression, where Bitmap pixel data is stored
  • Memory leaks: reference the ImageView properly, manage lifecycles
  • List sliding problems: misplaced images, too many queued tasks

Of course, there are some non-essential requirements, such as loading animations.

2.1 Asynchronous Loading:

Thread pools, how many?

There are three levels of caching: memory cache, hard disk, and network.

Reading from memory and from disk is fast, while network requests block, so the memory and disk reads can share one thread pool and the network needs another; alternatively, the network loads can use OkHttp's built-in thread pool.

The read disk and the read network need to be handled in separate thread pools, so two thread pools are appropriate.
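
As a rough illustration of that split, here is a minimal sketch (class and field names are made up, not from Glide): one executor for memory/disk cache reads and a separate one for network loads.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class LoaderExecutors {
    // Disk reads are I/O bound but fast; a small fixed pool is enough.
    public static final ExecutorService DISK_CACHE_EXECUTOR = Executors.newFixedThreadPool(2);

    // Network loads block for much longer, so they get their own pool.
    public static final ExecutorService SOURCE_EXECUTOR = Executors.newFixedThreadPool(4);

    private LoaderExecutors() {}
}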

Glide surely also uses multiple thread pools; take a look at the source code to confirm:

public final class GlideBuilder {
    ...
    private GlideExecutor sourceExecutor;    // Thread pool for loading source data (including network loads)
    private GlideExecutor diskCacheExecutor; // Thread pool for loading from the disk cache
    ...
    private GlideExecutor animationExecutor; // Thread pool for animations

Glide uses three thread pools, or two without animation.

2.2 Switching threads:

The ImageView must be updated on the main thread.

Whether it is RxJava, EventBus, or Glide, if you want to switch from a child thread to the Android main thread, you need Handler.
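
Before looking at Glide's source, here is a minimal sketch of the idea, with illustrative names that are not part of any framework: decode on a worker thread, then post the result back through a Handler bound to the main Looper.

import android.graphics.Bitmap;
import android.os.Handler;
import android.os.Looper;
import android.widget.ImageView;

public final class MainThreadPoster {
    // A Handler bound to the main Looper: anything posted here runs on the UI thread.
    private static final Handler MAIN_HANDLER = new Handler(Looper.getMainLooper());

    // Called from a worker thread once the bitmap has been decoded.
    public static void postResult(final ImageView target, final Bitmap bitmap) {
        MAIN_HANDLER.post(new Runnable() {
            @Override
            public void run() {
                target.setImageBitmap(bitmap);
            }
        });
    }

    private MainThreadPoster() {}
}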

Look at Glide related source:

class EngineJob<R> implements DecodeJob.Callback<R>, Poolable {
    private static final EngineResourceFactory DEFAULT_FACTORY = new EngineResourceFactory();
    private static final Handler MAIN_THREAD_HANDLER =
            new Handler(Looper.getMainLooper(), new MainThreadCallback());

RxJava is written entirely in Java, so how does it switch from a worker thread to the Android main thread? (AndroidSchedulers.mainThread() is backed by a Handler on the main Looper.) Many developers with 3-6 years of experience cannot answer this very basic question, and if you cannot answer it, you will hardly be able to answer the deeper questions about the underlying principles.

Many people who have done Android development for years have never heard of Hongyang, Guo Lin, or "Yu Gang said", and don't know what Juejin ("Nuggets") is; they might even wonder whether there are also sites called "Silver Diggers" or "Iron Diggers" (I have no idea whether there are).

What I want to say is that in this line of work you really need a passion for technology and to keep learning. What is scary is not that others are better than you, but that the people who are better than you also work harder than you, and you don't even know it.

2.3 Caching

We often talk about three levels of image caching: memory cache, disk cache, and the network.

2.3.1 Memory cache

It’s usually LruCache

Glide's default memory cache is also an LruCache. It does not use the LruCache from the Android SDK, but its own implementation is likewise based on LinkedHashMap, so the principle is the same.

// -> GlideBuilder#build
if (memoryCache == null) {
  memoryCache = new LruResourceCache(memorySizeCalculator.getMemoryCacheSize());
}

Since we are on the topic of LruCache, it is worth understanding its characteristics and source code.

Why LruCache?

LruCache uses a least-recently-used eviction algorithm: you set a maximum cache size, and when the cache reaches that size the least recently used entries are evicted, preventing images from taking up too much memory and causing OOM.

LruCache source analysis
public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;
    ...
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        // Create a LinkedHashMap with accessOrder set to true
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }
    ...

The LruCache constructor creates a LinkedHashMap and passes true for the accessOrder parameter, which means the map keeps its entries in access order; all of the cached data is stored in this LinkedHashMap.
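
To see what accessOrder = true actually does, here is a tiny self-contained demo (not from the SDK source): accessing an entry moves it to the end of the iteration order.

import java.util.LinkedHashMap;
import java.util.Map;

public final class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration order is least-recently-accessed first.
        Map<String, String> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put("a", "1");
        map.put("b", "2");
        map.put("c", "3");
        map.get("a"); // accessing "a" moves it to the end (most recently used)
        System.out.println(map.keySet()); // prints [b, c, a]
    }
}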

Let’s start with how LinkedHashMap works

LinkedHashMap follows the structure of HashMap as an array plus a linked list.

LinkedHashMap overrides the createEntry method.

Look at the createEntry method of the HashMap

void createEntry(int hash, K key, V value, int bucketIndex) {
    HashMapEntry<K,V> e = table[bucketIndex];
    table[bucketIndex] = new HashMapEntry<>(hash, key, value, e);
    size++;
}

The HashMap array holds HashMapEntry objects.

Take a look at the createEntry method for LinkedHashMap

void createEntry(int hash, K key, V value, int bucketIndex) {
    HashMapEntry<K,V> old = table[bucketIndex];
    LinkedHashMapEntry<K,V> e = new LinkedHashMapEntry<>(hash, key, value, old);
    table[bucketIndex] = e; // add to the array
    e.addBefore(header);    // maintain the linked list
    size++;
}

The LinkedHashMap array holds LinkedHashMapEntry objects.

LinkedHashMapEntry

private static class LinkedHashMapEntry<K,V> extends HashMapEntry<K,V> {
    // These fields comprise the doubly linked list used for iteration.
    LinkedHashMapEntry<K,V> before, after;
    ...
    private void remove() {
        before.after = after;
        after.before = before;
    }

    private void addBefore(LinkedHashMapEntry<K,V> existingEntry) {
        after = existingEntry;
        before = existingEntry.before;
        before.after = this;
        after.before = this;
    }

LinkedHashMapEntry extends HashMapEntry, adding before and after fields, so each entry is also a node in a doubly linked list; it also adds the addBefore and remove methods for inserting and removing a node from that list.

LinkedHashMapEntry#addBefore adds a piece of data to the front of the Header

private void addBefore(LinkedHashMapEntry<K,V> existingEntry) {
        after  = existingEntry;
        before = existingEntry.before;
        before.after = this;
        after.before = this;
}

The existingEntry passed in is always the header. Adding a node in front of the header only requires moving list pointers, so all new (or newly accessed) data ends up at the header's before position. In other words, header.before is the most recently accessed entry and header.after is the oldest.

Look at LinkedHashMapEntry#remove

private void remove() {
        before.after = after;
        after.before = before;
    }

Removing a list node is straightforward: just repoint the neighbouring pointers.

Now take a look at LruCache's put method

public final V put(K key, V value) {
    ...
    V previous;
    synchronized (this) {
        putCount++;
        size += safeSizeOf(key, value);
        // 1. Put into the underlying LinkedHashMap
        previous = map.put(key, value);
        if (previous != null) {
            // An old value existed and was overwritten, so subtract its size
            size -= safeSizeOf(key, previous);
        }
    }
    trimToSize(maxSize);
    return previous;
}

The LinkedHashMap structure can be pictured as a hash-bucket array whose entries are additionally linked into a doubly linked list (diagram omitted).

LruCache's put and get methods both end up calling trimToSize, which checks whether the cache exceeds the maximum size and, if so, removes the oldest entries.

LruCache#trimToSize removes the oldest data

public void trimToSize(int maxSize) {
    while (true) {
        K key;
        V value;
        synchronized (this) {
            ...
            // Under the limit: nothing to evict
            if (size <= maxSize) {
                break;
            }
            // Over the limit: remove the oldest entry
            Map.Entry<K, V> toEvict = map.eldest();
            if (toEvict == null) {
                break;
            }
            key = toEvict.getKey();
            value = toEvict.getValue();
            map.remove(key);
            // safeSizeOf returns 1 by default; override sizeOf to measure real sizes
            size -= safeSizeOf(key, value);
            evictionCount++;
        }
        entryRemoved(true, key, value, null);
    }
}

If LinkedHashMap is still unclear, you can refer to: Illustrated LinkedHashMap principles

LruCache summary:

  • LinkedHashMap extends HashMap and adds a doubly linked list on top of it. Every time an entry is accessed, its position in the list is updated: the node is first removed from the list and then re-added just before the header. This guarantees that the entries in front of the header node are the most recently accessed ones (removing a node from the list does not delete the data; it only moves list pointers, and the data itself stays in the map).
  • LruCache stores its data in a LinkedHashMap. With the doubly linked list guaranteeing old-to-new ordering, LruCache adds a maximum size: when a put pushes the total past that maximum, the oldest entries are removed so that memory never exceeds the configured limit. A minimal usage sketch follows.
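
As a reference, here is a minimal sketch of a Bitmap memory cache built on the SDK's LruCache; the class name and the 1/8-of-max-memory sizing are illustrative choices, not a fixed rule.

import android.graphics.Bitmap;
import android.util.LruCache;

public final class BitmapMemoryCache {
    private final LruCache<String, Bitmap> cache;

    public BitmapMemoryCache() {
        // Use 1/8 of the app's max memory (in KB) as the cache size.
        int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
        cache = new LruCache<String, Bitmap>(maxMemoryKb / 8) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                // Measure entries by their byte count (in KB), not by entry count.
                return value.getByteCount() / 1024;
            }
        };
    }

    public Bitmap get(String url) {
        return cache.get(url);
    }

    public void put(String url, Bitmap bitmap) {
        if (get(url) == null) {
            cache.put(url, bitmap);
        }
    }
}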

2.3.2 DiskLruCache

Dependency:

implementation 'com.jakewharton:disklrucache:2.0.2'

DiskLruCache follows a similar idea to LruCache: a total size is set, and every time files are written to disk, if the total exceeds the threshold the oldest files are deleted. Take a quick look at the remove operation:

private final LinkedHashMap<String, Entry> lruEntries =
        new LinkedHashMap<String, Entry>(0, 0.75f, true);
...
public synchronized boolean remove(String key) throws IOException {
    checkNotClosed();
    validateKey(key);
    Entry entry = lruEntries.get(key);
    if (entry == null || entry.currentEditor != null) {
        return false;
    }
    // One key may correspond to multiple values (valueCount of them)
    for (int i = 0; i < valueCount; i++) {
        File file = entry.getCleanFile(i);
        // Delete the cache file; if deletion fails, throw an exception
        if (file.exists() && !file.delete()) {
            throw new IOException("failed to delete " + file);
        }
        size -= entry.lengths[i];
        entry.lengths[i] = 0;
    }
    ...
    return true;
}

As you can see, DiskLruCache also relies on LinkedHashMap's ordering; the difference is that the stored Entry changes a little, and an Editor is used to manipulate the files.

private final class Entry {
    private final String key;
    private final long[] lengths;
    private boolean readable;
    private Editor currentEditor;
    private long sequenceNumber;
    ...
}
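
For completeness, a minimal usage sketch of this DiskLruCache library under common assumptions (keys must be simple strings, typically an MD5 hash of the URL; error handling and stream buffering omitted):

import com.jakewharton.disklrucache.DiskLruCache;

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class DiskCacheSample {
    public static void sample(File cacheDir, String key, byte[] imageBytes) throws IOException {
        // open(directory, appVersion, valueCount, maxSizeBytes)
        DiskLruCache cache = DiskLruCache.open(cacheDir, 1, 1, 50L * 1024 * 1024);

        // Write: obtain an Editor, write to value index 0, then commit.
        DiskLruCache.Editor editor = cache.edit(key);
        if (editor != null) {
            OutputStream out = editor.newOutputStream(0);
            out.write(imageBytes);
            out.close();
            editor.commit();
        }

        // Read: a Snapshot exposes an InputStream for each value index.
        DiskLruCache.Snapshot snapshot = cache.get(key);
        if (snapshot != null) {
            InputStream in = snapshot.getInputStream(0);
            // ... decode the stream into a Bitmap ...
            in.close();
            snapshot.close();
        }
    }
}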

2.4 Preventing OOM

A very important part of loading images is preventing OOM. The LruCache size limit above already helps, but when the images themselves are large you may need a fairly big cache, which raises the chance of OOM, so other ways of preventing OOM are worth exploring.

Method 1: Soft references

To review the four Java references:

  • Strong reference: an ordinary variable is a strong reference, such as private Context context;
  • Soft reference: SoftReference. Before an OOM would occur, the garbage collector reclaims objects that are only softly reachable.
  • Weak reference: WeakReference. Whenever GC runs, the garbage collector reclaims objects that are only weakly reachable.
  • Phantom reference: can be reclaimed at any time; hardly any use scenarios.

How to understand strong references:

When an Activity is destroyed, it is disconnected from the GC Roots. Who is the GC Root here? You can roughly think of it this way: the Activity object is created in ActivityThread, and ActivityThread must hold a reference to the Activity in order to call back its lifecycle methods. When the Activity executes onDestroy, ActivityThread drops that reference; the Activity becomes unreachable from the GC Roots and is marked as collectable by the garbage collector.

Soft references are designed for memory-sensitive scenarios. Large objects such as Bitmaps can be wrapped in a SoftReference to keep them from causing OOM. Look at this code:

private static LruCache<String, SoftReference<Bitmap>> mLruCache =
        new LruCache<String, SoftReference<Bitmap>>(10 * 1024) {
    @Override
    protected int sizeOf(String key, SoftReference<Bitmap> value) {
        // Return the Bitmap's size in KB; if the Bitmap has been reclaimed, its size is 0
        if (value.get() == null) {
            return 0;
        }
        return value.get().getByteCount() / 1024;
    }
};

If SoftReference objects are stored in the LruCache, the Bitmaps can be reclaimed when memory runs low; in other words, a Bitmap wrapped in a SoftReference will not by itself cause an OOM.

When a Bitmap is reclaimed, the remaining size of the LruCache should be recalculated. You can write a helper that removes entries whose Bitmap has been reclaimed so that the size accounting is refreshed; a sketch follows.
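
A possible helper, assuming the SoftReference-based LruCache shown above (the class and method names are hypothetical): walk a snapshot of the cache and remove entries whose Bitmap has already been reclaimed.

import android.graphics.Bitmap;
import android.util.LruCache;

import java.lang.ref.SoftReference;
import java.util.Map;

public final class SoftCacheCleaner {
    // Remove entries whose Bitmap was reclaimed so the LruCache's size bookkeeping stays accurate.
    public static void evictReclaimed(LruCache<String, SoftReference<Bitmap>> cache) {
        for (Map.Entry<String, SoftReference<Bitmap>> entry : cache.snapshot().entrySet()) {
            if (entry.getValue().get() == null) {
                cache.remove(entry.getKey());
            }
        }
    }
}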

There is another problem: when memory is tight and the Bitmaps inside the soft references get reclaimed, the LruCache no longer holds usable entries, which effectively disables the memory cache and inevitably hurts efficiency.

Method 2: onLowMemory

Activities and Fragments have their onLowMemory method called when system memory runs low, and you can clear caches there. Glide uses this to help prevent OOM:

//Glide
public void onLowMemory() {
    clearMemory();
}

public void clearMemory() {
    // Engine asserts this anyway when removing resources, fail faster and consistently
    Util.assertMainThread();
    // memory cache needs to be cleared before bitmap pool to clear re-pooled Bitmaps too. See # 687.
    memoryCache.clearMemory();
    bitmapPool.clearMemory();
    arrayPool.clearMemory();
  }

Method 3: Think about where Bitmap pixel data is stored

As we know, the system allocates a limited amount of memory to each process, that is, to each virtual machine instance: 16 MB or 32 MB in the early days, 100+ MB now. The JVM's runtime memory is divided into five areas:

  • The virtual machine stack
  • The native method stack
  • The program counter
  • The method area
  • The heap

Objects are allocated in the heap. The heap is the largest chunk of memory in the JVM.

A Bitmap takes up a lot of memory not because of the object itself but because of its pixel data: the pixel data size equals width × height × bytes per pixel.

How much memory does one pixel take? That depends on the Bitmap config; each config uses a different number of bytes per pixel. You can see the definitions in Fresco:

  /**
   * Bytes per pixel definitions
   */
  public static final int ALPHA_8_BYTES_PER_PIXEL = 1;
  public static final int ARGB_4444_BYTES_PER_PIXEL = 2;
  public static final int ARGB_8888_BYTES_PER_PIXEL = 4;
  public static final int RGB_565_BYTES_PER_PIXEL = 2;
  public static final int RGBA_F16_BYTES_PER_PIXEL = 8;

If the Bitmap config is RGB_565, one pixel takes 2 bytes; with ARGB_8888 it takes 4 bytes. For example, a 1920 × 1080 image decoded as ARGB_8888 occupies about 1920 × 1080 × 4 ≈ 8.3 MB, but only about half of that as RGB_565. Take this memory footprint into account when choosing an image loading framework: less memory means fewer OOMs. Glide's memory overhead being half of Picasso's comes down to exactly this point, their default Bitmap configs differ.

As for the width and height, they refer to the Bitmap's width and height. How are they determined? See BitmapFactory.Options#outWidth:

/**
 * The resulting width of the bitmap. If {@link #inJustDecodeBounds} is
 * set to false, this will be the width of the output bitmap after any
 * scaling is applied. If true, it will be the width of the input image
 * without any accounting for scaling.
 *
 * <p>outWidth will be set to -1 if there is an error trying to decode.</p>
 */
public int outWidth;

If inJustDecodeBounds is true, outWidth/outHeight are the dimensions of the original image; if false, they are the dimensions of the output bitmap after scaling. So we can usually reduce a Bitmap's pixel memory by sampling it down (compressing) while decoding.
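
A minimal sketch of the usual two-pass decode with BitmapFactory (the helper name and the power-of-two sampling loop are illustrative): read only the bounds first, compute inSampleSize, then decode the real pixels at the reduced size.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class BitmapDecoder {
    // Decode a file scaled down so that it is still at least reqWidth x reqHeight.
    public static Bitmap decodeSampled(String path, int reqWidth, int reqHeight) {
        // Pass 1: read only the bounds (no pixel allocation).
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeFile(path, options);

        // Compute a power-of-two sample size from outWidth/outHeight.
        int inSampleSize = 1;
        while (options.outWidth / (inSampleSize * 2) >= reqWidth
                && options.outHeight / (inSampleSize * 2) >= reqHeight) {
            inSampleSize *= 2;
        }

        // Pass 2: decode the real pixels at the reduced size.
        options.inSampleSize = inSampleSize;
        options.inJustDecodeBounds = false;
        options.inPreferredConfig = Bitmap.Config.RGB_565; // 2 bytes per pixel if no alpha is needed
        return BitmapFactory.decodeFile(path, options);
    }
}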

Beyond that, the analysis of Bitmap pixel data size above only explains why the pixel data is so large. Could the pixel data live outside the Java heap, in the native heap instead? It is said that from Android 3.0 up to 8.0, Bitmap pixel data is stored in the Java heap, and from 8.0 onwards it is stored in the native heap. Is that true? Look at the source code and find out ~

8.0 Bitmap

The Java-layer createBitmap method:

public static Bitmap createBitmap(@Nullable DisplayMetrics display, int width, int height,
        @NonNull Config config, boolean hasAlpha, @NonNull ColorSpace colorSpace) {
    ...
    Bitmap bm;
    ...
    if (config != Config.ARGB_8888 || colorSpace == ColorSpace.get(ColorSpace.Named.SRGB)) {
        // Ultimately created through the native method
        bm = nativeCreate(null, 0, width, width, height, config.nativeInt, true, null, null);
    } else {
        bm = nativeCreate(null, 0, width, width, height, config.nativeInt, true,
                d50.getTransform(), parameters);
    }
    ...
    return bm;
}

Bitmaps are created using the native method nativeCreate

Corresponding source: 8.0.0_r4/xref/frameworks/base/core/jni/android/graphics/Bitmap.cpp

//Bitmap.cpp
static const JNINativeMethod gBitmapMethods[] = {
    {   "nativeCreate",
        "([IIIIIIZ[FLandroid/graphics/ColorSpace$Rgb$TransferParameters;)Landroid/graphics/Bitmap;",
        (void*) Bitmap_creator },
...

Through JNI dynamic registration, the nativeCreate method corresponds to Bitmap_creator.

//Bitmap.cpp
static jobject Bitmap_creator(JNIEnv* env, jobject, jintArray jColors,
                              jint offset, jint stride, jint width, jint height,
                              jint configHandle, jboolean isMutable,
                              jfloatArray xyzD50, jobject transferParameters) {
    ...
    // 1. Allocate heap memory and create the native-layer Bitmap
    sk_sp<Bitmap> nativeBitmap = Bitmap::allocateHeapBitmap(&bitmap, NULL);
    if (!nativeBitmap) {
        return NULL;
    }
    ...
    // 2. Create the Java-layer Bitmap
    return createBitmap(env, nativeBitmap.release(), getPremulBitmapCreateFlags(isMutable));
}

There are two main steps:

  1. Create the native-layer Bitmap via the allocateHeapBitmap method

    8.0.0_r4/xref/frameworks/base/libs/hwui/hwui/Bitmap.cpp

static sk_sp<Bitmap> allocateHeapBitmap(size_t size, const SkImageInfo& info, size_t rowBytes,
                                        SkColorTable* ctable) {
    void* addr = calloc(size, 1);
    if (!addr) {
        return nullptr;
    }
    return sk_sp<Bitmap>(new Bitmap(addr, size, info, rowBytes, ctable));
}

We can see that the C/C++ calloc function allocates a block of memory, and a native-layer Bitmap object is then created and handed that memory address. In other words, the native-layer Bitmap's data (the pixel data) lives in the native heap.

  2. Create the Java-layer Bitmap
//Bitmap.cpp
jobject createBitmap(JNIEnv* env, Bitmap* bitmap, int bitmapCreateFlags,
                     jbyteArray ninePatchChunk, jobject ninePatchInsets, int density) {
    ...
    BitmapWrapper* bitmapWrapper = new BitmapWrapper(bitmap);
    // Call back into the Java layer through JNI to create the Java-layer Bitmap object
    jobject obj = env->NewObject(gBitmap_class, gBitmap_constructorMethodID,
            reinterpret_cast<jlong>(bitmapWrapper), bitmap->width(), bitmap->height(), density,
            isMutable, isPremultiplied, ninePatchChunk, ninePatchInsets);
    ...
    return obj;
}

env->NewObject creates the Java-layer Bitmap object through JNI; gBitmap_class and gBitmap_constructorMethodID are registered in register_android_graphics_Bitmap:

//Bitmap.cpp
int register_android_graphics_Bitmap(JNIEnv* env)
{
    gBitmap_class = MakeGlobalRefOrDie(env, FindClassOrDie(env, "android/graphics/Bitmap"));
    gBitmap_nativePtr = GetFieldIDOrDie(env, gBitmap_class, "mNativePtr", "J");
    gBitmap_constructorMethodID = GetMethodIDOrDie(env, gBitmap_class, "<init>",
            "(JIIIZZ[BLandroid/graphics/NinePatch$InsetStruct;)V");
    gBitmap_reinitMethodID = GetMethodIDOrDie(env, gBitmap_class, "reinit", "(IIZ)V");
    gBitmap_getAllocationByteCountMethodID = GetMethodIDOrDie(env, gBitmap_class,
            "getAllocationByteCount", "()I");
    return android::RegisterMethodsOrDie(env, "android/graphics/Bitmap", gBitmapMethods,
                                         NELEM(gBitmapMethods));
}

So creating a Bitmap on 8.0 boils down to two steps:

  1. Create the native-layer Bitmap, allocating its memory in the native heap.
  2. Create the Java-layer Bitmap object through JNI, allocating that object in the Java heap.

The pixel data lives with the native-layer Bitmap, which shows that on 8.0 Bitmap pixel data is stored in the native heap.

7.0 Bitmap

Look directly at the native-layer methods.

7.0.0_r31/xref/frameworks/base/core/jni/android/graphics/Bitmap.cpp

static const JNINativeMethod gBitmapMethods[] = {
    {   "nativeCreate", "([IIIIIIZ)Landroid/graphics/Bitmap;",
        (void*) Bitmap_creator },
    ...
};

static jobject Bitmap_creator(JNIEnv* env, jobject, jintArray jColors,
                              jint offset, jint stride, jint width, jint height,
                              jint configHandle, jboolean isMutable) {
    ...
    // 1. Create the native-layer Bitmap through this method
    Bitmap* nativeBitmap = GraphicsJNI::allocateJavaPixelRef(env, &bitmap, NULL);
    ...
    return GraphicsJNI::createBitmap(env, nativeBitmap,
            getPremulBitmapCreateFlags(isMutable));
}

The native-layer Bitmap is created by GraphicsJNI::allocateJavaPixelRef; let's see how it allocates memory. The GraphicsJNI implementation lives in Graphics.cpp.

android::Bitmap* GraphicsJNI::allocateJavaPixelRef(JNIEnv* env, SkBitmap* bitmap,
                                                   SkColorTable* ctable) {
    const SkImageInfo& info = bitmap->info();
    size_t size;
    // Calculate the required size
    if (!computeAllocationSize(*bitmap, &size)) {
        return NULL;
    }

    // we must respect the rowBytes value already set on the bitmap instead of
    // attempting to compute our own.
    const size_t rowBytes = bitmap->rowBytes();
    // 1. Call into the Java layer via JNI to create a (non-movable) byte array
    jbyteArray arrayObj = (jbyteArray) env->CallObjectMethod(gVMRuntime,
                                                             gVMRuntime_newNonMovableArray,
                                                             gByte_class, size);
    ...
    // 2. Get the address of that array
    jbyte* addr = (jbyte*) env->CallLongMethod(gVMRuntime, gVMRuntime_addressOf, arrayObj);
    ...
    // 3. Create the native-layer Bitmap and hand it the array's address
    android::Bitmap* wrapper = new android::Bitmap(env, arrayObj, (void*) addr,
                                                   info, rowBytes, ctable);
    wrapper->getSkBitmap(bitmap);
    // since we're already allocated, we lockPixels right away
    // HeapAllocator behaves this way too
    bitmap->lockPixels();
    return wrapper;
}

As you can see, the 7.0 pixel memory allocation looks like this:

  1. Create an array by calling the Java layer through JNI
  2. Then create the Native layer Bitmap and pass the address of the array into it.

Thus, 7.0 Bitmap pixel data is in the Java heap.

Incidentally, below 3.0 Bitmap pixel memory is also said to be in the native heap, but the native-layer Bitmap had to be released manually, which is why you had to call recycle() yourself to free the native memory. You can verify this in the source code.

How the native-layer Bitmap is reclaimed

Java-layer Bitmap objects are collected automatically by the garbage collector, yet we don't manually release the native-layer Bitmap either, so how does the source code handle it?

Remember one interview question that went something like this:

Talk about the relationship between final, finally and finalize

The relevant one here is finalize. It is a method defined on Object, and its documentation says:

/**
 * Called by the garbage collector on an object when garbage collection
 * determines that there are no more references to the object.
 * A subclass overrides the {@code finalize} method to dispose of
 * system resources or to perform other cleanup.
 * <p>
 * ...
 **/
protected void finalize() throws Throwable { }

When the garbage collector determines that there are no more references to an object, it calls that object's finalize method; subclasses can override it to release resources.

Up to Android 6.0, Bitmap released the native-layer object through the finalize mechanism. Look at the 6.0 Bitmap.java:

Bitmap(long nativeBitmap, byte[] buffer, int width, int height, int density,
        boolean isMutable, boolean requestPremultiplied,
        byte[] ninePatchChunk, NinePatch.InsetStruct ninePatchInsets) {
    ...
    mNativePtr = nativeBitmap;
    // 1. Create a BitmapFinalizer
    mFinalizer = new BitmapFinalizer(nativeBitmap);
    int nativeAllocationByteCount = (buffer == null ? getByteCount() : 0);
    mFinalizer.setNativeAllocationByteCount(nativeAllocationByteCount);
}

private static class BitmapFinalizer {
    private long mNativeBitmap;

    // Native memory allocated for the duration of the Bitmap,
    // if pixel data allocated into native memory, instead of java byte[]
    private int mNativeAllocationByteCount;

    BitmapFinalizer(long nativeBitmap) {
        mNativeBitmap = nativeBitmap;
    }

    public void setNativeAllocationByteCount(int nativeByteCount) {
        if (mNativeAllocationByteCount != 0) {
            VMRuntime.getRuntime().registerNativeFree(mNativeAllocationByteCount);
        }
        mNativeAllocationByteCount = nativeByteCount;
        if (mNativeAllocationByteCount != 0) {
            VMRuntime.getRuntime().registerNativeAllocation(mNativeAllocationByteCount);
        }
    }

    @Override
    public void finalize() {
        try {
            super.finalize();
        } catch (Throwable t) {
            // Ignore
        } finally {
            // 2. Here: release the native-layer Bitmap
            setNativeAllocationByteCount(0);
            nativeDestructor(mNativeBitmap);
            mNativeBitmap = 0;
        }
    }
}

The Bitmap constructor creates a BitmapFinalizer, which overrides finalize. When the Java-layer Bitmap is reclaimed, its BitmapFinalizer is reclaimed with it, so the finalize method is guaranteed to run, and the native-layer Bitmap object is released there.

After 6.0 this changed a little: BitmapFinalizer is gone, replaced by NativeAllocationRegistry.

For example, the 8.0 Bitmap constructor

Bitmap(long nativeBitmap, int width, int height, int density,
        boolean isMutable, boolean requestPremultiplied,
        byte[] ninePatchChunk, NinePatch.InsetStruct ninePatchInsets) {
    ...
    mNativePtr = nativeBitmap;
    long nativeSize = NATIVE_ALLOCATION_SIZE + getAllocationByteCount();
    // Create a NativeAllocationRegistry and call its registerNativeAllocation method
    NativeAllocationRegistry registry = new NativeAllocationRegistry(
            Bitmap.class.getClassLoader(), nativeGetNativeFinalizer(), nativeSize);
    registry.registerNativeAllocation(this, nativeBitmap);
}

We won't analyse NativeAllocationRegistry here. Both BitmapFinalizer and NativeAllocationRegistry serve the same purpose: when the Java-layer Bitmap is reclaimed, the native-layer Bitmap is reclaimed along with it. In general we do not need to call recycle() manually; GC takes care of it.

Since Bitmap pixel memory is stored in the native heap on Android 8.0 and above, does that mean the Bitmap OOM problem solves itself? Just upgrade the system or change your phone?

Changing phones is no problem for us, but not every user keeps up with Android updates, so we still have to deal with it ourselves.

The reason Fresco can compete head-to-head with Glide is that it has its own unique strengths. The advantage listed at the beginning of the article was: "On systems below 5.0 (down to 2.3), Fresco places images in a special memory region (the Ashmem region)." Ashmem is anonymous shared memory; Fresco puts Bitmap pixel data in this shared memory, which is native memory.

The key source of Fresco is in the PlatformDecoderFactory class

public class PlatformDecoderFactory {

  /**
   * Provide the implementation of the PlatformDecoder for the current platform using the provided
   * PoolFactory
   *
   * @param poolFactory The PoolFactory
   * @return The PlatformDecoder implementation
   */
  public static PlatformDecoder buildPlatformDecoder(
      PoolFactory poolFactory, boolean gingerbreadDecoderEnabled) {
    // 8.0 and above use the OreoDecoder
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
      int maxNumThreads = poolFactory.getFlexByteArrayPoolMaxNumThreads();
      return new OreoDecoder(
          poolFactory.getBitmapPool(), maxNumThreads, new Pools.SynchronizedPool<>(maxNumThreads));
    } else if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
      // 5.0 to 8.0 use the ArtDecoder
      int maxNumThreads = poolFactory.getFlexByteArrayPoolMaxNumThreads();
      return new ArtDecoder(
          poolFactory.getBitmapPool(), maxNumThreads, new Pools.SynchronizedPool<>(maxNumThreads));
    } else {
      if (gingerbreadDecoderEnabled && Build.VERSION.SDK_INT < Build.VERSION_CODES.KITKAT) {
        // Below 4.4 use the GingerbreadPurgeableDecoder
        return new GingerbreadPurgeableDecoder();
      } else {
        // 4.4 to 5.0 use the KitKatPurgeableDecoder
        return new KitKatPurgeableDecoder(poolFactory.getFlexByteArrayPool());
      }
    }
  }
}

Set 8.0 aside for now and look at how Bitmaps are obtained below 4.4. The GingerbreadPurgeableDecoder class has a method for decoding a Bitmap:

//GingerbreadPurgeableDecoder
private Bitmap decodeFileDescriptorAsPurgeable(
    CloseableReference<PooledByteBuffer> bytesRef, int inputLength, byte[] suffix,
    BitmapFactory.Options options) {
  // MemoryFile: anonymous shared memory
  MemoryFile memoryFile = null;
  try {
    // Copy the image data into anonymous shared memory
    memoryFile = copyToMemoryFile(bytesRef, inputLength, suffix);
    FileDescriptor fd = getMemoryFileDescriptor(memoryFile);
    if (mWebpBitmapFactory != null) {
      // Create the Bitmap; Fresco uses its own set of methods to create Bitmaps
      Bitmap bitmap = mWebpBitmapFactory.decodeFileDescriptor(fd, null, options);
      return Preconditions.checkNotNull(bitmap, "BitmapFactory returned null");
    } else {
      throw new IllegalStateException("WebpBitmapFactory is null");
    }
  }
  ...
}

Below 4.4, Fresco uses anonymous shared memory to hold Bitmap data: it first copies the image data into anonymous shared memory, then uses Fresco's own decoding path to create the Bitmap from it.

The decoders for the other versions can be explored from the PlatformDecoderFactory class yourself. Think about why Fresco uses so many different decoders for different platforms: would it not be fine to use anonymous shared memory for everything below 8.0? I look forward to hearing from you in the comments section.

2.5 ImageView memory leaks

Once, while working on-site at vivo, a page with an avatar feature was flagged for a memory leak. The cause was an SDK method that loads a network avatar and holds a reference to the ImageView.

The fix was simple, even a bit crude: wrap the ImageView in a WeakReference. A sketch of that fix follows.
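
A sketch of that fix with made-up names: the load task holds the ImageView only through a WeakReference, so a destroyed page is no longer pinned in memory by a pending load.

import android.graphics.Bitmap;
import android.widget.ImageView;

import java.lang.ref.WeakReference;

public final class AvatarLoadTask implements Runnable {
    private final WeakReference<ImageView> targetRef;
    private final String url;

    public AvatarLoadTask(ImageView target, String url) {
        // Weakly reference the view: the task no longer keeps it (or its Activity) alive.
        this.targetRef = new WeakReference<>(target);
        this.url = url;
    }

    @Override
    public void run() {
        Bitmap bitmap = download(url); // assume this does the network load + decode
        ImageView target = targetRef.get();
        if (target != null) {
            // In real code, post this back to the main thread; omitted here for brevity.
            target.setImageBitmap(bitmap);
        }
    }

    private Bitmap download(String url) {
        return null; // placeholder for the actual network + decode logic
    }
}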

This approach fixes the memory leak, but it is not perfect. For example, when the page exits we not only want the ImageView to be collectable, we also want the image loading task to be cancelled and any unfinished task removed from the queue.

Glide's approach is to listen to lifecycle callbacks; look at the RequestManager class:

public void onDestroy() {
    targetTracker.onDestroy();
    for (Target<?> target : targetTracker.getAll()) {
        // Cancel and clear the request for each target
        clear(target);
    }
    targetTracker.clear();
    requestTracker.clearRequests();
    lifecycle.removeListener(this);
    lifecycle.removeListener(connectivityMonitor);
    mainHandler.removeCallbacks(addSelfToLifecycle);
    glide.unregisterRequestManager(this);
}

When the Activity/Fragment is destroyed, the pending image loading tasks are cancelled; see the source code for the details.

2.6 List loading problems

Image disorder

Because of the view-reuse mechanism of RecyclerView and ListView, the ImageView may belong to the first item when the network load starts, but by the time the load completes that same ImageView may have been reused for the 10th item. Showing the first item's image on the 10th item is obviously wrong.

The usual way to do this is to set a tag for the ImageView. The tag is usually the address of the image. Before updating the ImageView, make sure that the tag is consistent with the URL.
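
A minimal sketch of that tag check (the loader and callback here are hypothetical placeholders, not a real framework API):

import android.graphics.Bitmap;
import android.widget.ImageView;

public final class TaggedImageLoader {

    /** Callback invoked on the main thread when the bitmap is ready (hypothetical). */
    public interface Callback {
        void onLoaded(Bitmap bitmap);
    }

    public void loadInto(final ImageView imageView, final String url) {
        // Tag the view with the URL it is supposed to show right now.
        imageView.setTag(url);
        loadBitmapAsync(url, new Callback() {
            @Override
            public void onLoaded(Bitmap bitmap) {
                // By the time the load finishes, the view may have been reused for another item;
                // only set the image if the tag still matches this request's URL.
                if (url.equals(imageView.getTag())) {
                    imageView.setImageBitmap(bitmap);
                }
            }
        });
    }

    // Placeholder for the framework's async download + decode path (not implemented here).
    private void loadBitmapAsync(String url, Callback callback) {
        // ...
    }
}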

Of course, you can cancel the image loading task when the item disappears from the list. Consider whether it is appropriate to do it in the image loading framework or in the UI.

The thread pool has too many tasks

While a list is scrolling, a burst of image requests arrives, and on a first launch with no cache a large number of tasks ends up waiting in the queue. So before requesting an image from the network, check whether a task for it already exists in the queue; if it does, don't add it again. A sketch of that check follows.
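
A minimal sketch of such a de-duplication step (names are made up): a concurrent map keyed by URL records which requests are already pending, and a second submit for the same URL is simply ignored.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;

public final class RequestDeduplicator {
    private final Map<String, Boolean> pending = new ConcurrentHashMap<>();
    private final ExecutorService executor;

    public RequestDeduplicator(ExecutorService executor) {
        this.executor = executor;
    }

    public void submit(final String url, final Runnable loadTask) {
        // putIfAbsent ensures only one task per URL is queued at a time.
        if (pending.putIfAbsent(url, Boolean.TRUE) != null) {
            return; // the same image is already queued or loading
        }
        executor.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    loadTask.run();
                } finally {
                    pending.remove(url); // allow future reloads once this one finishes
                }
            }
        });
    }
}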

Conclusion

This article started from Glide, then analysed the essential requirements of an image loading framework and the techniques and principles each requirement involves.

  • Asynchronous loading: at least two thread pools
  • Switching to the main thread: Handler
  • Caching: LruCache and DiskLruCache, which come down to the LinkedHashMap principle
  • Preventing OOM: soft references, LruCache, image compression (not expanded here), source analysis of where Bitmap pixel data is stored, partial Fresco source analysis
  • Memory leaks: reference the ImageView properly, manage lifecycles
  • List sliding problems: tag each load; don't enqueue a task that is already queued

Some questions remain, such as why does Fresco use different decoders to get bitmaps on different Android versions, and is it possible to use anonymous shared memory for all versions below 8.0? I look forward to your learning and sharing in the comments section


That’s it. Please leave a comment in the comments section

Talk about fresco’s Bitmap memory allocation

My other Juejin (Nuggets) posts:

Interviewer: Here we go again — has your app ever janked?
Interviewer: Toutiao starts up very fast. How do you think it might have been optimized?