Hi, nice to meet you! πŸ‘‹πŸ»

This article is about Kotlin Lazy, and hopefully it will help you understand and use it better.

Introduction

Anyone who uses Kotlin has, more or less, used Lazy, which stands for lazy initialization.

It is also relatively straightforward. If we have an object or field that we only want to initialize at the point of first use, we can declare it lazily and have it initialized on first access, and by default the initialization is thread safe (unless LazyThreadSafetyMode.NONE is specified). The benefit is performance: we don't have to initialize everything as soon as the application or page loads, and it is also more convenient than the old pattern of declaring var xx = null and assigning later.

What is this article about?

  • How Lazy is used
  • How Lazy's internal source code is designed
  • Which Lazy mode is recommended
  • How daily development can be simplified

Common use

Before we get started, let’s look at the simplest use:

    private val lock = "lock"

    // 1. Basic usage (thread safe); internally the Lazy instance itself is the lock object
    val mutableAny by lazy {
        Any()
    }

    // 2. (Thread safe) Uses the lock we pass in as the lock object
    val mutableAnyToLock by lazy(lock) {
        Any()
    }

    // 3. Same principle as method 1
    val mutableToSyn by lazy(LazyThreadSafetyMode.SYNCHRONIZED) {
        Any()
    }

    // 4. (Thread safe) Uses a CAS mechanism internally instead of a synchronization lock
    val mutableToPub by lazy(LazyThreadSafetyMode.PUBLICATION) {
        Any()
    }

    // 5. (Thread unsafe) Multiple threads may run the initializer more than once
    val mutableToNone by lazy(LazyThreadSafetyMode.NONE) {
        Any()
    }

The above shows five ways to use lazy. Modes 1 and 3 are probably the ones we see or use most in daily work, but I personally use modes 4 and 5 more, mainly because they fit common scenarios better. More on that later, so I won't elaborate here.

If you read the comments carefully: there are five ways to write it, but really only three implementations. Why? It becomes clear from the lazy() factory functions in the source.
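The factory-function source is not reproduced here, but on the JVM we can verify the mode-to-implementation mapping at runtime by printing the class of the Lazy each mode returns (a quick sketch; the class names are internal stdlib details and could change between versions):

```kotlin
fun main() {
    // Each call below returns one of the three internal implementation classes:
    println(lazy { Any() }::class.simpleName)                                    // default mode
    println(lazy(LazyThreadSafetyMode.SYNCHRONIZED) { Any() }::class.simpleName)
    println(lazy(LazyThreadSafetyMode.PUBLICATION) { Any() }::class.simpleName)
    println(lazy(LazyThreadSafetyMode.NONE) { Any() }::class.simpleName)
    // On the JVM this prints SynchronizedLazyImpl twice,
    // then SafePublicationLazyImpl and UnsafeLazyImpl.
}
```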

So for the source analysis, we mainly need to look at the three classes that the LazyThreadSafetyMode values map to:

  • SynchronizedLazyImpl
  • SafePublicationLazyImpl
  • UnsafeLazyImpl

Their underlying principles are, respectively, an object lock, CAS, and a plain unguarded implementation. Let's walk through the source together.

Source code parsing

Let’s start with the most common Lazy interface:

    public interface Lazy<out T> {
        // The initialized value
        public val value: T
        // Whether it has been initialized
        public fun isInitialized(): Boolean
    }

Lazy has three concrete implementations, which we mentioned above, so we will look at the source code of each of them.
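As a small sketch of the interface itself: when we hold a Lazy reference directly (rather than through by), we can query isInitialized() before and after the first read:

```kotlin
fun main() {
    val greeting: Lazy<String> = lazy { "hello" }
    println(greeting.isInitialized()) // false — nothing computed yet
    println(greeting.value)           // hello — first access runs the initializer
    println(greeting.isInitialized()) // true — the value is now cached
}
```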


SynchronizedLazyImpl, as follows:


    private class SynchronizedLazyImpl<out T>(initializer: () -> T, lock: Any? = null) : Lazy<T>, ... {
        private var initializer: (() -> T)? = initializer
        // The internally cached value; defaults to a static sentinel object
        @Volatile private var _value: Any? = UNINITIALIZED_VALUE
        // If no lock is passed in, the Lazy instance itself is used as the lock object
        private val lock = lock ?: this

        override val value: T
            get() {
                val _v1 = _value
                // If the value no longer equals the sentinel, initialization has already happened
                if (_v1 !== UNINITIALIZED_VALUE) {
                    return _v1 as T
                }
                // Initialize under an object lock; the lock is the one passed in, defaulting to this
                return synchronized(lock) {
                    val _v2 = _value
                    // Double-check after acquiring the lock
                    if (_v2 !== UNINITIALIZED_VALUE) {
                        _v2 as T
                    } else {
                        val typedValue = initializer!!()
                        _value = typedValue
                        initializer = null
                        typedValue
                    }
                }
            }
        ...
    }

To elaborate on the process, use an example such as the following code:

    val mutableToSyn by lazy(LazyThreadSafetyMode.SYNCHRONIZED) {
        Any()
    }

When we read mutableToSyn, we are actually calling Lazy.value, whose implementation here is SynchronizedLazyImpl, so what really runs is its value getter.

The get() method first checks whether the value has already been initialized and returns it directly if so. Otherwise it enters a synchronized block on the lock object we passed in (the Lazy instance itself if none was passed). Because of the lock, there is no thread-safety issue even if multiple threads call get() at the same time. Inside the lock it checks again whether initialization has happened, returns the value if it has, and otherwise runs our initializer callback.
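A minimal sketch of that guarantee: even if several threads race to read the value, the lock ensures the initializer runs exactly once (the thread count here is arbitrary):

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.atomic.AtomicInteger

fun main() {
    val calls = AtomicInteger(0)
    val expensive = lazy(LazyThreadSafetyMode.SYNCHRONIZED) {
        calls.incrementAndGet() // count how many times the initializer runs
        "initialized"
    }
    val start = CountDownLatch(1)
    val threads = (1..8).map {
        Thread {
            start.await() // line all threads up so they race on the first read
            expensive.value
        }.also(Thread::start)
    }
    start.countDown()
    threads.forEach(Thread::join)
    println(calls.get()) // 1 — the object lock made initialization run exactly once
}
```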


SafePublicationLazyImpl, as follows:


    private class SafePublicationLazyImpl<out T>(initializer: () -> T) : Lazy<T>, ... {
        @Volatile private var initializer: (() -> T)? = initializer
        // The internally cached value
        @Volatile private var _value: Any? = UNINITIALIZED_VALUE

        override val value: T
            get() {
                val value = _value
                if (value !== UNINITIALIZED_VALUE) {
                    return value as T
                }
                // Save the initializer callback for now
                val initializerValue = initializer
                // If the callback is already null, another thread has finished the assignment
                if (initializerValue != null) {
                    // Compute the new value
                    val newValue = initializerValue()
                    // CAS on _value of the current object (this): if _value === UNINITIALIZED_VALUE
                    // (compared by reference), set it to newValue
                    if (valueUpdater.compareAndSet(this, UNINITIALIZED_VALUE, newValue)) {
                        initializer = null
                        return newValue
                    }
                }
                return _value as T
            }

        companion object {
            private val valueUpdater = java.util.concurrent.atomic.AtomicReferenceFieldUpdater.newUpdater(
                SafePublicationLazyImpl::class.java,
                Any::class.java,
                "_value"
            )
        }
    }

To elaborate on the process, use an example such as the following code:

    val mutableToPub by lazy(LazyThreadSafetyMode.PUBLICATION) {
        Any()
    }

When we read mutableToPub, we are again calling Lazy.value, whose implementation here is SafePublicationLazyImpl, so what really runs is its value getter.

get() first checks whether _value still equals the sentinel; if not, initialization is done and the value is returned directly. Otherwise it reads the initializer callback; if the callback is already null, another thread has completed the assignment and the current _value is returned. If the callback is still non-null, it is invoked to compute newValue, and then valueUpdater.compareAndSet updates _value via CAS: if _value still equals the expected UNINITIALIZED_VALUE, it is set to newValue and the initializer is set to null; otherwise another thread won the race and _value is returned as-is.
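A minimal sketch of what "safe publication" means in practice: several threads may race, and the initializer may even run more than once, but every reader ends up observing the same single published instance (the thread count is arbitrary):

```kotlin
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.CountDownLatch

fun main() {
    val holder = lazy(LazyThreadSafetyMode.PUBLICATION) { Any() }
    val seen = ConcurrentLinkedQueue<Any>()
    val start = CountDownLatch(1)
    val threads = (1..8).map {
        Thread {
            start.await()          // race on the first read
            seen.add(holder.value) // losers of the CAS still read the winner's value
        }.also(Thread::start)
    }
    start.countDown()
    threads.forEach(Thread::join)
    // Any uses reference equality, so the set size counts distinct instances
    println(seen.toSet().size) // 1 — all 8 threads saw one and the same object
}
```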

Question parsing

  • Why are initializer and _value marked with @Volatile?
  • Why use AtomicReferenceFieldUpdater.compareAndSet to update the value?

I believe many readers will have these questions (and if not, give yourself a round of applause πŸ‘πŸ»). And if we look back at SynchronizedLazyImpl, we see that @Volatile modifies _value there as well. Why?

Let's go back to what we learned about Java concurrency.

As we know, for efficiency each thread has its own working memory, and a thread operates mainly on that working memory. Changes made in working memory are flushed back to main memory at some later point, and the timing of that flush is uncertain. In other words, with multiple threads it is quite possible that a change made by thread A is not seen in time by thread B.

Let’s say we have thread A and thread B:

Thread A reads the variable sum from main memory and stores a copy in its own working memory; all subsequent reads by thread A go straight to that copy. If thread A modifies sum, it modifies the copy in working memory first and flushes it to main memory later, with no guarantee of when that write happens. If thread B reads the variable in the meantime, it may still see the old value, and if thread B also runs increment logic, the results become inconsistent. This is what we call a visibility problem.

To address this problem, we usually reach for one of two tools: synchronized or volatile.

synchronized guarantees that only one thread can hold the lock at a time, and releasing the lock flushes the thread's modifications to main memory, which avoids the problem above. However, it requires other threads to block. In certain scenarios, such as read-heavy, write-light ones, locking every read can hurt performance, and volatile avoids that cost there.

When multiple threads operate on a volatile variable, every write is flushed to main memory immediately and becomes visible to all threads, and every read must fetch the latest value from main memory before doing anything else. This avoids the performance cost of blocking threads. It is important to note, however, that volatile does not guarantee atomicity; it guarantees visibility and suppresses instruction reordering. (By default the compiler optimizes our code and may reorder certain steps.)

What is atomicity?

Atomicity means that an operation is indivisible. Whether on a multi-core or single-core machine, an atomic operation can only be performed by one thread at a time. In short, any operation that cannot be interrupted by the thread scheduler part-way through is considered atomic. For example, a = 1, a direct assignment, does not depend on any other step.

Something like a++ does not qualify, because it breaks down into these steps:

  1. Read the current value of a
  2. Add 1 to it
  3. Write the result back to a

These three steps happen one after another. If two threads run concurrently, thread A may execute step 1 just as thread B finishes the whole sequence; the value thread A read is now stale, so its subsequent increment and write-back no longer match the intended logic.

So if we look at the logic above:

If we didn’t use compareAndSet, we’d probably write code like this:

    if (_value == UNINITIALIZED_VALUE) {
        _value = newValue
        initializer = null
        return newValue
    }

  1. Check whether _value is still the default UNINITIALIZED_VALUE
  2. If it is, set it to newValue

However, the above is obviously not an atomic operation: there is no guarantee that the assignment follows the check without interruption. Another thread may well have performed the assignment in between, leaving us in a state inconsistent with what we expected.

So AtomicReferenceFieldUpdater.compareAndSet is used here instead. AtomicReferenceFieldUpdater is a utility the JDK provides for atomically updating a designated field of an object. The main logic of compareAndSet is as follows:

It relies on the CAS mechanism: compare the current value of _value in this object against the expected UNINITIALIZED_VALUE. If it really is still UNINITIALIZED_VALUE, meaning no other thread has claimed it yet, update it to newValue. Otherwise, if the value is no longer UNINITIALIZED_VALUE, abandon the update.
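The check-then-set above can be sketched with the same CAS semantics. For a self-contained demo this uses AtomicReference rather than AtomicReferenceFieldUpdater (the stdlib uses the field updater to avoid allocating a wrapper object per Lazy instance; the compareAndSet contract is the same):

```kotlin
import java.util.concurrent.atomic.AtomicReference

fun main() {
    val uninitialized = Any()                    // sentinel, like UNINITIALIZED_VALUE
    val slot = AtomicReference<Any>(uninitialized)

    // First CAS: slot still holds the sentinel, so the update wins
    println(slot.compareAndSet(uninitialized, "first"))  // true
    // Second CAS: the expectation no longer matches, so the update is abandoned
    println(slot.compareAndSet(uninitialized, "second")) // false
    println(slot.get())                                  // first
}
```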


UnsafeLazyImpl, as follows:

    internal class UnsafeLazyImpl<out T>(initializer: () -> T) : Lazy<T>... {
        private var initializer: (() -> T)? = initializer
        private var _value: Any? = UNINITIALIZED_VALUE

        override val value: T
            get() {
                if (_value === UNINITIALIZED_VALUE) {
                    _value = initializer!!()
                    initializer = null
                }
                return _value as T
            }
        ...
    }

It checks whether _value still equals the default sentinel; if so, it runs the initialization logic, otherwise it returns the cached value.

Because it does no thread-safety handling at all, it must be called from a thread-safe context; otherwise concurrent calls are likely to run the initializer multiple times and cause logic problems.
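A minimal single-threaded sketch of NONE: nothing runs until first access, and the result is cached afterwards:

```kotlin
fun main() {
    var calls = 0
    val config by lazy(LazyThreadSafetyMode.NONE) {
        calls++        // count initializer runs
        "loaded"
    }
    println(calls)  // 0 — declaring it does not initialize anything
    println(config) // loaded — first access runs the initializer
    println(config) // loaded — second access returns the cached value
    println(calls)  // 1 — the initializer ran exactly once
}
```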

Use advice

After analyzing the three implementations, it is not hard to see that each has its own suitable scenarios.


  • SYNCHRONIZED

    If a variable may be accessed by multiple threads at the same time and you cannot accept the initializer running more than once, use this mode. Note, however, that because get() uses an object lock internally, the first concurrent call may block other threads: for example, if a background thread and the main thread call it at the same time, the main thread may be blocked. That window is usually very short (it depends mainly on the initializer's logic), but it is worth keeping in mind.


  • PUBLICATION

    Also thread safe, but unlike the former you must accept that the initializer may run more than once. This does not affect the final result, because only the first successfully published value is ever returned. In general, if you can live with that, this mode lets you write thread-safe code while avoiding the lock overhead of get().

  • NONE

    This mode is not thread safe; you need to pay attention to where it is called from, otherwise concurrent calls may initialize the variable multiple times, and different threads may even observe different objects on their first access, causing logic problems. For Android development this is actually a fairly common choice, since we mostly work on the main thread: for example, we can lazily initialize fields in an Activity or Fragment.

Extended use


In a project, Fragment arguments are usually passed under standardized keys, so the passing and reading can be standardized as well:


    /** Attach an argument to the Fragment */
    // BUNDLE_KEY_TAG and toFragmentBundle are project-level helpers
    fun <T : Fragment> T.argument(key: String = BUNDLE_KEY_TAG, value: Parcelable): T {
        arguments = value.toFragmentBundle(key)
        return this
    }

    // Fragment-side reading
    inline fun <reified T : Any> Fragment.bundles(
        key: String = BUNDLE_KEY_TAG,
    ) = lazy(PUBLICATION) {
        val value = arguments?.get(key) ?: throw NullPointerException("Fragment.getBundle Null?")
        if (value is T) value else throw RuntimeException("Fragment.getBundle Type mismatch")
    }

When using:

private val searchKey by bundles<SearchUserKey>()


We often use BaseQuickAdapter in our projects; a simple idea for optimizing its creation with lazy is as follows:

    fun <T> createAdapter(
        @LayoutRes layout: Int,
        obj: QuickAdapterBuilder<T>.() -> Unit
    ): Lazy<BaseQuickAdapter<T, BaseViewHolder>> = lazy(NONE) {
        QuickAdapterBuilder<T>().apply {
            setLayout(layout)
            obj()
        }.adapter
    }

    class QuickAdapterBuilder<T> {

        private var layout: Int = 0

        private var convert: ((holder: BaseViewHolder, data: T) -> Unit)? = null

        private var init: (BaseQuickAdapter<T, BaseViewHolder>.() -> Unit)? = null

        fun setLayout(@LayoutRes layout: Int) {
            this.layout = layout
        }

        fun onBind(convert: (holder: BaseViewHolder, data: T) -> Unit) {
            this.convert = convert
        }

        fun init(init: BaseQuickAdapter<T, BaseViewHolder>.() -> Unit) {
            this.init = init
        }

        // Built lazily so the layout, convert, and init set through the builder
        // are picked up before the adapter is constructed
        internal val adapter: BaseQuickAdapter<T, BaseViewHolder> by lazy(NONE) {
            object : BaseQuickAdapter<T, BaseViewHolder>(layout), LoadMoreModule {
                init {
                    init?.invoke(this)
                }

                override fun convert(holder: BaseViewHolder, item: T) {
                    convert?.invoke(holder, item)
                }
            }
        }
    }

For our generic business or component code, we can write fairly elegant call sites simply by returning lazy {} from an extension function or a top-level function, as the examples above show.
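For instance, the same pattern without any Android dependencies might look like this (lazyLookup and the registry map are made up for illustration; substitute your own types, as the Fragment extension above does):

```kotlin
// Hypothetical helper: a top-level function that returns Lazy<T>, so call
// sites can bind it with `by` just like the Fragment extension above.
fun <T> lazyLookup(key: String, registry: Map<String, Any?>): Lazy<T> =
    lazy(LazyThreadSafetyMode.NONE) {
        @Suppress("UNCHECKED_CAST")
        (registry[key] as? T) ?: throw IllegalStateException("No value for key '$key'")
    }

fun main() {
    val registry = mapOf("title" to "Kotlin Lazy")
    val title: String by lazyLookup<String>("title", registry)
    println(title) // Kotlin Lazy
}
```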


Kotlin Lazy — What are they? How to use them?

Why doesn’t volatile guarantee atomicity while Atomic does?

Why does volatile not guarantee atomicity?

About me

I am Petterp, an ordinary developer. If this article helped you, a like is welcome; your support is the biggest encouragement for my continued writing!