Preface

Hi, I’m KunMinX, author of Jetpack MVVM Best Practice.

The term “data flooding” used here is a name I coined and spread online in 2019 to summarize this phenomenon, so that it would be easy to understand and remember the situation where “a page, entered a second time, receives a push of stale data”.

It mainly occurs in scenarios where the combination of SharedViewModel + LiveData is used for cross-page communication.

Objective of this article

Since the goal of this article is to present the flaws of the existing solutions, including the one in the official demo, and the refined solution reached after a year of iteration,

let us assume that everyone here already has a basic grasp of the background and knows:

Why LiveData is designed to deliver sticky events by default

Why the official documentation recommends SharedViewModel + LiveData (the documentation does not spell it out, but there are three key background reasons behind it)

And even why the “data flooding” phenomenon exists at all

As well as why static singletons and LiveDataBus are not suitable for the “page communication” scenario

If you are not familiar with this background, you can, depending on your interest, read “An Exclusive Analysis of the Full Background of LiveData Data Flooding”; it will not be repeated here.

Existing solutions and their drawbacks

In the Jetpack MVVM articles we mentioned the Event wrapper, the reflection approach, and SingleLiveEvent as three ways to solve the “data flooding” problem. They come, respectively, from an overseas article, the Meituan article, and the latest official demo.

But as we explained in Jetpack MVVM Essentials, they have the following problems:

The Event wrapper:

With multiple observers, only the first observer is allowed to consume the message, which does not match real-world requirements.

Also, a hand-written Event wrapper suffers from null-safety consistency issues in Java.
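
For context, the sketch below shows the Event-wrapper idea in minimal form. It is an illustration of the commonly circulated pattern, not code from this project, and the class and method names are the ones usually seen in community write-ups:

```java
// Minimal sketch of the Event-wrapper idea (illustrative; not this project's code).
// The payload is marked as handled after the first read, so only the first
// observer that asks for the content actually receives it.
public class Event<T> {

    private final T content;
    private boolean hasBeenHandled = false;

    public Event(T content) {
        this.content = content;
    }

    // Returns the content only once; later calls return null. This is exactly
    // the "only the first observer may consume" limitation, and the nullable
    // return is where the Java null-safety inconsistencies creep in.
    public T getContentIfNotHandled() {
        if (hasBeenHandled) {
            return null;
        }
        hasBeenHandled = true;
        return content;
    }

    // Allows inspecting the content even after it has been handled.
    public T peekContent() {
        return content;
    }
}
```

Every observer of a MutableLiveData<Event<T>> then has to call getContentIfNotHandled() and branch on null by hand, which is where the inconsistencies tend to appear.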

The approach of tampering with the version field via reflection:

It involves a delay, so it cannot be used in scenarios with real-time requirements;

and the data stays in memory alongside the SharedViewModel for a long time.
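
For reference, the reflection approach works roughly as sketched below: right after an observer is registered, the observer wrapper's internal mLastVersion is reflectively aligned with the LiveData's mVersion, so the pending sticky value looks as if it had already been delivered. The field names are androidx.lifecycle.LiveData internals that may change between versions, and the class and method names here are illustrative, so treat this as a sketch rather than production code:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.Map;

import androidx.lifecycle.LiveData;
import androidx.lifecycle.Observer;

// Sketch of the reflection trick; relies on private androidx.lifecycle internals
// (mObservers, mVersion, mLastVersion) that may change between library versions.
// Error handling is omitted for brevity.
public final class LiveDataVersionHook {

    private LiveDataVersionHook() {
    }

    public static void alignObserverVersion(LiveData<?> liveData, Observer<?> observer) throws Exception {
        // 1. Fetch the internal observer map (SafeIterableMap<Observer, ObserverWrapper>).
        Field observersField = LiveData.class.getDeclaredField("mObservers");
        observersField.setAccessible(true);
        Object observers = observersField.get(liveData);
        Method get = observers.getClass().getDeclaredMethod("get", Object.class);
        get.setAccessible(true);
        Object entry = get.invoke(observers, observer);
        Object observerWrapper = ((Map.Entry<?, ?>) entry).getValue();

        // 2. Read the LiveData's current version counter.
        Field versionField = LiveData.class.getDeclaredField("mVersion");
        versionField.setAccessible(true);
        Object version = versionField.get(liveData);

        // 3. Overwrite mLastVersion, declared on the wrapper's base class ObserverWrapper,
        //    so the sticky value is treated as already seen by this observer.
        Field lastVersionField = observerWrapper.getClass().getSuperclass().getDeclaredField("mLastVersion");
        lastVersionField.setAccessible(true);
        lastVersionField.set(observerWrapper, version);
    }
}
```

Because it rewrites library internals instead of removing the message, the value itself still sits in the SharedViewModel, which is the memory concern noted above.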

SingleLiveEvent from the latest official demo:

It improves on the consistency problem of the Event wrapper, but it still does not solve multi-observer consumption.

It also introduces the additional problem of messages never being freed from memory.
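
For comparison, the SingleLiveEvent pattern from the official architecture samples boils down to roughly the following (condensed; the original also warns when multiple observers are registered):

```java
import java.util.concurrent.atomic.AtomicBoolean;

import androidx.annotation.MainThread;
import androidx.lifecycle.LifecycleOwner;
import androidx.lifecycle.MutableLiveData;
import androidx.lifecycle.Observer;

// Condensed sketch of the official SingleLiveEvent pattern: an AtomicBoolean
// "pending" flag lets each new value through to an observer only once.
public class SingleLiveEvent<T> extends MutableLiveData<T> {

    private final AtomicBoolean pending = new AtomicBoolean(false);

    @MainThread
    @Override
    public void observe(LifecycleOwner owner, Observer<? super T> observer) {
        // Only the first observer to flip the flag consumes the value, so
        // multi-observer consumption remains unsupported; and the value itself
        // is never cleared, so it keeps occupying memory.
        Observer<T> gate = t -> {
            if (pending.compareAndSet(true, false)) {
                observer.onChanged(t);
            }
        };
        super.observe(owner, gate);
    }

    @MainThread
    @Override
    public void setValue(T t) {
        pending.set(true);
        super.setValue(t);
    }
}
```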

UnPeekLiveData characteristics

Through its original “delayed automatic message clearing” design, UnPeekLiveData satisfies all of the following (a conceptual sketch follows the list):

1. When a message is dispatched to multiple observers, consumption by the first observer does not null it out for the others

2. Once the time limit is reached, the message no longer floods back into re-entered pages

3. When the time limit expires, the message is automatically cleared and released from memory

4. Makes a non-intrusive design possible: combined with the design of the official SingleLiveEvent, it is ultimately implemented as a non-intrusive override that follows the open-closed principle.
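
To make the idea concrete, here is a conceptual sketch of “delayed automatic message clearing”. It is not UnPeekLiveData's actual implementation, and the class name and timing policy are illustrative: after each message is dispatched, a cleanup is scheduled so that, once the survival window elapses, the value is released, can no longer flood back into late-registering pages, and stops occupying memory.

```java
import android.os.Handler;
import android.os.Looper;

import androidx.lifecycle.MutableLiveData;

// Conceptual sketch only; NOT UnPeekLiveData's real implementation.
// Each new message (re)schedules a cleanup on the main thread. Once the
// survival window elapses, the value is cleared, so pages that register
// afterwards no longer receive the stale message, and the message no longer
// lingers in memory alongside the SharedViewModel.
public class AutoClearLiveData<T> extends MutableLiveData<T> {

    private final Handler mainHandler = new Handler(Looper.getMainLooper());
    private final long survivalMillis;
    private final Runnable clearTask = this::clear;

    public AutoClearLiveData(long survivalMillis) {
        this.survivalMillis = survivalMillis;
    }

    @Override
    public void setValue(T value) {
        mainHandler.removeCallbacks(clearTask);           // each new message restarts the window
        super.setValue(value);
        mainHandler.postDelayed(clearTask, survivalMillis);
    }

    private void clear() {
        // Dispatch null as the "cleared" state; in this sketch, observers are
        // expected to treat null as "no message to consume".
        super.setValue(null);
    }
}
```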

In addition, UnPeekLiveData provides a Builder, which you can use to assemble an UnPeekLiveData tailored to your own business scenarios (a hypothetical usage sketch appears below).

Features: zero-intrusion design, delayed automatic message clearing, Builder-based configuration.
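
As a hypothetical usage sketch, the snippet below shows how such a builder-assembled channel could be hosted in a SharedViewModel. The builder option name and the UnPeekLiveData import are placeholders rather than the library's confirmed API; consult the GitHub README for the real options and for how observers are registered:

```java
import androidx.lifecycle.ViewModel;

// Hypothetical usage sketch; the builder option below is a placeholder name,
// not the library's confirmed API. The UnPeekLiveData import is deliberately
// omitted; take the artifact coordinates and package from the project README.
public class SharedViewModel extends ViewModel {

    // One event channel shared by the pages that need to communicate.
    public final UnPeekLiveData<String> refreshSignal =
            new UnPeekLiveData.Builder<String>()
                    .setAllowNullValue(false)   // placeholder: whether null payloads are accepted
                    .create();
}
```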

PS: Many thanks to hegaojian, Angki, Flynn, Joker_Wan, Xiaoshu Zhiqiu, Big Qi, and other partners for their recent active trials and feedback, which helped previously undetected problems be found and addressed in a timely manner.

 

GitHub & Maven dependencies

See the GitHub repository: KunMinX/UnPeekLiveData

   

License

This article is released under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license (CC BY-NC-ND 4.0).

Copyright © 2019 – present KunMinX

The coining of the term “data flooding” and the summary of the phenomenon in this article, the analysis of the defects of the Event wrapper, the reflection approach, and SingleLiveEvent, as well as the “delayed automatic message clearing” design of UnPeekLiveData, are all my independent, original work. I retain ownership and the right of final interpretation.

If you reference or quote the introduction, ideas, or conclusions of this article in derivative works, or reprint the full text, you must credit the source with a link; otherwise I reserve the right to pursue liability.