Status Update
Comments
du...@google.com <du...@google.com> #2
reemission of the same liveData is racy
ia...@gmail.com <ia...@gmail.com> #3
va...@gmail.com <va...@gmail.com> #4
ap...@google.com <ap...@google.com> #5
@Test
fun raceTest() {
    val subLiveData = MutableLiveData(1)
    val subject = liveData(testScope.coroutineContext) {
        emitSource(subLiveData)
        emitSource(subLiveData) // crashes: second registration races with removal of the first
    }
    // addObserver() is a helper in the test class that attaches an observer to this LiveData
    subject.addObserver().apply {
        testScope.advanceUntilIdle()
    }
}
lu...@gmail.com <lu...@gmail.com> #6
du...@google.com <du...@google.com> #7
I actually have a WIP fix for it.
If your case is the one I found (emitting the same LiveData multiple times, as shown in #5), you can work around it by adding a dummy transformation.
val subLiveData = MutableLiveData(1)
val subject = liveData(testScope.coroutineContext) {
    // map { it } wraps the source in a new LiveData instance each time,
    // so the two emitted sources no longer collide
    emitSource(subLiveData.map { it })
    emitSource(subLiveData.map { it })
}
ch...@gmail.com <ch...@gmail.com> #8
Branch: androidx-master-dev
commit af12e75e6b4110f48e44ca121466943909de8f06
Author: Yigit Boyar <yboyar@google.com>
Date: Tue Sep 03 12:58:11 2019
Fix coroutine livedata race condition
This CL fixes a bug in the liveData builder where emitting the same
LiveData source twice would crash because the second emission's
registration could happen before the first one was removed as a
source.
We fix it by using a suspending dispose function. It does feel
a bit hacky but we cannot make DisposableHandle.dispose async
and we do not want to block there. This does not mean there is
a problem if the developer disposes it manually, since our emit
functions take care of making sure it is disposed (and there is
no other way to add a source to the underlying MediatorLiveData).
Bug: 140249349
Test: BuildLiveDataTest#raceTest_*
Change-Id: I0b464c242a583da4669af195cf2504e2adc4de40
M lifecycle/lifecycle-livedata-ktx/api/2.2.0-alpha05.txt
M lifecycle/lifecycle-livedata-ktx/api/current.txt
M lifecycle/lifecycle-livedata-ktx/api/public_plus_experimental_2.2.0-alpha05.txt
M lifecycle/lifecycle-livedata-ktx/api/public_plus_experimental_current.txt
M lifecycle/lifecycle-livedata-ktx/api/restricted_2.2.0-alpha05.txt
M lifecycle/lifecycle-livedata-ktx/api/restricted_current.txt
M lifecycle/lifecycle-livedata-ktx/src/main/java/androidx/lifecycle/CoroutineLiveData.kt
M lifecycle/lifecycle-livedata-ktx/src/test/java/androidx/lifecycle/BuildLiveDataTest.kt
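For context, a minimal sketch of the suspending-dispose idea described in the commit above. This is not the actual CoroutineLiveData.kt code; the class name and structure are simplified assumptions:

import androidx.lifecycle.LiveData
import androidx.lifecycle.MediatorLiveData
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Sketch: removal of the previously emitted source can be awaited on the
// main dispatcher before the next source is registered, so a second
// emitSource() can no longer race with the first one's removal.
internal class EmittedSourceSketch<T>(
    private val source: LiveData<T>,
    private val mediator: MediatorLiveData<T>
) {
    private var disposed = false

    // Suspends until the source has really been removed on the main thread.
    suspend fun disposeNow() = withContext(Dispatchers.Main.immediate) {
        if (!disposed) {
            mediator.removeSource(source)
            disposed = true
        }
    }
}

Under this sketch, each emitSource call would first await disposeNow() on whatever source was previously emitted before calling addSource on the mediator.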
du...@google.com <du...@google.com> #9
It's expected to need to call .cachedIn twice there, as you've done, since re-collection is unsupported on PagingData. Operators like combine cause re-collection, so without cachedIn you would be reloading the data from scratch any time a new value is emitted to removedItemsFlow.
In your example you are correctly caching the filtered data after .cachedIn (presumably in alpha03).
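For reference, a minimal sketch of the pattern this comment describes: caching both before and after the combine/filter step. The Item type, removedItemsFlow, and the upstream pager are assumptions, not code from this thread:

import androidx.paging.PagingData
import androidx.paging.cachedIn
import androidx.paging.filter
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.combine

data class Item(val id: Long, val name: String)

// Sketch: cache the upstream PagingData, combine with removal events,
// then cache again because combine re-collects its sources.
class ItemsPresenter(
    private val scope: CoroutineScope,
    pager: Flow<PagingData<Item>> // hypothetical upstream Pager.flow
) {
    private val removedItemsFlow = MutableStateFlow<Set<Long>>(emptySet())

    val items: Flow<PagingData<Item>> = pager
        .cachedIn(scope) // avoid refetching when combine re-collects
        .combine(removedItemsFlow) { pagingData, removedIds ->
            pagingData.filter { item -> item.id !in removedIds }
        }
        .cachedIn(scope) // avoid re-running the filter on re-collection
}

The second cachedIn is what prevents the filter from re-running on re-collection, matching the explanation above; it does not make the filtering itself incremental.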
ch...@gmail.com <ch...@gmail.com> #10
In my example I call .cachedIn twice, but it doesn't actually cache the filtered data, so I have to keep the whole set of removed data and filter every removed item out of the source data each time.
For example, I want to remove the data whose id is 1, and then remove id 2; but when I remove 2, I have to filter 1 out again. So I have to save all of the removed data, which makes me think the re-collection doesn't work even if I call .cachedIn twice.
du...@google.com <du...@google.com> #11
In my example I call .cachedIn twice, but it doesn't actually cache the filtered data, so I have to keep the whole set of removed data and filter every removed item out of the source data each time.
For example, I want to remove the data whose id is 1, and then remove id 2; but when I remove 2, I have to filter 1 out again. So I have to save all of the removed data, which makes me think the re-collection doesn't work even if I call .cachedIn twice.
Sorry - I'm not sure I entirely understand what you're doing here. .cachedIn doesn't affect what REFRESH returns; it simply prevents repeated work on re-collection within the specified scope (for example, if you change from landscape to portrait).
However, on each new instance of PagingData due to invalidation, you would still need to filter the entire stream of events, since it's a new stream.
ch...@gmail.com <ch...@gmail.com> #12
I want to do something like notifyItemRemoved. For example, there is a list of collected items: I want to un-collect item1, then un-collect item2, but when I un-collect item2 I have to filter item1 out again.
I don't know if this example is clear, but I think calling .cachedIn after the filter is useless, because it can't cache the newly filtered data. So if I want to remove data, I have to keep every item I want removed, even the data that has already been removed.
du...@google.com <du...@google.com> #13
In this case it's recommended to update the backing dataset directly and call invalidate(). We're investigating a page-level flow API that will allow you to control the items themselves without invalidation, which you can track in #9.
Calling .cachedIn after .filter prevents your .filter from needing to run again on the same list, for example on configuration change or when navigating between fragments. It has nothing to do with making your filter operation incremental.
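A minimal sketch of the recommended pattern, update the backing dataset and invalidate. The repository, paging math, and names are hypothetical (Item reuses the type from the earlier sketch):

import androidx.paging.PagingSource
import androidx.paging.PagingState

// Sketch: the repository owns the mutable list; each invalidation makes
// Paging ask the factory for a fresh PagingSource over the updated data,
// so no manual filtering of removed items is needed.
class InMemoryRepository {
    private val backing = mutableListOf<Item>()
    private var current: InMemoryPagingSource? = null

    // Used as pagingSourceFactory = { repository.newSource() } in a Pager.
    fun newSource(): InMemoryPagingSource =
        InMemoryPagingSource(backing.toList()).also { current = it }

    fun remove(item: Item) {
        backing.remove(item)
        current?.invalidate() // new generation reloads from the updated list
    }
}

class InMemoryPagingSource(
    private val data: List<Item>
) : PagingSource<Int, Item>() {
    override suspend fun load(params: LoadParams<Int>): LoadResult<Int, Item> {
        val from = params.key ?: 0
        val slice = data.drop(from).take(params.loadSize)
        return LoadResult.Page(
            data = slice,
            prevKey = if (from == 0) null else (from - params.loadSize).coerceAtLeast(0),
            nextKey = if (from + slice.size >= data.size) null else from + slice.size
        )
    }

    override fun getRefreshKey(state: PagingState<Int, Item>): Int? =
        state.anchorPosition
}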
ch...@gmail.com <ch...@gmail.com> #14
Thank you for your reply. I hope everything proceeds smoothly and the API keeps getting more powerful.
Description
Component used: Paging
Version used: 3.0.0-alpha01
I was moving my app over to Paging 3.0.0-alpha01. It uses Room 2.3.0-alpha01 for its PagingSource support.
Dao:
ViewModel:
UI:
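The Dao, ViewModel, and UI snippets did not survive in this copy of the report. A representative Room + Paging 3 setup of the kind described (all names here are hypothetical, not the reporter's actual code) might look like:

import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import androidx.paging.Pager
import androidx.paging.PagingConfig
import androidx.paging.PagingSource
import androidx.paging.cachedIn
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.PrimaryKey
import androidx.room.Query

@Entity(tableName = "images")
data class Image(@PrimaryKey val id: Long, val createdAt: Long)

// Dao: Room 2.3.0-alpha01 can return a PagingSource directly.
@Dao
interface ImageDao {
    @Query("SELECT * FROM images ORDER BY createdAt DESC")
    fun pagingSource(): PagingSource<Int, Image>
}

// ViewModel: build a Pager over the DAO and cache it in viewModelScope.
class ImagesViewModel(dao: ImageDao) : ViewModel() {
    val images = Pager(PagingConfig(pageSize = 50)) { dao.pagingSource() }
        .flow
        .cachedIn(viewModelScope)
}

// UI (inside a Fragment, with a hypothetical PagingDataAdapter<Image, *>):
// viewLifecycleOwner.lifecycleScope.launch {
//     viewModel.images.collectLatest { adapter.submitData(it) }
// }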
Steps to reproduce:
Expected behavior: The new images are seamlessly added
Actual behavior: I received the following exception: