Status Update
Comments
du...@google.com <du...@google.com> #2
Branch: androidx-master-dev
commit b90079595f33f58fece04026a97faa0d243acdb1
Author: Yuichi Araki <yaraki@google.com>
Date: Wed Sep 18 16:55:49 2019
Change the way to detect mismatch between POJO and query
This fixes cursor mismatch warnings with expandProjection.
Bug: 140759491
Test: QueryMethodProcessorTest
Change-Id: I7659002e5e0d1ef60fc1af2a625c0c36da0664d8
M room/compiler/src/main/kotlin/androidx/room/processor/QueryMethodProcessor.kt
M room/compiler/src/main/kotlin/androidx/room/solver/TypeAdapterStore.kt
M room/compiler/src/main/kotlin/androidx/room/solver/query/result/PojoRowAdapter.kt
M room/compiler/src/test/kotlin/androidx/room/processor/QueryMethodProcessorTest.kt
M room/compiler/src/test/kotlin/androidx/room/testing/TestProcessor.kt
ia...@gmail.com <ia...@gmail.com> #3
va...@gmail.com <va...@gmail.com> #4
Branch: androidx-master-dev
commit bdde5a1a970ddc9007b28de4aa29d60ffa588f08
Author: Yigit Boyar <yboyar@google.com>
Date: Thu Apr 16 16:47:05 2020
Re-factor how errors are dismissed when query is re-written
This CL changes how we handle errors/warnings when a query is
re-written.
There was a bug in expandProjection where we would report warnings
for things that Room already fixes automatically.
The solution to that problem (I7659002e5e0d1ef60fc1af2a625c0c36da0664d8)
deferred the validation of columns until after the re-write
decision is made. Unfortunately, this required changing PojoRowAdapter
to carry a dummy mapping until it is validated, making it hard to use
since that mapping is non-null but not meaningful.
This CL partially reverts that change and instead relies on the log
deferring logic we have in Context. This way, we don't need to break
the stability of PojoRowAdapter while still having the ability to
drop warnings that Room fixes. This will also play more nicely when we
have different query re-writing options that can use more information
about the query results.
Bug: 153387066
Bug: 140759491
Test: existing tests pass
Change-Id: I2ec967c763d33d7a3ff02c1a13c6953b460d1e5f
M room/compiler/src/main/kotlin/androidx/room/log/RLog.kt
M room/compiler/src/main/kotlin/androidx/room/processor/QueryMethodProcessor.kt
M room/compiler/src/main/kotlin/androidx/room/solver/TypeAdapterStore.kt
M room/compiler/src/main/kotlin/androidx/room/solver/query/result/PojoRowAdapter.kt
ap...@google.com <ap...@google.com> #5
Branch: androidx-master-dev
commit 310351127da4b970a9c5b5dbaa796d0cf19abac0
Author: Yigit Boyar <yboyar@google.com>
Date: Fri Jun 12 22:52:50 2020
Never collect for Flow<PageEvent> twice
This CL fixes a bug where the Multicast logic could possibly
restart the Flow&lt;PageEvent&gt; if upstream closes and a new downstream
arrives before we get the new PagedData.
Since Flow&lt;PageEvent&gt; (PagedData.flow) cannot be collected twice and
we already keep the history of events cached, this CL ensures that
if the Multicaster is restarted, we won't connect to the original flow.
Instead, we'll just complete that flow as an empty flow and let
downstream receive the history of events, if any exists.
I'm not changing Multicaster because, from its perspective, restarting
upstream makes sense if there is a new downstream after the previous
upstream closes. This behavior is specific to CachedPageEventFlow, hence
handled there.
We run Multicaster with keepAlive so there is no case where the
collection from upstream would stop unless:
* upstream closes
* we close the CachedPageEventFlow (usually due to a new one)
* scope is cancelled
Bug: 158784811
Test: CachedPageEventFlowTest, muzei
Change-Id: I58bcc4b2bde88b1c78ed1a96cf227e068127b47e
M paging/common/src/main/kotlin/androidx/paging/CachedPageEventFlow.kt
M paging/common/src/test/kotlin/androidx/paging/CachedPageEventFlowTest.kt
yb...@google.com <yb...@google.com>
lu...@gmail.com <lu...@gmail.com> #6
I use a simple fragment and the non-suspending submitData:
```
java.lang.IllegalStateException: cannot collect twice from pager
at androidx.paging.PageFetcherSnapshot$pageEventFlow$1.invokeSuspend(PageFetcherSnapshot.kt:94)
at androidx.paging.PageFetcherSnapshot$pageEventFlow$1.invoke(PageFetcherSnapshot.kt)
at androidx.paging.CancelableChannelFlowKt$cancelableChannelFlow$1.invokeSuspend(CancelableChannelFlow.kt:35)
at androidx.paging.CancelableChannelFlowKt$cancelableChannelFlow$1.invoke(CancelableChannelFlow.kt)
at kotlinx.coroutines.flow.ChannelFlowBuilder.collectTo$suspendImpl(Builders.kt:327)
at kotlinx.coroutines.flow.ChannelFlowBuilder.collectTo(Builders.kt)
at kotlinx.coroutines.flow.internal.ChannelFlow$collectToFun$1.invokeSuspend(ChannelFlow.kt:53)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
at kotlinx.coroutines.EventLoop.processUnconfinedEvent(EventLoop.common.kt:69)
at kotlinx.coroutines.DispatchedContinuationKt.resumeCancellableWith(DispatchedContinuation.kt:337)
at kotlinx.coroutines.intrinsics.CancellableKt.startCoroutineCancellable(Cancellable.kt:26)
at kotlinx.coroutines.CoroutineStart.invoke(CoroutineStart.kt:109)
at kotlinx.coroutines.AbstractCoroutine.start(AbstractCoroutine.kt:158)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.launch(Builders.common.kt:54)
at kotlinx.coroutines.BuildersKt.launch(Unknown Source)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.launch$default(Builders.common.kt:47)
at kotlinx.coroutines.BuildersKt.launch$default(Unknown Source)
at androidx.paging.AsyncPagingDataDiffer.submitData(AsyncPagingDataDiffer.kt:181)
at androidx.paging.PagingDataAdapter.submitData(PagingDataAdapter.kt:115)
at ds.photosight.view.GalleryFragment$onViewCreated$$inlined$observe$2.onChanged(LiveData.kt:53)
```
du...@google.com <du...@google.com> #7
Are you able to provide a repro?
Make sure you didn't forget to call .cachedIn(scope)?
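For context, a minimal sketch of the setup the previous comment is asking about: caching the PagingData stream in the ViewModel's scope so re-collection (e.g. after a configuration change) does not restart the pager. The names GalleryViewModel, PhotoDao, and GalleryAdapter are illustrative, not taken from this thread.

```kotlin
// Hypothetical sketch, assuming a Room-backed PagingSource.
class GalleryViewModel(photoDao: PhotoDao) : ViewModel() {
    // cachedIn is what prevents "cannot collect twice from pager":
    // it multicasts and replays the PagingData stream within viewModelScope.
    val photos = Pager(PagingConfig(pageSize = 30)) { photoDao.pagingSource() }
        .flow
        .cachedIn(viewModelScope)
}

class GalleryFragment : Fragment() {
    private val adapter = GalleryAdapter()
    private val viewModel: GalleryViewModel by viewModels()

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        viewLifecycleOwner.lifecycleScope.launch {
            viewModel.photos.collectLatest { pagingData ->
                // Suspending variant; a non-suspending
                // submitData(lifecycle, pagingData) overload also exists.
                adapter.submitData(pagingData)
            }
        }
    }
}
```

Without the .cachedIn call, any second collection of the same PagingData flow (rotation, fragment re-creation, or an operator that re-collects) would trigger the IllegalStateException shown in the stack trace above.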
ch...@gmail.com <ch...@gmail.com> #8
```
pagerFlow
    .combine(removedItemsFlow) { pagingData, removed ->
        pagingData.filter { (it as CollectEntity).id !in removed }
    }
    .cachedIn(viewmodel.viewModelScope)
    .collectLatest {
        ****
    }
```
I found the same error: if cachedIn comes after combine, it breaks. It works
when cachedIn is called twice, as in cachedIn().combine().cachedIn(),
but then the combined data isn't cached, and I want to cache the data after the filter.
du...@google.com <du...@google.com> #9
It's expected that you need to call .cachedIn
twice there, as you've done, since re-collection is unsupported on PagingData. Operators like combine cause re-collection, so without cachedIn you would be reloading the data from scratch any time a new value is emitted to removedItemsFlow.
In your example you are correctly caching the filtered data.
After .cachedIn
for you (presumably in alpha03).
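The double-cache pipeline described above can be sketched as follows; pagerFlow, removedItemsFlow, and CollectEntity come from the earlier snippet in this thread, and the field name is illustrative.

```kotlin
// Hedged sketch: cache before combine so re-collection is safe,
// and cache again after filter so the filtered result survives
// configuration changes within viewModelScope.
val visibleItems = pagerFlow
    .cachedIn(viewModelScope)   // raw pages; combine may re-collect this safely
    .combine(removedItemsFlow) { pagingData, removed ->
        pagingData.filter { (it as CollectEntity).id !in removed }
    }
    .cachedIn(viewModelScope)   // filtered result, cached for new collectors
```

The first .cachedIn prevents the "cannot collect twice" crash when combine re-collects; the second avoids redoing the filter work for each new collector. Neither makes the filter incremental across PagingData generations.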
ch...@gmail.com <ch...@gmail.com> #10
In my example I call .cachedIn
twice, but it doesn't actually cache the filtered data, so I have to keep the whole set of removed items and filter all of them out of the source data every time.
For example, I want to remove the item with id 1
, then I remove id 2
; but when I remove 2
, I have to filter out 1
again. So I have to save all removed items, which makes me think the re-collection caching doesn't work even though I call .cachedIn
twice.
du...@google.com <du...@google.com> #11
In my example I call .cachedIn twice, but it doesn't actually cache the filtered data, so I have to keep the whole set of removed items and filter all of them out of the source data every time.
For example, I remove the item with id 1, then I remove id 2; but when I remove 2, I have to filter out 1 again. So I have to save all removed items, which makes me think the re-collection caching doesn't work even though I call .cachedIn twice.
Sorry, I'm not sure I entirely understand what you're doing here. .cachedIn
doesn't affect what REFRESH returns; it simply prevents repeated work on re-collection within the specified scope (for example, if you change from landscape to portrait).
However, on each new instance of PagingData
due to invalidation, you would still need to filter the entire stream of events, since it's a new stream.
ch...@gmail.com <ch...@gmail.com> #12
I want to do something like notifyItemRemoved
. For example, there is a list of collected items; I want to un-collect item1
, then un-collect item2
. So when I un-collect item2
, I have to filter out item1
again.
I don't know if this example is clear, but I think calling .cachedIn
after filter is useless here because it can't cache the newly filtered data. So if I want to remove data, I have to keep every item I want to remove, even items that were already removed.
du...@google.com <du...@google.com> #13
In this case it's recommended to update the backing dataset directly and call invalidate()
. We're investigating a page-level flow API that will allow you to control the items themselves without invalidation, which you can track in #9.
Calling .cachedIn
after .filter
prevents your .filter
from needing to run again on the same list, for example on configuration change or when navigating between fragments. It has nothing to do with making your filter operation incremental.
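The recommendation above (update the backing dataset, then invalidate) might look like the following with a Room-backed PagingSource. CollectDao, the table name, and the queries are illustrative names, not from this thread.

```kotlin
// Hedged sketch, assuming Room generates the PagingSource.
@Dao
interface CollectDao {
    @Query("SELECT * FROM collect ORDER BY id")
    fun pagingSource(): PagingSource<Int, CollectEntity>

    @Query("DELETE FROM collect WHERE id = :id")
    suspend fun delete(id: Long)
}

// Removing the row changes the backing dataset; Room-generated
// PagingSources observe the table and invalidate automatically,
// so a fresh PagingData is emitted without the removed item.
suspend fun unCollect(dao: CollectDao, id: Long) {
    dao.delete(id)
    // With a hand-written PagingSource you would call
    // pagingSource.invalidate() yourself after mutating the data.
}
```

This avoids keeping an ever-growing in-memory set of removed ids: each invalidation reloads pages that simply no longer contain the deleted rows.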
ch...@gmail.com <ch...@gmail.com> #14
Thank you for your reply. I hope your work proceeds smoothly and the API becomes even more powerful.
Description
Component used: Paging
Version used: 3.0.0-alpha01
I was moving my app over to Paging 3.0.0-alpha01. It uses Room 2.3.0-alpha01 for its
PagingSource
support.
Dao:
ViewModel:
UI:
Steps to reproduce:
Expected behavior: the new images are seamlessly added.
Actual behavior: I received the following exception: