Fixed
Status Update
Comments
su...@google.com <su...@google.com> #2
Yigit, do you have time to fix it?
reemission of the same liveData is racy
al...@gmail.com <al...@gmail.com> #3
yea i'll take it.
al...@gmail.com <al...@gmail.com> #4
Thanks for the detailed analysis. This may not be an issue anymore since we've started using Main.immediate there, but I'm not sure; I'll try to create a test case.
ra...@google.com <ra...@google.com> #5
just emitting same live data reproduces the issue.
@Test
fun raceTest() {
    val subLiveData = MutableLiveData(1)
    val subject = liveData(testScope.coroutineContext) {
        emitSource(subLiveData)
        emitSource(subLiveData) // crashes: the second registration races the first removal
    }
    // addObserver() is a test helper that attaches an always-active observer
    subject.addObserver().apply {
        testScope.advanceUntilIdle()
    }
}
be...@gmail.com <be...@gmail.com> #6
With 2.2.0-alpha04 (which uses Main.immediate), the issue still seems to be there (I tested it by calling emitSource() twice, as in your test case).
al...@gmail.com <al...@gmail.com> #7
yea sorry immediate does not fix it.
I actually have a WIP fix for it:
https://android-review.googlesource.com/c/platform/frameworks/support/+/1112186
if your case is the one i found (emitting same LiveData multiple times, as shown in #5) you can work around it by adding a dummy transformation.
val subLiveData = MutableLiveData(1)
val subject = liveData(testScope.coroutineContext) {
    emitSource(subLiveData.map { it })
    emitSource(subLiveData.map { it })
}
be...@gmail.com <be...@gmail.com> #8
Project: platform/frameworks/support
Branch: androidx-master-dev
commit af12e75e6b4110f48e44ca121466943909de8f06
Author: Yigit Boyar <yboyar@google.com>
Date: Tue Sep 03 12:58:11 2019
Fix coroutine livedata race condition
This CL fixes a bug in the liveData builder where emitting the same
LiveData source twice would crash because the second emission's
registration could happen before the first one was removed as a source.
We fix it by using a suspending dispose function. It does feel
a bit hacky, but we cannot make DisposableHandle.dispose async
and we do not want to block there. This does not mean there is
a problem if the developer disposes it manually, since our emit
functions take care of making sure it disposes (and there is
no other way to add a source to the underlying MediatorLiveData).
Bug: 140249349
Test: BuildLiveDataTest#raceTest_*
Change-Id: I0b464c242a583da4669af195cf2504e2adc4de40
M lifecycle/lifecycle-livedata-ktx/api/2.2.0-alpha05.txt
M lifecycle/lifecycle-livedata-ktx/api/current.txt
M lifecycle/lifecycle-livedata-ktx/api/public_plus_experimental_2.2.0-alpha05.txt
M lifecycle/lifecycle-livedata-ktx/api/public_plus_experimental_current.txt
M lifecycle/lifecycle-livedata-ktx/api/restricted_2.2.0-alpha05.txt
M lifecycle/lifecycle-livedata-ktx/api/restricted_current.txt
M lifecycle/lifecycle-livedata-ktx/src/main/java/androidx/lifecycle/CoroutineLiveData.kt
M lifecycle/lifecycle-livedata-ktx/src/test/java/androidx/lifecycle/BuildLiveDataTest.kt
https://android-review.googlesource.com/1112186
https://goto.google.com/android-sha1/af12e75e6b4110f48e44ca121466943909de8f06
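For reference, here is a minimal sketch of the suspending-dispose idea described in the commit message. The names (EmittedSource, disposeNow) and the structure are illustrative assumptions, not the exact AndroidX implementation:

    import androidx.lifecycle.LiveData
    import androidx.lifecycle.MediatorLiveData
    import kotlinx.coroutines.Dispatchers
    import kotlinx.coroutines.withContext

    // Wraps a source added to the mediator so it can be detached with a
    // *suspending* dispose: the caller waits until removeSource has actually
    // run on the main thread before re-adding the same LiveData.
    internal class EmittedSource<T>(
        private val source: LiveData<T>,
        private val mediator: MediatorLiveData<T>
    ) {
        suspend fun disposeNow() = withContext(Dispatchers.Main.immediate) {
            mediator.removeSource(source)
        }
    }

Because disposeNow() suspends until the removal has completed, a second emitSource() of the same LiveData can no longer race ahead of the first source's removal.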
ti...@gmail.com <ti...@gmail.com> #9
Sorry folks. This is a bug in alpha01. The fix should be a part of alpha02. Hang tight.
al...@gmail.com <al...@gmail.com> #10
Do you know when the fix will be available?
su...@google.com <su...@google.com> #11
We're aiming for later this week.
al...@gmail.com <al...@gmail.com> #12
Awesome, I'll get to merge my branch! 😉
ya...@gmail.com <ya...@gmail.com> #13
Hi, may I know:
1) After a OneTimeWorkRequest-typed worker's doWork() finishes executing, do we need to call cancelAllWorkByTag explicitly to perform "clean up"?
2) After this bug is fixed, is there still any hard limit on how many active jobs (with different tags) we can schedule on WorkManager?
Thanks.
al...@gmail.com <al...@gmail.com> #14
No, you don't need to perform cleanup and no, there aren't any limits.
su...@google.com <su...@google.com> #15
1. You don't need to perform any cleanup. Completed OneTimeWorkRequests will not re-execute.
2. There is no limit on the number of jobs you can schedule; however, at most 100 will be sent to JobScheduler at a time. The rest will be queued up and sent when appropriate. (That being said, I would advise that you probably don't want to have hundreds of potentially active jobs at any one time.)
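To make the answer concrete, here is a minimal sketch of enqueueing a one-time request using the current WorkManager API; SyncWorker and the "sync" tag are illustrative assumptions:

    import android.content.Context
    import androidx.work.OneTimeWorkRequestBuilder
    import androidx.work.WorkManager
    import androidx.work.Worker
    import androidx.work.WorkerParameters

    // Hypothetical worker: once doWork() returns, this request is complete
    // and will never re-execute, so no explicit cleanup call is needed.
    class SyncWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
        override fun doWork(): Result = Result.success()
    }

    fun enqueueSync(context: Context) {
        val request = OneTimeWorkRequestBuilder<SyncWorker>()
            .addTag("sync") // the tag is for querying/cancelling, not cleanup
            .build()
        WorkManager.getInstance(context).enqueue(request)
    }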
ni...@gmail.com <ni...@gmail.com> #16
I still have this issue with 1.0.0-alpha02, like others, it seems (https://github.com/googlecodelabs/android-workmanager/issues/16).
As seen in the linked issue, every time I restart my app (using the ADB Idea plugin, which kills it and then starts it again), the number of tasks scheduled in JobScheduler increases by the number of periodic tasks scheduled.
I have 4 users who have run into the issue, and I will use workaround code I wrote for now.
I am seeing it on Samsung 7.0 devices so far.
ni...@gmail.com <ni...@gmail.com> #17
Here's a sample project that reproduces the issue. I use the ADB Idea plugin's "ADB Restart App" feature to trigger the issue.
You will see that with each restart, the job count from JobScheduler increases by 5, but the one from WorkManager does not.
I hope someone will see this message; if I need to file another issue, please tell me.
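For anyone trying to confirm the same symptom, here is a small hypothetical debug helper that logs the app's pending-job count in JobScheduler (the number the reporter sees growing by 5 per restart), which can be compared against WorkManager's own work list:

    import android.app.job.JobScheduler
    import android.content.Context
    import android.util.Log

    // Logs how many jobs this app currently has pending in JobScheduler.
    // If this grows across restarts while WorkManager's count stays flat,
    // duplicate jobs are being scheduled.
    fun logPendingJobCount(context: Context) {
        val scheduler = context.getSystemService(JobScheduler::class.java)
        Log.d("JobCount", "pending JobScheduler jobs: ${scheduler.allPendingJobs.size}")
    }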
al...@gmail.com <al...@gmail.com> #18
If your issue is specific to periodic work, I'd file a new issue since this one is a little different.
Description
Version used: alpha01
Devices/Android versions reproduced on: >= API 23
There are a few issues here. First of all, any time a job finishes, all jobs that aren't already running are rescheduled. Then, each job generates a new ID, which creates an exponential number of jobs for the system or Play Services to manage. Instead, a job should get its ID from the unique job tag or something like that.
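A sketch of the suggested direction: derive a stable job ID from the unique work tag instead of generating a fresh ID on every reschedule. The mapping below is an assumption for illustration, not WorkManager's actual scheme:

    // Deterministic: the same tag always yields the same non-negative job ID,
    // so rescheduling replaces the existing job instead of piling up new ones.
    fun stableJobId(workTag: String): Int = workTag.hashCode() and 0x7FFFFFFF

A production scheme would also need to handle hash collisions between distinct tags.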