Status Update
Comments
cc...@google.com <cc...@google.com> #2
Re-emission of the same LiveData is racy
dv...@gmail.com <dv...@gmail.com> #3
dv...@gmail.com <dv...@gmail.com> #4
cc...@google.com <cc...@google.com> #5
@Test
fun raceTest() {
    val subLiveData = MutableLiveData(1)
    val subject = liveData(testScope.coroutineContext) {
        emitSource(subLiveData)
        emitSource(subLiveData) // crashes
    }
    subject.addObserver().apply {
        testScope.advanceUntilIdle()
    }
}
la...@google.com <la...@google.com> #6
dv...@gmail.com <dv...@gmail.com> #7
I actually have a WIP fix for it:
If your case is the one I found (emitting the same LiveData multiple times, as shown in #5), you can work around it by adding a dummy transformation.
val subLiveData = MutableLiveData(1)
val subject = liveData(testScope.coroutineContext) {
    emitSource(subLiveData.map { it })
    emitSource(subLiveData.map { it })
}
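Why the dummy transformation helps: each `map { it }` call returns a fresh LiveData instance, so the underlying mediator never sees the same source object added twice. Below is a self-contained Kotlin model of that behavior; `Source`, `Mediator`, and `wrap` are my own stand-ins for the androidx classes, not the real implementation.

```kotlin
// Minimal model: like MediatorLiveData, the mediator throws if the same
// source instance is added twice before being removed.
class Source<T>(var value: T)

class Mediator<T> {
    private val sources = mutableSetOf<Source<T>>()

    fun addSource(source: Source<T>) {
        // MediatorLiveData.addSource likewise rejects a duplicate source
        check(sources.add(source)) { "This source was already added" }
    }

    fun removeSource(source: Source<T>) {
        sources.remove(source)
    }
}

// Stand-in for subLiveData.map { it }: every call yields a distinct instance,
// so two wrapped emissions never collide in the mediator's source set.
fun <T> wrap(source: Source<T>): Source<T> = Source(source.value)
```

Adding the same instance twice fails, while adding two wrapped instances succeeds, which is exactly why the `map { it }` workaround avoids the crash.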
dv...@gmail.com <dv...@gmail.com> #8
Branch: androidx-master-dev
commit af12e75e6b4110f48e44ca121466943909de8f06
Author: Yigit Boyar <yboyar@google.com>
Date: Tue Sep 03 12:58:11 2019
Fix coroutine livedata race condition
This CL fixes a bug in the liveData builder where emitting the same
LiveData source twice would crash, because the second
emission's registration could happen before the first one was
removed as a source.
We fix it by using a suspending dispose function. It does feel
a bit hacky, but we cannot make DisposableHandle.dispose async
and we do not want to block there. This does not mean there
is a problem if the developer disposes it manually, since our emit
functions take care of disposing it (and there is
no other way to add a source to the underlying MediatorLiveData).
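The idea behind the fix described in this commit can be modeled roughly as follows. This is a hand-rolled sketch using an executor and latches, not the actual androidx code (which uses suspend functions and the main dispatcher): the point is that the emitter waits until removal of the previous source has actually run before registering the next one, so the second add can never race with a pending removal.

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.Executors

// A single "main thread" executor, standing in for Android's main dispatcher.
val mainExecutor = Executors.newSingleThreadExecutor()

// Stand-in for the MediatorLiveData's set of active sources.
val activeSources = mutableSetOf<Any>()

fun addSourceBlocking(source: Any) {
    val done = CountDownLatch(1)
    mainExecutor.execute {
        // Like MediatorLiveData.addSource, a duplicate source is an error.
        check(activeSources.add(source)) { "source already added" }
        done.countDown()
    }
    done.await()
}

fun disposeBlocking(source: Any) {
    val done = CountDownLatch(1)
    mainExecutor.execute {
        activeSources.remove(source)
        done.countDown()
    }
    // The key point of the fix: wait for the removal to complete
    // before the caller is allowed to add the next source.
    done.await()
}
```

With this ordering, "emit the same source, dispose, emit it again" always sees the removal finish first, which is the race the commit closes.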
Bug: 140249349
Test: BuildLiveDataTest#raceTest_*
Change-Id: I0b464c242a583da4669af195cf2504e2adc4de40
M lifecycle/lifecycle-livedata-ktx/api/2.2.0-alpha05.txt
M lifecycle/lifecycle-livedata-ktx/api/current.txt
M lifecycle/lifecycle-livedata-ktx/api/public_plus_experimental_2.2.0-alpha05.txt
M lifecycle/lifecycle-livedata-ktx/api/public_plus_experimental_current.txt
M lifecycle/lifecycle-livedata-ktx/api/restricted_2.2.0-alpha05.txt
M lifecycle/lifecycle-livedata-ktx/api/restricted_current.txt
M lifecycle/lifecycle-livedata-ktx/src/main/java/androidx/lifecycle/CoroutineLiveData.kt
M lifecycle/lifecycle-livedata-ktx/src/test/java/androidx/lifecycle/BuildLiveDataTest.kt
cc...@google.com <cc...@google.com> #9
Did increasing the tracing buffer improve the problem?
Where can I get the source code for this project (androidx.benchmark:benchmark-macro)?
Here:
or mirrored on github:
If you want to experiment with different perfetto capture configs you can also use this API:
dv...@gmail.com <dv...@gmail.com> #10
thank you
dv...@gmail.com <dv...@gmail.com> #11
I have a question about the perfetto command.
I found that the data sources hadn't started right after running the perfetto command, so I want to use the --background-wait option to make perfetto wait until the data sources have started before returning.
I'm also worried that the perfetto command process could be killed by LMK, so I want to run perfetto in detach mode, but --background-wait and detach mode are mutually exclusive.
How can I achieve both goals: 1) return only after all data sources have started, 2) run in the background?
la...@google.com <la...@google.com> #12
If perfetto is started from an adb shell, it cannot be killed by LMKD, so you can safely use --background-wait without worrying.
--detach is really reserved exclusively for the System Tracing app and should not be used by others.
dv...@gmail.com <dv...@gmail.com> #13
dv...@gmail.com <dv...@gmail.com> #14
thank you
dv...@gmail.com <dv...@gmail.com> #15
It works after enlarging the buffer size and always using the --background-wait option.
(I used WSL on Windows, following the directions in
Thank you.
By the way, is there a better way to simulate high system load on Android during a benchmark test?
ap...@google.com <ap...@google.com> #16
Branch: androidx-main
commit 2bb16072e48a70b44cb208b2c13dbecb79ce66f3
Author: Chris Craik <ccraik@google.com>
Date: Mon Mar 18 14:48:22 2024
Use --background-wait on API 33+
Bug: 310760059
Test: ./gradlew bench:b-m:cC
Test: manually confirmed API 33 has --background-wait, 32 does not
Relnote: "Enable blocking start on Perfetto trace record to reduce
risk of missing data at beginning of trace. Only supported on API
33+."
Change-Id: Ie6e417ad248b431ebf6096e2865265d51553be7f
M benchmark/benchmark-common/src/main/java/androidx/benchmark/perfetto/PerfettoHelper.kt
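The gating this commit describes can be sketched as below. `perfettoArgs` is a hypothetical helper for illustration, not PerfettoHelper's real API, and the fallback to plain `--background` on older releases is my assumption; the commit only confirms that `--background-wait` exists on API 33+ and not on 32.

```kotlin
// Build the perfetto invocation, only passing --background-wait where the
// commit confirms it is supported (API 33+); assume plain --background below.
fun perfettoArgs(apiLevel: Int, configPath: String, outputPath: String): List<String> {
    val backgroundFlag = if (apiLevel >= 33) "--background-wait" else "--background"
    return listOf("perfetto", backgroundFlag, "-c", configPath, "-o", outputPath)
}
```

Blocking start (`--background-wait`) is what reduces the risk of missing data at the beginning of the trace, per the Relnote above.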
Description
Component used: MacrobenchmarkRule. Version used: 1.20. Devices/Android versions reproduced on: OPPO Find N3.
If this is a bug in the library, we would appreciate it if you could attach:
For proper macrobenchmarking, I dedicated myself to finding the janks users actually perceive.
According to some studies, perceived jank means more than 5 consecutive janky frames, or a single janky frame that takes more than 5 frame intervals.
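That heuristic can be encoded directly. This is my own sketch, not a Macrobenchmark API, assuming a nominal 60 Hz frame interval of about 16.6 ms:

```kotlin
// Flag user-perceived jank per the heuristic above: more than 5 consecutive
// janky frames, or a single frame costing more than 5 frame intervals.
fun hasPerceivedJank(frameDurationsMs: List<Double>, vsyncMs: Double = 16.6): Boolean {
    var consecutive = 0
    for (d in frameDurationsMs) {
        if (d > 5 * vsyncMs) return true   // one frame longer than 5 frame slots
        if (d > vsyncMs) consecutive++ else consecutive = 0
        if (consecutive > 5) return true   // more than 5 janky frames in a row
    }
    return false
}
```

A run of smooth 16 ms frames reports no jank; a single 100 ms frame, or seven 20 ms frames in a row, trips the heuristic.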
So I add some load during macrobenchmark testing.
However, when I add load (for example, letting some apps run in the background so the system becomes slow), the perfetto session started/stopped by MacrobenchmarkRule can't capture the full trace of what happens inside measureBlock.
I checked the trace and found that the perfetto command started by Macrobenchmark runs on the little CPU cores. That may be the cause of the trace loss under high system load.
I also found that if I keep a separate perfetto trace running outside Macrobenchmark, losing traces during the macrobenchmark rarely happens.
Can you fix that?