Status Update
Comments
cc...@google.com #2
Ah, good catch, this is why we attached those timestamps in the first place on the trace files.
We don't need to do it for benchmarkData.json (and I'd like to avoid breaking that file output API), but we should sort out how we do this for baselineProfile files, which are the more pressing item here, since we try to link those.
+Josh, could Studio do some sort of per-am-instrument-based renaming for files that benchmark links? While we can rename in this case, in general we'd like to keep stable names for file outputs that we expect tooling to interact with (like json outputs, or baseline profile files).
cc...@google.com #3
Note: we fixed the baseline profile issue by making two copies of the file, one with a unique name we pass to Studio, and one with a stable name to make pulling easier:
Will fix the remainder here by adding a timestamp to the stack sampling / method tracing filenames.
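The two-copy approach described above could be sketched roughly as follows. This is an illustrative sketch only, not the actual androidx.benchmark implementation; the function name, extension, and timestamp format are assumptions:

```kotlin
import java.io.File
import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale

// Hypothetical helper (not the real androidx.benchmark API): writes the
// profile once under a unique, timestamped name that can be linked without
// colliding with a previously opened session, then copies it to a stable
// name that scripts can pull from a fixed path.
fun publishProfile(outputDir: File, baseName: String, contents: ByteArray): Pair<File, File> {
    val timestamp = SimpleDateFormat("yyyy-MM-dd-HH-mm-ss", Locale.US).format(Date())
    val uniqueFile = File(outputDir, "$baseName-$timestamp.txt")
    uniqueFile.writeBytes(contents)
    val stableFile = File(outputDir, "$baseName.txt")
    uniqueFile.copyTo(stableFile, overwrite = true)
    return uniqueFile to stableFile
}
```

The trade-off is duplicated bytes on disk, in exchange for both a collision-free name for tooling links and a predictable name for `adb pull`-style workflows.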
ap...@google.com #4
Branch: androidx-main
commit f7a6c1936ed5b4f42ec99dd6eb30b34365bb2136
Author: Chris Craik <ccraik@google.com>
Date: Mon Jan 24 16:05:48 2022
Add timestamp to profiler traces
Fixes: 214917025
Relnote: "Fixed issue where microbench profiler traces would fail to be updated in subsequent runs when linked in Studio output"
Test: ./gradlew benchmark:benchmark-common:cC
Test: validated new filenames with "StackSampling" profiler enabled
Change-Id: I5ae4d8eae1f463f75fcc43ceb751c01d4e5f4d8d
M benchmark/benchmark-common/src/androidTest/java/androidx/benchmark/ProfilerTest.kt
M benchmark/benchmark-common/src/main/java/androidx/benchmark/Profiler.kt
Description
Version used: 1.1.0-beta01
Devices/Android versions reproduced on: any
When clicking the link to a Stack Sampling Trace or Method Sampling Trace, it sometimes doesn't open the proper trace file.
This happens when the profiling session was already open in the Android Studio profiler.
This could be resolved by attaching a datetime to the profiling file's name.
That would also prevent the file from being overwritten on every benchmark run (we already do the same thing with perfetto-trace files).
The question is whether to do the same for benchmarkData.json?
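The datetime-suffix idea suggested above could look like the following sketch. All names here are assumptions for illustration, not the actual androidx.benchmark code:

```kotlin
import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale

// Hypothetical naming helper: appends a datetime to the trace filename so
// each benchmark run produces a distinct file, and a Studio link never
// resolves to a stale trace from an already-open profiling session.
fun traceFileName(benchmarkName: String, profilerSuffix: String, date: Date = Date()): String {
    val timestamp = SimpleDateFormat("yyyy-MM-dd-HH-mm-ss", Locale.US).format(date)
    return "$benchmarkName-$profilerSuffix-$timestamp.trace"
}
```

A second-granularity timestamp is enough here because a given benchmark is not re-run within the same second; a counter or UUID would work equally well if that assumption ever broke.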