Comments
ra...@google.com <ra...@google.com> #2
Are you sure you are emitting these trace sections every iteration?
a....@gmail.com <a....@gmail.com> #3
Yes, you can check in the attached archive: some files contain the expected number of runs, but others contain fewer.
a....@gmail.com <a....@gmail.com> #4
Here's how we run the tests; there is no variability here.
cc...@google.com <cc...@google.com> #5
If any metric is missing from a given iteration, results from that iteration are skipped:
This is definitely surprising and can be undesirable. It was done to work around flakes in certain underlying metric sources, and to accommodate how metric results are output into JSON (there is no easy way to represent gaps in a metric's array).
A few things we could do to improve this:
- Print what's missing so you can investigate - this used to be more obvious back before we had so many extensible ways to generate metrics.
- Print the fact that an iteration was skipped to the Studio output, and possibly to the JSON
- Offer a flag to enable either throwing or storing null or NaN in those arrays, though this would need to be opt-in to begin with (see the sketch below for what this would mean for the runs arrays).
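To make that last option concrete, here is a minimal sketch. It is purely illustrative and not the library's actual aggregation code; the policy enum and function names are hypothetical. It shows how a skip-the-iteration policy differs from a store-NaN policy when building per-metric runs arrays.

```kotlin
// Illustrative sketch only, not the androidx.benchmark implementation: shows what
// "skip the iteration" vs "store NaN" would mean for the per-metric runs arrays
// when one metric is missing from an iteration. All names are hypothetical.
enum class MissingMetricPolicy { SKIP_ITERATION, STORE_NAN }

fun aggregateRuns(
    iterations: List<Map<String, Double>>,  // one metricName -> value map per iteration
    policy: MissingMetricPolicy
): Map<String, List<Double>> {
    val allMetrics = iterations.flatMap { it.keys }.toSet()
    val runs = allMetrics.associateWith { mutableListOf<Double>() }
    for (iteration in iterations) {
        val missing = allMetrics - iteration.keys
        if (missing.isNotEmpty() && policy == MissingMetricPolicy.SKIP_ITERATION) {
            // Current behavior described above: the whole iteration is dropped, so the
            // runs arrays stay aligned but end up shorter than the iteration count.
            continue
        }
        for (metric in allMetrics) {
            // STORE_NAN keeps every runs array the same length as the iteration count,
            // making the gap visible instead of silently shrinking the arrays.
            runs.getValue(metric).add(iteration[metric] ?: Double.NaN)
        }
    }
    return runs
}
```

With the STORE_NAN policy, each runs array always matches the requested iteration count, so a missing metric would be easy to spot in the JSON output.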
a....@gmail.com <a....@gmail.com> #6
I would like option 3 to be implemented, where a flag is offered to store null or NaN in the arrays when a metric is missing.
a....@gmail.com <a....@gmail.com> #7
I increased the buffer size fourfold by replacing the default Perfetto config, which resolved my issue with disappearing metrics. This solution might be helpful for others facing similar problems. However, it would still be preferable to have a fix for the original problem.
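Comment #7 does not show the replacement config or how it was supplied, so as a hedged illustration only, an enlarged-buffer Perfetto text config might look like the following, held as a Kotlin string. The buffer size, atrace categories, and package name are placeholders, not values confirmed in this thread.

```kotlin
// Hypothetical Perfetto text config with an enlarged central buffer.
// All values below are placeholders; adjust to your device and target package.
val largeBufferPerfettoConfig = """
    buffers {
        size_kb: 131072          # placeholder: much larger than a typical default
        fill_policy: RING_BUFFER
    }
    data_sources {
        config {
            name: "linux.ftrace"
            ftrace_config {
                atrace_categories: "view"
                atrace_categories: "binder_driver"
                atrace_apps: "com.example.app"   # placeholder target package
            }
        }
    }
    duration_ms: 30000
""".trimIndent()
```

A larger central buffer reduces the chance that slices are overwritten before the trace is flushed, which is consistent with the fourfold increase resolving the missing measurements.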
Description
Component used: benchmark-macro-junit
Version used: 1.3.0
Devices/Android versions reproduced on: emulator64_x86_64 REL
We have performance tests that include custom metrics added via TraceSectionMetric. For some reason, I'm seeing a discrepancy between the total number of runs and the number of measurements in the runs array for each metric. This discrepancy occurs only in some runs of the tests, not every time.

My configuration looks like this:
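The reporter's actual configuration is not reproduced in this thread; purely as a hedged illustration, a minimal Macrobenchmark test using TraceSectionMetric might look like the sketch below. The package name, trace section name, and iteration count are placeholders, and the @OptIn annotation may not be needed on every library version.

```kotlin
// Hypothetical example, not the reporter's configuration.
import androidx.benchmark.macro.CompilationMode
import androidx.benchmark.macro.ExperimentalMetricApi
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.TraceSectionMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@OptIn(ExperimentalMetricApi::class)  // TraceSectionMetric is experimental in some versions
@RunWith(AndroidJUnit4::class)
class ExampleBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun startupWithCustomSection() = benchmarkRule.measureRepeated(
        packageName = "com.example.app",           // placeholder target package
        metrics = listOf(
            StartupTimingMetric(),
            // Measures slices named "MyCustomSection" emitted by the target app,
            // e.g. via androidx.tracing's trace("MyCustomSection") { ... }
            TraceSectionMetric("MyCustomSection")
        ),
        compilationMode = CompilationMode.DEFAULT,
        startupMode = StartupMode.COLD,
        iterations = 10                            // placeholder iteration count
    ) {
        pressHome()
        startActivityAndWait()
    }
}
```

If an iteration is dropped because a metric is missing, the corresponding runs array in the JSON ends up shorter than the iteration count, which matches the discrepancy described above.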
Here's an example of jq command output:
I have tried manually querying some metrics using the Perfetto batch trace processor, and each trace file does indeed contain the requested metric. However, I don't understand why some values are not recorded in the final JSON report. I would prefer to analyze the already generated JSON file rather than the raw traces.
Any insights into why this might be happening would be greatly appreciated.