Status Update
Comments
ra...@google.com <ra...@google.com> #2
Hello,
Thank you for reaching out to us with your request.
We have duly noted your feedback and will thoroughly validate it. While we cannot provide an estimated time of implementation or guarantee that the issue will be addressed, please be assured that your input is highly valued. Your feedback enables us to enhance our products and services.
We appreciate your continued trust and support in improving our Google Cloud Platform products. In case you want to report a new issue, please do not hesitate to create a new issue on the Issue Tracker.
Once again, we sincerely appreciate your valuable feedback. Thank you for your understanding and collaboration.
a....@gmail.com <a....@gmail.com> #3
Will there be any communication to me when Google starts working on this feature?
a....@gmail.com <a....@gmail.com> #4
Here's how we run the tests; there is no variability here.
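The snippet isn't reproduced here; as a minimal sketch of that kind of fixed, non-variable run (the package name, section name, and iteration count below are placeholders, not our real values):

    import androidx.benchmark.macro.CompilationMode
    import androidx.benchmark.macro.ExperimentalMetricApi
    import androidx.benchmark.macro.StartupMode
    import androidx.benchmark.macro.StartupTimingMetric
    import androidx.benchmark.macro.TraceSectionMetric
    import androidx.benchmark.macro.junit4.MacrobenchmarkRule
    import androidx.test.ext.junit.runners.AndroidJUnit4
    import org.junit.Rule
    import org.junit.Test
    import org.junit.runner.RunWith

    @OptIn(ExperimentalMetricApi::class)
    @RunWith(AndroidJUnit4::class)
    class ExampleBenchmark {
        @get:Rule
        val benchmarkRule = MacrobenchmarkRule()

        @Test
        fun startup() = benchmarkRule.measureRepeated(
            packageName = "com.example.app",        // placeholder package
            metrics = listOf(
                StartupTimingMetric(),
                TraceSectionMetric("LoadFeed"),     // placeholder section name
            ),
            iterations = 10,                        // fixed; identical for every run
            startupMode = StartupMode.COLD,
            compilationMode = CompilationMode.Partial(),
        ) {
            pressHome()
            startActivityAndWait()
        }
    }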
cc...@google.com <cc...@google.com> #5
If any metric is missing from a given iteration, results from that iteration are skipped.
This is definitely surprising and can be undesirable. It was done to work around flakes in certain underlying metric sources, and to accommodate how metric results are output into JSON (there is no easy way to represent gaps in a metric's array).
A few things we could do to improve this:
- Print what's missing so you can investigate - this used to be more obvious back before we had so many extensible ways to generate metrics.
- Print the fact that an iteration was skipped to the Studio output, and possibly to the JSON
- Offer a flag to enable either throwing, or storing null or NaN in those arrays, though this would need to be opt-in to begin with (see the rough sketch below).
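To make that trade-off concrete, here is a purely illustrative sketch, not the library's actual code (all names are made up). It contrasts the current skip-the-whole-iteration behavior with a hypothetical opt-in flag that stores NaN for a missing sample, so the arrays always match the iteration count:

    // Illustrative only: merges per-iteration metric maps into per-metric sample arrays.
    // allowPartialIterations = false mirrors today's behavior (drop the whole iteration);
    // true stores NaN so every metric's array length equals the iteration count.
    fun aggregateIterations(
        iterations: List<Map<String, Double>>,   // metricName -> value, one map per iteration
        metricNames: Set<String>,
        allowPartialIterations: Boolean = false, // hypothetical opt-in flag
    ): Map<String, List<Double>> {
        val runs = metricNames.associateWith { mutableListOf<Double>() }
        for (iteration in iterations) {
            if (!allowPartialIterations && !iteration.keys.containsAll(metricNames)) {
                continue // current behavior: any missing metric drops the whole iteration
            }
            for (name in metricNames) {
                runs.getValue(name) += iteration[name] ?: Double.NaN
            }
        }
        return runs
    }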
a....@gmail.com <a....@gmail.com> #6
I would like option 3 to be implemented, where a flag is offered to store null or NaN in the arrays when a metric is missing.
a....@gmail.com <a....@gmail.com> #7
I increased the buffer size fourfold by replacing the default Perfetto config, which resolved my issue with disappearing metrics. This solution might be helpful for others facing similar problems. However, it would still be preferable to have a fix for the original problem.
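For context, a rough sketch of the kind of config change involved; the buffer size, atrace categories, and package name below are placeholders rather than my actual values, and the hook for replacing the default config (e.g. an experimental PerfettoConfig.Text, if your benchmark version exposes one) varies by library version:

    // Placeholder Perfetto TraceConfig (text proto) with a larger ring buffer.
    // Supply it through whatever mechanism your benchmark version offers for
    // overriding the default Perfetto config.
    val largeBufferPerfettoConfig = """
        buffers {
            size_kb: 262144            # placeholder; roughly 4x a typical default
            fill_policy: RING_BUFFER
        }
        data_sources {
            config {
                name: "linux.ftrace"
                ftrace_config {
                    ftrace_events: "ftrace/print"   # needed for trace section slices
                    atrace_categories: "view"
                    atrace_apps: "com.example.app"  # placeholder package
                }
            }
        }
        duration_ms: 30000
    """.trimIndent()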
Description
Component used: benchmark-macro-junit
Version used: 1.3.0
Devices/Android versions reproduced on: emulator64_x86_64 REL
We have performance tests that include custom metrics added via TraceSectionMetric. For some reason, I'm seeing a discrepancy between the total number of runs and the number of measurements in the runs array for each metric. This discrepancy occurs only in some runs of the tests, not every time.

My configuration looks like this:
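The original snippet isn't shown here; roughly, the relevant part is a metrics list along these lines (section names are placeholders for our real trace sections):

    import androidx.benchmark.macro.ExperimentalMetricApi
    import androidx.benchmark.macro.FrameTimingMetric
    import androidx.benchmark.macro.TraceSectionMetric

    // Placeholder section names; the real ones match trace("...") /
    // Trace.beginSection(...) calls in the app under test.
    @OptIn(ExperimentalMetricApi::class)
    val metrics = listOf(
        FrameTimingMetric(),
        TraceSectionMetric("LoadFeed"),
        TraceSectionMetric("BindItem", mode = TraceSectionMetric.Mode.Sum),
    )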
Here's an example of jq command output:
I have tried manually querying some of the metrics using the Perfetto batch trace processor, and indeed each trace file contains the requested metric. However, I don't understand why some values are not recorded in the final JSON report. I would prefer to analyze the already generated JSON file.
Any insights into why this might be happening would be greatly appreciated.