Status Update
Comments
ma...@gmail.com <ma...@gmail.com> #2
I am not sure I understand the use case. How can the benchmark be close to a real-world scenario when it's not possible to do this right now? Which scenario is it?
In any case, since this would be for benchmarking, this would clearly not be available through the public DSL. We should find a semi-private way of doing this (maybe the private variant API object could offer that functionality, for instance, or a property).
je...@google.com <je...@google.com> #3
We want benchmarks to measure code after ProGuard / R8, but it's not possible to turn that on for androidTests in library modules at the moment (to my knowledge?).
Benchmarks are also a public-facing thing, but we have a plugin to help configure Gradle builds for our users, so if support for this ends up in a private API, we could perhaps try to keep those usages localized to our code.
je...@google.com <je...@google.com> #4
Any update on the status of this request and when it can be supported?
Thanks,
Amanda
an...@google.com <an...@google.com> #5
This is not part of our OKRs at this point, so we are not tackling it soon. At first glance, we would need to simulate usage patterns to minify against and such; this seems like a substantial amount of work. There are not a lot of library modules that have android tests; most rely only on unit tests.
How important is this? We are out of PM right now, but I suspect the next step will be to negotiate with J. Eason and xav@ to settle on a priority level.
je...@google.com <je...@google.com> #6
This is a high-priority request for Compose, to enable their benchmarks to measure release-accurate performance. (Micro) benchmarks are library modules, as they don't need the complexity of multi-apk tests - they're self-measuring APKs that depend on libraries. (d.android.com/benchmark)
there are not a lot of library module that have android tests, most only rely on unit-tests.
To clarify, this is for com.android.library modules, not jars - I'd expect most of those to use android tests (all of the libraries in Jetpack, for example, do).
we would need to simulate usage patterns to minify against and such, this seems substantial amount of work
Simulate usage patterns? I don't understand - the dev can provide keep rules for test infra / classes themselves if necessary. Long term, keep rules should be provided by the test libraries.
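For illustration, keep rules of this sort in the benchmark module's ProGuard/R8 configuration would prevent the minifier from stripping benchmark infrastructure (the package and class names below are hypothetical examples, not a prescribed configuration):

```
# Illustrative keep rules; package names are hypothetical examples.
# Keep the benchmark library's own classes.
-keep class androidx.benchmark.** { *; }
# Keep the module's benchmark classes so R8 does not remove or rename them.
-keep class com.example.mylib.benchmark.** { *; }
```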
Description
SourceDirectories.addGeneratedSourceDirectory works the same way as the artifact API and takes over the output for the task. For example, in the toml/gen recipe, running the verification task shows that the location is changed to build/generated/toml/debugAddCustomSources.

It does in fact work if you pass this to multiple variants. For instance, changing the recipe to something like this:
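(A hypothetical sketch of what such a recipe could look like in a build.gradle.kts file; the AddCustomSourcesTask class and the task-naming scheme are illustrative, while addGeneratedSourceDirectory itself is the actual AGP variant API under discussion:)

```kotlin
import com.android.build.api.variant.AndroidComponentsExtension
import org.gradle.api.DefaultTask
import org.gradle.api.file.DirectoryProperty
import org.gradle.api.tasks.OutputDirectory
import org.gradle.api.tasks.TaskAction

// Illustrative task type that generates sources into its output directory.
abstract class AddCustomSourcesTask : DefaultTask() {
    @get:OutputDirectory
    abstract val outputDirectory: DirectoryProperty

    @TaskAction
    fun generate() {
        // Write generated source files under outputDirectory.get().asFile ...
    }
}

val androidComponents = extensions.getByType(AndroidComponentsExtension::class.java)

// Register one task per variant and wire each one in through
// addGeneratedSourceDirectory -- i.e. "passing this to multiple variants".
androidComponents.onVariants { variant ->
    val taskProvider = tasks.register(
        "${variant.name}AddCustomSources",
        AddCustomSourcesTask::class.java
    )
    variant.sources.java?.addGeneratedSourceDirectory(
        taskProvider,
        AddCustomSourcesTask::outputDirectory
    )
}
```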
will in fact work. However, the shape of the API makes it look like it would not.
Part of this is the way some of our other APIs work (for example transforms on Artifact are very variant specific).
At the very least we should document that it can in fact be used for multiple variants. We should also consider making it more obvious (maybe the API just needs to receive a Provider<Directory>, though I think there are potential issues there, as the location should be in build/?).