Comments
ds...@google.com <ds...@google.com> #2
Hello,
Thanks for the feature request!
This has been forwarded to the Vertex AI Engineering Team so that they may evaluate it. Note that there are no ETAs or guarantees of implementation for feature requests. All communication regarding this feature request is to be done here.
Description
This will create a feature request which anybody can view and comment on.
Please describe your requested enhancement. Good feature requests will solve common problems or enable new use cases.
What you would like to accomplish: Since batch prediction pre-processes the input data, it would be useful to know the percentage or amount of data lost during pre-processing, and how the batch prediction output size compares as a result. Adding a flag that warns about a format issue that dramatically shrinks the input table, or that reports that the input data corpus was discarded, would raise awareness of missing or incorrectly formatted input data.
How this might work: In the batch prediction data flow, the size of the raw input can be compared with the size of the pre-processed output that feeds the batch prediction. If pre-processing reduces the data size by more than 50%, raise a visible flag.
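A minimal sketch of the proposed check, assuming row counts are available before and after pre-processing. The function name and threshold are illustrative only and not part of any Vertex AI API:

```python
def check_preprocessing_loss(raw_row_count, preprocessed_row_count, threshold=0.5):
    """Return a warning string if pre-processing discarded more than
    `threshold` (default 50%) of the input rows, else None.

    Hypothetical helper for illustration; not an existing Vertex AI call.
    """
    if raw_row_count == 0:
        return "Warning: input is empty."
    loss_ratio = 1 - preprocessed_row_count / raw_row_count
    if loss_ratio > threshold:
        # The visible flag proposed above: surface the loss to the user.
        return (
            f"Warning: pre-processing discarded {loss_ratio:.0%} of the input "
            f"({raw_row_count - preprocessed_row_count} of {raw_row_count} rows); "
            f"check the input data format."
        )
    return None
```

For example, 1000 input rows reduced to 400 after pre-processing (a 60% loss) would trigger the warning, while a reduction to 900 rows would not.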
If applicable, reasons why alternative solutions are not sufficient: It is important that the batch process output includes information that lets the user quickly identify the source of undesired behavior in the managed service, leaving room for quick self-support.
Other information (workarounds you have tried, documentation consulted, etc):