Status: Assigned
Comments
ti...@google.com #2
I have informed our engineering team of this feature request. There is currently no ETA for its implementation.
A current workaround is to check the "vertices" of the "boundingPoly" [1] returned for each "textAnnotation". If the calculated rectangle's height is greater than its width, then your image is sideways.
[1] https://cloud.google.com/vision/reference/rest/v1/images/annotate#boundingpoly
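Below is a minimal sketch of that workaround, assuming the google-cloud-vision Python client; the height-greater-than-width check follows the comment above, and the helper name looks_sideways is hypothetical.

```python
from google.cloud import vision


def looks_sideways(image_path):
    """Rough orientation check based on the bounding box of detected text.

    Assumption: the height > width heuristic from the comment above is
    taken as-is; it is not an official orientation feature of the API.
    """
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    response = client.text_detection(image=image)
    annotations = response.text_annotations
    if not annotations:
        return False  # no text detected, orientation cannot be inferred

    # The first textAnnotation spans the full block of detected text.
    vertices = annotations[0].bounding_poly.vertices
    xs = [v.x for v in vertices]
    ys = [v.y for v in vertices]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)

    # Per the workaround: taller than wide means the image is sideways.
    return height > width
```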
Description
When a document is sent for prediction through BatchPrediction [1], there should be an option to get more granular predictions than a single label for the whole document. An option to get a line-by-line prediction would be useful.
How this might work:
The system would understand that, given the format of the document, a prediction needs to be made for every line rather than for the document as a whole.
If applicable, reasons why alternative solutions are not sufficient:
Having to split the text into one-line snippets and submit a CSV containing all of them is a tedious workaround that could be avoided.
---
[1]
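A minimal sketch of the workaround described above: split a document into one-line texts and write them to a CSV for batch prediction. The single-column CSV layout and the helper name document_to_csv are assumptions for illustration; only the line-splitting step itself comes from the description.

```python
import csv


def document_to_csv(document_path, csv_path):
    """Split a document into non-empty lines and write one line per CSV row."""
    with open(document_path, "r", encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]

    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for line in lines:
            writer.writerow([line])  # one prediction input per row

    return len(lines)
```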