Comments
la...@google.com <la...@google.com> #2
I have informed our engineering team of this feature request. There is currently no ETA for its implementation.
A current workaround would be to check the returned "boundingPoly" [1] "vertices" for the returned "textAnnotations". If the calculated rectangle's height > width, then your image is sideways.
[1] https://cloud.google.com/vision/reference/rest/v1/images/annotate#boundingpoly
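A minimal sketch of that check in Python, assuming `response` is the parsed JSON dict returned by images.annotate (field names per the docs linked above):

```python
# Sketch of the workaround: compare the width and height of the first
# "textAnnotations" entry's boundingPoly. Note that this box covers the
# detected text region, not the full image.

def looks_sideways(response):
    annotations = response["responses"][0]["textAnnotations"]
    vertices = annotations[0]["boundingPoly"]["vertices"]
    xs = [v.get("x", 0) for v in vertices]  # "x"/"y" may be omitted when 0
    ys = [v.get("y", 0) for v in vertices]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return height > width  # taller than wide suggests a sideways image
```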
ha...@sirenatech.com <ha...@sirenatech.com> #3
I also need this problem solved :)
ha...@sirenatech.com <ha...@sirenatech.com> #4
same :D
be...@google.com <be...@google.com> #5
+1
ha...@sirenatech.com <ha...@sirenatech.com> #6
+1
be...@google.com <be...@google.com> #7
This needs more attention. It's not just a display issue as described in the report. The co-ordinates returned in 'boundingPoly' are incorrect if the image was taken on a phone. All the x points should be y and vice versa.
The workaround does not make sense as "boundingPoly" [1] "vertices" for "textAnnotations" does not indicate the image dimensions - it indicates the dimensions of the relevant text block inside the image.
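If it helps, a sketch of undoing that x/y swap once the rotation is known. `upright_vertex` is a hypothetical helper, not part of the API; `width` and `height` are the dimensions of the image as it was submitted, and the clockwise rotation convention is an assumption to verify against your own data (edge off-by-one-pixel effects ignored):

```python
# Hypothetical helper: map a vertex reported in the rotated image's frame
# back to the upright frame. Assumes `rotation_degrees` is how far the
# submitted image is rotated clockwise from upright.

def upright_vertex(v, rotation_degrees, width, height):
    x, y = v.get("x", 0), v.get("y", 0)
    if rotation_degrees == 90:
        return {"x": y, "y": width - x}
    if rotation_degrees == 180:
        return {"x": width - x, "y": height - y}
    if rotation_degrees == 270:
        return {"x": height - y, "y": x}
    return {"x": x, "y": y}  # already upright
```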
ha...@sirenatech.com <ha...@sirenatech.com> #8
+1
be...@google.com <be...@google.com> #9
Would be great if this could be implemented.
ha...@sirenatech.com <ha...@sirenatech.com> #11
+1
jo...@google.com <jo...@google.com> #12
+1
ha...@sirenatech.com <ha...@sirenatech.com> #13
+1.
ha...@sirenatech.com <ha...@sirenatech.com> #14
The rotation information should already be available; the order of the bounding box vertices encodes it:
https://cloud.google.com/vision/docs/reference/rest/v1/images/annotate#block
Could you please test and see if that works for your case?
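A rough sketch of reading the rotation from the vertex order, assuming `vertices` is the 4-element "vertices" list from a block's boundingBox. Per the docs linked above, the vertices are ordered top-left, top-right, bottom-right, bottom-left in the text's natural reading orientation, so the 0 -> 1 edge points along the reading direction in image coordinates. Which of 90/270 corresponds to which physical rotation depends on your convention; check against a known image:

```python
# Estimate a block's rotation from its bounding box vertex order.

def block_rotation_degrees(vertices):
    x0, y0 = vertices[0].get("x", 0), vertices[0].get("y", 0)
    x1, y1 = vertices[1].get("x", 0), vertices[1].get("y", 0)
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):           # reading direction is mostly horizontal
        return 0 if dx > 0 else 180
    return 90 if dy > 0 else 270     # reading direction is mostly vertical
```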
ha...@sirenatech.com <ha...@sirenatech.com> #15
Each bounding box on the page can have a different orientation, so it can be frustrating to figure out.
fd...@google.com <fd...@google.com> #16
+1
ha...@sirenatech.com <ha...@sirenatech.com> #17
"A current workaround would be to check the returned "boundingPoly" [1] "vertices" for the returned "textAnnotations". If the calculated rectangle's heights > widths, than your image is sideways. " - The proposed workaround does not work if the image needs to be rotated at 180 degrees. Any ETA after 2y and 1 day?
ga...@google.com <ga...@google.com> #18
+1
ha...@sirenatech.com <ha...@sirenatech.com> #19
+1
Description
-We tested the US English model on a far-field device (with 3 mics) and the responses are 90% accurate. We also tested from a distance of 25 ft in a fairly noisy (fans running, etc.) office space. However, when we use the Indian English model, it works only at distances of less than 4 ft. Beyond 4 ft, the model returns empty or random text. The test environment is the same in both cases.
What you expected to happen:
-We expected the Indian English language model to perform as well as the US English language model. However, its performance is considerably worse.
Steps to reproduce:
-Test both models from a 4 ft distance.
-Then test both models from a 10 ft distance.
-Then test both models from a 20 ft distance.
-Some fan noise in the environment will help reproduce the problem. The US English model works well even in the noisy scenario.