Status: Assigned

Comments
va...@google.com
pu...@google.com #2
Hello,
Thank you for reaching out.
This issue has been forwarded to our product team. Please keep in mind that it still has to be analyzed and considered by the product team, so I can't provide an ETA for when it will be delivered. However, you can keep track of the status by following this thread.
Kind Regards
ph...@gmail.com #3
Is there any update on this issue?
ph...@gmail.com #4
Hello,
Thank you for your patience.
Our product team is currently working on it. Once there is an update, it will be shared in this thread.
Kind Regards
Description
The text annotations are very far off from where the text actually is in the image, and the results only seem to match a rotated version of the image. I am not sure what is causing it, but I have tested with my own API call too.
Two very similar JPGs produce completely different results (working.jpg and broken.jpg attached).
What you expected to happen:
The text is usually highlighted and positioned correctly
Steps to reproduce:
Try the Vision API with broken.jpg and working.jpg; I can't tell the difference between them apart from the file size (compression?).
Other information (workarounds you have tried, documentation consulted, etc):
I've tried JPGs with no compression and with some compression, plus slight file alterations like screenshotting broken.jpg and saving it as a PNG, which fixes it. It seems to be a difference in the file type. I've also tried the "Try It" interface as well as my own API call with a Google Firestore URI. I am getting inconsistent results and am stuck on what the difference is.
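One thing that would explain the PNG workaround: screenshotting re-encodes the image and drops its EXIF metadata, including any Orientation tag. A minimal sketch (assuming Pillow is available; `strip_orientation` is a hypothetical helper name) that bakes the EXIF orientation into the pixels before upload, so the service and the viewer see the same geometry:

```python
from io import BytesIO
from PIL import Image, ImageOps

def strip_orientation(jpeg_bytes: bytes) -> Image.Image:
    """Apply the EXIF Orientation tag to the pixel data, then return
    the transposed image (which no longer carries the tag), so any
    downstream service sees the image exactly as it is displayed."""
    img = Image.open(BytesIO(jpeg_bytes))
    return ImageOps.exif_transpose(img)
```

Comparing the Orientation tag (EXIF tag 274) of working.jpg and broken.jpg would confirm or rule this out.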
I've checked the textAnnotation coordinates with an image inspection tool, and the broken results seem to be accurate if you rotate the image 90 degrees, so is the Vision API rotating it internally?
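If the service really is processing broken.jpg rotated 90 degrees (e.g. honoring an EXIF Orientation tag that the inspection tool ignores), the returned vertices can be mapped back onto the unrotated image with a small coordinate transform. A sketch, assuming a 90-degree clockwise rotation (flip the direction if the boxes land mirrored):

```python
def remap_vertex(x: int, y: int, original_height: int) -> tuple[int, int]:
    """Map a vertex reported against a 90-degree-clockwise-rotated
    image back onto the original, unrotated image.

    Rotating an H x W original 90 degrees CW sends original point
    (x0, y0) to rotated point (H - 1 - y0, x0); this is the inverse.
    """
    return (y, original_height - 1 - x)
```

Applying this to each vertex of a broken boundingPoly should line the boxes up with the text if the rotation hypothesis is right.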