Comments
pu...@google.com
je...@google.com #2
Hi there,
I am unable to retrieve and test your image, as the bucket you provided no longer seems to be available. Would it be possible for you to share the original image (twoc.png) that you used in the Cloud Vision API Explorer?
Additionally, it would be worth verifying whether any of the documented best practices apply to your image. For example, when using TEXT_DETECTION, the Cloud Vision OCR needs sufficient resolution to detect characters, so a minimum image size of 1024 x 768 pixels is advised.
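The minimum-size guidance above can be checked programmatically before an image is sent to the API. A minimal sketch using only the JDK (the 1024 x 768 threshold comes from the advice above; the class and method names are my own):

```java
import java.awt.image.BufferedImage;

public class OcrSizeCheck {
    // Minimum dimensions suggested for TEXT_DETECTION, per the advice above.
    static final int MIN_WIDTH = 1024;
    static final int MIN_HEIGHT = 768;

    // Returns true if the image meets the suggested minimum OCR resolution.
    static boolean meetsOcrMinimum(BufferedImage img) {
        return img.getWidth() >= MIN_WIDTH && img.getHeight() >= MIN_HEIGHT;
    }

    public static void main(String[] args) {
        BufferedImage small = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
        BufferedImage ok = new BufferedImage(1280, 960, BufferedImage.TYPE_INT_RGB);
        System.out.println(meetsOcrMinimum(small)); // false
        System.out.println(meetsOcrMinimum(ok));    // true
    }
}
```

Images below the threshold can be upscaled (or re-scanned at a higher DPI) before being submitted, which often improves character detection.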
je...@google.com #3
I have attached an example image file.
When I set multiple languages in "Language Hints", such as Arabic and English, the output is bad: line breaks are placed in the wrong positions.
However, when I open the same image in Google Docs, the text comes out correctly. I think Google Docs uses a different OCR mechanism or some preprocessing (e.g. resizing, layout recognition, a stronger language analyzer, or other image processing).
Maybe this image produces good output when you test it, but other images in languages such as Arabic produce bad output, and the user has to edit the text file by hand.
How can I get output like Google Docs using Google Cloud TEXT_DETECTION?
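The Google Docs OCR pipeline is not public, so an exact match cannot be guaranteed, but Cloud Vision does offer a second OCR feature, DOCUMENT_TEXT_DETECTION, which is optimized for dense text and generally handles line and paragraph segmentation better than TEXT_DETECTION. A request sketch in the REST API's JSON form (the bucket path is a placeholder, not a real file):

```json
{
  "requests": [
    {
      "image": { "source": { "imageUri": "gs://YOUR_BUCKET/twoc.png" } },
      "features": [ { "type": "DOCUMENT_TEXT_DETECTION" } ],
      "imageContext": { "languageHints": ["ar", "en"] }
    }
  ]
}
```

The response's fullTextAnnotation field groups the text into pages, blocks, paragraphs, and words, which usually preserves line structure better for scripts such as Arabic than the flat textAnnotations list returned by TEXT_DETECTION.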
Description
Problem you have encountered:
When running the Java demo code for any Cloud Vision feature in a non-standalone environment, such as a WAR package, the ImageAnnotatorClient#batchAnnotateImages method blocks the thread indefinitely with no feedback.
What you expected to happen:
ImageAnnotatorClient#batchAnnotateImages to return a valid BatchAnnotateImagesResponse object after blocking the thread for a reasonable, expected amount of time.
Steps to reproduce:
Run the Java demo code for Google Cloud Vision in any non-standalone environment, such as a WAR package.
My specific environment used the LANDMARK_DETECTION feature; however, all of the features hang in a non-standalone environment. I used Maven to build the WAR and deployed it to a GlassFish 7.0.14 server, with Apache NetBeans 21 as my IDE.
Other information (workarounds you have tried, documentation consulted, etc):
I tried running the call in a separate thread, changing firewall rules, and switching my test server to Tomcat.
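One workaround not listed above (a suggestion, not a confirmed fix for this issue) is to configure an explicit deadline on the client, so the indefinite hang at least becomes a diagnosable DeadlineExceededException. The google-cloud-vision client exposes this through its gax RetrySettings; a sketch, assuming a client version that uses org.threeten.bp.Duration (newer versions use java.time.Duration instead):

```java
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.ImageAnnotatorSettings;
import org.threeten.bp.Duration;

public class VisionWithDeadline {
    public static ImageAnnotatorClient createClient() throws Exception {
        ImageAnnotatorSettings.Builder builder = ImageAnnotatorSettings.newBuilder();
        // Cap the total time batchAnnotateImages may spend, including retries.
        builder.batchAnnotateImagesSettings().setRetrySettings(
            builder.batchAnnotateImagesSettings().getRetrySettings().toBuilder()
                .setTotalTimeout(Duration.ofSeconds(30))
                .build());
        return ImageAnnotatorClient.create(builder.build());
    }
}
```

If the deadline then fires on every call, the underlying cause in servlet containers is often a gRPC/Netty version conflict inside the WAR; importing the com.google.cloud:libraries-bom in the Maven dependencyManagement section keeps those transitive versions aligned.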
Here is a related Stack Overflow question: