Status Update
Comments
[Deleted User] <[Deleted User]> #2
Hello,
To assist us in conducting a thorough investigation, we kindly request your cooperation in providing the following information regarding the reported issue:
- Has this scenario ever worked as expected in the past?
- Do you see this issue constantly or intermittently?
- If this issue is seen intermittently, how often do you observe it? Is there a specific scenario or time at which it occurs?
- To help us understand the issue better, please provide detailed steps to reliably reproduce the problem.
- It would also be very helpful if you could attach screenshots of the output related to this issue.
Your cooperation in providing these details will enable us to dive deeper into the matter and work towards a prompt resolution. We appreciate your assistance and look forward to resolving this issue for you.
Thank you for your understanding and cooperation.
[Deleted User] <[Deleted User]> #3
Thank you for reaching out. Here is the information requested regarding the reported issue:
In response to the first point, I previously conducted an uptraining with approximately twenty bank statements, and this issue did not occur.
For the second point, I have attempted the uptraining three times, and the same error appeared each time.
To reproduce the issue: I labeled 76 bank statements, splitting them into 59 documents in the training dataset and 17 in the test dataset. I also modified the label schema by deactivating the following labels: account_types, bank_address, bank_name, client_address, and client_name. Additionally, I created a new label for rib (bank account identifier). I then proceeded to uptrain a new version, and the issue arises during this process.
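For reference, the uptraining step above corresponds roughly to the following call when started through the Document AI Python client instead of the console. This is only a minimal sketch: the project, location and processor identifiers are placeholders, and it assumes the 59/17 split has already been made in the labelled dataset in the console.

# Minimal sketch: start an uptraining run with the Document AI Python client.
# PROJECT_ID, LOCATION and PROCESSOR_ID are placeholders.
from google.api_core.client_options import ClientOptions
from google.cloud import documentai

PROJECT_ID = "my-project"
LOCATION = "eu"            # e.g. "us" or "eu"
PROCESSOR_ID = "1234567890abcdef"

client = documentai.DocumentProcessorServiceClient(
    client_options=ClientOptions(api_endpoint=f"{LOCATION}-documentai.googleapis.com")
)
parent = client.processor_path(PROJECT_ID, LOCATION, PROCESSOR_ID)

# Train a new version on the dataset already labelled in the console.
request = documentai.TrainProcessorVersionRequest(
    parent=parent,
    processor_version=documentai.ProcessorVersion(display_name="uptrained-v2"),
)

operation = client.train_processor_version(request=request)
print("Training operation:", operation.operation.name)

# Blocks until training finishes; this is where the failure shows up.
operation.result()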
I hope this information helps clarify the situation. Please let me know if you need any further details.
Thank you for your assistance in resolving this matter.
ar...@google.com <ar...@google.com> #4
Hello,
Thank you for reaching out to us with your request.
We have duly noted your feedback and will validate it thoroughly. While we cannot provide an estimated time of implementation or guarantee that the issue will be addressed, please be assured that your input is highly valued. Your feedback enables us to enhance our products and services.
If you are looking for a faster resolution and dedicated support for your issue, we kindly request that you file a support ticket through Google Cloud Support.
We appreciate your continued trust and support in improving our Google Cloud Platform products. In case you want to report a new issue, please do not hesitate to create a new issue on the Issue Tracker.
Once again, we sincerely appreciate your valuable feedback. Thank you for your understanding and collaboration.
[Deleted User] <[Deleted User]> #5
ar...@google.com <ar...@google.com> #6
Hello,
To troubleshoot the issue further, I have created a private ticket to request some information about the issue (you should have received a notification). Please provide the requested information there. Do not put any personal information, including project identifiers, in this public ticket.
ar...@google.com <ar...@google.com> #7
Hello,
Thank you for reaching out to us with your request.
We have duly noted your feedback and will validate it thoroughly. While we cannot provide an estimated time of implementation or guarantee that the issue will be addressed, please be assured that your input is highly valued. Your feedback enables us to enhance our products and services.
If you are looking for a faster resolution and dedicated support for your issue, we kindly request that you file a support ticket through Google Cloud Support.
We appreciate your continued trust and support in improving our Google Cloud Platform products. In case you want to report a new issue, please do not hesitate to create a new issue on the Issue Tracker.
Once again, we sincerely appreciate your valuable feedback. Thank you for your understanding and collaboration.
pc...@gmail.com <pc...@gmail.com> #8
I am facing the exact same issue. I labelled around 20 documents myself, and the moment I begin the uptraining it gets cancelled because the vast majority of my documents are judged invalid.
When I search the type of error in the documentation (
I am only training on six types of tags: line_item, line_item/description, line_item/amount, line_item/quantity, line_item/unit and line_item/unit_price.
Though I am getting the error for a large number of the documents I am submitting for training, I was able to find the specific tag causing the problem in one of them. When I remove that tag, the document is considered valid, but when I add it again in the same position, I get the error again. It seems that an empty tag is created alongside it, but it is invisible and impossible to erase, so relabelling the document does not work as a solution.
One interesting detail: when I open the document in the Document AI labelling tool and search for the "amount" labels with the browser's search tool, it counts 26 labels, but one of them is invisible and is not highlighted in the search, so manually I can only count 25 "amount" labels. It is unclear where this ghost label is being created, or how and why the search tool is finding it.
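One way to hunt for such a ghost label is to export the labelled document from the dataset and scan its JSON for entities that have no anchored text. The snippet below is only a sketch of that idea; it assumes the standard Document JSON export format, and the file name is a placeholder.

# Sketch: scan an exported Document JSON for "ghost" entities, i.e. labels
# with no mention text and no text segments. The file name is a placeholder.
import json

with open("exported_document.json", encoding="utf-8") as f:
    doc = json.load(f)

def walk(entities, parent=""):
    """Yield (label path, entity) for every entity, including nested properties."""
    for entity in entities:
        path = f"{parent}/{entity.get('type', '?')}" if parent else entity.get("type", "?")
        yield path, entity
        yield from walk(entity.get("properties", []), path)

counts = {}
for path, entity in walk(doc.get("entities", [])):
    counts[path] = counts.get(path, 0) + 1
    has_text = bool(entity.get("mentionText"))
    has_anchor = bool(entity.get("textAnchor", {}).get("textSegments"))
    if not has_text and not has_anchor:
        print("possible ghost label:", path, entity.get("id"))

# Compare these counts with what the labelling tool shows (e.g. 26 vs 25 "amount").
for path, n in sorted(counts.items()):
    print(f"{path}: {n}")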
This error seems to be related to this one:
A reply suggests relabelling as a solution, but, as I stated, that didn't work in my case.
Some other posts that describe a similar error can be found here:
And most importantly:
---
It would be great if you could finally address this issue. According to the information I could find in different forums on the web, it is a problem that has been around for a few years now without a clear solution. It is sadly blocking my company from adopting Document AI, and if we cannot get past this soon we will have to find another service, which we would regret as we love how straightforward everything is in GCP.
Best regards.
Description
I can't find anything relevant in the logs or in the troubleshooting documentation.
There is a warning that the dataset does not meet the recommended criteria, but it does meet the minimum criteria, and I have trained processors like this before without error.
The dialog says:
Failed to train processor version
Internal error encountered.
Request ID: 16429219895591395042
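The dialog only surfaces the internal error and the request ID. If the same training is started through the Document AI Python client instead of the console, the returned long-running operation carries a TrainProcessorVersionMetadata message whose dataset validation fields may expose per-document errors that the dialog does not show. The following is only a sketch of reading those fields, assuming the v1 client library and an operation object obtained from train_processor_version().

# Sketch: inspect the metadata of a train_processor_version() operation for
# dataset validation errors that the console dialog does not surface.
from google.api_core.operation import Operation
from google.cloud import documentai


def print_dataset_validation(operation: Operation) -> None:
    """operation: the object returned by client.train_processor_version()."""
    metadata = documentai.TrainProcessorVersionMetadata(operation.metadata)
    for name, validation in (
        ("training", metadata.training_dataset_validation),
        ("test", metadata.test_dataset_validation),
    ):
        print(
            f"{name} dataset: {validation.document_error_count} document error(s), "
            f"{validation.dataset_error_count} dataset error(s)"
        )
        for error in validation.document_errors:
            print("  document error:", error.message)
        for error in validation.dataset_errors:
            print("  dataset error:", error.message)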