Status Update
Comments
va...@google.com <va...@google.com>
ab...@google.com <ab...@google.com> #2
Hello,
Thank you for reaching out to us with your request.
We have noted your feedback and will validate it thoroughly. While we cannot provide an estimated time of implementation or guarantee that the issue will be addressed, please be assured that your input is highly valued. Your feedback enables us to enhance our products and services.
We appreciate your continued trust and support in improving our Google Cloud Platform products. If you want to report a new issue, please do not hesitate to create a new issue on the Issue Tracker.
Once again, we sincerely appreciate your valuable feedback. Thank you for your understanding and collaboration.
ph...@medxoom.com <ph...@medxoom.com> #3
There is a DLQ configured with 5 delivery attempts before messages are forwarded to it, yet the attempt counts in our logs go far beyond that:
2024-05-24T20:22:04.775Z info: Handler[0] 'Nack'ing message text:'NACKs', orderingKey:'nack9', messageId:'9630716938658656', attempt:'77'
2024-05-24T20:22:04.775Z info: Handler[0] 'Nack'ing message text:'NACKs', orderingKey:'nack6', messageId:'9630754437488409', attempt:'75'
2024-05-24T20:22:04.775Z info: Handler[0] 'Nack'ing message text:'NACKs', orderingKey:'nack4', messageId:'9630745068394146', attempt:'75'
2024-05-24T20:22:04.824Z info: Handler[0] 'Nack'ing message text:'NACKs', orderingKey:'nack3', messageId:'9630756171334489', attempt:'81'
2024-05-24T20:22:04.826Z info: Handler[0] 'Nack'ing message text:'NACKs', orderingKey:'nack7', messageId:'9630700040032946', attempt:'79'
...
2024-05-24T20:22:38.607Z info: Handler[0] 'Ack'ing message text:'NACKs', orderingKey:'ack10', messageId:'9630719865842657', attempt:'2'
Presumably, there should be at most 5 attempts per messageId for a nacked message and 1 attempt for an acked one.
However, if an "attempt" is counted each time the message is sent out but no subscriber picks it up before the ack deadline, then that would explain the numbers we're seeing.
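For reference, this is roughly how such a subscription would be created with the Google.Cloud.PubSub.V1 admin API (a minimal sketch only; project, topic, and subscription names are placeholders, and the ack deadline value is an assumption):

using Google.Cloud.PubSub.V1;

var admin = await SubscriberServiceApiClient.CreateAsync();
await admin.CreateSubscriptionAsync(new Subscription
{
    SubscriptionName = SubscriptionName.FromProjectSubscription("my-project", "worker-sub"),
    TopicAsTopicName = TopicName.FromProjectTopic("my-project", "worker-topic"),
    EnableMessageOrdering = true,
    EnableExactlyOnceDelivery = true,
    AckDeadlineSeconds = 60,
    // Messages nacked (or left unacked) this many times are forwarded to the DLQ topic.
    DeadLetterPolicy = new DeadLetterPolicy
    {
        DeadLetterTopic = TopicName.FromProjectTopic("my-project", "worker-dlq").ToString(),
        MaxDeliveryAttempts = 5
    }
});

Note that dead lettering only takes effect if the Pub/Sub service agent has publish permission on the dead-letter topic and subscribe permission on the subscription.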
je...@google.com <je...@google.com> #4
Hello,
To troubleshoot the issue further, I have created a private ticket for you to provide some information about the issue (you should have received a notification). Please provide the requested information there, and do not put any personal information, including project identifiers, in this public ticket.
je...@google.com <je...@google.com> #5
Hello,
Thank you for reaching out to us with your request.
We have noted your feedback and will validate it thoroughly. While we cannot provide an estimated time of implementation or guarantee that the issue will be addressed, please be assured that your input is highly valued. Your feedback enables us to enhance our products and services.
We appreciate your continued trust and support in improving our Google Cloud Platform products. If you want to report a new issue, please do not hesitate to create a new issue on the Issue Tracker.
Once again, we sincerely appreciate your valuable feedback. Thank you for your understanding and collaboration.
Description
Problem you have encountered:
We have worker services that use a Pull subscription set up with Message Ordering and Exactly-Once Delivery. The subscription is also set up with a DLQ.
When several messages are nack'd enough times to go to the DLQ, the service stops receiving messages from the subscription. Messages back up in the queue, and the service is essentially dead in the water until we restart it.
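For context, the worker's receive path looks roughly like the following (a simplified sketch with the .NET SubscriberClient; the subscription name and the nack condition are placeholders, not our actual handler logic):

using System.Threading;
using System.Threading.Tasks;
using Google.Cloud.PubSub.V1;

var subscriber = await SubscriberClient.CreateAsync(
    SubscriptionName.FromProjectSubscription("my-project", "worker-sub"));

// Runs until StopAsync is called; each message is acked or nacked by the handler.
await subscriber.StartAsync((PubsubMessage msg, CancellationToken ct) =>
{
    // Placeholder for real work: nack payloads marked "NACKs" (as in the repro logs),
    // ack everything else. Nacked messages are redelivered, in order per ordering key,
    // until the dead-letter policy's max delivery attempts is reached.
    var reply = msg.Data.ToStringUtf8() == "NACKs"
        ? SubscriberClient.Reply.Nack
        : SubscriberClient.Reply.Ack;
    return Task.FromResult(reply);
});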
What you expected to happen:
The service should continue to receive messages.
Steps to reproduce:
Jon Skeet from the dotnet SDK team created a repro (see ticket link below) and has been in contact with the Pub/Sub backend team; however, the repro may not match all the conditions of our setup. Our system uses multiple ordering keys and has more than one subscriber instance; we run on GKE with ReplicaSets >= 2, so there are at least two subscribers at nearly all times.
In Jon's repro, new messages were eventually delivered after several minutes. We've seen no messages arrive for days at a time until we restart the service. It's as if the sender gets deadlocked for that specific connection or session.
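To approximate that shape (multiple ordering keys, some keys repeatedly nacked into the DLQ while others are acked), a publishing loop along these lines can drive the scenario; the topic name, key naming, and the PublisherClientBuilder usage (3.x client) are assumptions, not our exact repro code:

using Google.Cloud.PubSub.V1;
using Google.Protobuf;

var publisher = await new PublisherClientBuilder
{
    TopicName = TopicName.FromProjectTopic("my-project", "worker-topic"),
    // Publishing with ordering keys requires message ordering to be enabled on the publisher.
    Settings = new PublisherClient.Settings { EnableMessageOrdering = true }
}.BuildAsync();

for (int i = 1; i <= 10; i++)
{
    // "nackN" keys carry payloads the subscriber nacks until they reach the DLQ;
    // "ackN" keys are processed and acked normally.
    bool nack = i % 2 == 0;
    await publisher.PublishAsync(new PubsubMessage
    {
        OrderingKey = (nack ? "nack" : "ack") + i,
        Data = ByteString.CopyFromUtf8(nack ? "NACKs" : "ACKs")
    });
}

Running two or more copies of the subscriber sketch above against the same subscription (for example, two pods in the ReplicaSet) covers the multi-subscriber condition described here.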
Other information (workarounds you have tried, documentation consulted, etc):
Jon Skeet responded as follows regarding this issue with Pub/Sub:
"We've heard back from the Pub/Sub team, and they recommend that you open a support ticket via a Google Support ticket if possible"
I've seen a similar issue reported for the nodejs SDK. I think a lot of people are frustrated by this issue without knowing the specifics.
related issue: