Assigned
Comments
gs...@google.com <gs...@google.com> #2
Hello,
Thank you for reaching out to us!
This issue seems to be outside the scope of Issue Tracker. Issue Tracker is a forum for end users to report bugs and request features on Google Cloud products.
For now, I'm going to close this thread, which will no longer be monitored. If you want to report a new issue, please do not hesitate to create a new Issue Tracker entry describing it.
Thank you!
ma...@kadijk.com <ma...@kadijk.com> #3
Thanks for the quick reply.
The problem is not that the message is unclear; the error messages are shown in the log, which is where they should be, since they are a message to me (the developer) and not to the user agent processing the result of the request.
My problem is that the 500 return status of the request is triggering unwanted behaviour: it signals the request was not fulfilled, even though it was fulfilled and completed.
To quote the RFC (https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html):
"500 Internal Server Error: The server encountered an unexpected condition which prevented it from fulfilling the request."
The fact that the instance will be terminated AFTER fulfilling this request is not relevant to the user agent. The instance can be shut down for many reasons, and since we have some kind of memory problem here, it is likely not caused by this specific request but by the sequence of requests handled by this instance.
Therefore, as I see it, it would be in accordance with the RFC to send a 200 OK for this request and log the information about the instance shutdown somewhere else (the logs, where it is now).
Seeing a 200 OK will trigger the correct behaviour in the UA: it will know the "request was successfully received, understood, and accepted." (RFC 2616)
If the request was sent from an App Engine Task Queue, it will be considered completed, which is correct. The 500 signals that a retry is necessary. This may result in endless retries of a request that was completed long ago, which is not the expected behaviour.
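The retry behaviour described above can be sketched as follows. This is a hypothetical simulation of the queue's decision logic, not the actual Task Queue implementation: a task is retried until its handler returns a 2xx status, regardless of whether the work actually completed.

```python
# Hypothetical sketch of Task Queue retry semantics (not the real
# implementation): a task is retried until its handler returns 2xx.

def run_task(handler, max_retries=5):
    """Invoke handler until it returns a 2xx status or retries are
    exhausted. Returns the number of attempts made."""
    for attempt in range(1, max_retries + 1):
        status = handler()
        if 200 <= status < 300:
            return attempt  # queue considers the task complete
    return max_retries  # still failing; a real queue would keep backing off

# A handler whose work succeeds, but whose response is overwritten
# with 500 because the instance then hits the soft memory limit:
# the queue re-runs work that was already done.
attempts = {"n": 0}

def completes_but_returns_500():
    attempts["n"] += 1  # the work itself runs (again) on every attempt
    return 500          # but the status says "not fulfilled"

run_task(completes_but_returns_500)
print(attempts["n"])  # the already-completed work ran max_retries times
```

With a 200 the loop stops after the first attempt; with the 500 the same work is executed on every retry, which is exactly the problem described above.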
Currently, as a developer, I have no way to change the status code of the request if the instance triggers a "soft private memory limit". As far as I know there is no way to catch this exception, nor any other way to influence the status code of these requests. If developers disagree on how to read the RFC, and some think that a 500 status is the way to go in this situation, it should at least be possible to configure this, or there should be a way to influence it in our code.
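For exceptions that ARE catchable in application code, the kind of control I am asking for can be sketched with a wrapper like this (a hypothetical example, not an App Engine API); the point is that the soft-memory-limit termination happens outside application code, so nothing like this can intercept it today:

```python
import functools
import logging

def ack_on_failure(handler):
    """Wrap a task handler so any catchable exception is logged and
    acknowledged with 200, preventing the queue from retrying.
    (Hypothetical sketch; the soft-memory-limit termination cannot
    be intercepted this way because it happens outside our code.)"""
    @functools.wraps(handler)
    def wrapped(*args, **kwargs):
        try:
            return handler(*args, **kwargs)
        except Exception:
            logging.exception("task failed; acknowledging to avoid retry")
            return 200  # tell the queue the task is done; details are in the log
    return wrapped

@ack_on_failure
def my_task():
    raise RuntimeError("simulated failure after the work completed")

print(my_task())  # 200: the error is in the log, not in the status code
```

This is exactly the split I am arguing for: the error goes to the developer (the log), while the status code tells the user agent the request was handled.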
The default behaviour, however, should be to return a 200 OK and not a 500 Internal Server Error (in my humble opinion).
Matthijs
gs...@google.com <gs...@google.com> #4
Hi Matthijs,
The developers have been made aware of the issue and will schedule an eventual implementation as they judge fit. Comments and developments can be followed within this thread.
Description
Since the request is handled correctly, it should return a 200 OK status instead of a 500 Internal Server Error.
In a user request, the user will see an error that is not intended for them and not needed, since the request was completed before the instance is shut down.
In the case of a task queue request, it will get re-queued even though it was completed, hitting the server again, maybe even many times if the memory limit was caused by this very request. Also, if a request fails it may trigger logic like cancelling remote transactions etc. that is not needed at all.
Logging a critical error in the logs, which is what we have now, is all we need. It would be nice if the logs from the terminated request were not lost somewhere in the shutdown process, but there are other bugs filed for that.
Making sure the request returns with a 200 OK status will resolve a lot of issues, mostly for task queue events; retrying a request over and over again after it was completed correctly can cause a lot of trouble ...
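Until the status code can be influenced, one workaround is to make task handlers idempotent, so the spurious retries caused by the 500 do no extra work. A minimal sketch, assuming each task carries a stable identifier (the in-memory set stands in for a datastore check):

```python
# Idempotency guard keyed on a task identifier: a retry of an
# already-completed task becomes a no-op. Assumes each task carries
# a stable id; a real handler would persist completion in a datastore.

_completed = set()

def handle_task(task_id, work):
    """Run work once per task_id; acknowledge repeats without re-running."""
    if task_id in _completed:
        return 200  # already done: acknowledge without repeating the work
    work()
    _completed.add(task_id)
    return 200

calls = []
handle_task("t-1", lambda: calls.append(1))
handle_task("t-1", lambda: calls.append(1))  # retry after the 500: no-op
print(len(calls))  # the work ran only once
```

This does not fix the wrong status code, but it at least makes the endless retries harmless.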