Comments
al...@expert-group.biz <al...@expert-group.biz> #2
Hello,
Thank you for reaching out to us!
This issue seems to be outside of the scope of Issue Tracker. This Issue Tracker is a forum for end users to report bugs and request features on Google Cloud products. Please go through
I recommend you to
For now, I'm going to close this thread, which will no longer be monitored. If you want to report a new issue, please do not hesitate to create a new Issue Tracker thread describing it.
Thank you!
al...@expert-group.biz <al...@expert-group.biz> #3
This problem has reappeared in our application.
I get 500 with this message:
"The process handling this request unexpectedly died. This is likely to cause a new process to be used for the next request to your application. (Error code 203)"
I use java SDK.
gr...@google.com <gr...@google.com>
pr...@gmail.com <pr...@gmail.com> #5
Some troubleshooting data points - hoping this would help in finding a solution. Here is the sequence of actions we took and the results found:
1. We had Always On enabled, uploaded new code, and started getting 500/203 errors.
2. We disabled Always On, and surprisingly the 500/203 errors went away.
3. We enabled Always On again, and we started getting errors again.
4. We disabled Always On again, and the 500 errors went away.
We use the Java 1.4.0 SDK. No frameworks.
appid: docpulseapp
br...@gmail.com <br...@gmail.com> #6
Just got 20 in a row trying to process one task-queue job. I see you've demoted it to 'info' from 'warning' level in the logging. Somehow I don't feel this error is any less serious.
aprigoninjadev.appspot, Python
I 2011-03-15 17:44:39.082 The process handling this request unexpectedly died. This is likely to cause a new process to be used for the next request to your application. (Error code 203)
fr...@gmail.com <fr...@gmail.com> #7
We are experiencing this A LOT on one of our applications. Using Java, App Engine SDK 1.3.5. The error is intermittent. We see:
The process handling this request unexpectedly died. This is likely to cause a new process to be used for the next request to your application. (Error code 203)
This one works: conceptuamath.appspot.com
This one does not: conceptuamath-qa3.appspot.com
Resolving this is rather urgent for us. I am happy to help you test in any way I can.
em...@gtempaccount.com <em...@gtempaccount.com> #8
It is worth characterizing exactly what is happening in our applications:
Our live application is at conceptuamath.appspot.com. We used reserved instances on this application. It is running well and is not seeing the error 203 problem.
conceptuamath-qa3.appspot.com is running the same code. We used reserved instances on this application. We had two periods of errors: one on 3/16 from 2:55 pm to 4:07 pm Pacific, and one on 3/17 (from 9:06 to 9:35 Pacific we got a lot of errors, then two isolated errors, one at 10:59 and one at 12:01).
conceptuamath-qa2.appspot.com is running the same code, but does not use reserved instances. It had a period starting at about 4:15 Pacific where it started getting a number of failures waiting for the datastore to return a result. This is not the error 203 issue, but I'm wondering if it is related and whether the difference is the reserved instances.
All applications are using the 1.3.8 Java SDK, JDO, and Restlet.
[Deleted User] <[Deleted User]> #9
Another 20 in a row today on aprigoninjadev.appspot (Python)
I 2011-03-19 17:46:31.295
The process handling this request unexpectedly died. This is likely to cause a new process to be used for the next request to your application. (Error code 203)
Please either fix this or give us visibility into how we can make our code not exercise it. This is the only error message I get. It gives me no clue as to when and where in the execution something might be going wrong. No stack trace. Boom.
pr...@gmail.com <pr...@gmail.com> #10
The last time we saw this problem was on March 11. As mentioned earlier, we turned off Always On reserved instances and the issue went away. Earlier we were directly uploading new code (dev versions) and relying on GAE to switch to the new version. After this issue occurred, we now upload a new version of the code, test it, and then make that the default version; somehow, we have not seen this issue since. Has this worked for others as well? Any kind of clues/feedback/workaround from Google on this issue would help us immensely. It would go a long way in improving our confidence and reassuring us. Considering that other apps are seeing this problem, it seems like it is only a matter of time before we hit this issue again!
docpulseapp, Java, 1.4.0
tr...@gmail.com <tr...@gmail.com> #11
I was getting this error and tracked it down to going over my memory usage. Every request was dying, some with this error message and some with an exception for going over the soft memory usage limit of 300 MB. Cleaning up my app to use less memory made both errors go away. I gather that, when you go above some hard memory limit (presumably higher than 300 MB), your entire process gets killed without explanation. I also noticed that the log messages written in the soft limit exception case were successfully recorded, while in the hard limit case they were not, which seems consistent with a hard kill of the process.
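For reference, a minimal sketch of logging per-request memory so that growth toward the soft and hard limits shows up in the logs before the process is killed. It assumes the Python 2.7 runtime's google.appengine.api.runtime module and a webapp2 handler; the handler class and do_work method are hypothetical, and the exact memory_usage() accessor can vary between SDK versions.

    import logging
    import webapp2
    from google.appengine.api import runtime

    class MemoryLoggingHandler(webapp2.RequestHandler):
        def get(self):
            self.do_work()
            # memory_usage().current() reports the instance's resident memory in MB;
            # logging it on every request makes a slow leak visible before a hard kill.
            logging.info("Memory after request: %s MB",
                         runtime.memory_usage().current())

        def do_work(self):
            # Placeholder for the request's real work.
            pass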
em...@gtempaccount.com <em...@gtempaccount.com> #12
I attended the Google I/O talk on the high-replication datastore. It provided a really, really good overview of the (previously) standard master-slave datastore architecture and the performance issues associated with it. These issues include the potential for a rash of datastore timeouts when tablets need to be split. This would only affect apps that were using that tablet, and only for a short window of time (like half an hour). A tablet split could be caused by some other app, and the results can affect your app. This is the kind of error pattern I was seeing with my app.
In reality, the master-slave architecture has only a 99.9% expected up-time, NOT including scheduled maintenance. So there is an ambient failure rate of 0.1%. What that means is that you can expect 1 of every 1000 calls to the datastore to fail because of infrastructure issues. The reality is that these failures come in clusters.
The solution is to move to the high-replication datastore. It solves the ambient-failure issue and actually eliminates downtime due to scheduled maintenance windows as well. Overall up-time is supposed to improve to 99.999%, and it costs the same. We are supposed to get some tools to help with the migration soon.
Here is the I/O talk - it is a great talk on the datastore and it is worthwhile even if you are not having this issue: http://www.google.com/events/io/2011/sessions/more-9s-please-under-the-covers-of-the-high-replication-datastore.html
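For reference, a minimal sketch of retrying a datastore read on the kind of transient timeouts described above, using the old db API. The fetch_with_retry helper, its attempt count, and the backoff values are hypothetical; the exception classes shown are the usual ones the Python SDK raises for datastore timeouts and RPC deadlines.

    import logging
    import time
    from google.appengine.ext import db
    from google.appengine.runtime import apiproxy_errors

    def fetch_with_retry(query, limit, attempts=3, backoff_seconds=0.1):
        # Retry query.fetch() a few times with exponential backoff when the
        # datastore reports a transient timeout or RPC deadline error.
        for attempt in range(attempts):
            try:
                return query.fetch(limit)
            except (db.Timeout, apiproxy_errors.DeadlineExceededError):
                logging.warning("Datastore timeout on attempt %d, retrying",
                                attempt + 1)
                time.sleep(backoff_seconds * (2 ** attempt))
        # The final attempt is not caught, so the caller sees the real error.
        return query.fetch(limit)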
jo...@gmail.com <jo...@gmail.com> #13
I too am getting the "(Error Code 203)" in my PYTHON code.
The error occurs when I GET a URL that adds a task to the queue, but it only seems to occur if the page tries to write out HTML.
So this will fail with (Error Code 203):
    def get(self):
        self.response.out.write("starting Daily Cron Job<br/>")
        DailyCronJob.Start()  ## Task Start Code is static method of object
        self.response.out.write("Daily Cron Job Started<br/>")
This will work:
    def get(self):
        ## self.response.out.write("starting Daily Cron Job<br/>")
        DailyCronJob.Start()
        ## self.response.out.write("Daily Cron Job Started<br/>")
Removing the calls to response.out.write() seems to have fixed my problem (they were only there for debug use anyway!)
bl...@gmail.com <bl...@gmail.com> #14
[Comment deleted]
ik...@google.com <ik...@google.com> #15
Changed name of issue. Need to provide better error messaging for this error as well as actionable items for developers/site reliability.
[Deleted User] <[Deleted User]> #16
I am getting this often... any solution?
is...@google.com <is...@google.com>
ko...@gmail.com <ko...@gmail.com> #17
I'm getting the same message in Node. It's really frustrating: a crash with no way to troubleshoot the root cause.
In my case I'm getting this in a Cloud Tasks handler. I have a long-running task that always crashes after about 2 minutes. The handler seems to crash or be killed from outside in a way that I cannot even catch.
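For reference, a minimal sketch of one common mitigation for long-running task handlers like the one above: do a bounded slice of work per request and enqueue a continuation task before the deadline is hit. It is written in Python to match the rest of the thread's examples rather than Node; the queue names, URL, cursor handling, and process_next_batch stub are all hypothetical, and it assumes the google-cloud-tasks client library.

    import json
    import time
    from google.cloud import tasks_v2

    client = tasks_v2.CloudTasksClient()
    # Hypothetical project/location/queue names.
    QUEUE_PATH = client.queue_path("my-project", "us-central1", "my-queue")
    TIME_BUDGET_SECONDS = 60  # stop well before the platform kills the handler

    def process_next_batch(cursor):
        # Placeholder for the application's real batching logic; returns the
        # next cursor and whether all work is finished.
        return cursor, True

    def handle_task(payload):
        started = time.time()
        cursor = payload.get("cursor")
        while time.time() - started < TIME_BUDGET_SECONDS:
            cursor, done = process_next_batch(cursor)
            if done:
                return  # all work finished, no continuation needed
        # Out of time: enqueue a continuation task carrying the current cursor.
        task = {
            "app_engine_http_request": {
                "relative_uri": "/tasks/long-job",
                "body": json.dumps({"cursor": cursor}).encode(),
            }
        }
        client.create_task(parent=QUEUE_PATH, task=task)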
go...@google.com <go...@google.com> #18
Hello,
A feature request has been opened with the App Engine engineering team to provide better error messages for this error.
Please note that there are no ETAs or guarantees of implementation for feature requests. However, any future communication regarding this request will be posted here.
st...@noforeignland.com <st...@noforeignland.com> #19
I stumbled across this as we're having 203 errors reported and cannot figure out what is causing them... Did this ever get worked on?
Description
I'm still seeing this on my Python app aprigoninjadev.appspot
I have queries failing with no logging except the line:
W 2011-03-08 16:17:38.883
A serious problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you should contact the App Engine team. (Error code 203)
Was there any fix actually deployed?