Can't Repro
Status Update
Comments
bt...@brandonthomson.com <bt...@brandonthomson.com> #2
Thank you very much for reporting this issue, Jack. Looks like the issue is caused by somewhat unnecessary and contrived logic in apksigner.bat, which is special-casing -J* parameters. Unfortunately, this logic doesn't handle whitespace-containing parameters, even ones that aren't -J*.
I've uploaded a tentative fix https://android-review.googlesource.com/#/c/394238/ for review. The patched apksigner.bat is attached. This fixes the issue on my Windows 10 box.
Jack, please test the fix in your environment and let us know whether it works for you too.
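For illustration only, a minimal Python sketch of the kind of argument splitting the .bat wrapper attempts, assuming -J<opt> is meant to be forwarded to the JVM while every other argument (including ones containing whitespace) is passed to the tool untouched; this is not the shipped batch code:
def split_args(argv):
    """Separate hypothetical -J JVM options from ordinary tool arguments."""
    jvm_opts, tool_args = [], []
    for arg in argv:
        if arg.startswith("-J"):
            jvm_opts.append(arg[2:])   # e.g. "-J-Xmx1024m" -> "-Xmx1024m"
        else:
            tool_args.append(arg)      # whitespace-containing values survive as-is
    return jvm_opts, tool_args
# split_args(["-J-Xmx1024m", "--ks-key-alias", "bug apksigner test"])
# -> (["-Xmx1024m"], ["--ks-key-alias", "bug apksigner test"])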
wk...@gmail.com <wk...@gmail.com> #3
Confirmed, that fix works. Thank you for the quick turn-around on this.
gv...@gmail.com <gv...@gmail.com> #4
Project: platform/tools/apksig
Branch: master
commit b293b2087837853e32a1cff008d58644dbbd4cc2
Author: Alex Klyubin <klyubin@google.com>
Date: Wed May 10 16:53:25 2017
Handle whitespace when removing -J* parameters in .bat
apksigner.bat contains special logic for -J* parameters. This logic
was not designed to handle parameters containing whitespace. This
commit fixes the issue. It is unclear whether this -J* logic is even
needed. So, if it keeps breaking stuff, we should probably simply
remove it.
Test: apksigner sign --ks "nonexistent.jks" --ks-pass pass:123123 --ks-key-alias "bug apksigner test" nonexistent.apk
Fails with FileNotFoundException instead of "a was unexpected at this time"
Bug: 38132450
Change-Id: Ice3294e9993b075533c77d94eb870cfd35a65bbc
M etc/apksigner.bat
https://android-review.googlesource.com/394238
https://goto.google.com/android-sha1/b293b2087837853e32a1cff008d58644dbbd4cc2
wk...@gmail.com <wk...@gmail.com> #5
Project: platform/tools/apksig
Branch: master
commit 0f88b97634034673f062a8ac6c3dab7d3d9befe3
Author: Alex Klyubin <klyubin@google.com>
Date: Thu Jun 22 09:43:22 2017
Bump apksigner version to 0.7
Changes since 0.6:
* Fixed a bug in whitespace handling in command-line parameters in
apksigner.bat. https://issuetracker.google.com/issues/38132450
* Fixed a bug in JAR signature verification when multiple digests
are present for the same named entry in MANIFEST.MF.
https://issuetracker.google.com/issues/38497270
* Honor android:targetSandboxVersion (introduced in Android O) when
verifying APKs. When android:targetSandboxVersion is set to 2 or
higher, the APK is required to be signed with APK Signature Scheme
v2.
* When signing, reject APKs with CR, LF or NUL in ZIP entry names.
Such names are not permitted by the JAR signing spec and are also
rejected by Android Package Manager.
Test: apksigner version
Bug: 38132450
Bug: 38497270
Bug: 36426653
Bug: 62211230
Change-Id: Ifa120b0e43b458c99c3da6fde1136e0cbb92caee
M src/apksigner/java/com/android/apksigner/ApkSignerTool.java
https://android-review.googlesource.com/420784
https://goto.google.com/android-sha1/0f88b97634034673f062a8ac6c3dab7d3d9befe3
gv...@gmail.com <gv...@gmail.com> #6
The fix has been released in apksigner 0.7, which ships as part of Android SDK Build Tools 26.0.1.
sd...@gmail.com <sd...@gmail.com> #7
gvanrossum:
> Second, in production, for security reasons we don't
> use .pyc files, so everything is parsed from source when
> the process is loaded.
I can see why you may not want to run random .pyc files on your production servers,
but why not take the .py files as we upload them, process them into .pyc files with
your trusted tools, and then push those to the production servers?
Is it just the case that historically all Google services are heavily loaded and
startup time has never been a concern, so the easiest way to get trusted code to the
production servers was just to completely drop .pyc files?
The way app engine works, with many different packages of code being constantly load
balanced and brought up and down, start up time becomes important and quite
noticeable. I think .pyc files would help.
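For what it's worth, a minimal sketch of the precompilation step suggested above, assuming a trusted build host and a hypothetical upload directory; the production pipeline may look nothing like this:
import compileall
# Byte-compile every .py file under the (hypothetical) upload directory;
# the resulting .pyc files could then be pushed to the servers instead of
# being regenerated from source on every cold start.
if not compileall.compile_dir("uploaded_app/", quiet=1):
    raise SystemExit("some modules failed to compile")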
gv...@gmail.com <gv...@gmail.com> #8
I can't argue with that, but I won't discuss our priorities either. :-)
gv...@gmail.com <gv...@gmail.com> #9
Waldemar, one more question on the slowness. Did you check the status site
(http://code.google.com/status/appengine) to see if it was showing slower than normal
responses at the same time?
wk...@gmail.com <wk...@gmail.com> #10
We did have higher error rates when App Engine was slower than normal, but even under
normal conditions we sometimes see load times that are simply too slow.
The slowness is most likely caused by slow or overloaded machines and by the lack
of .pyc support. I hope you'll put .pyc support on your roadmap. Please take some time
to optimize App Engine.
BTW, did you think about integrating PyPy when it's ready? :)
mo...@gmail.com <mo...@gmail.com> #11
I'm using Django with i18n in two applications - contact-birthdays and cubicteam-cube.
I wasn't having this problem until today, but right now I'm having them with "cubicteam-cube".
Several of the stacktraces show the time-out occurs while importing the module fcntl.
(take a look at log entries starting at 09-16 01:51AM) but probably that's just coincidence.
Some of the deadlines are in response to google sitemaps and google webmaster verification, hitting the '/'
page. The '/' page does not load anything from the datastore for an un-authenticated user.
I deployed a version shortly after (to fix the META verify-v1 header); however, the timeouts were occurring
*before* this deploy.
The deployment before this one was done yesterday (about 14 hours ago); after that I had someone test
several features and I have no timeouts in the logs (or timeouts reported by the tester), so I'd say its not
related to something I've changed in my deployment yesterday.
Could it be that there's a "bad" instance somewhere in your cluster and we are randomly hitting the problem?
Based on my app name and timestamp can you guys find out where it was running?
Let me know if I can give you any extra info.
ja...@gmail.com <ja...@gmail.com> #12
There are a number of threads discussing further evidence of this issue. They all seem to have the Google-
supplied Django 1.0 in common. Some use app-engine-patch, some use django_appengine.
http://groups.google.com/group/google-appengine-python/browse_thread/thread/2337166b90dcce14
http://groups.google.com/group/google-appengine/browse_thread/thread/7487e5f429a2789d
http://groups.google.com/group/google-appengine/browse_thread/thread/5d308136a2b2adca
http://groups.google.com/group/google-appengine/browse_thread/thread/5c5405a323fdafc4
http://groups.google.com/group/google-appengine/browse_thread/thread/7ac5172ff0343d7c
http://groups.google.com/group/google-appengine-python/browse_thread/thread/6ea400e38af422bd
http://groups.google.com/group/google-appengine-python/browse_thread/thread/7804a0b2753d8e5e
br...@gmail.com <br...@gmail.com> #13
I think due to the extra imports Django just makes this problem more likely to
surface. I see occasional bursty periods with long import times on a webapp
application also. They coincide with periods when datastore operations take a long
time and are likely to Timeout.
I've considered using fast ajax timeouts and automatic retries to help deal with this
issue, but unfortunately it usually happens that they occur in bursts such that
multiple requests in a short period of time are all likely to be affected.
mo...@gmail.com <mo...@gmail.com> #14
More DeadLines with the "cubicteam-cube" app.
09-17 01:37AM and 09-17 01:22AM
The root cause is again importing fcntl inside /base/python_lib/versions/third_party/django-1.0/django/core/files/locks.py
Maybe people who starred this issue could add their app ids and times when it occurred so Google might find a
pattern?
mo...@gmail.com <mo...@gmail.com> #15
Instance still down.
Some more findings:
* The DeadLine is kicking in after about 30 seconds, which is correct; but the amount of CPU used is always in
the vicinity of 1000ms. So, only 1s was actually spent working during those 30 seconds?
* Yesterday this also occurred at about the same time, GMT morning. Right now it's about 2 AM PST; could it be
that you have some kind of nightly batch that's starving some instances?
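For anyone who wants to check that split themselves, a rough sketch comparing wall-clock and CPU time around the import phase; the clocks available inside the App Engine sandbox may differ, and the django import is just a placeholder for whatever the app loads at startup:
import logging
import os
import time
wall_start = time.time()
cpu_start = sum(os.times()[:2])          # user + system CPU seconds so far
import django.core.handlers.wsgi         # placeholder for the expensive import
wall = time.time() - wall_start
cpu = sum(os.times()[:2]) - cpu_start
logging.info("imports took %.1fs wall, %.1fs CPU", wall, cpu)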
na...@gmail.com <na...@gmail.com> #16
This is really getting annoying now. Every day at around 7am I am getting this error. This is a prime time for my
user base as my target users are from Bangladesh. I have tried to work around it by lazily loading modules inside
functions, but it still persists. Right now I am getting this error again. In case you need my app address:
http://dhadharu.appspot.com
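A minimal sketch of the lazy-loading workaround described above, assuming the SDK's webapp framework; the handler and module names are only examples:
from google.appengine.ext import webapp
class ReportHandler(webapp.RequestHandler):
    def get(self):
        # Heavy modules are imported on first use instead of at module
        # load time, so a cold start only pays for what this request needs.
        import pytz                      # example heavy dependency
        self.response.out.write("loaded %s" % pytz.__name__)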
li...@gmail.com <li...@gmail.com> #17
We get this all the time, and there's no pattern for time of day or user location or
anything. Started since the September 2nd maintenance. Using Django 1.0. I'll post an
example stack trace if that's okay, although the modules being imported by Django at
the time it gives up and the URLs spawning the instance seem random.
09-17 06:35AM 28.418 /userphoto/rgwatwormhill 500 29484ms 913cpu_ms
E 09-17 06:35AM 57.860
<class 'google.appengine.runtime.DeadlineExceededError'>:
Traceback (most recent call last):
File "/base/data/home/apps/skrit/7.336387532984481503/main.py", line 77, in
<module>
main()
File "/base/data/home/apps/skrit/7.336387532984481503/main.py", line 74, in main
util.run_wsgi_app(application)
File "/base/python_lib/versions/1/google/appengine/ext/webapp/util.py", line 76, in
run_wsgi_app
result = application(env, _start_response)
File "/base/python_lib/versions/third_party/django-
1.0/django/core/handlers/wsgi.py", line 228, in __call__
self.load_middleware()
File "/base/python_lib/versions/third_party/django-
1.0/django/core/handlers/base.py", line 38, in load_middleware
mod = __import__(mw_module, {}, {}, [''])
File "/base/python_lib/versions/third_party/django-1.0/django/middleware/gzip.py",
line 4, in <module>
from django.utils.cache import patch_vary_headers
File "/base/python_lib/versions/third_party/django-1.0/django/utils/cache.py", line
28, in <module>
from django.core.cache import cache
File "/base/python_lib/versions/third_party/django-
1.0/django/core/cache/__init__.py", line 57, in <module>
cache = get_cache(settings.CACHE_BACKEND)
File "/base/python_lib/versions/third_party/django-
1.0/django/core/cache/__init__.py", line 52, in get_cache
module = __import__('django.core.cache.backends.%s' % BACKENDS[scheme], {}, {},
[''])
File "/base/python_lib/versions/third_party/django-
1.0/django/core/cache/backends/locmem.py", line 5, in <module>
import cPickle as pickle
ja...@gmail.com <ja...@gmail.com> #18
We (AppID: myfrontsteps) have 25 occurrences in the last hour (Sep 17, 6:24a-7:24a), but there looks to be
some in every hour of the day.
This app doesn't get much traffic (which of course exacerbates this exception because our instances are not hot),
so this represents a large percentage of our hits.
bt...@brandonthomson.com <bt...@brandonthomson.com> #19
Here is a thread where I try to track down slow import issues in a webapp app:
http://groups.google.com/group/google-appengine/browse_thread/thread/205dd45cda130c4d#
So, clearly switching from django->webapp will not automatically solve the problem.
he...@gmail.com <he...@gmail.com> #20
[Comment deleted]
ca...@gmail.com <ca...@gmail.com> #21
I mentioned this in a forum post, but I want to make sure to note that this issue exists for apps using Django
0.96 as well, not just 1.0.
http://groups.google.com/group/google-appengine/browse_thread/thread/7487e5f429a2789d/56f93701188cb5ff
Here is a sample stack trace from my app gqueues.appspot.com:
========
<class 'google.appengine.runtime.DeadlineExceededError'>:
Traceback (most recent call last):
File "/base/data/home/apps/gqueues/beta2-7-9.336205351043982571/gqueues.py", line 23, in
<module>
from controllers.admin import AdminHandler
File "/base/data/home/apps/gqueues/beta2-7-9.336205351043982571/controllers/admin.py", line 10, in
<module>
from controllers.baserequest import BaseRequestHandler
File "/base/data/home/apps/gqueues/beta2-7-9.336205351043982571/controllers/baserequest.py", line
6, in <module>
from google.appengine.ext.webapp import template
File "/base/python_lib/versions/1/google/appengine/ext/webapp/template.py", line 65, in <module>
import django.template
File "/base/python_lib/versions/third_party/django-0.96/django/template/__init__.py", line 57, in
<module>
import re
============
The DeadlineExceededErrors seemed to start cropping up after the Sept 2nd maintenance. The logs vary, but
they all seem to be related to timeouts when trying to import modules (for django). My app uses the pytz
library, which I know has a lot of imports. So I tried removing that library to see if that would improve things,
but I still get the same problems. It's not all the time, but every day or two for several hours my app is
basically inaccessible for all my users because of this error, which is quite frustrating.
wt...@yahoo.com <wt...@yahoo.com> #22
I also see this with the built-in Django 0.96. It seems to affect some app ids but
not others.
<class 'google.appengine.runtime.DeadlineExceededError'>:
Traceback (most recent call last):
File "/base/data/home/apps/gmz/profile.336427364511513903/main.py", line 35, in
<module>
import django.core.handlers.wsgi
File
"/base/python_lib/versions/third_party/django-0.96/django/core/handlers/wsgi.py",
line 1, in <module>
from django.core.handlers.base import BaseHandler
File
"/base/python_lib/versions/third_party/django-0.96/django/core/handlers/base.py",
line 4, in <module>
import sys
<class 'google.appengine.runtime.DeadlineExceededError'>:
I have also seen it die here several times:
...
File "/base/python_dist/lib/python2.5/logging/__init__.py", line 1301, in exception
apply(error, (msg,)+args, {'exc_info': 1})
File "/base/python_dist/lib/python2.5/logging/__init__.py", line 1294, in error
apply(root.error, (msg,)+args, kwargs)
File "/base/python_dist/lib/python2.5/logging/__init__.py", line 1015, in error
apply(self._log, (ERROR, msg, args), kwargs)
File "/base/python_dist/lib/python2.5/logging/__init__.py", line 1101, in _log
self.handle(record)
File "/base/python_dist/lib/python2.5/logging/__init__.py", line 1111, in handle
self.callHandlers(record)
File "/base/python_dist/lib/python2.5/logging/__init__.py", line 1148, in callHandlers
hdlr.handle(record)
File "/base/python_dist/lib/python2.5/logging/__init__.py", line 655, in handle
self.emit(record)
File "/base/python_lib/versions/1/google/appengine/api/app_logging.py", line 70, in
emit
message = self._AppLogsMessage(record)
File "/base/python_lib/versions/1/google/appengine/api/app_logging.py", line 83, in
_AppLogsMessage
message = self.format(record).replace("\n", NEWLINE_REPLACEMENT)
File "/base/python_dist/lib/python2.5/logging/__init__.py", line 630, in format
return fmt.format(record)
File "/base/python_dist/lib/python2.5/logging/__init__.py", line 426, in format
record.exc_text = self.formatException(record.exc_info)
File "/base/python_dist/lib/python2.5/logging/__init__.py", line 398, in
formatException
traceback.print_exception(ei[0], ei[1], ei[2], None, sio)
File "/base/python_dist/lib/python2.5/traceback.py", line 125, in print_exception
print_tb(tb, limit, file)
File "/base/python_dist/lib/python2.5/traceback.py", line 69, in print_tb
line = linecache.getline(filename, lineno, f.f_globals)
File "/base/python_dist/lib/python2.5/linecache.py", line 14, in getline
lines = getlines(filename, module_globals)
File "/base/python_dist/lib/python2.5/linecache.py", line 40, in getlines
return updatecache(filename, module_globals)
File "/base/python_dist/lib/python2.5/linecache.py", line 128, in updatecache
fp = open(fullname, 'rU')
ja...@gmail.com <ja...@gmail.com> #23
I recall seeing in a thread somewhere that the Sep 2 maintenance added some sort of exception management
that reported this particular problem as a DeadlineExceededError. So it could be that this issue pre-dated the
Sep 2 maintenance, but prior to that it surfaced in different ways.
I do know that for our apps, we've previously had issues with slow spin-up that manifested themselves in odd ways.
E.g., Django unable to do a reverse lookup on a URL or unable to find a particular view. Both of these examples
were transient; the best way that I can think of to explain it is that the Django initialization had the rug pulled
out from under it (by a deadline issue, presumably due to slow imports judging by information on this Issue)
and then the Django instance seemed to be permanently damaged and that particular application instance
always threw exceptions.
Since the Sep 2 maintenance, I now see DeadlineExceededErrors, but I no longer see this "Django in a bad
initialization state" issue. So I suspect Sep 2 maintenance added a check to see if the DeadlineExceededError
happens on a newly inflating application instance, and if so, the application instance is dumped.
But, I still think this particular issue pre-dated the Sep 2 maintenance; we're all just seeing it very visibly now.
bt...@brandonthomson.com <bt...@brandonthomson.com> #24
Someone in IRC suggested changing app ids as a method of getting around this.
Unfortunately I have keys hard-coded in my app so it is not trivial for me, but it
might work for some of you who have custom domains...
li...@gmail.com <li...@gmail.com> #25
Our site (app id: skrit) has had much higher error rates since Sep 2, so if this issue
predates it, it's also gotten much worse at the same time. We really didn't have any
problems before.
pr...@brushgroup.com <pr...@brushgroup.com> #26
This week it always happens between 6am and 9am (time from logs). As if some scheduled
heavy maintenance operation takes place in the background.
ja...@gmail.com <ja...@gmail.com> #27
re: 6a to 9a - could be the North American daytime load coming onstream and many application instances are
spinning up which causes contention for import resources and therefore more DeadlineExceededErrors.
(I should stop supposing as I may be clouding the real problem with my own beliefs. ;)
zu...@gmail.com <zu...@gmail.com> #28
Hi
I have had my app running now for several months, with the last deployment 46
days ago. Recently I have seen an increase in the number of Deadline Exceeded error
messages. I only ever experience these on app startup. Normally my app startup time
is between 3-8 secs, and occasionally 12.
An example of heavy startup (i.e., a view that has a query as well as instance spin-up):
/lilies/night_flowers/view 200 6868ms 3678cpu_ms 295api_cpu_ms
vs failure:
/lilies/night_flowers/view 500 28572ms 1963cpu_ms
These types of failures are always during the import phase.
So definitely something is going on.
I am not using django by the way.
Rgds
Tim
bt...@brandonthomson.com <bt...@brandonthomson.com> #29
Here is an example from today:
09-19 05:47AM 37.673 /admin/tick 200 10488ms 678cpu_ms 56api_cpu_ms 0kb Python-
urllib/2.5,gzip(gfe),gzip(gfe)
D 09-19 05:47AM 41.381 App loading (main.py)...
D 09-19 05:47AM 47.567 admin.py loading...
D 09-19 05:47AM 47.804 Initial imports complete. WSGI application loading...
D 09-19 05:47AM 47.860 Handler starting...
D 09-19 05:47AM 48.117 ...
D 09-19 05:47AM 48.117 Handler is done.
The debug statement "App loading (main.py)..." is at the very top of the python
script described in app.yaml as the handler for this URL. It looks like this:
#!/usr/bin/env python
import logging
logging.debug("App loading (main.py)...")
So, before the very first import has been completed, almost 4 seconds have elapsed.
Before the WSGI application has been loaded, almost 10 seconds have elapsed.
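Building on that logging approach, a small sketch (module names are placeholders) that records the elapsed time around each heavy import at the top of main.py, to show which one stalls during a cold start:
import logging
import time
_t0 = time.time()
def _mark(label):
    # Log how long the instance has been loading so far.
    logging.debug("%s (+%.1fs)", label, time.time() - _t0)
_mark("main.py loading")
import django.core.handlers.wsgi   # placeholder heavy import
_mark("django imported")
import models                       # placeholder app module
_mark("app modules imported")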
ja...@gmail.com <ja...@gmail.com> #30
I am still seeing lots of DeadlineExceededError errors this morning, so last night's maintenance didn't address
this issue.
na...@gmail.com <na...@gmail.com> #31
It is absolutely horrible now. Right now, my app is getting server error for more than one hour. My users are very
angry along with myself at the moment.
se...@gmail.com <se...@gmail.com> #32
I'm also seeing lots of DeadlineExceededErrors
ja...@gmail.com <ja...@gmail.com> #33
I can't say I'm angry because it is an (awesome) beta product, but it sure would be nice to hear something from
Google on this issue...
ja...@gmail.com <ja...@gmail.com> #34
Please reply with:
a) which runtime you are using
b) when you began to notice an excess of these exceptions -- did these start after
yesterday's maintenance or earlier in the month?
c) any imports you have for the handlers/servlets that are timing out
li...@gmail.com <li...@gmail.com> #35
a) Python
b) September 2nd maintenance
c) I haven't made narrow test cases, so I can't say it definitely happens on requests
smaller than importing Django 1.0, our models.py, and some views.
ja...@gmail.com <ja...@gmail.com> #36
AppIDs: myfrontsteps, steprep, mfs-demo, steprep-demo are all affected
(a) Python
(b) DeadlineExceededError started after Sep 2 maintenance, but I believe that this was just a change in how a
pre-existing problem was being reported by the framework
(c) we use Django 1.0 (provided by Google, not uploaded) and the appengine_django helper (r90). The timeouts
usually occur when importing different parts of Django, i.e., before it reaches our code at all.
se...@gmail.com <se...@gmail.com> #37
(a) Python
(b) earlier in the month
(c) The exceptions are usually raised while importing parts of Django, but not always
na...@gmail.com <na...@gmail.com> #38
application: dhadharu
runtime: python
api_version: 1
I use appengine-patch with django 1.1.
I am getting this from the beginning of this month.
js...@gmail.com <js...@gmail.com> #39
We have made some changes to address this issue so the situation should be improved.
Please continue to report instances of timeouts during initial load imports so
that we can investigate.
se...@gmail.com <se...@gmail.com> #40
Nice, I notice the improvement; I haven't seen any load exceptions in the last two days.
Thanks
ja...@gmail.com <ja...@gmail.com> #41
App ID: StepRep
Google-supplied-lib: Django 1.0
appengine_django helper: r90
DeadlineExceededErrors at:
- 09-28 03:01PM 53.677
- 09-28 03:01PM 53.664
- 09-28 03:01PM 53.645
- 09-28 02:16PM 37.373
ja...@gmail.com <ja...@gmail.com> #42
The DeadlineExceededErrors were happening quite regularly for AppID StepRep at around:
- 09-28 05:48PM 47.929
di...@gmail.com <di...@gmail.com> #43
Maybe it is just a coincidence, but on www.cloudstatus.com a pretty
amazing picture can be seen from 09/25/09.
It measures the response time of some Python code that resides in App Engine, and
usually (before Sep 25) there were fluctuations in response time between 280 ms and
360 ms.
But now it is an almost straight graph with a sustained response time of 280 ms.
I mean, if the mentioned changes were made a couple of days ago, they did the trick (at
least for the test code of CloudStatus).
I am not affiliated with them - just an observation.
zu...@gmail.com <zu...@gmail.com> #44
Thought everything was great, but DeadLine Exceeded today during imports.
09-29 12:08AM 30.704 appid svfalf
bt...@brandonthomson.com <bt...@brandonthomson.com> #45
I have hardly seen any slow imports for around a week now; very happy with the
changes here!
wk...@gmail.com <wk...@gmail.com> #46
What exactly got changed? Do you now compile the source to bytecode? Or could that provide another
performance boost?
ch...@gmail.com <ch...@gmail.com> #47
hi there
app id: bny-imageapp
I am using Pylons. The first page is almost a static page and has around an 8-13 sec startup time.
ja...@gmail.com <ja...@gmail.com> #48
This is occurring quite frequently again. App IDs myfrontsteps, steprep, vendastaweb.
It even occurs occasionally on a new, exceedingly simple webapp-based app (i.e., no Django).
ke...@gmail.com <ke...@gmail.com> #49
(a) Python
(b) earlier in the month
(c) The exceptions are usually raised while importing parts of Django, but not always
Using App-Engine-Patch, and importing Django-1.1.zip
ke...@gmail.com <ke...@gmail.com> #50
[Comment deleted]
ke...@gmail.com <ke...@gmail.com> #51
Loaded with DeadlineExceeded errors today, esp, between 6-6:35 AM Pacific time, then
again at 7AM.
na...@gmail.com <na...@gmail.com> #52
I have seen this more than 40 times since yesterday. And right now, my app is totally
inaccessible. All the requests, except static files, are failing.
dd...@gmail.com <dd...@gmail.com> #53
Having the same problems with massive timeouts. App is not usable.
(a) python
(b) today (uploaded for testing)
(c) raised during various django imports. Am using app-engine-patch and django1-1.zip.
Most times it does not even get to my code. Many times it dies in url importing....
The deadline exceptions are somewhat random but guaranteed to happen if you let the app
sit for 10-15 seconds and then try any action. Once you try numerous actions it seems to
'catch up' until either a random deadline exception comes along or you let the app
sit for 10-15 seconds, and then it's dead again.
se...@gmail.com <se...@gmail.com> #54
we also got a lot of errors the last days, appid: wegif-gae
zu...@gmail.com <zu...@gmail.com> #55
I concur, the frequency of DeadlineExceeded errors has gone way up in the last day
or so.
The system status page should really be showing the status of loading a heavy stack
like Django too.
pe...@gmail.com <pe...@gmail.com> #56
I am also seeing this A LOT.
My details:
a) Python
b) Since maybe 2 weeks ago
c) I'm often having it from within google-app-engine-django imports (which will load
Django 1.1 by default) but sometimes within normal imports and sometimes even into my
own code (e.g. importing simplejson).
ev...@gmail.com <ev...@gmail.com> #57
A new issue, 2683, asks for a new pricing model to allow longer HTTP requests. This
would at least keep our applications from sending users errors until this is
resolved. BTW, we would also need to increase the CPU limits for the purposes of
starting a new instance. My app is regularly using 45 seconds of CPU to start a new
instance. This causes an exception as well.
So basically, 2683 would solve the 30 second request limit. We would need another
issue to handle the CPU limit.
Of course this isn't ideal. It would just help us until we figure out a real
solution to the problem. A slow response is much better than an error.
br...@gmail.com <br...@gmail.com> #58
It seems that the loading time has improved a lot recently.
tt...@google.com <tt...@google.com> #59
Indeed, we've made numerous improvements to the loading time of instances over the past year. We've also launched warming requests and the Always On feature which is meant to help those who have long initial startup times.
Are there any solid asks in this issue, outside of the pricing model change request that is captured in Issue 2683? If not, I am going to close this.
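For reference, a minimal sketch of hooking into the warming requests mentioned above, assuming the Python runtime's /_ah/warmup convention (app.yaml would also need warmup listed under inbound_services); the heavy import is a placeholder:
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
class WarmupHandler(webapp.RequestHandler):
    def get(self):
        # Pull in the expensive modules so the first real user request
        # lands on an already-initialized instance.
        import django.core.handlers.wsgi   # placeholder heavy import
        self.response.out.write("warmed")
application = webapp.WSGIApplication([("/_ah/warmup", WarmupHandler)])
def main():
    run_wsgi_app(application)
if __name__ == "__main__":
    main()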
da...@gmail.com <da...@gmail.com> #60
I just wanted to note that I still experience problems with loading causing DeadlineExceededError. Sometimes my loading requests finish in 5 seconds or so, other times they take upwards of 30 seconds.
I am currently hitting this issue on my app mind-well. There are no problems reported on the App Engine System Status page. Currently the app is unusable due to this problem.
bt...@brandonthomson.com <bt...@brandonthomson.com> #61
These issues with slow imports have been popping up lately again. Sometimes a fix is to deploy several versions of the application since a version is not always affected.
ka...@gmail.com <ka...@gmail.com> #62
We've recently seen a few of the same timeout-on-import issues as well. appid:khanexercises
fc...@gmail.com <fc...@gmail.com> #63
I experience this issue very frequently and in a quite reproducible way.
My app (paid appid: upcache) is under development with Djangoappengine; it's intended to be a control/command UI for software engines running elsewhere.
It receives information from the engines via HTTP POST requests handled by Djangoappengine (I do know that it's far from the most efficient way, but I need the Django database model for subsequent treatments).
In development it receives 3 requests per minute (+ a cron warmup per minute, quite inefficient) plus test user requests.
The result is clearly unstable.
Requests last between 70 and 400ms when instances are warm (at least I suppose). Suddenly everything goes wrong and for a few minutes all requests fail with a DeadlineExceededError (I even saw this error thrown after 1248121ms - 20 minutes, yes).
After some kind of struggle with the system, the app becomes responsive again.
User requests often fail or take 10 to 30 seconds to be treated; at other times they take only some hundreds of milliseconds.
Appstats doesn't seem to be of any help as it seems to start only after the code is launched, so figures are always under a second.
I can hardly imagine running this app for commercial purposes...
Except giving up DjangoAppengine, does anyone know how to speed up its start-up?
Thank you very much for your help!
Florent
gu...@google.com <gu...@google.com> #64
Reluctantly I am marking this issue as NotRepeatable. That's because there is no single bug that can be fixed -- this issue has become a catch-all place to gripe about DeadlineExceededError. There are many different underlying causes, often unique to the specific app (or the packages it loads), and in most cases not enough information is given to be able to explain the reason for the failure observed by a particular user. I recommend that instead of adding a "me too" comment to this tracker issue, people who are experiencing this problem post to the support group (http://groups.google.com/group/google-appengine-python ) so they can be helped either by other users who can explain their symptoms or by an App Engine devrel engineer.
da...@gmail.com <da...@gmail.com> #66
I am experiencing the same issue as indicated above.
ju...@google.com <ju...@google.com> #67
@David, can you open a new issue per instruction at [1] so that one of our support staff look into your specific issue as suggested at #64 since there may be many different underlying causes?
[1] https://cloud.google.com/support/docs/issue-trackers
Description
I don't really know if the production server uses similar code, but at
least with dev_appserver an import takes approximately 2-3x longer than
without dev_appserver (on the production server it even takes 4x longer
than on my slow 1.6GHz single-core Pentium M laptop).
This is a huge problem because our users sometimes have to wait 20 seconds
for an instance to start! I can't really improve much here because most of
the time is spent importing Django.
Could you please try to somehow optimize the instance load process?
Could you also please consider adding a preload mechanism which allows for
importing all required modules before a request hits the instance?
Would it be possible to also cache initialized instances across servers?
For example, could the preload mechanism be used to make some kind of
snapshot of an "initial" instance which could then be replicated very
quickly?
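One way to approximate the requested preload behaviour with the current runtime, assuming the Python runtime caches handler scripts that define main(), is to keep all heavy imports and application setup at module scope so they run once per instance rather than once per request. A minimal sketch with placeholder names:
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
# Heavy, one-time work lives at module scope: it runs when the instance
# first loads this script and is then reused by every later request.
import django.core.handlers.wsgi   # placeholder heavy import
class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.out.write("ok")
application = webapp.WSGIApplication([("/", MainPage)])
def main():
    # Called per request; the module (and its imports) stay cached.
    run_wsgi_app(application)
if __name__ == "__main__":
    main()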