Request for new functionality
Description
* What is the desired behavior of the feature? (Be specific!)
For Python App Engine applications using apiproxy, the latency of async requests (both memcache and datastore) cannot always be measured in application code (see below for why post-call hooks and callbacks are insufficient). This applies to any async request where the application code does additional work, or makes other blocking requests, before calling a blocking operation on the async result (such as get_result() or wait()). We would like the API either to return the request latency directly to client applications or to provide a way to add instrumentation that can accurately measure latencies for async requests (a purely illustrative sketch of the first option follows this paragraph). We had previously been using the pre- and post-call hooks to measure latency and found that, for our memcache usage patterns, the majority of our memcache latency measurements were incorrect, often by hundreds of milliseconds. Our datastore access (we make considerable use of parallel async calls) has similar issues. See below for why pre- and post-call hooks and callbacks are insufficient.
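For illustration only, the first option might look something like the sketch below; the latency attribute is hypothetical and does not exist in the current SDK, it simply stands in for "latency measured by the runtime, independent of when the application consumed the result":

from google.appengine.api import memcache

def fetch_with_latency():
    rpc = memcache.create_rpc()
    async_result = memcache.Client().get_multi_async(['exampleKey'], rpc=rpc)
    # ... other application work or additional async calls ...
    values = async_result.get_result()
    # Hypothetical: latency of the underlying RPC as measured by the runtime,
    # not the time the application spent before calling get_result().
    return values, rpc.latency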
* If relevant, why are current approaches or workarounds insufficient?
Why the post-call hook and callbacks are insufficient:
We have been unable to find an accurate way to measure the latency of async memcache or datastore requests (sync requests are fine). This is because both the post-call hook and callbacks are triggered only when a blocking call is made on the async response (e.g. .wait() or .get_result()), which may be well after the async response was actually received. As far as we can tell, there is no other way to accurately measure the latency of the async request.
For example:
import time
import webapp2
from google.appengine.api import apiproxy_stub_map
from google.appengine.api import memcache

def post_call_hook(service, call, request, response, rpc=None, error=None):
    pass  # e.g. record time.time() as the end timestamp of the call

apiproxy_stub_map.apiproxy.GetPostCallHooks().Append('log hook', post_call_hook)

class MainPage(webapp2.RequestHandler):
    def get(self):
        async_result = memcache.Client().get_multi_async(['exampleKey'], rpc=memcache.create_rpc())
        time.sleep(5)              # unrelated application work before consuming the result
        async_result.get_result()  # the post-call hook only fires here, ~5 seconds later
If run, the time between the request being issued and the post-call hook firing will be about 5 seconds, not the ~2 ms the memcache request actually takes. Callbacks behave similarly. It makes sense that post-call hooks and callbacks are executed from within blocking calls such as get_result() and wait(), but it means there is no general way to measure latency with them.
I've attached a diagram showing how using the post-call hook results in incorrect latency measurements; a minimal sketch of that measurement pattern is included below.
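For completeness, here is a minimal sketch of the pre-/post-call-hook timing we have been using (assuming the pre-call hook receives the rpc argument the same way the post-call hook does). With the handler above, the elapsed time it computes absorbs the 5-second sleep rather than the actual memcache round trip:

import logging
import time
from google.appengine.api import apiproxy_stub_map

_rpc_start_times = {}

def timing_pre_call_hook(service, call, request, response, rpc=None):
    # Fires when the async call is issued.
    _rpc_start_times[id(rpc)] = time.time()

def timing_post_call_hook(service, call, request, response, rpc=None, error=None):
    # Fires only from get_result()/wait(), so 'elapsed' includes whatever the
    # application did in between (the sleep above), not just the RPC itself.
    elapsed = time.time() - _rpc_start_times.pop(id(rpc), time.time())
    logging.info('%s.%s measured at %.3f s', service, call, elapsed)

apiproxy_stub_map.apiproxy.GetPreCallHooks().Append('timing pre', timing_pre_call_hook)
apiproxy_stub_map.apiproxy.GetPostCallHooks().Append('timing post', timing_post_call_hook)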
* If relevant, what new use cases will this feature enable?
Provide developers using async requests a way to instrument their applications with accurate latency measurements of outbound calls to memcache and datastore so they can optimize performance.