Comments
su...@google.com <su...@google.com> #2
Hello,
Thanks for the feature request! This has been forwarded to the Product Engineering Team so that they may evaluate it. Note that there are no ETAs or guarantees of implementation for feature requests. All communication regarding this feature request is to be done here.
av...@ofx.com <av...@ofx.com> #3
Just hit the limit at 32 GB too. 64 GB would be better.
Description
Please describe your requested enhancement. Good feature requests will solve common problems or enable new use cases.
What you would like to accomplish:
We currently process individual CSV files from Cloud Storage via an event-driven pipeline that uses Google Cloud Functions (Python 3.x). We read these files in, apply some changes, and write them back to a separate Cloud Storage bucket.
Rarely (in about 0.1% of cases), these files are too large to fit within the 32 GB memory limit of the function we are using for processing, so we have to chunk the reads/writes as a workaround.
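For context, a minimal sketch of a pipeline like the one described above, assuming a 2nd gen CloudEvent-triggered function and the google-cloud-storage client; the output bucket name and the transform step are hypothetical placeholders:

import csv
import io

import functions_framework
from google.cloud import storage

OUTPUT_BUCKET = "my-output-bucket"  # hypothetical destination bucket

storage_client = storage.Client()


def transform(row):
    # Placeholder for the "apply some changes" step.
    return row


@functions_framework.cloud_event
def process_csv(cloud_event):
    # Triggered on object finalize in the source bucket.
    data = cloud_event.data
    bucket_name = data["bucket"]
    blob_name = data["name"]

    # Read the whole CSV into memory; this is what hits the 32 GB limit
    # for the largest ~0.1% of files.
    text = storage_client.bucket(bucket_name).blob(blob_name).download_as_text()
    rows = [transform(row) for row in csv.reader(io.StringIO(text))]

    # Write the transformed rows to the separate output bucket.
    out = io.StringIO()
    csv.writer(out).writerows(rows)
    storage_client.bucket(OUTPUT_BUCKET).blob(blob_name).upload_from_string(out.getvalue())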
How this might work:
Provide a 64 GB memory allocation option (as was done with the 32 GB tier, which is currently in preview).
If applicable, reasons why alternative solutions are not sufficient:
1. The current process writes one input file out as multiple output files, which causes additional effort downstream
2. Cloud Functions is fit for purpose for 99.9% of our use cases, so implementing workarounds for these edge cases using Compute Engine etc. would introduce additional operational effort and risk
Other information (workarounds you have tried, documentation consulted, etc):
Currently chunking the reads/writes into multiple output objects; a sketch of this workaround is below.
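A hedged sketch of that chunking workaround, assuming the streaming file-like blob.open() API from google-cloud-storage; the chunk size, part-naming scheme, and transform step are hypothetical placeholders:

import csv

from google.cloud import storage

CHUNK_ROWS = 1_000_000  # hypothetical chunk size tuned to stay under the memory limit

storage_client = storage.Client()


def transform(row):
    # Placeholder for the "apply some changes" step.
    return row


def chunked_copy(src_bucket, src_name, dst_bucket):
    # Stream the source CSV row by row instead of loading it all into memory.
    src = storage_client.bucket(src_bucket).blob(src_name)
    part = 0
    rows = []
    with src.open("rt") as reader:
        for row in csv.reader(reader):
            rows.append(transform(row))
            if len(rows) >= CHUNK_ROWS:
                _write_part(dst_bucket, src_name, part, rows)
                part += 1
                rows = []
    if rows:
        _write_part(dst_bucket, src_name, part, rows)


def _write_part(dst_bucket, src_name, part, rows):
    # Each chunk becomes its own output object, which is the downstream pain point.
    out = storage_client.bucket(dst_bucket).blob(f"{src_name}.part{part:04d}.csv")
    with out.open("wt") as writer:
        csv.writer(writer).writerows(rows)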