Status Update
Comments
ts...@google.com <ts...@google.com> #2
Any future updates to this feature will be posted here.
rp...@beneficiofacil.com.br <rp...@beneficiofacil.com.br> #3
I understand that the issue is potentially low priority and that numerous workarounds exist, but I would like to share some background info about this.
We have a Cloud SQL Second Gen instance that went nuts one day, consuming all the disk space in the universe until it reached the maximum disk capacity of ~9TB. That issue "self-resolved" (some Cloud magick is not bad...) and we did not see any impact on any of our queries, and none of our database users noticed.
This is to some extent good: something that would otherwise cause service disruption did not, the issue self-resolved, and the service continued to work. However, our data usage for the instance is around 20GB, yet our database now has 9TB of disk space allocated to it. Huge amounts of *wasted* disk space in the Cloud era.
I have lots of services that use this database, and rebuilding it from scratch (*shut down services*, *cause service disruption for users*, mysqldump, backup, stop old instance, start new one, use mysql to restore data, reconfigure all services to the new database, new IP, etc.) is laborious and, well, it can happen again in the future.
It would be a nice feature to reduce the size, but I understand as a developer that "it is not that simple". Lots of infrastructure and key design decisions prevent me as a user from "just changing the size down", but I also have no way/tools to help me move the relevant data (20GB) out of all that wasted reserved "empty" disk space I'm paying for, without causing service disruption. Instead of spending time testing and simulating the DB work to rebuild our DB, I would rather spend it improving our system for our customers. But I'll have to go to the office on a Sunday to make this slightly laborious change because I have no easy way around this one.
[Deleted User] <[Deleted User]> #4
[Deleted User] <[Deleted User]> #5
Perhaps a useful approach to this would be a duplicate function that lets me set the new DB size and keep all the other config the same.
Or even the ability to export the full config of the instance as a JSON file (or whatever) - then I could do that for both instances and figure out what the heck is different between them!!
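For what it's worth, the export-the-config-as-JSON part is already doable with the gcloud CLI today; here is a minimal sketch (instance names are placeholders, and it assumes an installed and authenticated Cloud SDK), not an official feature:

# Sketch only: dump both instances' configuration via `gcloud sql instances describe`
# and print the settings that differ. Instance names below are placeholders.
import json
import subprocess

def describe(instance: str) -> dict:
    out = subprocess.run(
        ["gcloud", "sql", "instances", "describe", instance, "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

old = describe("old-oversized-instance")   # placeholder name
new = describe("new-rightsized-instance")  # placeholder name

old_settings, new_settings = old.get("settings", {}), new.get("settings", {})
for key in sorted(set(old_settings) | set(new_settings)):
    if old_settings.get(key) != new_settings.get(key):
        print(f"{key}: {old_settings.get(key)!r} -> {new_settings.get(key)!r}")

That at least answers the "what is different between them" half; the resize itself still needs the feature requested here.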
[Deleted User] <[Deleted User]> #6
je...@panerabread.com <je...@panerabread.com> #7
The problem can compound if a customer then clones this db for other environments, as the excess (now-unused) disk space will also be reserved in the clones.
The end result can be a Cloud SQL bill that is dramatically higher than expected or forecasted.
mo...@google.com <mo...@google.com> #8
da...@flockfreight.com <da...@flockfreight.com> #9
I can confirm that this is a big problem for us as well. We rely on backup restores to create development environments, so the oversized disk now propagates to all development instances as well.
sd...@gmail.com <sd...@gmail.com> #10
If we have to restore a copy of a database to get back some data and then delete the restored DB, we have to do it in another instance; otherwise the first instance grows
and you have to pay for that extra space forever.
It's also annoying that if you have to make a copy of a DB for some test, you always have to do it in another instance and delete the test instance at the end;
otherwise you pay forever for unused free space.
And if you have a script that refers to the instance name, that is a problem because you have to change the script every time.
em...@gmail.com <em...@gmail.com> #11
Hope this issue will be solved soon
sa...@gmail.com <sa...@gmail.com> #12
jk...@gmail.com <jk...@gmail.com> #13
ag...@gmail.com <ag...@gmail.com> #14
[Deleted User] <[Deleted User]> #15
sh...@datacue.co <sh...@datacue.co> #16
Here's how it could work:
1. Create a new read replica (allow specifying a new size - if the new size is too small, this operation can fail with an error)
2. Once the read replica is created, allow triggering a failover to make the newly resized read replica the new master, and drop the existing master (with its excessive storage allocation).
Is that feasible? I understand it's easier said than done.
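For context, the replica-then-promote half of this already exists in gcloud; what's missing is being able to give the replica a smaller disk. A rough sketch of those two existing steps (placeholder instance names, Cloud SDK assumed):

# Sketch of the existing replica -> promote flow; names are placeholders.
# Note: today the replica cannot be created with less storage than the primary,
# which is exactly the missing piece this comment asks for.
import subprocess

def gcloud_sql(*args: str) -> None:
    subprocess.run(["gcloud", "sql", *args], check=True)

# 1. Create a read replica of the over-provisioned primary.
gcloud_sql("instances", "create", "my-replica",
           "--master-instance-name=my-primary")

# 2. Once it has caught up, promote it to a standalone instance.
gcloud_sql("instances", "promote-replica", "my-replica")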
[Deleted User] <[Deleted User]> #17
wa...@greenaerotech.com <wa...@greenaerotech.com> #18
[Deleted User] <[Deleted User]> #19
wa...@gmail.com <wa...@gmail.com> #20
[Deleted User] <[Deleted User]> #21
da...@afterbanks.com <da...@afterbanks.com> #22
[Deleted User] <[Deleted User]> #23
jo...@citrusad.com <jo...@citrusad.com> #24
[Deleted User] <[Deleted User]> #25
cp...@paraty.es <cp...@paraty.es> #26
dg...@dataseekers.es <dg...@dataseekers.es> #27
fj...@dataseekers.es <fj...@dataseekers.es> #28
er...@dataseekers.es <er...@dataseekers.es> #29
fr...@gmail.com <fr...@gmail.com> #30
jm...@dataseekers.es <jm...@dataseekers.es> #31
gi...@etorox.com <gi...@etorox.com> #32
vm...@dataseekers.es <vm...@dataseekers.es> #33
na...@google.com <na...@google.com> #34
nl...@eryxsoluciones.com.ar <nl...@eryxsoluciones.com.ar> #35
ma...@gmail.com <ma...@gmail.com> #36
mc...@sossego.com.br <mc...@sossego.com.br> #37
[Deleted User] <[Deleted User]> #38
ti...@peekandpoke.com <ti...@peekandpoke.com> #39
ma...@gmail.com <ma...@gmail.com> #40
ky...@lodgify.com <ky...@lodgify.com> #41
yo...@backup.affluent.io <yo...@backup.affluent.io> #42
We recently went through a project to clean up and remove a bunch of old data. We have multiple instances now where we're using >3TB less than the allotted storage, costing us > $1,000 a month for unused space.
pa...@spidergap.com <pa...@spidergap.com> #43
de...@pluscompany.com <de...@pluscompany.com> #44
Same comment here: using auto-scaling HDD size puts you at risk that a surge in storage usage leaves you stuck with a size increase for nothing. And the workaround of creating a new instance to downsize may be complicated for actual production databases.
Thanks
ab...@amperon.co <ab...@amperon.co> #45
[Deleted User] <[Deleted User]> #46
Paying double what we should be paying now due to performing a VACUUM on a very large table.
[Deleted User] <[Deleted User]> #47
ma...@gmail.com <ma...@gmail.com> #48
[Deleted User] <[Deleted User]> #49
jo...@cruxinformatics.com <jo...@cruxinformatics.com> #50
[Deleted User] <[Deleted User]> #51
co...@12parsecs.io <co...@12parsecs.io> #52
This not being addressed or acknowledged has me second-guessing my choice to go with GCP as a whole. I've been building up a POC startup over the last 6 months that I want to start pushing out more widely. Maybe now is the time to move away, given GCP's lack of development on basic functionality. "Auto storage increase" shouldn't have been pushed as a feature until scaling that automatic increase back down was also a feature, or it should at least have been marked as beta until there was a decent workaround to reduce storage.
js...@paloaltonetworks.com <js...@paloaltonetworks.com> #53
oc...@gmail.com <oc...@gmail.com> #54
ra...@nibblecomm.com <ra...@nibblecomm.com> #55
fa...@gmail.com <fa...@gmail.com> #56
[Deleted User] <[Deleted User]> #57
sv...@scnmedia.net <sv...@scnmedia.net> #58
ti...@gmail.com <ti...@gmail.com> #59
el...@gmail.com <el...@gmail.com> #60
cv...@redapt.com <cv...@redapt.com> #61
my...@gmail.com <my...@gmail.com> #62
fi...@teamapt.com <fi...@teamapt.com> #63
ju...@google.com <ju...@google.com> #64
cp...@paraty.es <cp...@paraty.es> #65
ca...@leega.com.br <ca...@leega.com.br> #66
yu...@cohere.io <yu...@cohere.io> #67
an...@redso.com.hk <an...@redso.com.hk> #68
ma...@gmail.com <ma...@gmail.com> #69
To indicate you are impacted please don't add a "+1" comment but rather click the star next to the bug id. Thx!
da...@sourcegraph.com <da...@sourcegraph.com> #70
ky...@lodgify.com <ky...@lodgify.com> #71
yu...@gridwise.io <yu...@gridwise.io> #72
de...@gmail.com <de...@gmail.com> #73
gr...@1click2deliver.com <gr...@1click2deliver.com> #74
ma...@gmail.com <ma...@gmail.com> #75
[Deleted User] <[Deleted User]> #76
le...@gmail.com <le...@gmail.com> #77
[Deleted User] <[Deleted User]> #78
I just did this, migrating from a mostly empty 2TB MySQL 5.7 database to a <300GB MySQL 8.0 database. The instructions here will work:
The main difference is that the IP address of the external source database will be the public IP of your current Google Cloud SQL database. You may have to create a public IP temporarily to facilitate the transfer.
After you have a live replica with a smaller footprint, you can promote it and migrate your dependent services to it.
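In case it helps anyone following the same path: the temporary public IP mentioned above can be toggled from gcloud. A small sketch with a placeholder instance name (this covers only the IP toggling, not the replication setup itself):

# Sketch: temporarily give the source Cloud SQL instance a public IP for the
# external replication transfer, then remove it afterwards. Placeholder name.
import subprocess

def patch(instance: str, *flags: str) -> None:
    subprocess.run(["gcloud", "sql", "instances", "patch", instance, *flags], check=True)

patch("old-oversized-instance", "--assign-ip")      # enable a public IP
# ... set up replication to the new right-sized instance, let it catch up,
# promote it, and repoint your services ...
patch("old-oversized-instance", "--no-assign-ip")   # remove the temporary public IP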
ia...@upskillpeople.com <ia...@upskillpeople.com> #79
ki...@gmail.com <ki...@gmail.com> #80
si...@tumelo.com <si...@tumelo.com> #81
ma...@gmail.com <ma...@gmail.com> #82
sa...@gmail.com <sa...@gmail.com> #83
mo...@alphathena.com <mo...@alphathena.com> #84
[Deleted User] <[Deleted User]> #85
[Deleted User] <[Deleted User]> #86
fe...@gmail.com <fe...@gmail.com> #87
er...@paslists.com <er...@paslists.com> #88
pe...@emarsys.com <pe...@emarsys.com> #89
vi...@gmail.com <vi...@gmail.com> #90
ga...@gmail.com <ga...@gmail.com> #91
[Deleted User] <[Deleted User]> #92
[Deleted User] <[Deleted User]> #93
en...@globalfishingwatch.org <en...@globalfishingwatch.org> #94
ju...@globalfishingwatch.org <ju...@globalfishingwatch.org> #95
rd...@gmail.com <rd...@gmail.com> #96
al...@globalfishingwatch.org <al...@globalfishingwatch.org> #97
ki...@gmail.com <ki...@gmail.com> #98
[Deleted User] <[Deleted User]> #99
mo...@gmail.com <mo...@gmail.com> #100
vi...@gmail.com <vi...@gmail.com> #101
he...@gmail.com <he...@gmail.com> #102
ni...@gmail.com <ni...@gmail.com> #103
dr...@dericktronix.com <dr...@dericktronix.com> #104
[Deleted User] <[Deleted User]> #105
ia...@croptix.solutions <ia...@croptix.solutions> #106
om...@gmail.com <om...@gmail.com> #107
jc...@repairpal.com <jc...@repairpal.com> #108
[Deleted User] <[Deleted User]> #109
ki...@gmail.com <ki...@gmail.com> #110
ba...@gmail.com <ba...@gmail.com> #111
la...@wetranscloud.com <la...@wetranscloud.com> #112
ha...@iggstrom.com <ha...@iggstrom.com> #113
[Deleted User] <[Deleted User]> #114
dg...@gmail.com <dg...@gmail.com> #115
pa...@lawndoctor.com <pa...@lawndoctor.com> #116
na...@bitqit.com <na...@bitqit.com> #117
re...@amperon.co <re...@amperon.co> #118
+1
br...@aylien.com <br...@aylien.com> #119
[Deleted User] <[Deleted User]> #120
cs...@monoprix.fr <cs...@monoprix.fr> #121
we...@insert.com.pl <we...@insert.com.pl> #122
ag...@gmail.com <ag...@gmail.com> #123
ro...@gmail.com <ro...@gmail.com> #124
sa...@veolia.com <sa...@veolia.com> #125
as...@gmail.com <as...@gmail.com> #126
It's only been a *few years* since the original issue was raised...
Or is this such a cash cow for you that fixing it would hurt your profits?
+1
pa...@google.com <pa...@google.com> #127
va...@kramp.com <va...@kramp.com> #128
[Deleted User] <[Deleted User]> #129
al...@gmail.com <al...@gmail.com> #130
se...@piertwo.com <se...@piertwo.com> #131
[Deleted User] <[Deleted User]> #132
ak...@gmail.com <ak...@gmail.com> #133
ra...@gmail.com <ra...@gmail.com> #134
[Deleted User] <[Deleted User]> #135
pa...@weduu.com <pa...@weduu.com> #136
pe...@gmail.com <pe...@gmail.com> #137
[Deleted User] <[Deleted User]> #138
[Deleted User] <[Deleted User]> #139
ch...@primasoftware.com <ch...@primasoftware.com> #140
sh...@gmail.com <sh...@gmail.com> #141
di...@fincatto.com <di...@fincatto.com> #142
xm...@gmail.com <xm...@gmail.com> #143
There is no way for me to decrease the size; I'm stuck. Because:
It's not possible to export/import an SQL file, since 1.8 TB takes more than a week to export/import, even though I use the highest-spec VM.
Google Cloud SQL also doesn't allow me to access the MySQL data folder, so it's not possible to export by copying the data folder either.
I planned to create a read replica outside of Google Cloud SQL, then switch the master to it and get rid of Cloud SQL, but it's impossible: the SQL export/import takes more than a week, and Google Cloud only keeps 7 days of MySQL binlogs, so the new instance can't catch up with the master because of the missing binlogs.
I'M STUCK ON GOOGLE CLOUD SQL.
sh...@gmail.com <sh...@gmail.com> #144
aj...@google.com <aj...@google.com> #145
xmripper: while we don't support a managed solution to decrease storage size at this time, we do have a workaround that involves migrating your overprovisioned database to a right-sized Cloud SQL for MySQL instance.
Most customers have had success with Database Migration Service, which offers a seamless managed tool for migrations. We have had success with databases your size and larger:
For very large MySQL databases, we have customers migrating through our external server replication tool using a custom import:
In short, you can use this guide with third-party tools like mydumper and myloader to dump and import multi-terabyte databases more rapidly than would be possible with MySQL's native mysqldump and import utilities.
Best,
Akhil, Cloud SQL for MySQL Product Manager
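For anyone evaluating the mydumper/myloader route mentioned in #145, here is a rough illustration of what the parallel dump and load can look like. Hosts, credentials, and paths are placeholders, and flag names should be checked against the mydumper release you actually have:

# Illustrative only: parallel dump with mydumper and restore with myloader.
# Hosts, credentials, and paths are placeholders; verify flags with `mydumper --help`.
import subprocess

DUMP_DIR = "/data/dump"  # placeholder local path with enough free space

# Dump the over-provisioned source in parallel.
subprocess.run([
    "mydumper",
    "--host", "10.0.0.10",      # placeholder: source instance IP
    "--user", "migration",      # placeholder user
    "--password", "secret",     # placeholder password
    "--threads", "8",
    "--compress",
    "--outputdir", DUMP_DIR,
], check=True)

# Load into the new, right-sized Cloud SQL instance in parallel.
subprocess.run([
    "myloader",
    "--host", "10.0.0.20",      # placeholder: target instance IP
    "--user", "migration",
    "--password", "secret",
    "--threads", "8",
    "--directory", DUMP_DIR,
], check=True)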
ma...@cloudomation.com <ma...@cloudomation.com> #146
an...@gmail.com <an...@gmail.com> #147
ra...@snapchat.com <ra...@snapchat.com> #148
sh...@gmail.com <sh...@gmail.com> #149
ha...@feichtl.com <ha...@feichtl.com> #150
ak...@dac.co.jp <ak...@dac.co.jp> #151
+1
da...@tvh.com <da...@tvh.com> #152
bs...@gmail.com <bs...@gmail.com> #153
pr...@rtbhouse.com <pr...@rtbhouse.com> #154
vl...@aytm.com <vl...@aytm.com> #155
328 GB of 871 GB storage used after importing an SQL file, WTF?
I started the instance with 400GB, space auto-increased to 850GB, then vacuumed (?) down to 328GB.
The on-prem instance uses ~350GB.
Why should we pay for 400GB of unused SSD?
[Deleted User] <[Deleted User]> #156
an...@mantel.com <an...@mantel.com> #157
uq...@gmail.com <uq...@gmail.com> #158
ja...@atlas.health <ja...@atlas.health> #159
[Deleted User] <[Deleted User]> #160
[Deleted User] <[Deleted User]> #161
an...@gocardless.com <an...@gocardless.com> #162
do...@herondata.io <do...@herondata.io> #163
an...@gmail.com <an...@gmail.com> #164
le...@gmail.com <le...@gmail.com> #165
ay...@gocomet.com <ay...@gocomet.com> #166
ky...@lodgify.com <ky...@lodgify.com> #167
rt...@gmail.com <rt...@gmail.com> #168
[Deleted User] <[Deleted User]> #169
I have the same issue - after the initial import of the databases, the storage capacity is roughly 200% of what we actually use for data... (after the WAL files have timed out and been removed)
[Deleted User] <[Deleted User]> #170
pa...@gmail.com <pa...@gmail.com> #171
be...@alphachain.io <be...@alphachain.io> #172
st...@iturn.it <st...@iturn.it> #173
ib...@gmail.com <ib...@gmail.com> #174
[Deleted User] <[Deleted User]> #175
am...@google.com <am...@google.com> #176
sn...@managedmethods.com <sn...@managedmethods.com> #177
yu...@gmail.com <yu...@gmail.com> #178
[Deleted User] <[Deleted User]> #179
al...@gmail.com <al...@gmail.com> #180
pa...@orbitremit.com <pa...@orbitremit.com> #181
za...@bosslogics.com <za...@bosslogics.com> #182
br...@technologik.io <br...@technologik.io> #183
mi...@28east.co.za <mi...@28east.co.za> #184
pa...@gmail.com <pa...@gmail.com> #185
ba...@gmail.com <ba...@gmail.com> #186
da...@paperflow.com <da...@paperflow.com> #187
vk...@anna.money <vk...@anna.money> #188
ma...@proexe.pl <ma...@proexe.pl> #189
se...@epam.com <se...@epam.com> #190
+1
sy...@brightedge.com <sy...@brightedge.com> #191
ge...@tactable.io <ge...@tactable.io> #192
ma...@adviqoapi.com <ma...@adviqoapi.com> #193
ju...@sneakybox.biz <ju...@sneakybox.biz> #194
ma...@megon.com.br <ma...@megon.com.br> #195
mi...@splitmedialabs.com <mi...@splitmedialabs.com> #196
dh...@gmail.com <dh...@gmail.com> #197
ok...@dto.kemkes.go.id <ok...@dto.kemkes.go.id> #198
of...@wilburlabs.com <of...@wilburlabs.com> #199
wp...@nyu.edu <wp...@nyu.edu> #200
cy...@xin-yin.net <cy...@xin-yin.net> #201
[Deleted User] <[Deleted User]> #202
cl...@bv.com.br <cl...@bv.com.br> #203
bu...@kllr.io <bu...@kllr.io> #204
lu...@purpleocean.eu <lu...@purpleocean.eu> #205
pa...@gmail.com <pa...@gmail.com> #206
br...@askgms.com <br...@askgms.com> #207
I'm pretty sure we can all guess why Google won't ever work on this issue. Each and every organization posting here is dramatically overpaying for storage, which is a perk of this particular billing/allocation scheme. The crazy number of hoops required to migrate and reduce allocation isn't a bug - it's a feature of Google's design, and I'd be shocked if they ever changed it, since you'll only hit this issue after you're already deep into the ecosystem.
st...@delcom.nl <st...@delcom.nl> #208
zo...@aliz.ai <zo...@aliz.ai> #209
vl...@gmail.com <vl...@gmail.com> #210
Looks like a lot of people here fell into the same trap as I did.
My case was:
- restore a dump to a Cloud SQL instance
- storage used is 2x that of the original (on-prem) instance
- after some time (~1 week) the used storage decreased
- but we can't decrease the storage of the SQL instance.
I found a solution for myself: just disable "Point-in-time recovery" before restoring the DB and enable it again after the restore.
In my case the difference is 985GB vs 380GB.
Hope it helps.
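In case it saves someone a lookup: the point-in-time-recovery toggle from #210 is also available via gcloud. A minimal sketch with a placeholder instance name; the flag below applies to PostgreSQL instances, while for MySQL the corresponding setting is the binary log (--[no-]enable-bin-log):

# Sketch of the workaround above via gcloud; instance name is a placeholder.
# --[no-]enable-point-in-time-recovery applies to PostgreSQL instances;
# for MySQL the corresponding setting is --[no-]enable-bin-log.
import subprocess

def patch(instance: str, flag: str) -> None:
    subprocess.run(["gcloud", "sql", "instances", "patch", instance, flag], check=True)

# 1. Turn point-in-time recovery off before the restore.
patch("my-instance", "--no-enable-point-in-time-recovery")

# 2. ... run the dump restore here ...

# 3. Turn it back on once the restore has finished.
patch("my-instance", "--enable-point-in-time-recovery")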
br...@askgms.com <br...@askgms.com> #211
@vl...@gmail.com That's a really helpful tip! It's wild that point-in-time recovery can amplify on-DB storage that much, but it makes sense. Next time we restore we'll give that a shot.
la...@gmail.com <la...@gmail.com> #212
[Deleted User] <[Deleted User]> #213
[Deleted User] <[Deleted User]> #214
sh...@keyvalue.systems <sh...@keyvalue.systems> #215
ch...@withtally.com <ch...@withtally.com> #216
ja...@gmail.com <ja...@gmail.com> #217
[Deleted User] <[Deleted User]> #218
jc...@gmail.com <jc...@gmail.com> #219
cy...@gmail.com <cy...@gmail.com> #220
al...@gmail.com <al...@gmail.com> #221
For example, AWS allows you to rename the instances so you can change the name of the old instance and rename the new instance to the original name. So the operation does not require any work from clients. Is this possible in GCP? Here are the AWS docs:
fa...@gmail.com <fa...@gmail.com> #222
th...@gmail.com <th...@gmail.com> #223
br...@plexm.com <br...@plexm.com> #224
ru...@simplify.jobs <ru...@simplify.jobs> #225
We have ~10GB of data and our instance is reserving ~600GB. We attempted a migration using import/export and ran into hundreds of small issues, then tried the Database Migration Service, which also ran into internal errors.
I hate it here. Why can I not just clone this database with less space reserved? This really does not need to be this insanely complex.
br...@gmail.com <br...@gmail.com> #226
du...@rouseservices.com <du...@rouseservices.com> #227
sh...@yapily.com <sh...@yapily.com> #228
ru...@edgeandnode.com <ru...@edgeandnode.com> #229
me...@gmail.com <me...@gmail.com> #230
st...@loblaw.ca <st...@loblaw.ca> #231
la...@gmail.com <la...@gmail.com> #232
mv...@mercadona.es <mv...@mercadona.es> #233
ad...@deimos.co.za <ad...@deimos.co.za> #234
fi...@embriotech.ch <fi...@embriotech.ch> #235
ce...@libeo.io <ce...@libeo.io> #236
ro...@businessmind.es <ro...@businessmind.es> #237
+1
da...@optimumfleethealth.com <da...@optimumfleethealth.com> #238
we...@safigen.com <we...@safigen.com> #239
[Deleted User] <[Deleted User]> #240
ga...@cappuccino.fm <ga...@cappuccino.fm> #241
ai...@google.com <ai...@google.com> #242
ma...@gmail.com <ma...@gmail.com> #243
yu...@getcruise.com <yu...@getcruise.com> #244
mo...@gmail.com <mo...@gmail.com> #245
ag...@gmail.com <ag...@gmail.com> #246
ek...@stargcp.com <ek...@stargcp.com> #247
dp...@petabloc.com <dp...@petabloc.com> #248
am...@google.com <am...@google.com> #249
[Deleted User] <[Deleted User]> #250
mi...@gmail.com <mi...@gmail.com> #251
al...@anymindgroup.com <al...@anymindgroup.com> #252
th...@deliveree.com <th...@deliveree.com> #253
jo...@homedepot.com <jo...@homedepot.com> #254
ro...@gmail.com <ro...@gmail.com> #255
mo...@incorta.com <mo...@incorta.com> #256
va...@jetbrains.com <va...@jetbrains.com> #257
+1
ty...@eccogroupusa.com <ty...@eccogroupusa.com> #258
wi...@gmail.com <wi...@gmail.com> #259
[Deleted User] <[Deleted User]> #260
ok...@gmail.com <ok...@gmail.com> #261
be...@gmail.com <be...@gmail.com> #262
ar...@gmail.com <ar...@gmail.com> #263
19...@gmail.com <19...@gmail.com> #264
ma...@googlemail.com <ma...@googlemail.com> #265
kr...@gmail.com <kr...@gmail.com> #266
[Deleted User] <[Deleted User]> #267
[Deleted User] <[Deleted User]> #268
ma...@bendigoadelaide.com.au <ma...@bendigoadelaide.com.au> #269
ni...@gmail.com <ni...@gmail.com> #270
du...@gmail.com <du...@gmail.com> #271
a....@globalgames.net <a....@globalgames.net> #272
ot...@teamworkcommerce.com <ot...@teamworkcommerce.com> #273
sh...@palletapp.com <sh...@palletapp.com> #274
al...@noogata.com <al...@noogata.com> #275
ni...@onxmaps.com <ni...@onxmaps.com> #276
th...@empresometro.com.br <th...@empresometro.com.br> #277
sa...@doit.com <sa...@doit.com> #278
tw...@skydreams.nl <tw...@skydreams.nl> #279
ha...@onemount.com <ha...@onemount.com> #280
ka...@pm.me <ka...@pm.me> #281
da...@ciro.io <da...@ciro.io> #282
Please implement this feature!
-- Xoogler
tw...@skydreams.nl <tw...@skydreams.nl> #283
sh...@gmail.com <sh...@gmail.com> #284
+1
[Deleted User] <[Deleted User]> #285
an...@gmail.com <an...@gmail.com> #286
fe...@smart-pricer.com <fe...@smart-pricer.com> #287
ni...@gmail.com <ni...@gmail.com> #288
ar...@gmail.com <ar...@gmail.com> #289
ji...@investorhub.com <ji...@investorhub.com> #290
sc...@whoosh.io <sc...@whoosh.io> #291
ma...@wises.com.br <ma...@wises.com.br> #292
ra...@gaida.tech <ra...@gaida.tech> #293
ma...@gendigital.com <ma...@gendigital.com> #294
[Deleted User] <[Deleted User]> #295
pa...@serveracademy.com <pa...@serveracademy.com> #296
ma...@aniline.io <ma...@aniline.io> #297
ja...@tautona.ai <ja...@tautona.ai> #298
[Deleted User] <[Deleted User]> #299
am...@joyteam.games <am...@joyteam.games> #300
mr...@gmail.com <mr...@gmail.com> #301
ma...@wifworld.com <ma...@wifworld.com> #302
da...@m.co <da...@m.co> #303
sn...@gmail.com <sn...@gmail.com> #304
ja...@gmail.com <ja...@gmail.com> #305
ky...@orderlyhealth.com <ky...@orderlyhealth.com> #306
ri...@mc1global.com <ri...@mc1global.com> #307
hi...@cloudsales.sa <hi...@cloudsales.sa> #308
er...@gmail.com <er...@gmail.com> #309
tr...@gmail.com <tr...@gmail.com> #310
lu...@gendigital.com <lu...@gendigital.com> #311
se...@gmail.com <se...@gmail.com> #312
ba...@gmail.com <ba...@gmail.com> #313
ke...@micepadapp.com <ke...@micepadapp.com> #314
i....@redmed.ge <i....@redmed.ge> #315
je...@integral.xyz <je...@integral.xyz> #316
vi...@rivile.lt <vi...@rivile.lt> #317
So it's 2024, and Google still doesn't care that its cloud users constantly have to deal with cases like the one in the attachment.
wi...@bexrealty.com <wi...@bexrealty.com> #318
jo...@synd.io <jo...@synd.io> #319
jo...@cotton.dev <jo...@cotton.dev> #320
jo...@synd.io <jo...@synd.io> #321
we...@sysmo.com.br <we...@sysmo.com.br> #322 Restricted+
ka...@gmail.com <ka...@gmail.com> #323
sa...@gmail.com <sa...@gmail.com> #324
qa...@gmail.com <qa...@gmail.com> #325
[Deleted User] <[Deleted User]> #326
ad...@hopper.com <ad...@hopper.com> #327
ba...@gmail.com <ba...@gmail.com> #328
bv...@gocardless.com <bv...@gocardless.com> #329
se...@gmail.com <se...@gmail.com> #330
va...@google.com <va...@google.com>
sa...@google.com <sa...@google.com> #331
[Deleted User] <[Deleted User]> #332
te...@gmail.com <te...@gmail.com> #333
[Deleted User] <[Deleted User]> #334
ku...@google.com <ku...@google.com>
ju...@clever.gy <ju...@clever.gy> #335
ma...@ingka.ikea.com <ma...@ingka.ikea.com> #336
ma...@gmail.com <ma...@gmail.com> #337
fr...@ggl.life <fr...@ggl.life> #338
sg...@google.com <sg...@google.com> #339
Hi @gcppit-team@google.com Team -
Since this request has been open for years now, do we have any progress or updates on this feature request? Thank you in advance.
va...@gmail.com <va...@gmail.com> #340
tm...@gmail.com <tm...@gmail.com> #341
fb...@gmail.com <fb...@gmail.com> #342
ad...@cross-entropy.com <ad...@cross-entropy.com> #343
ma...@duinker.eu <ma...@duinker.eu> #344
ko...@septeni-incubate.co.jp <ko...@septeni-incubate.co.jp> #345
ha...@donuts.ne.jp <ha...@donuts.ne.jp> #346
ja...@simplified.co <ja...@simplified.co> #347
pc...@mediarithmics.com <pc...@mediarithmics.com> #348
pr...@gmail.com <pr...@gmail.com> #349
sh...@adevinta.com <sh...@adevinta.com> #350
ga...@ford.com <ga...@ford.com> #351
pi...@gmail.com <pi...@gmail.com> #352
ja...@cart.com <ja...@cart.com> #353
---> P2 since 2017 ---- 1st upvote in 2025 ---- It's making them $$$ to do NOTHING ---- IssueTracker is where concerns go to be buried in the sands of time...