Assigned
Status Update
Comments
al...@google.com <al...@google.com> #2
I have filed your feature request with our product team. However, there is no guarantee that this feature will be implemented, and no ETA can be provided. Rest assured that your feedback is always taken into account, as it allows us to improve the platform.
Any future updates to this feature will be posted here.
lu...@ciandt.com <lu...@ciandt.com> #3
Hi,
I understand that this issue is potentially low priority and that numerous workarounds exist, but I would like to share some background about it.
We have a Cloud SQL Second Gen instance that went nuts one day, consuming all the disk space in the universe until it reached the maximum disk capacity of ~9TB. That issue "self resolved" (some Cloud magick is not bad...), we did not see any impact on any of our queries, and none of our database users noticed.
This is to some extent good: something that would otherwise have caused service disruption did not, the issue self-resolved, and the service continued to work. However, our actual data usage for the instance is around 20GB, yet our database now has 9TB of disk space allocated to it. Huge amounts of *wasted* disk space in the Cloud era.
I have lots of services that use this database, and rebuilding it from scratch (*shut down services*, *cause service disruption for users*, mysqldump, backup, stop the old instance, start a new one, use mysql to restore the data, reconfigure all services to the new database, new IP, etc.) is laborious and, well, can happen again in the future.
It would be a nice feature to reduce the size, but I understand as a developer that "it is not that simple". Lots of infrastructure and key design decisions prevent me as a user from "just changing the size down", but I also have no way/tools to help me move the relevant data (20GB) out of all that wasted, reserved "empty" disk space I'm paying for without causing service disruption. Instead of spending time testing and simulating the DB work needed to rebuild our DB, I would like to spend it improving our system for our customers. But I'll have to go to the office on a Sunday to make this slightly laborious change because I have no easy way around this one.
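For anyone landing here, a rough sketch of that dump-and-restore workaround with gcloud (instance, bucket, database, and size values are placeholders; this still implies a write freeze or downtime window while services are repointed):
# export the real data (~20GB) to a bucket the instance's service account can write to
gcloud sql export sql old-instance gs://my-bucket/dump.sql.gz --database=mydb
# create a replacement instance with a sensibly sized disk
gcloud sql instances create new-instance --database-version=MYSQL_5_7 --tier=db-n1-standard-2 --storage-size=50GB --region=us-central1
# import the dump, then repoint every service at the new instance and delete the old one
gcloud sql import sql new-instance gs://my-bucket/dump.sql.gz --database=mydb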
lu...@gmail.com <lu...@gmail.com> #4
+1 to this.
[Deleted User] <[Deleted User]> #5
I've wasted hours on this problem and still haven't found why my new database won't connect to my infrastructure.
Perhaps a useful approach to this would be a duplicate function that lets me set the new DB size and keep all the other config the same.
Or even the ability to export the full config of the instance as a JSON file (or whatever) - then I could do that for both instances and figure out what the heck is different between them!!
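For what it's worth, part of the second idea is already possible from the CLI: most of an instance's settings can be dumped as JSON and diffed, which at least helps answer the "what is different between them" question (instance names below are placeholders):
gcloud sql instances describe old-instance --format=json > old.json
gcloud sql instances describe new-instance --format=json > new.json
diff old.json new.json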
fr...@ciphertracelabs.com <fr...@ciphertracelabs.com> #6
deleted
[Deleted User] <[Deleted User]> #7
This issue can have a major side effect from a cost perspective as well. If a customer uses a "temp" table for a data-load or test, the database will grow to make room for that table. However, once that data-load is complete, there is no way to shrink the DB back to its original size.
The problem can compound if a customer then clones this db for other environments, as the excess (now-unused) disk space will also be reserved in the clones.
The end result can be a Cloud SQL bill that is dramatically higher than expected or forecast.
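To make that concrete with purely illustrative numbers (actual storage pricing varies by region and disk type; assume roughly $0.17 per GB per month for SSD):
9,000 GB allocated x $0.17/GB/month ≈ $1,530/month per instance
20 GB actually used x $0.17/GB/month ≈ $3.40/month
4 instances (the original plus 3 clones) ≈ $6,120/month, almost all of it for empty space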
jp...@stanford.edu <jp...@stanford.edu> #8
deleted
xa...@shipfix.com <xa...@shipfix.com> #9
> The problem can compound if a customer then clones this db for other environments, as the excess (now-unused) disk space will also be reserved in the clones.
I can confirm that this is a big problem for us as well. We rely on backup restores to create development environments, so the oversized disk would now propagate to all development instances as well.
em...@gmail.com <em...@gmail.com> #10
This is a big problem for us too.
If we have to restore a copy of a database to recover some data and then delete the restored copy, we have to do it in another instance; otherwise the first instance grows
and we have to pay for that extra space forever.
It's also annoying that if you need a copy of a database for some test, you always have to do it in another instance and delete the test instance at the end;
otherwise you pay forever for unused free space.
And if you have scripts that refer to the instance name, that is a problem because you have to change the scripts every time.
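A rough sketch of that "restore into a throwaway instance" pattern with gcloud, for reference (names are placeholders; note that the restore target generally needs the same database version and at least as much storage as the source instance):
# find the backup to restore from the production instance
gcloud sql backups list --instance=prod-instance
# create a temporary instance to receive the restore
gcloud sql instances create temp-restore --database-version=MYSQL_5_7 --tier=db-n1-standard-2 --region=us-central1
# restore the backup into the temporary instance instead of growing prod
gcloud sql backups restore BACKUP_ID --restore-instance=temp-restore --backup-instance=prod-instance
# copy out whatever data you needed, then delete it so the disk stops billing
gcloud sql instances delete temp-restore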
[Deleted User] <[Deleted User]> #11
To be honest, I was a bit surprised when I found out that you can't decrease the SQL instance size, and I'm surprised that after two years not much has been done. It happens often that you first store data in the DB and compress it later, or that you only need the data for a certain amount of time. It's a waste of space and money for customers who are paying for something they are not using.
I hope this issue will be solved soon.
sa...@hsbc.co.in <sa...@hsbc.co.in> #12
Need a way out myself too :/ We run ops 24/7 and I want an easy way to reduce size w/o downtime.
ji...@aeris.net <ji...@aeris.net> #13
+1 ... we had a runaway storage issue that was blowing up a table's size; that was solved, but now we can't resize without significant downtime. This is a very real issue that I think a lot of other people are facing. Would love to see some movement on it!
ve...@gmail.com <ve...@gmail.com> #14
+1
[Deleted User] <[Deleted User]> #15
+1; Also, every time we need to resize, the whole database needs to be dumped to an SQL/CSV file and then loaded back up, which takes ages.
ja...@moove.ai <ja...@moove.ai> #16
+1 - an issue caused the DB size to balloon to 9TB, and I'm using only a fraction of this storage. If reducing the DB size in place is not feasible, perhaps a multi-step way to live-migrate to a newly resized DB could be an option.
Here's how it could work:
1. Create a new read replica (allow specifying a new size - if the new size is too small, this operation can fail with an error)
2. Once the read replica is created, allow triggering a failover to make the newly resized read replica the new master and drop the existing master (with its excessive storage allocation).
Is that feasible? I understand it's easier said than done.
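A rough sketch of that flow with today's gcloud commands (names are placeholders). The missing piece is exactly step 1: currently a replica can't be created with less storage than its primary, which is what this request asks to change, so the commands below only cover the mechanics of replacing the primary:
# create a read replica of the oversized primary
gcloud sql instances create my-replica --master-instance-name=my-primary
# once replication has caught up, promote the replica to a standalone primary
# (note this is a promotion, not a managed failover, so applications must be repointed)
gcloud sql instances promote-replica my-replica
# repoint applications at my-replica, then drop the oversized instance
gcloud sql instances delete my-primary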
[Deleted User] <[Deleted User]> #17
+1
de...@blockdaemon.com <de...@blockdaemon.com> #18
+1
br...@sada.com <br...@sada.com> #19
+1
nl...@gmail.com <nl...@gmail.com> #20
+1
[Deleted User] <[Deleted User]> #21
+1
vi...@parade.ai <vi...@parade.ai> #22
+1
va...@google.com <va...@google.com>
ku...@google.com <ku...@google.com>
al...@accenture.com <al...@accenture.com> #23
+1
an...@gicholdings2022.com <an...@gicholdings2022.com> #24
+1
zn...@woolworths.com.au <zn...@woolworths.com.au> #25 Restricted+
Comment has been deleted.
gr...@labourseauxlivres.fr <gr...@labourseauxlivres.fr> #26
+1 pleaseeee
Description
Users would like to batch-import data from a series of CSV files using filename wildcards.
The current Cloud SQL interface only allows importing one CSV file at a time.
They would like a user interface or API to handle the batch import without having to design a system to monitor imports themselves.
Steps to reproduce the issue:
Run the 'gcloud sql import csv' command using a wildcard * in the URI of the imported file.
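For illustration, the wildcard invocation that is being requested and the per-file driving that currently has to be done client-side instead (bucket, database, and table names are made up):
# requested: let a single import expand the wildcard server-side (fails today)
gcloud sql import csv my-instance gs://my-bucket/exports/part-*.csv --database=mydb --table=mytable
# today each file has to be submitted as a separate import operation
gsutil ls gs://my-bucket/exports/part-*.csv | while read uri; do
  gcloud sql import csv my-instance "$uri" --database=mydb --table=mytable --quiet
done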