Comments
is...@gmail.com <is...@gmail.com> #2
I have filed your feature request with our product team. However, there is no guarantee that this feature will be implemented, and an ETA cannot be provided. Rest assured that your feedback is always taken into account, as it helps us improve the platform.
Any future updates to this feature will be posted here.
mc...@google.com <mc...@google.com> #3
Hi,
I understand that this issue is potentially low priority and that numerous workarounds exist, but I would like to share some background on it.
We have a Cloud SQL Second Generation instance that went nuts one day, consuming disk space until it reached the maximum capacity of ~9TB. The issue "self resolved" (some Cloud magic is not bad...), and we saw no impact on any of our queries; none of our database users noticed.
This is good to some extent: something that would otherwise cause a service disruption did not, the issue self-resolved, and the service kept working. However, our actual data usage for the instance is around 20GB, yet the database now has 9TB of disk space allocated to it. A huge amount of *wasted* disk space in the Cloud era.
I have lots of services that use this database, and rebuilding it from scratch (*shut down services*, *cause a service disruption for users*, mysqldump, back up, stop the old instance, start a new one, use mysql to restore the data, reconfigure all services to the new database and its new IP, etc.) is laborious and, well, it can happen again in the future.
It would be a nice feature to be able to reduce the size. As a developer I understand that "it is not that simple": lots of infrastructure and key design decisions prevent me, as a user, from "just changing the size down". But I also have no way or tools to help me move the relevant data (20GB) out of all that reserved "empty" disk space I am paying for, without causing a service disruption. Instead of spending time testing and simulating the work to rebuild our DB, I would rather use that time to improve our system for our customers. But I will have to go to the office on a Sunday to make this slightly laborious change, because I have no easy way around this one.
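For reference, a rough sketch of that export/rebuild workaround using the standard gcloud commands. Instance names, bucket, database name, tier, and target size below are all placeholders, and exact flags may vary by gcloud version:

    # Export from the oversized instance to a Cloud Storage bucket
    gcloud sql export sql old-instance gs://my-bucket/dump.sql.gz --database=mydb

    # Create a replacement instance with right-sized storage
    gcloud sql instances create new-instance \
        --database-version=MYSQL_5_7 --tier=db-n1-standard-2 --storage-size=25GB

    # Import into the new instance, then repoint every service to its new IP
    gcloud sql import sql new-instance gs://my-bucket/dump.sql.gz --database=mydb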
[Deleted User] <[Deleted User]> #4
+1 to this.
il...@afti.ca <il...@afti.ca> #5
I've wasted hours on this problem and still haven't figured out why my new database won't connect to my infrastructure.
Perhaps a useful approach would be a duplicate function that lets me set the new DB size and keeps all the other config the same.
Or even the ability to export the full config of the instance as a JSON file (or whatever) - then I could do that for both instances and figure out what the heck is different between them!
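In the meantime, something close to that config export already exists via gcloud; a minimal sketch, assuming two hypothetical instance names:

    # Dump each instance's full configuration as JSON and compare
    gcloud sql instances describe old-instance --format=json > old.json
    gcloud sql instances describe new-instance --format=json > new.json
    diff old.json new.json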
so...@gmail.com <so...@gmail.com> #7
This issue can have a major side effect from a cost perspective as well. If a customer uses a temporary table for a data load or a test, the database will grow to make room for that table. However, once the data load is complete, there is no way to shrink the DB back to its original size.
The problem can compound if a customer then clones this db for other environments, as the excess (now-unused) disk space will also be reserved in the clones.
The end result can be a Cloud SQL bill that is dramatically higher than expected or forecast.
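You can see the provisioned size that a clone inherits directly; a small sketch with hypothetical instance names:

    # Provisioned disk size (GB) on the source and on a clone - the clone
    # reserves the full amount even if most of it is unused
    gcloud sql instances describe prod-instance --format="value(settings.dataDiskSizeGb)"
    gcloud sql instances describe prod-clone --format="value(settings.dataDiskSizeGb)"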
ni...@avast.com <ni...@avast.com> #9
> The problem can compound if a customer then clones this db for other environments, as the excess (now-unused) disk space will also be reserved in the clones.
I can confirm that this is a big problem for us as well. We rely on backup restores to create development environments, so the oversized disk would now have to propagate to all development instances as well.
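For context, that workflow is the standard restore-to-another-instance path; a sketch with hypothetical instance names and a placeholder backup ID. As far as I know, the target instance must have at least as much storage as the source, which is exactly why the oversized disk propagates:

    # List backups on the (oversized) production instance
    gcloud sql backups list --instance=prod-instance

    # Restore one into a dev instance; the dev instance's disk must be at
    # least as large as prod's, so the wasted space is inherited
    gcloud sql backups restore BACKUP_ID --restore-instance=dev-instance --backup-instance=prod-instance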
Description
What you would like to accomplish:
Cloud SQL for SQL Server supports importing databases using BAK and SQL files [1].
It is possible to import a full database backup (BAK file), but it is not possible to import differential or transaction log backups into Cloud SQL for SQL Server.
When is this option expected to be available?
Several customers would benefit from this.
Other information (workarounds you have tried, documentation consulted, etc):
[1] - https://cloud.google.com/sql/docs/sqlserver/import-export/import-export-bak#import_data_from_a_bak_file
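For contrast, the full-backup import that works today looks roughly like this (instance, bucket, and database names are placeholders); the request is for an equivalent path for differential and transaction log backups:

    # Import a full BAK backup - supported today
    gcloud sql import bak my-sqlserver-instance gs://my-bucket/full.bak --database=mydb

    # No equivalent exists (per this request) for differential or
    # transaction log backup files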