Comments
ku...@google.com <ku...@google.com> #2
I have filed your feature request to our product team. However, there is no guarantee that this feature will be implemented, and the ETA cannot be provided. Rest assured that your feedback is always taken into account, as it allows us to improve the platform.
Any future updates to this feature will be posted here.
ma...@egym.com <ma...@egym.com> #3
Hi,
I understand that this issue is potentially low priority and that numerous workarounds exist, but I would like to share some background on it.
We have a Cloud SQL Second Gen instance that went nuts one day, consuming all the disk space in the universe until it hit the maximum disk capacity of ~9TB. That issue "self resolved" (some Cloud magic is not bad...), we did not see any impact on any of our queries, and none of our database users noticed.
This is to some extent good: something that would otherwise cause a service disruption did not; the issue self-resolved and the service kept working. However, our actual data usage for the instance is around 20GB, yet the database now has 9TB of disk space allocated to it. A huge amount of *wasted* disk space in the Cloud era.
Lots of services use this database, and rebuilding it from scratch (*shut down services*, *cause service disruption for users*, mysqldump, back up, stop the old instance, start a new one, use mysql to restore the data, reconfigure all services for the new database, new IP, etc.) is laborious and, well, may have to happen again in the future.
It would be a nice feature to be able to reduce the size, but as a developer I understand that "it is not that simple". Lots of infrastructure and key design decisions prevent me as a user from "just changing the size down", but I also have no way/tools to help me move the relevant data (20GB) out of all that wasted, reserved "empty" disk space I'm paying for, without causing service disruption. Instead of spending time testing and simulating the work to rebuild our DB, I would rather spend it improving our system for our customers. But I'll have to go to the office on a Sunday to make this slightly laborious change, because I have no easy way around this one.
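What that rebuild amounts to in gcloud terms, as a rough sketch only (bucket, database, and instance names are placeholders, and the old instance's service account needs write access to the bucket):
# Export the ~20GB of real data, create a smaller instance, re-import:
gcloud sql export sql old-instance gs://BUCKET/dump.sql --database=DB_NAME
gcloud sql instances create new-instance --database-version=MYSQL_5_7 --storage-size=30GB
gcloud sql import sql new-instance gs://BUCKET/dump.sql --database=DB_NAME
# ...then repoint every service at the new instance's IP and delete the old one.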
vo...@deliveryhero.com <vo...@deliveryhero.com> #4
+1 to this.
Description
How to reproduce:
1. Have two users, one with the owner role and one with the viewer role
2. Using the owner user, create two Cloud SQL instances (test-instance-1, test-instance-2)
3. Using the owner user, set the following policy for the second user; the condition limits roles/cloudsql.admin to test-instance-1:
bindings.json
{
  "bindings": [
    {
      "role": "roles/cloudsql.admin",
      "members": [
        "user:TEST_USER_NAME"
      ],
      "condition": {
        "expression": "resource.name == 'projects/PROJECT_ID/instances/test-instance-1' && resource.service == 'sqladmin.googleapis.com'"
      }
    }
  ],
  "etag": "BwWKmjvelug=",
  "version": 3
}
gcloud projects set-iam-policy PROJECT_ID bindings.json
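Note: the etag in bindings.json must match the live policy's etag, so in practice you fetch the current policy first and edit that:
gcloud projects get-iam-policy PROJECT_ID --format=json > bindings.json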
4. Log in with the test user. You should now be able to restart instance 1 but not instance 2:
gcloud sql instances restart test-instance-1 -> should work
gcloud sql instances restart test-instance-2 -> should give access denied
5. Now the bug: go to the Cloud Console UI and select the instance test-instance-1 -> everything is greyed out even though you have admin access. Expectation: all options should be available for this instance.