Status Update
Comments
ts...@google.com <ts...@google.com> #2
Also is it possible to attach the patch to this bug report so we can see what is involved on the clang side?
rp...@beneficiofacil.com.br <rp...@beneficiofacil.com.br> #3
In the past (e.g. when Intel disclosed LVI) our approach was to have patches ready to go and pre-reviewed by appropriate (and appropriately read-in) code owners but not post the patches "for real" until the embargo lifted. Usually this was part of a larger comms strategy where announcements were made in other forums.
Is there an entity beyond security@kernel.org that's coordinating this disclosure?
[Deleted User] <[Deleted User]> #4
[Deleted User] <[Deleted User]> #5
* Aaron Ballman <aaron@aaronballman.com> (a code owner for clang who has already reviewed my patch off list)
* Craig Topper <craig.topper@gmail.com> (a code owner for the x86 backend who has already reviewed my patch off list)
x86 is the only confirmed architecture affected. ARM has confirmed they're not affected for their micro-architectures. Not all x86 micro-architectures are affected. Not seeing one particular x86 vendor represented in the LLVM Security Group is...concerning.
> Also is it possible to attach the patch to this bug report so we can see what is involved on the clang side?
Yes, but just note that the LLVM code is somewhat of a giveaway of what's going on here. GCC has had a similar feature for years; they did not keep the patch under embargo but rather published it publicly back when various Spectre and Meltdown mitigations were being tested, before the development of retpoline. The very first comment on the GCC mailing list was along the lines of "What is this?" I'd like to avoid that here if possible; the name of the game is not to spill the beans to too wide an audience prematurely. Doing so would lose LLVM its (lone) seat at the table for Linux kernel vulnerability disclosures such as this one. So we need to be super careful about keeping this need-to-know.
> Is there an entity beyond security@kernel.org that's coordinating this disclosure?
The encrypted kernel mailing list I'm referring to, which is coordinating this, is not that specific email address. It's more controlled than even that list, and members of that list (security@kernel.org) aren't even considered need-to-know for this one. I don't know whether linux-distros@vs.openwall.org has even been contacted yet; the kernel patch set is up to ~45 patches and growing, and backports will be painful.
[Deleted User] <[Deleted User]> #6
je...@panerabread.com <je...@panerabread.com> #7
mo...@google.com <mo...@google.com> #8
da...@flockfreight.com <da...@flockfreight.com> #9
> Nick, is there anything else you think you need from us?
Maybe if Aaron and Craig can confirm here that they've reviewed the patch, and if they'll be available when the embargo is scheduled to lift to publicly Accept the patch?
Otherwise, is there anything else I should be doing as part of the process?
> posting the patch should be considered equivalent to a public disclosure of the vulnerability.
Correct. I don't mind posting it here, but it MUST NOT be posted to phab until the embargo lift (scheduled for Tuesday July 12 2022 9am PDT). I'm more than happy to spend the rest of the week iterating on feedback of the initial design, but we need to ship a mitigation ASAP in a few open source repositories. The code has been reviewed by trusted reviewers and tested by trusted kernel developers, so I'm not looking to re-architect the patch one week out from embargo lift.
Patch attached.
sd...@gmail.com <sd...@gmail.com> #10
"be replace with" -> "be replaced with" in the LangRef.rst change
The include list in X86ReturnThunks.cpp seems to have more files than needed. StringSwitch.h was an obvious extra.
nit: "Modified |= true;" it's pretty uncommon for "|= true" to appear in the tree. Nearly everywhere does Modified = true.
em...@gmail.com <em...@gmail.com> #11
If it is, do we know if this is actually x86 specific? zero-daying every other architecture is also not a super great outcome.
sa...@gmail.com <sa...@gmail.com> #12
jk...@gmail.com <jk...@gmail.com> #13
To be clear: I am not saying this is a show stopper or anything like that. We were benchmarking JS, so the type of code running there is obviously different from pretty much anything else. It's more that, in my experience, this kind of manipulation *can* have second-order perf impact, so I'd like to know someone has checked to make sure nothing unusable happens.
Pessimistic me would have a benchmark that measures deep recursion and iterative calling of an empty function. Not because it's representative, but just to measure the impact in the worst possible case we can come up with. If it's not significant in that case, there's no reason to spend any time on the more complicated task of seeing whether it impacts anything in real-world code.
If there is a non-negligible impact we may be able to change the exact code gen or similar to compensate.
Or we may just accept the cost as being outweighed by the security benefit (as happened with Spectre, etc.).
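Something like the following minimal user-space sketch could serve as that worst-case microbenchmark. It's not from the patch: the function names, loop counts, and the one-instruction __x86_return_thunk stub are made up here just so a thunk-extern build links outside the kernel.

/* bench.c - hypothetical worst-case microbenchmark sketch.
 * Baseline:  clang -O2 bench.c -o bench
 * Mitigated: clang -O2 -mfunction-return=thunk-extern bench.c -o bench-thunk
 * The mitigated build needs a __x86_return_thunk symbol to link; the
 * one-instruction stub below is for user-space measurement only and is
 * NOT the real kernel thunk.
 */
#include <stdio.h>
#include <time.h>

__asm__(".text\n"
        ".globl __x86_return_thunk\n"
        "__x86_return_thunk:\n"
        "\tret\n");

/* An "empty" function; the volatile asm keeps calls from being optimized away. */
__attribute__((noinline)) void empty(void) { __asm__ volatile(""); }

__attribute__((noinline)) long recurse(long n) {
    if (n <= 0)
        return 0;
    long r = 1 + recurse(n - 1);
    __asm__ volatile("" ::: "memory"); /* block tail-recursion elimination */
    return r;
}

static double seconds(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void) {
    struct timespec a, b;
    volatile long sink = 0;

    /* Iterative calls of an empty function: nothing but call/ret overhead. */
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (long i = 0; i < 100000000L; i++)
        empty();
    clock_gettime(CLOCK_MONOTONIC, &b);
    printf("empty calls:    %.3fs\n", seconds(a, b));

    /* Deep recursion: stresses return prediction as hard as possible. */
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < 10000; i++)
        sink += recurse(50000);
    clock_gettime(CLOCK_MONOTONIC, &b);
    printf("deep recursion: %.3fs (%ld)\n", seconds(a, b), (long)sink);
    return 0;
}

Build it twice, once plain and once with -mfunction-return=thunk-extern, and compare. The real kernel thunk does more work than this stub, so treat any measured delta as a lower bound rather than a realistic overhead estimate.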
ag...@gmail.com <ag...@gmail.com> #14
I can't speak for Craig, but I reviewed the Clang frontend bits and am happy with them. I will be around on Jul 12 and should have no trouble accepting the patch pretty quickly.
[Deleted User] <[Deleted User]> #15
Aaron, thanks for the confirmation in
sh...@datacue.co <sh...@datacue.co> #16
[Deleted User] <[Deleted User]> #17
wa...@greenaerotech.com <wa...@greenaerotech.com> #18
[Deleted User] <[Deleted User]> #19
> is replacing ret with a jmp absolutely necessary
I think so; it might ultimately not be, but it is how Linux kernel developers have decided to mitigate the vulnerability for the time being. It's not my call though; I was just asked to implement -mfunction-return=/__attribute__((function_return(""))) as GCC has had for years. For clang to be useful as a drop-in replacement for GCC, clang frequently needs to match GCC's command line flags, function attributes, and codegen ABI.
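For the curious, here is a minimal sketch of what that looks like from the C side. The function names are made up, and I'm assuming the __x86_return_thunk extern symbol name that GCC's thunk-extern mode and the kernel use; the runtime environment (here, the Linux kernel) is expected to provide that symbol.

/* Whole-TU form:     clang -O2 -mfunction-return=thunk-extern -c example.c
 * Per-function form: the attribute below takes precedence over the flag,
 * mirroring the existing GCC feature. */

__attribute__((function_return("keep")))
int not_mitigated(int x) {
    return x + 1;   /* epilogue ends in a plain ret */
}

__attribute__((function_return("thunk-extern")))
int mitigated(int x) {
    return x + 1;   /* epilogue ends in jmp __x86_return_thunk */
}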
> my experience in other domains suggests that messing with the x86 call stack cache does pretty terrible things to performance.
Yep, the performance overhead of this isn't great, for sure. An additional set of kernel patches is in the works to try to claw back some of the performance lost to this approach, but they're not pretty and haven't been vetted quite yet. Time will tell if this is the ultimate solution to this issue, but for now it's what we've got.
> If it is, do we know if this is actually x86 specific?
Not all x86 uArchs are affected. The exploit is specific to uArchs that speculate return addresses in a specific way, and to how they behave under specific circumstances with respect to kernel vs. userspace privilege boundaries. But there are at least two x86 vendors affected.
ARM commented that their microarchitectures were not affected. I'm guessing that they can't vouch for their architectural licensees' microarchitectures though. That said, I can ask on the encrypted list about architectural licensees, and whether the reporters have tested other architectures or uArchs or contacted other vendors.
> zero-daying every other architecture is also not a super great outcome.
I agree but again it's not my call and there's no stopping the train now; regardless of my toolchain patch the embargo will lift July 12. The researchers that reported the vulnerability may not have access to every uArch under the sun, and I don't know who they have or have not contacted about this vulnerability.
That said, there are non-Linux-kernel vendors participating in the encrypted list IIUC. It might be worthwhile for security representatives at various organizations to get involved.
There are also other, non-Linux operating system vendors on the linux-distros mailing list, which likewise gets early access to embargoed vulnerability reports and fixes.
FLOSS projects face a catch-22: they need to develop mitigations behind closed doors without publishing them before the embargo lifts (similar to what we're doing here, but for the toolchain). Those lists are the methods open source operating system developers have converged on, and they are open to folks developing proprietary operating systems as well.
wa...@gmail.com <wa...@gmail.com> #20
[Deleted User] <[Deleted User]> #21
That said, it may be better to ask on the linux-distros list once the kernel mitigations are published there (I don't know if they are, please don't spill the beans otherwise!). Once the patches are public, I can point to any formal recommendations they may have.
But right now, it looks like Patch 10/45 addresses the x86 BPF JIT in the corresponding (and unpublished) Linux kernel patches.
My toolchain patch is orthogonal to the x86 BPF JIT. BPF programs aren't expected to be compiled with -mfunction-return, use __attribute__((function_return(""))), or rely on my compiler patch at all. BPF JIT backends may need to call the same runtime hooks that this toolchain patch generates jmps to, as provided by the runtime environment (in this case, the Linux kernel).
Also, it looks like work has started on backporting the kernel patches to the 5.18.y, 5.17.y, and 5.15.y branches of the LTS stable kernel tree.
(Not pushed publicly, just pointing where that tree is)
Requests have been made for 5.14.y and 4.19.y, but it's not clear to me that work has started yet on those branches, or that it will. Distros shipping kernels based on older branches will be encouraged to move to something newer, and something based off LTS (as usual; some distros DON'T use LTS).
da...@afterbanks.com <da...@afterbanks.com> #22
[Deleted User] <[Deleted User]> #23
I'm representing Rust here, and from what I gather these mitigations are only going to be needed by the Linux kernel. In that case there shouldn't be a need to make a security point release of rustc, as Rust is not yet used in mainline Linux. Is my reading here correct?
jo...@citrusad.com <jo...@citrusad.com> #24
For the time being, yes. Other operating systems may choose to mitigate the issue similarly, or not at all. I guess there could be another OS, written in Rust, that wants to mitigate the issue using this feature, but I'd perhaps wait for a feature request from those (theoretical) developers if such a scenario exists.
> there shouldn't be a need to make a security point release of rustc, as Rust is not yet used in mainline Linux
The rust-for-linux folks might be interested in this eventually, but rustc would need feature development since the LLVM IR function attributes added here are controlled by command line flags and function attributes from clang and the C/C++ code respectively. Rust support isn't in the kernel yet, though I am hopeful it will be, and have been providing review upstream on their patch series.
So without the corresponding front end work to rustc, rolling my llvm+clang change out in a point release wouldn't be useful at this time, IMO.
[Deleted User] <[Deleted User]> #25
cp...@paraty.es <cp...@paraty.es> #26
Thanks for checking, our Rust team had pretty much the exact same question.
> zero-daying every other architecture is also not a super great outcome.
So I reached out to ARM folks on the encrypted mailing list, as well as the reporters. I heard back from both.
ARM re-iterated that they cannot comment on competing uArch implementations. They recommended the researchers contact Apple directly. ARM mentioned they do "brief" architectural licensees, but don't necessarily share the researchers' whitepaper. They also recommended joining the same mailing lists that I mentioned in
This is second hand paraphrasing from me though; I don't speak on behalf of ARM in any capacity.
The researchers mentioned they have not tested their exploit on m1/m2 macs, or researched other architectures.
dg...@dataseekers.es <dg...@dataseekers.es> #27
fj...@dataseekers.es <fj...@dataseekers.es> #28
iMessage is secure, I worked on the crypto myself, trust me :D
er...@dataseekers.es <er...@dataseekers.es> #29
- ohunt@apple.com
- oliver@nerget.com
Otherwise, I believe their email address is:
Johannes Wikner <kwikner@ethz.ch>
fr...@gmail.com <fr...@gmail.com> #30
If someone from Google knows, would they be able to?
jm...@dataseekers.es <jm...@dataseekers.es> #31
gi...@etorox.com <gi...@etorox.com> #32
vm...@dataseekers.es <vm...@dataseekers.es> #33
I'll hand you our paper and the PoCs we made for Intel (Skylake-like), AMD (fam 17h, but likely 15h, 16h, 18h too) and ARM ThunderX2. As you will find, however, we don't mention ARM at all in the paper. The only ARM machine we have in our lab is a ThunderX2, and it seems to be vulnerable to all kinds of BTI Spectres under this user-to-kernel threat model. Because these CPUs don't seem to have (or use?) the necessary SMC workaround or CSV2 feature, we assumed that they could not be representative of ARM.
As for the AMD and Intel ones, the end-to-end exploits are made specifically for the kernel build we carried out the research on, which was the latest ubuntu/focal kernel available at the time (5.8.0-63-generic).
na...@google.com <na...@google.com> #34
nl...@eryxsoluciones.com.ar <nl...@eryxsoluciones.com.ar> #35
ma...@gmail.com <ma...@gmail.com> #36
I'd be ok with keeping this thread embargoed until we have a confirmation from Oliver that we would not be zero-daying Apple.
mc...@sossego.com.br <mc...@sossego.com.br> #37
[Deleted User] <[Deleted User]> #38
Given the above messages, and more than a week with no objections, I'm derestricting.
ti...@peekandpoke.com <ti...@peekandpoke.com> #39
ma...@gmail.com <ma...@gmail.com> #40
ky...@lodgify.com <ky...@lodgify.com> #41
yo...@backup.affluent.io <yo...@backup.affluent.io> #42
We recently went through a project to clean up and remove a bunch of old data. We have multiple instances now where we're using >3TB less than the allotted storage, costing us > $1,000 a month for unused space.
pa...@spidergap.com <pa...@spidergap.com> #43
de...@pluscompany.com <de...@pluscompany.com> #44
Same comment here: using auto-scaling HDD size puts you at risk that a surge in storage usage leaves you stuck with a size increase for nothing. And the workaround of creating a new instance in order to downsize may be complicated for actual production databases.
Thanks
ab...@amperon.co <ab...@amperon.co> #45
[Deleted User] <[Deleted User]> #46
Paying double what we should be paying now due to performing a VACUUM on a very large table.
[Deleted User] <[Deleted User]> #47
ma...@gmail.com <ma...@gmail.com> #48
[Deleted User] <[Deleted User]> #49
jo...@cruxinformatics.com <jo...@cruxinformatics.com> #50
[Deleted User] <[Deleted User]> #51
co...@12parsecs.io <co...@12parsecs.io> #52
This not being addressed or acknowledged has me second-guessing my choice to go with GCP as a whole. I've been building up a POC startup over the last 6 months that I want to start pushing out more widely. Maybe now is the time to move away, given GCP's lack of development on basic functionality. "Auto storage increase" shouldn't have been pushed as a feature until scaling that automatic storage increase back down was also a feature, or at least it should have been marked as beta until there was a decent workaround to reduce storage.
js...@paloaltonetworks.com <js...@paloaltonetworks.com> #53
oc...@gmail.com <oc...@gmail.com> #54
ra...@nibblecomm.com <ra...@nibblecomm.com> #55
fa...@gmail.com <fa...@gmail.com> #56
[Deleted User] <[Deleted User]> #57
sv...@scnmedia.net <sv...@scnmedia.net> #58
ti...@gmail.com <ti...@gmail.com> #59
el...@gmail.com <el...@gmail.com> #60
cv...@redapt.com <cv...@redapt.com> #61
my...@gmail.com <my...@gmail.com> #62
fi...@teamapt.com <fi...@teamapt.com> #63
ju...@google.com <ju...@google.com> #64
cp...@paraty.es <cp...@paraty.es> #65
ca...@leega.com.br <ca...@leega.com.br> #66
yu...@cohere.io <yu...@cohere.io> #67
an...@redso.com.hk <an...@redso.com.hk> #68
ma...@gmail.com <ma...@gmail.com> #69
To indicate you are impacted, please don't add a "+1" comment but rather click the star next to the bug ID. Thx!
[Deleted User] <[Deleted User]> #70
ky...@lodgify.com <ky...@lodgify.com> #71
yu...@gridwise.io <yu...@gridwise.io> #72
de...@gmail.com <de...@gmail.com> #73
[Deleted User] <[Deleted User]> #74
ma...@gmail.com <ma...@gmail.com> #75
[Deleted User] <[Deleted User]> #76
le...@gmail.com <le...@gmail.com> #77
[Deleted User] <[Deleted User]> #78
I just did this migrating from a mostly empty 2TB MySQL 5.7 database to a <300GB MySQL 8.0 database. The instructions here will work:
The main difference is that the IP address of the external source database will be the public IP of your current Google Cloud SQL database. You may have to create a public IP temporarily to facilitate the transfer.
After you have a live replica with a smaller footprint, you can promote it and migrate your dependent services to it.
ia...@upskillpeople.com <ia...@upskillpeople.com> #79
ki...@gmail.com <ki...@gmail.com> #80
si...@tumelo.com <si...@tumelo.com> #81
ma...@gmail.com <ma...@gmail.com> #82
sa...@gmail.com <sa...@gmail.com> #83
mo...@alphathena.com <mo...@alphathena.com> #84
[Deleted User] <[Deleted User]> #85
[Deleted User] <[Deleted User]> #86
fe...@gmail.com <fe...@gmail.com> #87
er...@paslists.com <er...@paslists.com> #88
pe...@emarsys.com <pe...@emarsys.com> #89
vi...@gmail.com <vi...@gmail.com> #90
ga...@gmail.com <ga...@gmail.com> #91
[Deleted User] <[Deleted User]> #92
[Deleted User] <[Deleted User]> #93
en...@globalfishingwatch.org <en...@globalfishingwatch.org> #94
ju...@globalfishingwatch.org <ju...@globalfishingwatch.org> #95
rd...@gmail.com <rd...@gmail.com> #96
al...@globalfishingwatch.org <al...@globalfishingwatch.org> #97
ki...@gmail.com <ki...@gmail.com> #98
[Deleted User] <[Deleted User]> #99
mo...@gmail.com <mo...@gmail.com> #100
vi...@gmail.com <vi...@gmail.com> #101
he...@gmail.com <he...@gmail.com> #102
ni...@gmail.com <ni...@gmail.com> #103
dr...@dericktronix.com <dr...@dericktronix.com> #104
[Deleted User] <[Deleted User]> #105
ia...@croptix.solutions <ia...@croptix.solutions> #106
om...@gmail.com <om...@gmail.com> #107
jc...@repairpal.com <jc...@repairpal.com> #108
[Deleted User] <[Deleted User]> #109
ki...@gmail.com <ki...@gmail.com> #110
ba...@gmail.com <ba...@gmail.com> #111
la...@wetranscloud.com <la...@wetranscloud.com> #112
ha...@iggstrom.com <ha...@iggstrom.com> #113
[Deleted User] <[Deleted User]> #114
dg...@gmail.com <dg...@gmail.com> #115
pa...@lawndoctor.com <pa...@lawndoctor.com> #116
na...@bitqit.com <na...@bitqit.com> #117
re...@amperon.co <re...@amperon.co> #118
+1
br...@aylien.com <br...@aylien.com> #119
[Deleted User] <[Deleted User]> #120
cs...@monoprix.fr <cs...@monoprix.fr> #121
we...@insert.com.pl <we...@insert.com.pl> #122
ag...@gmail.com <ag...@gmail.com> #123
ro...@gmail.com <ro...@gmail.com> #124
sa...@veolia.com <sa...@veolia.com> #125
as...@gmail.com <as...@gmail.com> #126
It's only been a *few years* since the original issue was raised...
Or is this such a cash cow for you that fixing it would hurt your profits?
+1
pa...@google.com <pa...@google.com> #127
va...@kramp.com <va...@kramp.com> #128
[Deleted User] <[Deleted User]> #129
al...@gmail.com <al...@gmail.com> #130
se...@piertwo.com <se...@piertwo.com> #131
[Deleted User] <[Deleted User]> #132
ak...@gmail.com <ak...@gmail.com> #133
ra...@gmail.com <ra...@gmail.com> #134
[Deleted User] <[Deleted User]> #135
pa...@weduu.com <pa...@weduu.com> #136
pe...@gmail.com <pe...@gmail.com> #137
[Deleted User] <[Deleted User]> #138
[Deleted User] <[Deleted User]> #139
ch...@primasoftware.com <ch...@primasoftware.com> #140
sh...@gmail.com <sh...@gmail.com> #141
di...@fincatto.com <di...@fincatto.com> #142
xm...@gmail.com <xm...@gmail.com> #143
There is no way for me to decrease storage; I'm stuck, because:
It's not possible to export/import a SQL file, since 1.8 TB takes more than a week to export/import, even though I use the highest-spec VM.
Google Cloud SQL also doesn't allow me to access the MySQL data folder, so it's not possible to export by copying the data folder either.
I planned to create a read replica outside of Google Cloud SQL, then switch the master to it, then get rid of Cloud SQL, but it's impossible because it takes more than a week to export/import the SQL file, and Google Cloud only keeps 7 days of MySQL binlogs, so the new instance can't catch up with the master because of the missing binlogs.
I'M STUCK ON GOOGLE CLOUD SQL.
sh...@gmail.com <sh...@gmail.com> #144
aj...@google.com <aj...@google.com> #145
xmripper: while we don't support a managed solution to decrease storage size at this time, we do have a workaround that involves migrating your overprovisioned database to a right-sized Cloud SQL for MySQL instance.
Most customers have had success with Database Migration Service, which offers a seamless managed tool for migrations. We have had success with databases your size and larger:
For very large MySQL databases, we have customers migrating through our external server replication tool using a custom import:
In short, that guide shows how to use third-party tools like mydumper and myloader to dump and import multi-terabyte databases more rapidly than is possible with MySQL's native mysqldump and import utilities.
Best,
Akhil, Cloud SQL for MySQL Product Manager
ma...@cloudomation.com <ma...@cloudomation.com> #146
an...@gmail.com <an...@gmail.com> #147
ra...@snapchat.com <ra...@snapchat.com> #148
sh...@gmail.com <sh...@gmail.com> #149
ha...@feichtl.com <ha...@feichtl.com> #150
ak...@dac.co.jp <ak...@dac.co.jp> #151
+1
da...@tvh.com <da...@tvh.com> #152
bs...@gmail.com <bs...@gmail.com> #153
pr...@rtbhouse.com <pr...@rtbhouse.com> #154
vl...@aytm.com <vl...@aytm.com> #155
328 GB of 871 GB storage used after importing a SQL file, WTF?
I started the instance with 400 GB, space auto-increased to 850 GB, then vacuumed (?) down to 328 GB used.
The on-prem instance uses ~350 GB.
Why should we pay for 400 GB of unused SSD?
[Deleted User] <[Deleted User]> #156
an...@mantel.com <an...@mantel.com> #157
uq...@gmail.com <uq...@gmail.com> #158
ja...@atlas.health <ja...@atlas.health> #159
[Deleted User] <[Deleted User]> #160
[Deleted User] <[Deleted User]> #161
an...@gocardless.com <an...@gocardless.com> #162
do...@herondata.io <do...@herondata.io> #163
an...@gmail.com <an...@gmail.com> #164
le...@gmail.com <le...@gmail.com> #165
ay...@gocomet.com <ay...@gocomet.com> #166
ky...@lodgify.com <ky...@lodgify.com> #167
rt...@gmail.com <rt...@gmail.com> #168
[Deleted User] <[Deleted User]> #169
I have the same issue - after the initial import of the databases, the storage capacity is roughly 200% of what we actually use for data... (after the WAL files have timed out and been removed)
[Deleted User] <[Deleted User]> #170
pa...@gmail.com <pa...@gmail.com> #171
be...@alphachain.io <be...@alphachain.io> #172
st...@iturn.it <st...@iturn.it> #173
ib...@gmail.com <ib...@gmail.com> #174
[Deleted User] <[Deleted User]> #175
am...@google.com <am...@google.com> #176
sn...@managedmethods.com <sn...@managedmethods.com> #177
yu...@gmail.com <yu...@gmail.com> #178
[Deleted User] <[Deleted User]> #179
al...@gmail.com <al...@gmail.com> #180
pa...@orbitremit.com <pa...@orbitremit.com> #181
za...@bosslogics.com <za...@bosslogics.com> #182
br...@technologik.io <br...@technologik.io> #183
mi...@28east.co.za <mi...@28east.co.za> #184
pa...@gmail.com <pa...@gmail.com> #185
ba...@gmail.com <ba...@gmail.com> #186
da...@paperflow.com <da...@paperflow.com> #187
vk...@anna.money <vk...@anna.money> #188
ma...@proexe.pl <ma...@proexe.pl> #189
se...@epam.com <se...@epam.com> #190
+1
sy...@brightedge.com <sy...@brightedge.com> #191
ge...@tactable.io <ge...@tactable.io> #192
ma...@adviqoapi.com <ma...@adviqoapi.com> #193
ju...@sneakybox.biz <ju...@sneakybox.biz> #194
ma...@megon.com.br <ma...@megon.com.br> #195
mi...@splitmedialabs.com <mi...@splitmedialabs.com> #196
dh...@gmail.com <dh...@gmail.com> #197
ok...@dto.kemkes.go.id <ok...@dto.kemkes.go.id> #198
of...@wilburlabs.com <of...@wilburlabs.com> #199
wp...@nyu.edu <wp...@nyu.edu> #200
cy...@xin-yin.net <cy...@xin-yin.net> #201
[Deleted User] <[Deleted User]> #202
cl...@bv.com.br <cl...@bv.com.br> #203
bu...@kllr.io <bu...@kllr.io> #204
lu...@purpleocean.eu <lu...@purpleocean.eu> #205
pa...@gmail.com <pa...@gmail.com> #206
br...@askgms.com <br...@askgms.com> #207
I'm pretty sure we can all guess why Google won't ever work on this issue. Each and every organization posting here is dramatically overpaying for storage, which is a perk to this particular billing/allocation scheme. The crazy number of hoops required to migrate and reduce allocation isn't a bug - it's a feature of Google's design, and I'd be shocked if they ever changed it since you'll only hit this issue after you're already deeper into the ecosystem.
st...@delcom.nl <st...@delcom.nl> #208
zo...@aliz.ai <zo...@aliz.ai> #209
vl...@gmail.com <vl...@gmail.com> #210
Looks like a lot of people here fell into the same trap as I did.
My case was:
- restore a dump to a Cloud SQL instance
- 2x the storage used compared to the original (on-prem) instance
- after some time (~1 week) the used storage decreased
- but we can't decrease the storage of the SQL instance.
I found a solution for myself: just disable "Point-in-time recovery" before restoring the database and re-enable it after the restore.
In my case the difference is 985GB vs 380GB.
Hope it helps.
br...@askgms.com <br...@askgms.com> #211
@vl...@gmail.com That's a really helpful tip! It's wild that point-in-time recovery can amplify on-DB storage that much, but it makes sense. Next time we restore we'll give that a shot.
la...@gmail.com <la...@gmail.com> #212
[Deleted User] <[Deleted User]> #213
[Deleted User] <[Deleted User]> #214
sh...@keyvalue.systems <sh...@keyvalue.systems> #215
ch...@withtally.com <ch...@withtally.com> #216
ja...@gmail.com <ja...@gmail.com> #217
[Deleted User] <[Deleted User]> #218
jc...@gmail.com <jc...@gmail.com> #219
cy...@gmail.com <cy...@gmail.com> #220
al...@gmail.com <al...@gmail.com> #221
For example, AWS allows you to rename the instances so you can change the name of the old instance and rename the new instance to the original name. So the operation does not require any work from clients. Is this possible in GCP? Here are the AWS docs:
fa...@gmail.com <fa...@gmail.com> #222
th...@gmail.com <th...@gmail.com> #223
br...@plexm.com <br...@plexm.com> #224
ru...@simplify.jobs <ru...@simplify.jobs> #225
We have ~10GB of data and our instance is reserving ~600GB. We attempted a migration using import/export and ran into hundreds of small issues, then tried the Database Migration Service, which also ran into internal errors.
I hate it here. Why can I not just clone this database with less space reserved? This really does not need to be this insanely complex.
br...@gmail.com <br...@gmail.com> #226
du...@rouseservices.com <du...@rouseservices.com> #227
sh...@yapily.com <sh...@yapily.com> #228
ru...@edgeandnode.com <ru...@edgeandnode.com> #229
me...@gmail.com <me...@gmail.com> #230
st...@loblaw.ca <st...@loblaw.ca> #231
la...@gmail.com <la...@gmail.com> #232
mv...@mercadona.es <mv...@mercadona.es> #233
ad...@deimos.co.za <ad...@deimos.co.za> #234
fi...@embriotech.ch <fi...@embriotech.ch> #235
ce...@libeo.io <ce...@libeo.io> #236
ro...@businessmind.es <ro...@businessmind.es> #237
+1
da...@optimumfleethealth.com <da...@optimumfleethealth.com> #238
we...@safigen.com <we...@safigen.com> #239
[Deleted User] <[Deleted User]> #240
ga...@cappuccino.fm <ga...@cappuccino.fm> #241
ai...@google.com <ai...@google.com> #242
ma...@gmail.com <ma...@gmail.com> #243
yu...@getcruise.com <yu...@getcruise.com> #244
mo...@gmail.com <mo...@gmail.com> #245
ag...@gmail.com <ag...@gmail.com> #246
ek...@stargcp.com <ek...@stargcp.com> #247
dp...@petabloc.com <dp...@petabloc.com> #248
am...@google.com <am...@google.com> #249
[Deleted User] <[Deleted User]> #250
mi...@gmail.com <mi...@gmail.com> #251
al...@anymindgroup.com <al...@anymindgroup.com> #252
th...@deliveree.com <th...@deliveree.com> #253
jo...@homedepot.com <jo...@homedepot.com> #254
ro...@gmail.com <ro...@gmail.com> #255
mo...@incorta.com <mo...@incorta.com> #256
va...@jetbrains.com <va...@jetbrains.com> #257
+1
ty...@eccogroupusa.com <ty...@eccogroupusa.com> #258
wi...@gmail.com <wi...@gmail.com> #259
[Deleted User] <[Deleted User]> #260
ok...@gmail.com <ok...@gmail.com> #261
be...@gmail.com <be...@gmail.com> #262
ar...@gmail.com <ar...@gmail.com> #263
19...@gmail.com <19...@gmail.com> #264
ma...@googlemail.com <ma...@googlemail.com> #265
kr...@gmail.com <kr...@gmail.com> #266
[Deleted User] <[Deleted User]> #267
[Deleted User] <[Deleted User]> #268
ma...@bendigoadelaide.com.au <ma...@bendigoadelaide.com.au> #269
ni...@gmail.com <ni...@gmail.com> #270
du...@gmail.com <du...@gmail.com> #271
a....@globalgames.net <a....@globalgames.net> #272
ot...@teamworkcommerce.com <ot...@teamworkcommerce.com> #273
sh...@palletapp.com <sh...@palletapp.com> #274
al...@noogata.com <al...@noogata.com> #275
ni...@onxmaps.com <ni...@onxmaps.com> #276
th...@empresometro.com.br <th...@empresometro.com.br> #277
sa...@doit.com <sa...@doit.com> #278
tw...@skydreams.nl <tw...@skydreams.nl> #279
ha...@onemount.com <ha...@onemount.com> #280
ka...@pm.me <ka...@pm.me> #281
da...@ciro.io <da...@ciro.io> #282
Please implement this feature!
-- Xoogler
tw...@skydreams.nl <tw...@skydreams.nl> #283
sh...@gmail.com <sh...@gmail.com> #284
+1
[Deleted User] <[Deleted User]> #285
an...@gmail.com <an...@gmail.com> #286
fe...@smart-pricer.com <fe...@smart-pricer.com> #287
ni...@gmail.com <ni...@gmail.com> #288
ar...@gmail.com <ar...@gmail.com> #289
ji...@investorhub.com <ji...@investorhub.com> #290
sc...@whoosh.io <sc...@whoosh.io> #291
ma...@wises.com.br <ma...@wises.com.br> #292
ra...@gaida.tech <ra...@gaida.tech> #293
ma...@gendigital.com <ma...@gendigital.com> #294
[Deleted User] <[Deleted User]> #295
pa...@serveracademy.com <pa...@serveracademy.com> #296
ma...@aniline.io <ma...@aniline.io> #297
ja...@tautona.ai <ja...@tautona.ai> #298
[Deleted User] <[Deleted User]> #299
am...@joyteam.games <am...@joyteam.games> #300
mr...@gmail.com <mr...@gmail.com> #301
ma...@wifworld.com <ma...@wifworld.com> #302
da...@m.co <da...@m.co> #303
sn...@gmail.com <sn...@gmail.com> #304
ja...@gmail.com <ja...@gmail.com> #305
ky...@orderlyhealth.com <ky...@orderlyhealth.com> #306
ri...@mc1global.com <ri...@mc1global.com> #307
hi...@cloudsales.sa <hi...@cloudsales.sa> #308
er...@gmail.com <er...@gmail.com> #309
tr...@gmail.com <tr...@gmail.com> #310
lu...@gendigital.com <lu...@gendigital.com> #311
se...@gmail.com <se...@gmail.com> #312
ba...@gmail.com <ba...@gmail.com> #313
ke...@micepadapp.com <ke...@micepadapp.com> #314
i....@redmed.ge <i....@redmed.ge> #315
je...@integral.xyz <je...@integral.xyz> #316
vi...@rivile.lt <vi...@rivile.lt> #317
So it's 2024, and Google still doesn't care that its cloud users constantly have to deal with cases like the one in the attachment.
wi...@bexrealty.com <wi...@bexrealty.com> #318
jo...@synd.io <jo...@synd.io> #319
jo...@cotton.dev <jo...@cotton.dev> #320
jo...@synd.io <jo...@synd.io> #321
we...@sysmo.com.br <we...@sysmo.com.br> #322 Restricted+
ka...@gmail.com <ka...@gmail.com> #323
sa...@gmail.com <sa...@gmail.com> #324
qa...@gmail.com <qa...@gmail.com> #325
[Deleted User] <[Deleted User]> #326
ad...@hopper.com <ad...@hopper.com> #327
ba...@gmail.com <ba...@gmail.com> #328
bv...@gocardless.com <bv...@gocardless.com> #329
se...@gmail.com <se...@gmail.com> #330
va...@google.com <va...@google.com>
sa...@google.com <sa...@google.com> #331
[Deleted User] <[Deleted User]> #332
te...@gmail.com <te...@gmail.com> #333
[Deleted User] <[Deleted User]> #334
ku...@google.com <ku...@google.com>
ju...@clever.gy <ju...@clever.gy> #335
ma...@ingka.ikea.com <ma...@ingka.ikea.com> #336
ma...@gmail.com <ma...@gmail.com> #337
fr...@ggl.life <fr...@ggl.life> #338
sg...@google.com <sg...@google.com> #339
Hi @gcppit-team@google.com Team -
This request has been open for years now; do we have any progress or updates on this feature request? Thank you in advance.
va...@gmail.com <va...@gmail.com> #340
tm...@gmail.com <tm...@gmail.com> #341
fb...@gmail.com <fb...@gmail.com> #342
ad...@cross-entropy.com <ad...@cross-entropy.com> #343
ma...@duinker.eu <ma...@duinker.eu> #344
[Deleted User] <[Deleted User]> #345
ha...@donuts.ne.jp <ha...@donuts.ne.jp> #346
ja...@simplified.co <ja...@simplified.co> #347
pc...@mediarithmics.com <pc...@mediarithmics.com> #348
pr...@gmail.com <pr...@gmail.com> #349
sh...@adevinta.com <sh...@adevinta.com> #350
ga...@ford.com <ga...@ford.com> #351
pi...@gmail.com <pi...@gmail.com> #352
ja...@cart.com <ja...@cart.com> #353
---> P2 since 2017 ---- 1st upvote in 2025 ---- It's making them $$$ to do NOTHING ---- IssueTracker is where concerns go to be buried in the sands of time...