Assigned
Status Update
Comments
va...@google.com <va...@google.com> #2
I have filed your feature request to our product team. However, there is no guarantee that this feature will be implemented, and the ETA cannot be provided. Rest assured that your feedback is always taken into account, as it allows us to improve the platform.
Any future updates to this feature will be posted here.
th...@parashift.io <th...@parashift.io> #4
+1 to this.
mi...@nextgatetech.com <mi...@nextgatetech.com> #5
I've wasted hours on this problem and still haven't found why my new database won't connect to my infrastructure.
Perhaps a useful approach to this would be a duplicate function that lets me set the new DB size and keep all the other config the same.
Or even the ability to export the full config of the instance as a JSON file (or whatever) - then I could do that for both instances and figure out what the heck is different between them!!
di...@extruct.ai <di...@extruct.ai> #7
This issue can have a major side effect from a cost perspective as well. If a customer uses a "temp" table for a data-load or test, the database will grow to make room for that table. However, once that data-load is complete, there is no way to shrink the DB back to its original size.
The problem can compound if a customer then clones this db for other environments, as the excess (now-unused) disk space will also be reserved in the clones.
The end result can be a Cloud SQL bill that is dramatically higher than expected or forecast.
Description
It would be great if CloudSQL would support the pgvecto.rs[1] extension.
Reasons
CloudSQL already supports pgvector[2], but it has some limitations that restrict its use cases:
You can't easily prefilter results while using an index (without partitioning the table), so queries like the following are not possible:
Dimensions are limited to 2000 (pgvecto.rs supports up to 65535)
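To illustrate the prefiltering limitation above, here is a hedged sketch of the kind of query it refers to: a scalar WHERE prefilter combined with an index-backed nearest-neighbour ORDER BY. The table and column names (`items`, `category_id`, `embedding`) are hypothetical; this only shows the shape of the query, not pgvector's index behaviour, where the filter is applied after the index scan and can return too few rows.

```python
# Hypothetical example (names are illustrative, not from the original report):
# a prefiltered k-nearest-neighbour query in pgvector syntax.
def build_prefiltered_knn_query(table: str, filter_col: str, k: int) -> str:
    """Build a pgvector-style query string that prefilters rows on a
    scalar column before an ORDER BY ... LIMIT nearest-neighbour scan."""
    return (
        f"SELECT id FROM {table} "
        f"WHERE {filter_col} = %s "    # scalar prefilter
        f"ORDER BY embedding <-> %s "  # pgvector L2 distance operator
        f"LIMIT {k};"
    )

query = build_prefiltered_knn_query("items", "category_id", 10)
print(query)
```

With a plain pgvector index on `embedding`, the planner post-filters the approximate index results, so a query of this shape may return fewer than `k` rows unless the table is partitioned by the filter column.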
There are also some additional improvements on performance and transaction support. A detailed list can be found here:
References