Fixed
Status Update
Comments
bl...@google.com <bl...@google.com>
mc...@google.com <mc...@google.com>
we...@google.com <we...@google.com> #2
I understand the request for per-table access controls, but I'm unsure what you intend for query results "so that they cannot be exported."
Are you referring to the automatically-created anonymous result tables? These are currently restricted to _only_ the user who ran the job, a very restrictive form of permission control.
Furthermore, "cannot" implies that the permissions on a table would be narrower than those on the dataset containing it. That runs contrary to the standard GCP resource hierarchy, in which permissions granted on a project are inherited by its datasets, and then by the tables within those datasets. Does your use case require breaking that inheritance?
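The inheritance model described above can be sketched as a toy check. All names here are hypothetical, and real GCP IAM is far richer; this only illustrates why a project-level grant flows down to tables:

```python
# Toy model of GCP-style permission inheritance: a grant made at a
# higher level (project) is inherited by everything beneath it
# (dataset, then table). Hypothetical data, not a real IAM API.

def effective_readers(grants, path):
    """Union of readers granted at each ancestor of `path`.

    grants: dict mapping resource-path tuples to sets of users,
            e.g. {("proj",): {"alice"}}
    path:   resource tuple, e.g. ("proj", "ds", "table")
    """
    readers = set()
    for i in range(1, len(path) + 1):
        readers |= grants.get(path[:i], set())
    return readers

grants = {
    ("proj",): {"alice"},      # project-level grant
    ("proj", "ds"): {"bob"},   # dataset-level grant
}

# alice's project-level grant reaches the table via inheritance, which
# is why a table permission *narrower* than its dataset cuts against
# the standard hierarchy.
table_readers = effective_readers(grants, ("proj", "ds", "table"))
```

In this model there is no way to subtract a reader at the table level, mirroring the point made in the comment: grants only accumulate as you descend the hierarchy.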
lo...@gmail.com <lo...@gmail.com> #3
Is there a way to restrict exporting data from BigQuery to Cloud Storage when someone has access to a Cloud Storage bucket linked to one project and to BigQuery in a different project?
we...@google.com <we...@google.com> #4
Internally, I've split this request into two parts:
1. Table-specific access controls.
2. Exfiltration controls: prevent data moving from one project to another.
Assignee, can you comment on #2, since #1 is well-understood?
al...@google.com <al...@google.com> #5
For the exfiltration control mentioned in #2 above, please take a look at https://cloud.google.com/vpc-service-controls/ . It's a feature that's currently in Alpha, but its goal is exactly what you asked for: controlling data movement between projects across one or more supported Cloud services, including BigQuery and Cloud Storage.
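The perimeter idea behind VPC Service Controls can be illustrated with a small conceptual sketch (this models the policy, not the real API or its configuration format):

```python
# Conceptual sketch of a VPC Service Controls-style perimeter check:
# data may move between two projects only when both sit inside the
# same service perimeter. Perimeter and project names are made up.

PERIMETERS = {
    "secure-perimeter": {"analytics-proj", "storage-proj"},
}

def export_allowed(src_project, dst_project, perimeters=PERIMETERS):
    """Allow an export only when some perimeter contains both projects."""
    return any(src_project in members and dst_project in members
               for members in perimeters.values())
```

Under this model, an export from `analytics-proj` to `storage-proj` succeeds, while an export to any project outside the perimeter is blocked, which is the exfiltration scenario raised in #3.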
ob...@gmail.com <ob...@gmail.com> #6
Hi there,
Is "Table-specific access controls" really fixed? I can't find any documentation for that anywhere; it all seems to suggest that there is IAM control per dataset.
It would be very helpful to be able to restrict access to a specific project.dataset.table and even more useful to restrict the permissions of that access (e.g. readonly / append only).
Example use cases and benefits:
- Restrict a specific REST query to a table / dataset
- Enable users to make readonly SQL queries of tables (e.g. prevent modification of data)
- Avoid SQL injection in the above case (because table/dataset is set in the request body, not as part of the SQL query)
- Expose specific tables to users
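The table-scoped, operation-scoped grants requested above (read-only vs. append-only) can be sketched as follows. The role names are invented for illustration and are not real BigQuery IAM roles:

```python
# Sketch of table-scoped grants where each grant also limits the
# operations allowed (e.g. read-only or append-only). Hypothetical
# role names; real BigQuery IAM roles differ.

ROLE_OPS = {
    "readonly":   {"select"},
    "appendonly": {"insert"},
    "editor":     {"select", "insert", "update", "delete"},
}

def is_allowed(grants, user, table, op):
    """grants: {(user, table): role}; check op against the role's ops."""
    role = grants.get((user, table))
    return role is not None and op in ROLE_OPS[role]

# carol may SELECT from proj.ds.sales but not modify it, and has no
# access at all to any other table.
grants = {("carol", "proj.ds.sales"): "readonly"}
```

Keying the grant on the (user, table) pair rather than on the dataset is exactly the granularity the comment asks for, and restricting the operation set is what would make a grant read-only or append-only.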
Description
Would it be possible to add an option to enable access control at the table level? This should also apply to query results with a very large number of rows, so that they cannot be exported when this access control is in place.