Comments
sa...@google.com <sa...@google.com> #2
Hello,
Thanks for reaching out to us!
The Product Engineering team has been made aware of your feature request and will address it in due course. Although we can't provide an ETA for feature requests or guarantee their implementation, rest assured that your feedback is taken seriously, as it helps us improve our products. Thank you for your trust and continued support in improving Google Cloud Platform products.
If you want to report a new issue, please do not hesitate to create a new Issue Tracker issue.
Thanks and Regards,
Onkar Mhetre
Google Cloud Support
ar...@db.com <ar...@db.com> #3
Had the human-readable label in the UI read "E2 and N1 CPUs" instead of "CPUs", we would not have made that mistake: we would have attempted a smaller scale-up or requested more quota first.
I assume that the internal/machine-readable name `CPUS` is unlikely to change for compatibility reasons, but hopefully the UI display name is less hard-coded?
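For reference, the machine-readable metric is visible when describing a region's quotas; one quick way to check (the region name is only an example):

    # Show the quota metrics for one region; the output lists "metric: CPUS"
    # alongside family-specific metrics such as N2_CPUS.
    gcloud compute regions describe us-central1 --format="yaml(quotas)"

That `CPUS` covers only the E2/N1 families while other families get their own metrics is exactly the distinction the display name obscures.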
bo...@google.com <bo...@google.com> #4
Hi Arun,
Thanks for reaching out to us, and for your trust and continued support in improving Google Cloud Platform products.
The information has been passed to the Product Engineering team; there is currently no ETA.
Description
Environment: Our GCP presence spans several regions, with hundreds of service projects in each region communicating with on-prem data centers over Dedicated Interconnects attached to host projects in shared VPCs. On-prem DNS resolution for GCP resources is handled by Cloud DNS configured on the host projects.
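For context, that resolution path is the standard inbound-forwarding setup on the host project; a minimal sketch, with project, policy, and network names as placeholders:

    # Create an inbound server policy so on-prem resolvers can query
    # Cloud DNS over the interconnect (all names are hypothetical).
    gcloud dns policies create onprem-inbound \
        --project=HOST_PROJECT_ID \
        --description="Inbound forwarding for on-prem resolvers" \
        --networks=SHARED_VPC_NAME \
        --enable-inbound-forwarding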
Scenario: -- Before provisioning a GKE cluster in a service project whose network lives in the host project, predefined roles must be granted to the service project's serviceAccount(s)/GKE serviceAgent(s) (https://cloud.google.com/kubernetes-engine/docs/how-to/iam#predefined ); one of them is roles/container.hostServiceAgentUser, which carries DNS permissions to manage RPZs (response policy zones). - [1] (See the grant sketch after this list.)
-- Upon completion of [1], when the GKE cluster is provisioned with Cloud DNS as the kube-dns provider, default RPZs are created in the host project (https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-dns#created_resources ), with RPZ modification permissions available to the service project's serviceAccount(s)/GKE serviceAgent(s).
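For [1], the grant in question is the standard one from the shared VPC/GKE documentation; roughly as follows, with the project ID and number as placeholders:

    # Grant the service project's GKE service agent the Host Service Agent
    # User role on the host project.
    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member="serviceAccount:service-SERVICE_PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com" \
        --role="roles/container.hostServiceAgentUser"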
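The response policies that cluster creation adds to the host project can then be seen with:

    # List all response policies in the host project; the GKE-created
    # entries appear alongside any pre-existing ones.
    gcloud dns response-policies list --project=HOST_PROJECT_ID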
Problem: Because RPZ management access is granted at the host project level, the service project's serviceAccount(s)/GKE serviceAgent(s) can add/modify/delete any RPZs and rules in that project. While the scope is currently cluster-centric, orchestrated impersonation would allow RPZ/rule changes to be made to any RPZ. Creating unnecessary RPZs/rules also has the potential to create query loops if left unchecked.
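The breadth of the role can be confirmed by dumping its permission list and looking for the response-policy entries described above:

    # Inspect the predefined role; includedPermissions shows the
    # dns.responsePolic* management permissions alongside the rest.
    gcloud iam roles describe roles/container.hostServiceAgentUser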
What you expected to happen: The service project's serviceAccount(s)/GKE serviceAgent(s) should not be able to modify any and all RPZs under a host project - only the RPZs/rules created by their own GKE cluster.
Steps to reproduce:
Other information (workarounds you have tried, documentation consulted, etc): A custom role with all of the predefined role's permissions except DNS RPZ management would help as an interim solution, but it is not scalable: the predefined role (GCP's recommendation) is preferable, and a custom role can silently drift if the predefined role's permissions change without our knowledge. In addition, Google Support might not assist when custom roles are used. (A sketch of such a custom role follows.)
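A minimal sketch of that interim custom role, assuming the permission list is copied from the describe output above with the dns.responsePolic* entries stripped by hand (role ID, title, and permission list are placeholders):

    # Create a trimmed clone of the predefined role in the host project.
    gcloud iam roles create hostServiceAgentUserNoRpz \
        --project=HOST_PROJECT_ID \
        --title="Host Service Agent User (no RPZ mgmt)" \
        --permissions=PERMISSION_LIST_WITHOUT_RPZ_MGMT

This role would then be bound in place of roles/container.hostServiceAgentUser, and it must be re-synced whenever Google updates the predefined role - which is the scalability concern noted above.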