Request for new functionality
Description
Customers who want to access a private Autopilot Google Kubernetes Engine (GKE) cluster using Cloud Build (CB) private pools, without activating GKE Enterprise for that cluster, need clear guidance, in the form of a public document, on how to do so.
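For context, the setup this request concerns could be provisioned roughly as follows. This is a sketch only: the cluster, pool, and network names, the region, and PROJECT_ID are placeholders, not details taken from this issue.

```shell
# Sketch: all names, the region, and PROJECT_ID are placeholders.

# Create a private Autopilot GKE cluster (nodes have no public IPs).
gcloud container clusters create-auto example-cluster \
    --region=us-central1 \
    --enable-private-nodes

# Create a Cloud Build private pool attached to a VPC network.
gcloud builds worker-pools create example-pool \
    --region=us-central1 \
    --peered-network=projects/PROJECT_ID/global/networks/example-network
```

The missing piece this issue asks to document is how the private pool reaches the Autopilot cluster's control plane when that cluster uses PSC-based connectivity.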
Reasons why alternative solutions are not sufficient:
Tutorial [1] describes how to access a private GKE cluster using CB private pools when the GKE control plane communicates over a peered Virtual Private Cloud (VPC) network, which enables communication between the CB private pool and the cluster's services. This access lets you use Cloud Build to deploy applications and manage resources on a private GKE cluster.
However, according to the GCP blog post [2], since March 2022 new clusters deployed on GCP use Private Service Connect (PSC) instead of VPC peering as the networking mechanism through which the GKE control plane communicates with the nodes.
Consequently, tutorial [3] was written; it shows how to access the control plane of a private GKE cluster using CB private pools by creating a PSC instance to connect the CB pool nodes to the GKE cluster. However, the method described in that tutorial also requires enabling Identity Service for the cluster.
That tutorial, however, cannot be used by customers who have already deployed, or plan to deploy, their GKE cluster using Autopilot; more importantly, using Identity Service requires GKE Enterprise, as documented in [4].
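To illustrate the gap: the build that customers want to run from a private pool would look roughly like the cloudbuild.yaml fragment below (a sketch; PROJECT_ID, the region, and the pool and cluster names are placeholders). The open question this issue asks to document is what networking configuration lets the kubectl step reach a PSC-based Autopilot control plane without GKE Enterprise.

```yaml
# Sketch: PROJECT_ID, region, pool, and cluster names are placeholders.
steps:
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['get', 'nodes']
    env:
      - 'CLOUDSDK_COMPUTE_REGION=us-central1'
      - 'CLOUDSDK_CONTAINER_CLUSTER=example-cluster'
options:
  pool:
    name: 'projects/PROJECT_ID/locations/us-central1/workerPools/example-pool'
```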
Links:
[1]:
[2]:
[3]:
[4]: