Feature Request P2
Comments
ar...@gmail.com <ar...@gmail.com> #2
+1 to this feature request, came here seeking help at https://serverfault.com/questions/987908/custom-routes-rejected-by-gke-peer-configuration/988167#988167
We want to build CD pipelines with agents running in an internal on-premises network that is peered with our GCP network. The GKE private endpoint is not reachable in this case.
mo...@gmail.com <mo...@gmail.com> #3
+1 Could be quite a useful feature for us.
Ideally, we don't want a public IP for the Kubernetes API at all and would manage it privately inside the org with a hybrid architecture (site-to-site VPN, etc.).
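For reference, a cluster with no public endpoint at all can be created roughly like this; the cluster, network, and CIDR values below are placeholders, not anything from our setup:

  gcloud container clusters create my-private-cluster \
      --region us-central1 \
      --network my-vpc --subnetwork my-subnet \
      --enable-ip-alias \
      --enable-private-nodes \
      --enable-private-endpoint \
      --master-ipv4-cidr 172.16.0.32/28 \
      --enable-master-authorized-networks \
      --master-authorized-networks 10.0.0.0/8

The catch is that with --enable-private-endpoint the API server is only reachable from the VPC or from networks connected to it, which is exactly where this feature request comes in.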
[Deleted User] <[Deleted User]> #4
I thought this would be resolved now that the VPC peerings of private GKE clusters accept routes exported from the VPC, but it still doesn't seem to work for me.
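One way to check whether routes are actually being exchanged over that peering (the peering name, network, and region below are placeholders):

  # Routes my VPC imports from the control-plane VPC (should include the master CIDR)
  gcloud compute networks peerings list-routes gke-xxxx-peer \
      --network my-vpc --region us-central1 --direction INCOMING

  # Routes my VPC exports to it (should include the on-prem ranges once custom route export is on)
  gcloud compute networks peerings list-routes gke-xxxx-peer \
      --network my-vpc --region us-central1 --direction OUTGOING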
yo...@gmail.com <yo...@gmail.com> #6
Has anyone been able to make it work?
[Deleted User] <[Deleted User]> #7
I stumbled upon this issue and wanted to share how we fixed it. The documentation listed above references the VPC peering connection, which links here: https://cloud.google.com/vpc/docs/using-vpc-peering#update-peer-connection. After enabling "Export custom routes", I was able to start connecting from my private network. Exporting custom routes injects the static/dynamic routes into the peered VPC, which allows responses to flow back from the API server to the clients.
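In case it saves someone a lookup, the equivalent gcloud command is roughly as follows; the peering and network names are placeholders for the gke-...-peer peering that GKE created on our VPC:

  gcloud compute networks peerings update gke-xxxx-peer \
      --network my-vpc \
      --export-custom-routes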
[Deleted User] <[Deleted User]> #8
da...@myfitnesspal.com - This does not work with a private IP. Something else in your setup (a public IP, NAT, or similar) made it work, not the export/import of custom routes. This was verified by Google support, which investigated all setups in our projects, including export/import of custom routes.
[Deleted User] <[Deleted User]> #9
yo...@gmail.com - The only solutions are: 1) a proxy/NAT in the VPC where GKE is (small impact); 2) replacing the VPC peering with a VPN between the VPCs (big impact on customer projects).
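A rough sketch of option 1, assuming a hypothetical proxy VM running tinyproxy at a placeholder internal IP (10.10.0.5) in the same VPC as the cluster; on-prem clients reach the proxy over the VPN, and the proxy forwards to the control plane over the peering:

  # On the proxy VM (Debian-based), install a simple HTTP CONNECT proxy
  sudo apt-get install -y tinyproxy
  # Allow the on-prem client ranges in /etc/tinyproxy/tinyproxy.conf, then restart the service

  # From an on-prem workstation, fetch credentials for the private endpoint and
  # route kubectl traffic through the proxy
  gcloud container clusters get-credentials my-private-cluster --region us-central1 --internal-ip
  HTTPS_PROXY=http://10.10.0.5:8888 kubectl get nodes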
[Deleted User] <[Deleted User]> #10
[Deleted User] <[Deleted User]> #11
@ko...@cb4.com Happy to talk more about my setup, but by exporting the routes, the peering connection was exposed to the Cloud Router, which then advertised the paths over the BGP peer.
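For anyone else wiring this up: the Cloud Router does not advertise the peered control-plane range to on-prem by default, so it also needs a custom advertisement. Router, peer, region, and CIDR below are placeholders:

  gcloud compute routers update-bgp-peer my-router \
      --peer-name my-onprem-peer \
      --region us-central1 \
      --advertisement-mode CUSTOM \
      --set-advertisement-groups ALL_SUBNETS \
      --set-advertisement-ranges 172.16.0.32/28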
ne...@qwilt.com <ne...@qwilt.com> #12
This is a critical issue and a crucial security factor; I cannot understand how this is still open.
This is also true for VPCs in the same project!
ru...@timbrasil.com.br <ru...@timbrasil.com.br> #13
Weird, even though I set "Control plane global access = Enabled", it's not reachable... So what is its purpose? I don't want to set a public endpoint, for security reasons :(
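For reference, that setting corresponds to the flag below (cluster name and region are placeholders). As far as I understand, global access only makes the private endpoint reachable across regions over the VPC/peering path, so the routing pieces discussed above (custom route export, advertising the control-plane range to on-prem) are still required:

  gcloud container clusters update my-private-cluster \
      --region us-central1 \
      --enable-master-global-access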
Description
Be able to reach the GKE private master endpoint from an on-prem network through a VPN tunnel/Cloud Interconnect + VPC Network Peering.
How this might work:
Product enhancement; currently this is only possible by using a VM as a bastion host or proxy within the same VPC network and region as the GKE private cluster.
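An illustrative sketch of the bastion-host workaround, assuming a placeholder VM name, zone, and control-plane IP:

  # SSH port-forward through a bastion VM in the cluster's VPC to the private control plane,
  # connecting to the bastion over its internal IP (reachable via VPN/Interconnect)
  gcloud compute ssh gke-bastion --zone us-central1-a --internal-ip \
      -- -N -L 8443:172.16.0.34:443

  # Then point kubectl at the tunnel; the API server certificate is issued for the endpoint IP,
  # so set tls-server-name in the kubeconfig or skip TLS verification for testing
  kubectl --server=https://localhost:8443 --insecure-skip-tls-verify get nodes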
Task to accomplish:
To be able to reach the GKE private cluster endpoint from the on-prem network.
Current functionality if applicable:
None
Current customer workaround:
None