Bringing enterprise network security controls to your Kubernetes clusters on GKE
At Google Cloud, we work hard to give you the controls you need to tailor your network and security configurations to your organization’s needs. Today, we’re excited to announce the general availability of a few important networking features for Google Kubernetes Engine (GKE) that provide additional security and privacy for your container infrastructure: private clusters, master authorized networks, and Shared Virtual Private Cloud (VPC).
These new features enable you to limit access to your Kubernetes clusters from the public internet, confining them within the secure perimeter of your VPC, and to share common resources across your organization without compromising on isolation. Specifically:
Private clusters let you deploy GKE clusters privately as part of your VPC, restricting access to within the secure boundaries of your VPC.
Master authorized networks restrict access to your clusters’ master API endpoint to a set of IP addresses you control, blocking unauthorized traffic from the internet.
Shared VPC eases cluster maintenance and operation by separating responsibilities: it gives centralized control of critical network resources to network or security admins, and cluster responsibilities to project admins.
Credit Karma, a personal finance company that keeps track of its users’ credit scores, has been eagerly testing out these advanced GKE networking capabilities, especially as they work to meet compliance requirements such as PCI-DSS (Payment Card Industry Data Security Standard).
“GKE gives us the features we need to move faster. The private cluster capability enables us to meet strict security and compliance requirements without compromising on functionality. With private IPs and pod IP aliasing, we are able to communicate with other services in GCP while staying within Google’s private network.” – Kevin Jones, Staff engineer, Credit Karma
Now that we’ve been introduced to the new features, let’s take a look at each one in more detail.
Private clusters on GKE use only private IP addresses for your workloads, so they’re reachable only from within your VPC, and communication between the master and the nodes stays completely private.
To access your GKE master for administrative purposes, you can connect to the Kubernetes master privately from your on-premises network via VPN or Cloud Interconnect.
With master authorized networks, you can also whitelist a set of public IP addresses that are allowed to reach the master endpoint, blocking traffic from all other sources.
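As an illustrative sketch, here is how creating a private cluster with master authorized networks might look with the gcloud CLI; the cluster name, zone, and CIDR ranges are placeholders you would replace with your own values:

```shell
# Create a private GKE cluster: nodes get only private IPs, and the
# master endpoint accepts connections only from the whitelisted range.
# All names, the zone, and the CIDR ranges below are placeholders.
gcloud container clusters create example-private-cluster \
    --zone us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24
```

Note that `--enable-ip-alias` is required here, since private clusters rely on alias IP ranges for pod addressing.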
Access to images in Google Container Registry, and to Stackdriver for sending logs, also happens privately via Private Google Access, without leaving Google’s network. To give the nodes in a private cluster access to the internet, you can either set up additional services, such as a NAT gateway, or use Google’s managed version, Cloud NAT.
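For the Cloud NAT option, a minimal sketch looks like the following; the router and NAT names are hypothetical, and the region and network must match those of your cluster:

```shell
# Provide outbound internet access for private-cluster nodes via Cloud NAT.
# Router and NAT config names are placeholders; the region must match
# the region in which the cluster's nodes run.
gcloud compute routers create example-nat-router \
    --network default \
    --region us-central1

gcloud compute routers nats create example-nat-config \
    --router example-nat-router \
    --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```

With this in place, nodes can pull images or packages from the internet while still exposing no public IP addresses of their own.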
Check out the documentation on how to create a private cluster to confine your workloads within the secure boundaries of your VPC.
Shared VPC allows many different GKE cluster admins in an Organization to carry out their cluster management duties autonomously while communicating and sharing common resources securely.
For example, you can assign administrative responsibilities such as creating and managing a GKE cluster to project admins, while tasking security and network admin teams with the responsibility for critical network resources like subnets, routes, and firewalls. Learn how to create Kubernetes clusters in a Shared VPC model and set appropriate access controls for critical network resources.
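To make the division of responsibilities concrete, a network admin on the host project might enable Shared VPC and attach a service project, after which a project admin creates clusters against the shared network. The project IDs, network, and subnet names below are placeholders:

```shell
# Run by a Shared VPC admin: enable Shared VPC on the host project
# and attach a service project to it. IDs are placeholders.
gcloud compute shared-vpc enable example-host-project

gcloud compute shared-vpc associated-projects add example-service-project \
    --host-project example-host-project

# Run by a project admin in the service project: create a cluster
# using a network and subnet owned by the host project.
gcloud container clusters create example-cluster \
    --project example-service-project \
    --zone us-central1-a \
    --enable-ip-alias \
    --network projects/example-host-project/global/networks/shared-net \
    --subnetwork projects/example-host-project/regions/us-central1/subnetworks/shared-subnet
```

This keeps ownership of subnets, routes, and firewalls in the host project while cluster lifecycle operations stay with the service project's admins.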
In conclusion, GKE provides centralized network and security management for your enterprise deployments, and allows your sensitive workloads to remain secure and private within the boundaries of your VPC. Read more about how to think holistically about networking for your GKE deployments.