By Aparna Sinha, Group Product Manager, Container Engine
Just over a week ago, Google led the open source release of Kubernetes 1.7, and today that version is available on Container Engine, Google Cloud Platform's (GCP) managed container service. Container Engine is one of the first commercial Kubernetes offerings running the latest 1.7 release, and includes differentiated features for enterprise security, extensibility, hybrid networking and developer efficiency. Let's take a look at what's new in Container Engine.
Container Engine is designed with enterprise security in mind. By default, Container Engine clusters run a minimal, Google-curated Container-Optimized OS (COS) to ensure you don't have to worry about OS vulnerabilities. On top of that, a team of Google Site Reliability Engineers continuously monitors and manages Container Engine clusters, so you don't have to. This release adds several new security enhancements.
Together, these features improve workload isolation within a cluster, a frequently requested security capability in Kubernetes. Node Authorizer and NetworkPolicy can be combined with Container Engine's existing RBAC support to strengthen the foundations of multi-tenancy.
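As a sketch of how this fits together, a NetworkPolicy like the one below limits which pods can reach a workload (the namespace and label names here are illustrative, and enforcement requires a network plugin that supports the policy API):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: team-a          # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: api               # policy applies to pods labeled app=api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend      # only pods labeled app=frontend may connect
```

Combined with RBAC rules that scope each team to its own namespace, a policy like this keeps one tenant's pods from reaching another's.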
Perhaps the most anticipated features among our enterprise users are hybrid cloud networking and VPN support for Container Engine, both new in this release.
As more enterprises use Container Engine, we’re making a major investment to improve extensibility. We heard feedback that customers want to offer custom Kubernetes-style APIs in their clusters.
API Aggregation, launching today in beta on Container Engine, enables you to extend the Kubernetes API with custom APIs. For example, you can now add existing API solutions such as service catalog, or build your own in the future.
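Registering an aggregated API is done with an APIService object along the lines of the sketch below; the group, version, and backing service names are illustrative, and the caBundle value is a placeholder for a real base64-encoded CA certificate:

```yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1alpha1.servicecatalog.k8s.io   # <version>.<group>
spec:
  group: servicecatalog.k8s.io
  version: v1alpha1
  service:
    name: catalog-api        # Service fronting the extension API server
    namespace: catalog
  groupPriorityMinimum: 1000
  versionPriority: 15
  caBundle: "<base64-CA>"    # placeholder: CA used to verify the extension server
```

Once registered, requests to `/apis/servicecatalog.k8s.io/v1alpha1/...` are proxied by the main API server to the extension server behind the named Service.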
Users also want to incorporate custom business logic and third-party solutions into their Container Engine clusters. So we're introducing Dynamic Admission Control in alpha clusters, providing two ways to add business logic to your cluster.
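In Kubernetes 1.7 these take two alpha forms: initializers, which can observe and modify resources before they become active, and external admission webhooks, which can accept or reject requests. As a sketch, an InitializerConfiguration that routes new pods through a custom initializer looks roughly like this (the initializer name is illustrative):

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: pod-image-policy          # illustrative
initializers:
- name: podimage.example.com      # illustrative; a controller you run handles this
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]           # intercept pod creation before pods are scheduled
```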
As part of our plans to improve extensibility for enterprises, we're replacing the Third Party Resource (TPR) API with the improved Custom Resource Definition (CRD) API. CRDs are a lightweight way to store structured metadata in Kubernetes, making it easy to interact with custom controllers via kubectl. If you use the beta TPR feature, please plan to migrate to CRD before upgrading to the 1.8 release.
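A minimal CRD follows the pattern below (the group and kind are illustrative); once it's created, objects of the custom kind can be managed with kubectl like any built-in resource:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```

After this is applied, `kubectl get crontabs` works, and a custom controller can watch CronTab objects to act on them.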
Container Engine now enhances your ability to run stateful workloads such as databases and key-value stores (ZooKeeper, for example) with a new automated application update capability.
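As an illustration, a StatefulSet in 1.7 can declare a RollingUpdate strategy so that pods are updated automatically, one ordinal at a time. Everything below (names, image) is a sketch, not a production ZooKeeper configuration:

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk                    # illustrative ZooKeeper ensemble
spec:
  serviceName: zk-headless    # headless Service providing stable network identity
  replicas: 3
  updateStrategy:
    type: RollingUpdate       # new in 1.7: roll pods automatically on spec changes
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - name: zookeeper
        image: zookeeper:3.4  # illustrative image tag
```

Changing the pod template (for example, the image tag) then triggers an ordered, automated rollout instead of requiring manual pod deletion.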
A popular workload on Google Cloud and Container Engine is training machine learning models for better predictive analytics. Many of you have requested GPUs to speed up training time, so we’ve updated Container Engine to support NVIDIA K80 GPUs in alpha clusters for experimentation with this exciting feature. We’ll support additional GPUs in the future.
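In the 1.7 alpha, a container asks for a GPU through the `alpha.kubernetes.io/nvidia-gpu` resource in its limits; the pod name and image below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training                          # illustrative
spec:
  containers:
  - name: trainer
    image: tensorflow/tensorflow:latest-gpu   # illustrative GPU-enabled image
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1     # schedule onto a node with one K80
```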
When developers don’t have to worry about infrastructure, they can spend more time building applications. Kubernetes provides building blocks to decouple infrastructure and application management, and Container Engine builds on that foundation with best-in-class automation features.
We’ve automated large parts of maintaining the health of the cluster, with auto-repair and auto-upgrade of nodes.
Container Engine also offers cluster- and pod-level auto-scaling so applications can respond to user demand without manual intervention. This release introduces several GCP-optimized enhancements to cluster autoscaling.
The combination of auto-repair, auto-upgrades and cluster autoscaling in Container Engine enables application developers to deploy and scale their apps without being cluster admins.
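These automation features are exposed as flags on the gcloud CLI. The commands below are a sketch (the cluster and node-pool names are illustrative, and exact flag availability depends on your gcloud version):

```shell
# Create a node pool with node auto-repair and auto-upgrade enabled
gcloud container node-pools create default-pool \
  --cluster=my-cluster --zone=us-central1-a \
  --enable-autorepair --enable-autoupgrade

# Turn on cluster autoscaling for that node pool
gcloud container clusters update my-cluster --zone=us-central1-a \
  --node-pool=default-pool \
  --enable-autoscaling --min-nodes=1 --max-nodes=10
```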
We’ve also updated the Container Engine UI to assist in debugging and troubleshooting by including detailed workload-related views. For each workload, we show the type (DaemonSet, Deployment, StatefulSet, etc.), running status, namespace and cluster. You can also debug each pod and view its annotations, labels, replica counts, status and more. All views are cross-cluster, so if you're using multiple clusters, these views let you focus on your workloads no matter where they run. In addition, we include load balancing and configuration views with deep links to GCP networking, storage and compute. This new UI will be rolling out in the coming week.
Google Cloud is enabling a shift in enterprise computing: from local to global, from days to seconds, and from proprietary to open. The benefits of this model are becoming clear, as exemplified by Container Engine, which saw more than 10x growth last year.
To keep up with demand, we’re expanding our global capacity with new Container Engine clusters in our latest GCP regions.
These new regions join the half dozen others from Iowa to Belgium to Taiwan where Container Engine clusters are already up and running.
This blog post highlighted some of the new features available in Container Engine. You can find the complete list of new features in the Container Engine release notes.
The rapid adoption of Container Engine and its technology is translating into real customer impact, and recent customer stories highlight the benefits companies are seeing.
At Google Cloud we’re proud of our compute infrastructure, but what makes it truly valuable are the services that run on top. Google creates game-changing services on top of world-class infrastructure and tooling. With Kubernetes and Container Engine, Google Cloud makes these innovations available to developers everywhere.
Thanks for your feedback and support. Keep the conversation going and connect with us on the Container Engine Slack channel.