GKE release notes archive

This page contains a historical archive of all release notes for Google Kubernetes Engine prior to 2020. To view more recent release notes, see the Release notes.

You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or programmatically access release notes in BigQuery.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly.

December 23, 2019

Rapid channel
(1.16.x)

Global access for internal TCP/UDP load balancing Services is now Beta. Global access allows internal load balancing IP addresses to be accessed from any region within a VPC.

December 13, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

No Channel

v1.12.x
1.12.10-gke.22
v1.15.x
1.15.4-gke.22

GKE 1.15 is generally available for new clusters.

Upgrading

Before creating GKE v1.15 clusters, you must review the known issues and urgent upgrade notes.

New features

By default, firewall rules restrict your cluster master to initiating TCP connections to your nodes only on ports 443 (HTTPS) and 10250 (kubelet). Some Kubernetes features require firewall rules that allow access on additional ports. For example, in Kubernetes 1.9 and older, kubectl top accesses heapster, which needs a firewall rule allowing TCP connections on port 8080. To grant such access, add the corresponding firewall rules.
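As a sketch of what such a rule might look like for the heapster example above (the network name, master CIDR, and node tag below are placeholders you would look up for your own cluster):

```shell
# Hypothetical sketch: allow the cluster master to reach nodes on TCP 8080
# (needed by heapster on Kubernetes 1.9 and older). Look up your master's
# CIDR block and your nodes' target tag first; the values here are examples.
gcloud compute firewall-rules create allow-master-to-heapster \
    --network my-network \
    --source-ranges 172.16.0.0/28 \
    --target-tags my-node-tag \
    --allow tcp:8080
```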

Node-local DNS caching is now available in beta. Note that this creates a single point of failure: if the node-local cache goes down, DNS resolution for all Pods on that node is broken until the cache comes back up.
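As a sketch, the beta cache could be enabled at cluster creation with the NodeLocalDNS addon flag (the cluster name and zone below are placeholders):

```shell
# Hypothetical sketch: create a cluster with the beta NodeLocal DNSCache
# addon enabled, so each node runs a local DNS cache for its Pods.
gcloud beta container clusters create my-cluster \
    --zone us-central1-a \
    --addons NodeLocalDNS
```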

Known Issues

There is a low risk that consumers of the published OpenAPI document that made assumptions about the absence of schema info for a given type (for example, "no schema info means a resource is a custom resource") could have those assumptions broken once custom resources start publishing schema definitions.

Stable channel and 1.13.x

Stable channel

There are no changes to the Stable channel this week.

No channel
  • 1.13.11-gke.15
  • 1.13.12-gke.16

Regular channel and 1.14.x

Regular channel

There are no changes to the Regular channel, but 1.15 will be available in this channel in January 2020.

Note: Relevant content is also available separately in the Regular channel release notes.
No channel
  • 1.14.7-gke.25
  • 1.14.8-gke.21
  • 1.14.9-gke.2

Rapid channel
(1.16.x)

Rapid channel
1.16.0-gke.20
Note: Relevant content is also available separately in the Rapid channel release notes.

GKE 1.16.0-gke.20 (alpha) is now available for testing and validation in the Rapid release channel.

Important: Existing clusters enrolled in the Rapid release channel will be auto-upgraded to this version.

Retired APIs

extensions/v1beta1, apps/v1beta1, and apps/v1beta2 won't be served by default.

  • All resources under apps/v1beta1 and apps/v1beta2 - use apps/v1 instead.
  • daemonsets, deployments, replicasets resources under extensions/v1beta1 - use apps/v1 instead.
  • networkpolicies resources under extensions/v1beta1 - use networking.k8s.io/v1 instead.
  • podsecuritypolicies resources under extensions/v1beta1 - use policy/v1beta1 instead.
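One way to prepare for these removals is to confirm your workloads are served by the new groups and to rewrite any old manifests; a sketch (the manifest filename is a placeholder, and kubectl convert was still shipped with kubectl at the time of this release):

```shell
# Hypothetical sketch: list Deployments as served by the replacement
# apps/v1 group, then rewrite an old extensions/v1beta1 manifest.
kubectl get deployments.v1.apps --all-namespaces

# Convert a manifest that still declares a retired apiVersion.
kubectl convert -f deploy.yaml --output-version apps/v1 > deploy-apps-v1.yaml
```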

Changes

New clusters have the cos-metrics-enabled flag enabled by default. This change allows kernel crash logs to be collected. You can disable it by adding --metadata cos-metrics-enabled=false when you create clusters.
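For example, the opt-out described above might look like this (cluster name and zone are placeholders):

```shell
# Sketch: create a cluster with kernel crash log collection disabled,
# using the metadata flag named in the note above.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --metadata cos-metrics-enabled=false
```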

New features

Maintenance windows and exclusions, previously available in beta, are now generally available.

Changes

The beta version of Stackdriver Kubernetes Engine Monitoring is no longer supported.

Legacy Stackdriver support for Google Kubernetes Engine (GKE) is deprecated. If you're using Legacy Stackdriver for logging or monitoring, you must migrate to Stackdriver Kubernetes Engine Monitoring before Legacy Stackdriver is decommissioned. For more information, see Legacy Stackdriver support for GKE deprecation.

December 6, 2019

The December 4, 2019 rollout is paused. Versions that were made available for upgrades and new clusters in that release will no longer be available. This is to address an issue where newly created node pools are created successfully but are incorrectly shown as PROVISIONING.

December 4, 2019

Fixed

We have fixed an issue with cluster upgrade from a version earlier than 1.14.2-gke.10 when gVisor is enabled in the cluster. It's now safe to upgrade to any version greater than 1.14.7-gke.17. This issue was originally noted in the release notes for October 30, 2019.

Version updates

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

No new v1.12.x versions this week.

Stable channel and 1.13.x

Stable channel

There are no changes to the Stable channel this week.

Note: Relevant content is also available separately in the Stable channel release notes.
No channel
1.13.12-gke.14
Note: 1.13.12-gke.14 is not yet available in the Stable channel. It is available to clusters that do not use a release channel.

Regular channel and 1.14.x

Regular channel

There are no changes to the Regular channel this week.

Note: Relevant content is also available separately in the Regular channel release notes.
No channel
1.14.8-gke.18
Note: 1.14.8-gke.18 is not yet available in the Regular channel. It is available to clusters that do not use a release channel.

Rapid channel
(1.15.x)

Rapid channel

There are no changes to the Rapid channel this week.

November 22, 2019

Fixed

The known issue in the COS kernel that may cause kernel panic, previously reported on November 5th, 2019, is resolved. The versions available in this release use updated versions of COS. GKE 1.12 uses cos-69-10895-348-0, and versions 1.13 and 1.14 use cos-stable-73-11647-348-0.

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version     Upgrade version
1.12.10-gke.15      1.12.10-gke.17

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

1.12.10-gke.20

This version uses cos-69-10895-348-0, which fixes the known issue that may cause kernel panics, previously reported on November 5th, 2019.

Stable channel and 1.13.x

Stable channel

There are no changes to the Stable channel this week.

No channel
1.13.12-gke.13

This version uses cos-stable-73-11647-348-0, which fixes the known issue that may cause kernel panics, previously reported on November 5th, 2019.

Regular channel and 1.14.x

Regular channel

There are no changes to the Regular channel this week.

No channel
1.14.8-gke.17

This version uses cos-stable-73-11647-348-0, which fixes the known issue that may cause kernel panics, previously reported on November 5th, 2019.

Rapid channel
(1.15.x)

Rapid channel

There are no changes to the Rapid channel this week.

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.12.10-gke.15
  • 1.13.11-gke.5
  • 1.13.11-gke.9
  • 1.13.11-gke.11
  • 1.13.12-gke.2
  • 1.14.7-gke.10
  • 1.14.7-gke.14
  • 1.14.7-gke.17
  • 1.14.8-gke.2

November 18, 2019

Fixed

The known issue in the COS kernel that may cause nodes to crash, previously reported on November 5th, 2019, is resolved. This release downgrades COS to cos-73-11647-293-0.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version                   Upgrade version
1.13.0-gke.0 to 1.13.11-gke.13    1.13.11-gke.14 (Stable channel)
1.13.12-gke.0 to 1.13.12-gke.7    1.13.12-gke.8
1.14.0-gke.0 to 1.14.7-gke.22     1.14.7-gke.23
1.14.8-gke.0 to 1.14.8-gke.11     1.14.8-gke.12 (Regular channel)
Note: 1.15 was unaffected by this issue.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

1.12.10-gke.17

No new v1.12.x versions this week.

Stable channel and 1.13.x

Stable channel
1.13.11-gke.14

This version includes a fix for a known issue in the COS kernel that may have caused nodes to crash.

Note: Relevant content is also available separately in the Stable channel release notes.
No channel
1.13.12-gke.8
Note: 1.13.12-gke.8 is not yet available in the Stable channel. It is available to clusters that do not use a release channel.

This version includes a fix for a known issue in the COS kernel that may have caused nodes to crash.

Regular channel and 1.14.x

Regular channel
1.14.8-gke.12
Note: Relevant content is also available separately in the Regular channel release notes.
No channel
1.14.7-gke.23

This version includes a fix for a known issue in the COS kernel that may have caused nodes to crash.

Rapid channel
(1.15.x)

1.15.4-gke.15

No new v1.15.x versions this week.

November 11, 2019

Changes

After November 11, 2019, new clusters and node pools created with gcloud have node auto-upgrade enabled by default.
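With the new default, opting out becomes explicit; a sketch (cluster, pool, and zone names are placeholders):

```shell
# Sketch: node auto-upgrade is now on by default, so a node pool that
# should not auto-upgrade must be created with the opt-out flag.
gcloud container node-pools create my-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --no-enable-autoupgrade
```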

November 05, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version   Upgrade version
v1.12.x           1.12.10-gke.15
v1.13.x           1.13.11-gke.5
v1.14.x           1.14.7-gke.10
Note: Clusters using release channels are auto-upgraded when new versions are available in their channel, as noted in the following sections.

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

v1.12.10-gke.17

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

Updated containerd to 1.2.10.

Stable channel
(1.13.x)

Note: Relevant content is also available separately in the Stable channel release notes.
v1.13.11-gke.11
Note: This version is available to clusters that do not use a release channel.

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

v1.13.12-gke.2
Note: This version is available to clusters that do not use a release channel.

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

Regular channel
(1.14.x)

v1.14.7-gke.17

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

v1.14.8-gke.2

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

Rapid channel
(1.15.x)

v1.15.4-gke.18
Note: Relevant content is also available separately in the Rapid channel release notes.

GKE 1.15.4-gke.18 (alpha) is now available for testing and validation in the Rapid release channel. For more details, refer to the release notes for Kubernetes v1.15.

This release includes a patch for the golang vulnerability CVE-2019-17596, fixed in go-boringcrypto 1.13.1 and 1.12.11.

Known issues

We have found an issue in COS that might cause kernel panics on nodes.

This impacts node versions:
  • 1.13.11-gke.9
  • 1.13.11-gke.11
  • 1.13.11-gke.12
  • 1.13.12-gke.1
  • 1.13.12-gke.2
  • 1.13.12-gke.3
  • 1.13.12-gke.4
  • 1.14.7-gke.14
  • 1.14.7-gke.17
  • 1.14.8-gke.1
  • 1.14.8-gke.2
  • 1.14.8-gke.6
  • 1.14.8-gke.7

A patch is being tested and will roll out soon, but we recommend customers avoid these node versions or downgrade to previous, unaffected patches.

New features

Surge upgrades are now in beta. Surge upgrades allow you to configure the speed and disruption of node upgrades.
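As a sketch, the surge behavior could be tuned per node pool with the beta flags available at this release (cluster, pool, and zone names are placeholders):

```shell
# Hypothetical sketch: upgrade one extra node at a time while keeping
# all existing nodes available (max surge 1, max unavailable 0).
gcloud beta container node-pools update my-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --max-surge-upgrade 1 \
    --max-unavailable-upgrade 0
```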

Changes

Node auto-provisioning has reached General Availability. Node auto-provisioning creates or deletes node pools from your cluster based upon resource requests.

October 30, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New default version

The default version for new clusters is now v1.13.11-gke.9 (previously v1.13.10-gke.0). Clusters enrolled in the stable release channel will be auto-upgraded to this version.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version   Upgrade version
1.12.x versions   1.12.10-gke.17
1.13.x versions   1.13.11-gke.5
1.14.x versions   1.14.7-gke.10
Note: Clusters using release channels are auto-upgraded when new versions are available in the relevant release channels, as noted in the following sections.

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

No new v1.12.x versions this week.

Stable channel
and 1.13.x

Stable channel
Note: Relevant content is also available separately in the Stable channel release notes.
1.13.11-gke.9

Update containerd to 1.2.10.

Update COS to cos-u-73-11647-329-0.

This release includes a patch for CVE-2019-11253. For more information, see the security bulletin for October 16, 2019.

Regular channel and 1.14.x

Regular channel
Note: Relevant content is also available separately in the Regular channel release notes.
1.14.7-gke.10

This version was generally available on October 18, 2019 and is now available in the Regular release channel.

This release includes a patch for CVE-2019-11253. For more information, see the security bulletin for October 16, 2019.

No channel
1.14.7-gke.14
Note: 1.14.7-gke.14 is not yet available in the Regular channel. It is available to clusters that do not use a release channel.

Update COS to cos-u-73-11647-329-0.

Rapid channel
(1.15.x)

1.15.4-gke.17
Note: Relevant content is also available separately in the Rapid channel release notes.

GKE 1.15.4-gke.17 (alpha) is now available for testing and validation in the Rapid release channel.

Important: Existing clusters enrolled in the Rapid release channel will be auto-upgraded to this version.

Fixes a known issue reported on October 11, 2019 regarding an fdatasync performance regression on COS/Ubuntu. Node image for Container-Optimized OS updated to cos-77-12371-89-0. Node image for Ubuntu updated to ubuntu-gke-1804-d1903-0-v20191011a.

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.12.10-gke.15
  • 1.13.7-gke.24
  • 1.13.9-gke.3
  • 1.13.9-gke.11
  • 1.13.10-gke.0
  • 1.13.10-gke.7
  • 1.14.6-gke.1
  • 1.14.6-gke.2
  • 1.14.6-gke.13

Known Issues

If you use Sandbox Pods in your GKE cluster and plan to upgrade from a version earlier than 1.14.2-gke.10 to a version later than 1.14.2-gke.10, you need to manually run kubectl delete mutatingwebhookconfiguration gvisor-admission-webhook-config after the upgrade.

October 18, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version                          Upgrade version
1.12.x versions                          1.13.7-gke.24
1.14.x versions 1.14.6-gke.0 and older   1.14.6-gke.1
Note: Clusters using release channels are auto-upgraded when new versions are available in the relevant release channels, as noted in the following sections.

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

1.12.10-gke.15

This release includes a patch for CVE-2019-11253. For more information, see the security bulletin for October 16, 2019.

Stable channel and 1.13.x

Stable channel

There are no changes to the Stable channel this week.

Note: Relevant content is also available separately in the Stable channel release notes.
No channel
1.13.11-gke.5
Note: 1.13.11-gke.5 is not yet available in the Stable channel. It is available to clusters that do not use a release channel.

Regular channel and 1.14.x

Regular channel

There are no changes to the Regular channel this week.

Note: Relevant content is also available separately in the Regular channel release notes.
No channel
1.14.7-gke.10
Note: 1.14.7-gke.10 is not yet available in the Regular channel. It is available to clusters that do not use a release channel.

Rapid channel
(1.15.x)

1.15.4-gke.15
Note: Relevant content is also available separately in the Rapid channel release notes.

GKE 1.15.4-gke.15 (alpha) is now available for testing and validation in the Rapid release channel.

Important: Existing clusters enrolled in the Rapid release channel will be auto-upgraded to this version.

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.12.9-gke.15
  • 1.12.9-gke.16
  • 1.12.10-gke.5
  • 1.12.10-gke.11

Security bulletin

A vulnerability was recently discovered in Kubernetes, described in CVE-2019-11253, which allows any user authorized to make POST requests to execute a remote Denial-of-Service attack on a Kubernetes API server. For more information, see the security bulletin.

October 11, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New default version

The default version for new clusters is now v1.13.10-gke.0 (previously v1.13.7-gke.24). Clusters enrolled in the stable release channel will be auto-upgraded to this version.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version                            Upgrade version
versions older than 1.12.9-gke.13          1.12.9-gke.15
1.13.x versions older than 1.13.7-gke.19   1.13.7-gke.24
1.14.x versions older than 1.14.6-gke.0    1.14.6-gke.1
Note: Clusters using release channels are auto-upgraded when new versions are available in the relevant release channels, as noted in the following sections.

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

1.12.10-gke.11

Upgrade containerd to 1.2.9.

Node image for Container-Optimized OS updated to cos-69-10895-348-0.

Node image for Ubuntu updated to ubuntu-gke-1804-d1703-0-v20190917.

Stable channel
(1.13.x)

Stable channel
Note: Relevant content is also available separately in the Stable channel release notes.
1.13.10-gke.0

This version was generally available on September 16, 2019 and is now available in the Stable release channel.

This release includes a patch for CVE-2019-9512 and CVE-2019-9514. For more information, see the security bulletin for September 16, 2019.

No channel
1.13.10-gke.7
Note: 1.13.10-gke.7 is not yet available in the Stable channel. It is available to clusters that do not use a release channel.

Node image for Container-Optimized OS updated to cos-u-73-11647-293-0.

Node image for Ubuntu updated to ubuntu-gke-1804-d1809-0-v20190918. Upgrades the Nvidia GPU driver to the 418 driver, adds the Vulkan ICD for graphical workloads, and fixes the nvidia-uvm installation order.

Regular channel
(1.14.x)

Regular channel
Note: Relevant content is also available separately in the Regular channel release notes.
1.14.6-gke.1

This version was generally available on September 9, 2019 and is now available in the Regular release channel.

No channel
1.14.6-gke.13
Note: 1.14.6-gke.13 is not yet available in the Regular channel. It is available to clusters that do not use a release channel.

Enable SecureBoot on master VMs.

Node image for Ubuntu updated to ubuntu-gke-1804-d1809-0-v20190918. Upgrades the Nvidia GPU driver to the 418 driver, adds the Vulkan ICD for graphical workloads, and fixes the nvidia-uvm installation order.

Upgrades GPU device plugin to the latest version with Vulkan support.

Do not upgrade to this version if you use Workload Identity. There is a known issue where the gke-metadata-server Pods crashloop if you create a cluster with, or upgrade a cluster to, 1.14.6-gke.13.

Fixes an issue where cronjobs cannot be scheduled when the total number of existing jobs exceeds 500.

Rapid channel
(1.15.x)

1.15.3-gke.18
Note: Relevant content is also available separately in the Rapid channel release notes.

GKE 1.15.3-gke.18 (alpha) is now available for testing and validation in the Rapid release channel.

Important: Existing clusters enrolled in the Rapid release channel will be auto-upgraded to this version.

Upgraded Istio to 1.2.5.

Improvements to gVisor.

Node image for Container-Optimized OS updated to cos-rc-77-12371-44-0. This update includes upgrading the kernel to 4.19 from 4.14 and upgrading Docker to 19.03 from 18.09.

Node image for Ubuntu updated to ubuntu-gke-1804-d1903-0-v20190917a. This update includes upgrading the kernel to 5 from 4.15 and upgrading Docker to 19.03 from 18.09.

Do not update to this version if you have clusters with hundreds of nodes per cluster or with I/O-intensive workloads. Clusters with these characteristics may be impacted by a known issue in versions 4.19 and 5.0 of the Linux kernel that introduces performance regressions in the fdatasync system call.

Versions no longer available

v1.14.3-gke.11 is no longer available for new clusters or upgrades.

Features

Node auto-provisioning is now generally available.

Vertical Pod Autoscaler is now generally available.

Changes

Upgrade Cloud Run on GKE to 0.9.0.

Fixed issues

Fixed a bug with fluentd that would prevent new nodes from starting on large clusters with over 1000 nodes on v1.12.6.

October 2, 2019

Maintenance windows and exclusions now give you granular control over when automatic maintenance occurs on your clusters. You can specify the start time, duration, and recurrence of a cluster's maintenance window. You can also designate specific periods of time when non-essential automatic maintenance should not occur.
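As a sketch of the window and exclusion described above (the cluster name, times, and exclusion name are placeholders):

```shell
# Hypothetical sketch: a recurring weekend maintenance window. The start/end
# times define the window's daily span; the recurrence limits it to weekends.
gcloud container clusters update my-cluster \
    --maintenance-window-start 2019-10-05T04:00:00Z \
    --maintenance-window-end 2019-10-05T08:00:00Z \
    --maintenance-window-recurrence 'FREQ=WEEKLY;BYDAY=SA,SU'

# Hypothetical sketch: an exclusion blocking non-essential maintenance
# over a year-end change freeze.
gcloud container clusters update my-cluster \
    --add-maintenance-exclusion-name year-end-freeze \
    --add-maintenance-exclusion-start 2019-12-23T00:00:00Z \
    --add-maintenance-exclusion-end 2020-01-02T00:00:00Z
```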

September 26, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New default version

The default version for new clusters is now v1.13.7-gke.24 (previously v1.13.7-gke.8). Clusters enrolled in the stable release channel will be auto-upgraded to this version.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version                            Upgrade version
versions older than 1.12.9-gke.13          1.12.9-gke.15
1.13.x versions older than 1.13.7-gke.19   1.13.7-gke.24
Note: Clusters using release channels are auto-upgraded when new versions are available in the relevant release channels, as noted in the following sections.

Auto-upgrades are currently occurring two days behind the rollout schedule. Some 1.11 clusters will be upgraded to 1.12 in the week of October 7th.

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

1.12.x

No new v1.12.x versions this week.

Stable channel
(1.13.x)

No new v1.13.x versions this week.

v1.13.7-gke.24 is now available in the Stable release channel.

Regular channel
(1.14.x)

There are no changes to the Regular channel in this release.

Note: 1.14.6-gke.2 is not yet available in the Regular channel. It is available to clusters that do not use a release channel.
Note: This version was previously available in the Rapid channel.
1.14.6-gke.2

This release includes a patch for CVE-2019-9512 and CVE-2019-9514.

Rapid channel
(1.15.x)

GKE 1.15.3-gke.1 (alpha) is now available for testing and validation in the Rapid release channel.

Important: Existing clusters enrolled in the Rapid release channel will be auto-upgraded to this version.

For more details, refer to the release notes for Kubernetes v1.15.

Starting with GKE v1.15, the open source Kubernetes Dashboard is no longer natively supported in GKE as a managed add-on. To deploy it manually, follow the deployment instructions in the Kubernetes Dashboard documentation.

Resizing PersistentVolumes is now a beta feature. As part of this change, resizing a PersistentVolume no longer requires you to restart the Pod.

Versions no longer available

The following versions are no longer available for new clusters or upgrades.

  • 1.12.7-gke.25
  • 1.12.7-gke.26
  • 1.12.8-gke.10
  • 1.12.8-gke.12
  • 1.12.9-gke.7
  • 1.12.9-gke.13
  • 1.13.6-gke.13
  • 1.13.7-gke.8
  • 1.13.7-gke.19

September 20, 2019

Ingress Controller v1.6, which was previously available in beta, is generally available for clusters running v1.13.7-gke.5 and higher.

Along with Ingress Controller, the following are also generally available:

This note has been corrected. Using Google-managed SSL certificates is currently in Beta.

September 16, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

The release notes for September 16, 2019 were incorrectly published early, on September 9. The incorrect release notes included an announcement of the availability of a security patch that was not actually made available on that date. For more information about the security patch, see the security bulletin for September 16, 2019.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version   Upgrade version
v1.11             v1.12

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

v1.12.10-gke.5

Fixes an issue where Vertical Pod Autoscaler would reject valid Pod patches.

Stable channel
(1.13.x)

Note: Relevant content is also available separately in the Stable channel release notes.
1.13.10-gke.0
Note: This version is not yet available in the Stable channel. It is available to clusters that do not use a release channel.

Reduces startup time for GPU nodes running Container-Optimized OS.

v1.13.7-gke.8

This version was generally available on June 27, 2019 and is now available in the Stable release channel.

Regular channel
(1.14.x)

Note: Relevant content is also available separately in the Regular channel release notes.
v1.14.6-gke.1
Note: This version is not yet available in the Regular channel. It is available to clusters that do not use a release channel.

Reduces startup time for GPU nodes running Container-Optimized OS.

v1.14.3-gke.11

This version was generally available on September 5, 2019 and is now available in the Regular release channel.

Rapid channel
(1.14.x)

v1.14.6-gke.1
Note: Relevant content is also available separately in the Rapid channel release notes.

GKE v1.14.6-gke.1 (alpha) is now available for testing and validation in the Rapid release channel. For more details, refer to the release notes for Kubernetes v1.14.6.

This release includes a patch for CVE-2019-9512 and CVE-2019-9514. For more information, see the security bulletin for September 16, 2019.

Reduces startup time for GPU nodes running Container-Optimized OS.

New features

Correction: This note was incorrectly published early. Ingress Controller v1.6 became generally available on September 20, 2019.

Ingress Controller v1.6, which was previously available in beta, is generally available for clusters running v1.13.7-gke.5 and higher.

Network Endpoint Groups, which allow HTTP(S) load balancers to target Pods directly, are now generally available.

Release channels, which provide more control over which automatic upgrades your cluster receives, are generally available. In addition to the Rapid channel, you can now enroll your clusters in the Regular or Stable channel.

September 9, 2019

Correction

The release notes for September 16, 2019 were incorrectly published early, on September 9. The incorrect release notes included an announcement of the availability of a security patch that was not actually made available until the week of September 16, 2019. For more information about the patch, see the security bulletin for September 16, 2019.

No GKE releases occurred the week of September 9, 2019.

September 5, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New default version

The default version for new clusters is now 1.13.7-gke.8 (previously 1.12.8-gke.10).

Scheduled automatic upgrades

Auto-upgrades are no longer paused.

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version   Upgrade version
1.11.x            1.12.7-gke.25

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.12.x

1.12.9-gke.16

Minor bug fixes and performance improvements.

v1.13.x

1.13.9-gke.3

Bug fixes and performance improvements.

v1.14.x

1.14.3-gke.11

GKE 1.14 is generally available.

Note: If you created a v1.14.x cluster using the Rapid channel, you cannot currently modify the cluster to use a non-Rapid version. To keep the cluster running v1.14.x, re-create the cluster without enrolling it in the Rapid channel.

Upgrading

Before upgrading clusters to GKE v1.14, you must review the known issues and urgent upgrade notes.

For example, the default RBAC policy no longer grants access to discovery and permission-checking APIs, and you must take specific action to preserve the old behavior for newly-created cluster users.
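The release notes do not include a specific manifest for preserving the old behavior. As one hedged sketch (the binding name is illustrative), a ClusterRoleBinding like the following re-grants the built-in system:discovery ClusterRole to unauthenticated users, matching the pre-1.14 default; only apply something like this if your workloads actually depend on the old behavior:

```yaml
# Hedged sketch: restore anonymous access to the discovery APIs
# (the pre-1.14 default). The binding name is illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: restore-unauthenticated-discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
```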

Differences between GKE v1.14.x and Kubernetes 1.14

GKE v1.14.x has the following differences from Kubernetes 1.14.x:

  • Storage Migrator is not supported on GKE v1.14.x.

  • CSI Inline Volumes (Alpha) are not supported on GKE v1.14.x.

  • Huge Pages is not supported on GKE 1.14.x. If you are interested in support for Huge Pages, register your interest.

New features

Pod Ready++ is generally available and supported on GKE v1.14.x.

Pod priority and preemption is generally available and supported on GKE v1.14.x.

The RunAsGroup feature has been promoted to beta and enabled by default. PodSpec and PodSecurityPolicy objects can be used to control the primary GID of containers on Docker and containerd runtimes.
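As a minimal sketch of the RunAsGroup beta feature described above (the Pod name, image, and IDs are illustrative), a Pod-level securityContext can set the primary GID for all containers:

```yaml
# Hedged sketch: set the primary GID for all containers in a Pod.
apiVersion: v1
kind: Pod
metadata:
  name: rungroup-demo        # illustrative name
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000         # primary GID applied to container processes
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "id && sleep 3600"]
```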

Early access to test Windows containers is now available. If you are interested in testing Windows containers, fill out this form.

Other changes

The node.k8s.io API group and runtimeclasses.node.k8s.io resource have been migrated to a built-in API. If you were using RuntimeClasses, you must recreate each of them after upgrading, and also delete the runtimeclasses.node.k8s.io CRD. RuntimeClasses can no longer be created without a defined handler.
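A RuntimeClass recreated against the built-in API might look like the following hedged sketch; the name and handler value are illustrative and must match a handler configured in your CRI runtime:

```yaml
# Hedged sketch: a RuntimeClass on the built-in node.k8s.io API.
# The handler field is now required.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor       # illustrative name
handler: gvisor      # CRI handler name; must exist in your runtime config
```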

When creating a new GKE cluster, Stackdriver Kubernetes Engine Monitoring is now the default Stackdriver support option. This is a change from prior versions, where Stackdriver Logging and Stackdriver Monitoring were the default Stackdriver support options. For more information, see Overview of Stackdriver support for GKE.

OS and Arch information is now recorded in kubernetes.io/os and kubernetes.io/arch labels on Node objects. The previous labels (beta.kubernetes.io/os and beta.kubernetes.io/arch) are still recorded, but are deprecated and targeted for removal in Kubernetes 1.18.
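For example (a hedged sketch; the Pod name and image are illustrative), workloads can select nodes by the new stable labels instead of the deprecated beta ones:

```yaml
# Hedged sketch: schedule onto Linux nodes using the stable label.
apiVersion: v1
kind: Pod
metadata:
  name: linux-only        # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: linux   # replaces beta.kubernetes.io/os
  containers:
  - name: app
    image: nginx
```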

Known Issues

Users with the Quobyte Volume plugin are advised not to upgrade between GKE 1.13.x and 1.14.x due to an issue with Kubernetes 1.14. This will be fixed in an upcoming release.


Rapid

The following versions are available to clusters enrolled in the Rapid release channel.

Note: This content is also available separately in the Rapid channel release notes.
1.14.5-gke.5

GKE 1.14.5-gke.5 is now available in the Rapid release channel. It includes bug fixes and performance improvements. For more details, refer to the release notes for Kubernetes v1.14.

New features

Intranode visibility is generally available.

You can now use Customer-managed encryption keys (beta) to control the encryption used for attached persistent disks in your clusters. This is available as a dynamically provisioned PersistentVolume.
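The note above does not show what the StorageClass looks like; the following is a hedged sketch based on the beta Compute Engine persistent disk CSI driver, with placeholder project, region, key ring, and key names:

```yaml
# Hedged sketch: dynamically provisioned disks encrypted with a
# Cloud KMS key via the beta Compute Engine PD CSI driver.
# PROJECT/REGION/RING/KEY are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd-cmek
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  disk-encryption-kms-key: projects/PROJECT/locations/REGION/keyRings/RING/cryptoKeys/KEY
```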

Rollout schedule

The rollout schedule is now included in Upgrades.

August 22, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Scheduled automatic upgrades

Auto-upgrades are currently paused.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

1.11.10-gke.6

This version was previously released and is available again. It mitigates against the vulnerability described in the security bulletin published on August 5, 2019.

v1.12.x

Multiple v1.12.x versions are available this week:

1.12.9-gke.13

This version mitigates against the vulnerability described in the security bulletin published on August 5, 2019.

1.12.9-gke.15

Fixes an issue that can cause Horizontal Pod Autoscaler to increase the replica count to the maximum, regardless of other autoscaling factors.

Upgrades Istio to 1.1.13 to address two vulnerabilities announced by the Istio project. These vulnerabilities can be used to mount a Denial of Service (DoS) attack against services using Istio.

The node image for Container-Optimized OS (COS) is now cos-69-10895-329-0.

v1.13.x

Multiple v1.13.x versions are available this week:

1.13.7-gke.19

This version mitigates against the vulnerability described in the security bulletin published on August 5, 2019.

1.13.7-gke.24

Fixes an issue that can cause Horizontal Pod Autoscaler to increase the replica count to the maximum during a rolling update, regardless of other autoscaling factors.

Upgrades Istio to 1.1.13 to address two vulnerabilities announced by the Istio project. These vulnerabilities can be used to mount a Denial of Service (DoS) attack against services using Istio.

The node image for Container-Optimized OS (COS) is now cos-73-11647-267-0.

Rapid channel

1.14.3-gke.11

Note: This content is also available separately in the Rapid channel release notes.

GKE 1.14.3-gke.11 (alpha) is now available for testing and validation in the Rapid release channel. For more details, refer to the release notes for Kubernetes v1.14.

This version mitigates against the vulnerability described in the security bulletin published on August 5, 2019.

Upgrades Istio to 1.1.13 to address two vulnerabilities announced by the Istio project. These vulnerabilities can be used to mount a Denial of Service (DoS) attack against services using Istio.

The node image for Container-Optimized OS (COS) is now cos-73-11647-267-0.

New features

Config Connector is a Kubernetes addon that allows you to manage your Google Cloud resources through Kubernetes configuration.

Rollout schedule

The rollout schedule is now included in Upgrades.

August 12, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Important information about v1.10.x nodes

In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not guaranteed to receive bug fixes or security updates, and the control plane may become incompatible with nodes running unsupported versions.

Specifically, the Kubernetes v1.13.x control plane is not compatible with nodes running v1.10.x. Clusters in such a configuration could become unreachable or fail to run your workloads correctly. Additionally, security patches are not applied to v1.10.x and below.

We previously published a notice that Google would enable node auto-upgrade to node pools running v1.10.x or lower, to bring those clusters into a supported configuration and mitigate the incompatibility risk described above. To allow for sufficient time for customers to complete the upgrade themselves, Google postponed upgrading cluster control planes to 1.13 until mid-September 2019. Please plan your manual node upgrade to keep your clusters healthy and up to date.

Scheduled automatic upgrades

Auto-upgrades are currently paused.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

1.11.10-gke.6

Fixes the vulnerability announced in the security bulletin for August 5, 2019.

Fixes a problem where Cluster Autoscaler can create too many nodes when scaling up.

v1.12.x

Multiple v1.12.x versions are available this week:

1.12.9-gke.13
Note: This version contains all the changes in 1.12.9-gke.10, as well as those listed here.

Fixes a problem where Cluster Autoscaler can create too many nodes when scaling up.

1.12.9-gke.10

Fixes a problem where Vertical Pod Autoscaler would reject valid patches to Pods.

Improvements to Cluster Autoscaler.

Updates Istio to v1.0.9-gke.0.

v1.12.8-gke.12

Updates Istio to v1.0.9-gke.0.

1.12.7-gke.2

Updates Istio to v1.0.9-gke.0.

Fixes a problem where the kubelet could fail to start a Pod for the first time if the node was not completely configured and the Pod's restart policy was NEVER.

v1.13.x

Multiple v1.13.x versions are available this week:

1.13.7-gke.19
Note: This version contains all the changes in 1.13.7-gke.15, as well as those listed here.

Fixes a problem where Cluster Autoscaler can create too many nodes when scaling up.

1.13.7-gke.15

Fixes a problem where Vertical Pod Autoscaler would reject valid patches to Pods.

Improvements to Cluster Autoscaler.

You can now use Vulkan with GPUs to process graphics workloads. The Vulkan configuration directory is mounted on /etc/vulkan/icd.d in the container.

Updates Istio to v1.1.10-gke.0.

Fixes a problem where the kubelet could fail to start a Pod for the first time if the node was not completely configured and the Pod's restart policy was NEVER.

Rapid (v1.14.x)

1.14.3-gke.10

Note: This content is also available separately in the Rapid channel release notes.

GKE 1.14.3-gke.10 (alpha) is now available for testing and validation in the Rapid release channel. For more details, refer to the release notes for Kubernetes v1.14.

Fixes the vulnerability announced in the security bulletin for August 5, 2019.

Fixes a problem where Cluster Autoscaler can create too many nodes when scaling up.

In v1.14.3-gke.10 and higher, GKE Sandbox uses the gvisor.config.common-webhooks.networking.gke.io webhook, which is created when the cluster starts and makes sandboxed nodes available faster.
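To run a workload on GKE Sandbox nodes, Pods reference the gvisor RuntimeClass; a minimal sketch (the Pod name and image are illustrative):

```yaml
# Hedged sketch: run a Pod under GKE Sandbox (gVisor).
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod      # illustrative name
spec:
  runtimeClassName: gvisor   # schedules onto sandbox-enabled nodes
  containers:
  - name: app
    image: nginx
```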

Security bulletin

Kubernetes recently discovered a vulnerability, CVE-2019-11247, which allows cluster-scoped custom resource instances to be acted on as if they were namespaced objects existing in all Namespaces. This vulnerability is fixed in GKE versions also announced today. For more information, see the security bulletin.

New features

Clusters running v1.13.6-gke.0 or higher can use Shielded GKE Nodes (beta), which provide strong, verifiable node identity and integrity to increase the security of your nodes.

Rollout schedule

The rollout schedule is now included in Upgrades.

August 1, 2019

Note: Auto-upgrades for masters and nodes are currently paused.

New versions available for upgrades and new clusters

During the week of July 8, 2019, a release resulted in a partial rollout. Release notes were not published at that time. Changes discussed in the rest of this entry were applied only to the following zones:

  • europe-west2-a
  • us-east1
  • us-east1-d

In those zones only, the following new versions are available:

  • 1.13.7-gke.15
  • 1.12.9-gke.10
  • 1.12.7-gke.26
  • 1.12.8-gke.12

In those zones only, the following versions are no longer available for new clusters or nodes:

  • 1.11.10-gke.5

In those zones only, clusters running v1.11.x with auto-upgrade enabled were upgraded to v1.12.7-gke.25.

Security bulletin

New features

GKE usage metering (Beta) now supports tracking actual consumption, in addition to resource requests, for clusters running v1.12.8-gke.8 and higher, v1.13.6-gke.7 and higher, or 1.14.2-gke.8 and higher. A new BigQuery table, gke_cluster_resource_consumption, is created automatically in the BigQuery dataset. For more information about this and other improvements to usage metering, see Usage metering (Beta).

Node auto-provisioning is supported on regional clusters running v1.12.x or higher.

July 29, 2019

VPC-native is no longer the default cluster network mode for new clusters created using gcloud v256.0.0 or higher. Instead, the routes-based cluster network mode is used by default. We recommend manually enabling VPC-native, to avoid exhausting routes quota.

VPC-native clusters are created by default when you use the Google Cloud console or gcloud versions 251.0.0 through 255.0.0. Routes-based clusters are created by default when using the REST API.
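To avoid depending on these version-specific defaults, you can request VPC-native mode explicitly at creation time; a hedged sketch (the cluster name and zone are placeholders):

```shell
# Hedged sketch: explicitly create a VPC-native (alias IP) cluster
# instead of relying on the gcloud version's default.
gcloud container clusters create my-cluster \
    --enable-ip-alias \
    --zone us-central1-a
```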

June 27, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Important changes to clusters running unsupported versions

In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not guaranteed to receive bug fixes or security updates, and the control plane may become incompatible with nodes running unsupported versions.

For example, the Kubernetes v1.13.x control plane is not compatible with nodes running v1.10.x. Clusters in such a configuration could become unreachable or fail to run your workloads correctly. Additionally, security patches are not applied to v1.10.x and below.

To keep your clusters operational and to protect Google's infrastructure, we strongly recommend that you upgrade existing nodes to v1.11.x or higher before the end of June 2019. At that time, Google will enable node auto-upgrade on node pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the control plane can be upgraded to v1.13.x and remain compatible with existing node pools.

We strongly recommend leaving node auto-upgrade enabled.

NOTE: As of 1.12, all kubelets are issued certificates from the cluster CA, and verification of kubelet certificates is enabled automatically if all node pools are 1.12+. We have observed that introducing older (pre-1.12) node pools after certificate verification has started may cause connection problems for kubectl logs/exec/attach/portforward commands, and should be avoided.

Versions no longer available for upgrades and new clusters

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.8-gke.10
  • 1.11.10-gke.4
  • 1.12.7-gke.10
  • 1.12.7-gke.21
  • 1.12.7-gke.22
  • 1.12.8-gke.6
  • 1.12.8-gke.7
  • 1.12.9-gke.3
  • 1.13.6-gke.5
  • 1.13.6-gke.6
  • 1.13.7-gke.0

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

1.11.10-gke.5

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associatedsecurity bulletin for more information.

v1.12.x

1.12.7-gke.25

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associatedsecurity bulletin for more information.

1.12.8-gke.10

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associatedsecurity bulletin for more information.

1.12.9-gke.7

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associatedsecurity bulletin for more information.

v1.13.x

1.13.6-gke.13

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associatedsecurity bulletin for more information.

1.13.7-gke.8

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associatedsecurity bulletin for more information.

Rapid channel

1.14.3-gke.9

This version contains a patch for recently discovered TCP vulnerabilities in the Linux kernel. See the associatedsecurity bulletin for more information.

Note: This content is also available separately in the Rapid channel release notes.

Security bulletins

Patched versions are now available to address TCP vulnerabilities in the Linux kernel. For more information, see the security bulletin. In accordance with the documented support policy, patches will not be applied to GKE version 1.10 and older.

Kubernetes recently discovered a vulnerability in kubectl, CVE-2019-11246. For more information, see the security bulletin.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Early access to test Windows Containers
  • Usage metering will become generally available

June 4, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version                      Upgrade version
1.11.9                               1.12.7-gke.10

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

No v1.11.x versions this week.

v1.12.x

v1.12.8-gke.7 includes the following changes:

Improved Node Auto-Provisioning support for multi-zonal clusters with GPUs.

Cloud Run 0.6

v1.13.x

v1.13.6-gke.6 includes the following changes:

Improved Node Auto-Provisioning support for multi-zonal clusters with GPUs.

Cloud Run 0.6

COS images now use the Nvidia GPU 418.67 driver. Nvidia drivers on COS are now pre-compiled, greatly reducing driver installation time.

GKE nodes running Kubernetes v1.13.6 are affected by CVE-2019-11245. Information about the impact and mitigation of this vulnerability is available in this Kubernetes issue report. In addition to security concerns, this bug can cause Pods that must run as a specific UID to fail.

Rapid channel

v1.14.1-gke.5 is the default for new Rapid channel clusters. This version includes patched node images that address CVE-2019-11245.

GKE nodes running Kubernetes v1.14.2 are affected by CVE-2019-11245. Information about the impact and mitigation of this vulnerability is available in this Kubernetes issue report. In addition to security concerns, this bug can cause Pods that must run as a specific UID to fail.

Security bulletin

GKE nodes running Kubernetes v1.13.6 and v1.14.2 are affected by CVE-2019-11245. Information about the impact and mitigation of this vulnerability is available in this Kubernetes issue report. In addition to security concerns, this bug can cause Pods that must run as a specific UID to fail.

Changes

Currently, VPC-native is the default for new clusters created with gcloud or the Google Cloud console. However, VPC-native is not the default for new clusters created with the REST API.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Early access to test Windows Containers
  • Usage metering will become generally available
  • New clusters will begin to default to VPC-native

June 3, 2019

Corrections

Basic authentication and client certificate issuance are disabled by default for clusters created with GKE 1.12 and higher. We recommend switching your clusters to use OpenID instead. However, you can still enable basic authentication and client certificate issuance manually.

To learn more about cluster security, seeHardening your cluster.

This information was inadvertently omitted from the February 27, 2019 release note. However, the documentation about cluster routing was updated.

The rollout dates for the May 28, 2019 releases were incorrect. Day 2 spanned May 29-30, day 3 was May 31, and day 4 was June 3.

May 28, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Important changes to clusters running unsupported versions

In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not guaranteed to receive bug fixes or security updates, and the control plane may become incompatible with nodes running unsupported versions.

For example, the Kubernetes v1.13.x control plane is not compatible with nodes running v1.10.x. Clusters in such a configuration could become unreachable or fail to run your workloads correctly.

To keep your clusters operational and to protect Google's infrastructure, we strongly recommend that you upgrade existing nodes to v1.11.x or higher before the end of June 2019. At that time, Google will enable node auto-upgrade on node pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the control plane can be upgraded to v1.13.x and remain compatible with existing node pools.

We strongly recommend leaving node auto-upgrade enabled.

Scheduled automatic upgrades

No new automatic upgrades this week; previously-announced automatic upgradesmay still be ongoing.

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

v1.11.10-gke.4 includes the following changes:

The node image for Container-Optimized OS (COS) is now cos-69-10895-242-0.

The node image for Ubuntu is now ubuntu-gke-1804-d1703-0-v20190517.

Node images have been updated to fix Microarchitectural Data Sampling (MDS) vulnerabilities announced by Intel. For more information, see the security bulletin.

The patch alone is not sufficient to mitigate exposure to this vulnerability. For more information, see the security bulletin.

v1.12.x

v1.12.8-gke.6 includes the following changes:

The node image for Container-Optimized OS (COS) is now cos-69-10895-242-0.

The node image for Ubuntu is now ubuntu-gke-1804-d1703-0-v20190517.

Node images have been updated to fix Microarchitectural Data Sampling (MDS) vulnerabilities announced by Intel.

The patch alone is not sufficient to mitigate exposure to this vulnerability. For more information, see the security bulletin.

v1.13.x

v1.13.6-gke.5 includes the following changes:

The node image for Container-Optimized OS (COS) is now cos-u-73-11647-182-0.

The node image for Ubuntu is now ubuntu-gke-1804-d1809-0-v20190517.

Node images have been updated to fix Microarchitectural Data Sampling (MDS) vulnerabilities announced by Intel. For more information, see the security bulletin.

The patch alone is not sufficient to mitigate exposure to this vulnerability. For more information, see the security bulletin.

Rapid channel

v1.14.2-gke.2 is the default for new Rapid channel clusters, and includes the following changes:

GKE Sandbox is supported on v1.14.x clusters running v1.14.2-gke.2 or higher.

The node image for Container-Optimized OS (COS) is now cos-u-73-11647-182-0.

The node image for Ubuntu is now ubuntu-gke-1804-d1809-0-v20190517.

  • Node images have been updated to fix Microarchitectural Data Sampling (MDS) vulnerabilities announced by Intel. For more information, see the security bulletin.

    The patch alone is not sufficient to mitigate exposure to this vulnerability. For more information, see the security bulletin.

  • Nodes using these images are now shielded VMs.

The following IP ranges have been added to the default non-IP-masq iptables rules:

  • 100.64.0.0/10
  • 192.0.0.0/24
  • 192.0.2.0/24
  • 192.88.99.0/24
  • 198.18.0.0/15
  • 198.51.100.0/24
  • 203.0.113.0/24
  • 240.0.0.0/4
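For clusters where you manage the ip-masq-agent configuration yourself, the same ranges would appear as nonMasqueradeCIDRs entries; a hedged sketch of such a ConfigMap (the key names follow the ip-masq-agent documentation):

```yaml
# Hedged sketch: ip-masq-agent ConfigMap listing the ranges above
# as non-masquerade CIDRs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 100.64.0.0/10
      - 192.0.0.0/24
      - 192.0.2.0/24
      - 192.88.99.0/24
      - 198.18.0.0/15
      - 198.51.100.0/24
      - 203.0.113.0/24
      - 240.0.0.0/4
```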

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Cloud Run will be upgraded
  • Istio will be upgraded for v1.13.x clusters
  • Early access to test Windows Containers, expected in early June
  • New clusters will begin to default to VPC-native

May 20, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Important changes to clusters running unsupported versions

In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not guaranteed to receive bug fixes or security updates, and the control plane may become incompatible with nodes running unsupported versions.

For example, the Kubernetes v1.13.x control plane is not compatible with nodes running v1.10.x. Clusters in such a configuration could become unreachable or fail to run your workloads correctly.

To keep your clusters operational and to protect Google's infrastructure, we strongly recommend that you upgrade existing nodes to v1.11.x or higher before the end of June 2019. At that time, Google will enable node auto-upgrade on node pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the control plane can be upgraded to v1.13.x and remain compatible with existing node pools.

We strongly recommend leaving node auto-upgrade enabled.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version                              Upgrade version
1.10.x (nodes only, completing)              1.11.8-gke.6
1.12.6-gke.10                                1.12.6-gke.11
1.14.1-gke.4 and older 1.14.x (Alpha)        1.14.1-gke.5 (Alpha)

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

No v1.11.x versions this week.

v1.12.x

No v1.12.x versions this week.

Correction: Istio was not upgraded to 1.1.3 in v1.12.7-gke.17. The release note for May 13, 2019 has been corrected.

v1.13.x

v1.13.6-gke.0 is available.

This version includes support for GKE Sandbox.

Updates Istio to v1.1.3.

Node images have been updated as follows:

Nodes using these images are now shielded VMs.

Rapid channel

No v1.14.x versions this week.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.12.6-gke.10

New features

Google Cloud Observability Kubernetes Engine Monitoring is now generally available for clusters using the following GKE versions:

  • 1.12.x clusters v1.12.7-gke.17 and newer
  • 1.13.x clusters v1.13.5-gke.10 and newer
  • 1.14.x (Alpha) clusters v1.14.1-gke.5 and newer

Users of the legacy Google Cloud Observability support are encouraged to migrate to Google Cloud Observability Kubernetes Engine Monitoring before support for legacy Google Cloud Observability is removed.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • GKE Sandbox support for v1.14.x (Alpha) clusters
  • v1.14.x nodes will be shielded VMs
  • Early access to test Windows Containers, expected in early June

May 13, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Important changes to clusters running unsupported versions

In addition to GKE's version policy, Kubernetes has a version skew policy of supporting only the three newest minor versions. Older versions are not guaranteed to receive bug fixes or security updates, and the control plane may become incompatible with nodes running unsupported versions.

For example, the Kubernetes v1.13.x control plane is not compatible with nodes running v1.10.x. Clusters in such a configuration could become unreachable or fail to run your workloads correctly.

To keep your clusters operational and to protect Google's infrastructure, we strongly recommend that you upgrade existing nodes to v1.11.x or higher before the end of June 2019. At that time, Google will enable node auto-upgrade on node pools older than v1.11.x, and these nodes will be updated to v1.11.x so that the control plane can be upgraded to v1.13.x and remain compatible with existing node pools.

We strongly recommend leaving node auto-upgrade enabled.

New default version

The default version for new clusters is now 1.12.7-gke.10 (previously 1.11.8-gke.6). If your cluster is using v1.12.6-gke.10, upgrade to this version to avoid a potential issue that causes auto-repairing nodes to fail.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Note: Node auto-upgrade is no longer paused.
Current version                                                                                  Upgrade version
All 1.10.x versions, including v1.10.12-gke.14 (continuing after unpausing node auto-upgrade)    v1.11.8-gke.6
v1.11.x versions older than v1.11.8-gke.6                                                        v1.11.8-gke.6

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

v1.11.x

v1.11.9-gke.13
  • Improvements to Vertical Pod Autoscaler
  • Improvements to Cluster Autoscaler
  • Cloud Run for GKE now uses the default Istio sidecar injection behavior
  • Fix an issue that prevented the kubelet from seeing all GPUs available to nodes using the Ubuntu node image.

v1.12.x

v1.12.7-gke.17
  • Upgrade Ingress controller to 1.5.2
  • Improvements to Vertical Pod Autoscaler
  • Improvements to Cluster Autoscaler
  • Fix an issue that prevented the kubelet from seeing all GPUs available to nodes using the Ubuntu node image
  • Fix an issue that sets the dynamic maximum volume count to 16 if your nodes use a custom machine type. The value is now set to 128.

v1.13.x

v1.13.5-gke.10
Upgrading to GKE v1.13.x

To prepare to upgrade your clusters, read the Kubernetes 1.13 release notes and the following information. You may need to modify your cluster before upgrading.

scheduler.alpha.kubernetes.io/critical-pod is deprecated. To mark Pods as critical, use Pod priority and preemption.
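As a hedged sketch of the replacement (the class name, value, and description are illustrative), you define a PriorityClass and reference it from Pods via priorityClassName instead of the deprecated annotation:

```yaml
# Hedged sketch: a PriorityClass replacing the critical-pod annotation.
# Pods opt in with "priorityClassName: high-priority" in their spec.
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority        # illustrative name
value: 1000000               # illustrative priority value
globalDefault: false
description: "For workloads formerly marked with the critical-pod annotation."
```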

node.status.volumes.attached.devicePath is deprecated for Container Storage Interface (CSI) volumes and will not be enabled in future releases.

The built-in system:csi-external-provisioner and system:csi-external-attacher Roles are no longer automatically created. You can create your own Roles and modify your Deployments to use them.

Support for CSI drivers using 0.3 and older versions of the CSI API is deprecated. Users should upgrade CSI drivers to use the 1.0 API during the deprecation period.

Kubernetes cannot distinguish between manually provisioned zonal and regional persistent disks with the same name. Ensure that persistent disks have unique names across the Google Cloud project. This issue does not occur when using dynamically provisioned persistent disks.

If kubelet fails to register a CSI driver, it does not make a second attempt. To work around this issue, restart the CSI driver Pod.
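A minimal sketch of that workaround with kubectl; the namespace and label selector are placeholders for whatever your CSI driver's manifests actually use:

```shell
# Deleting the driver Pod forces its controller (typically a DaemonSet or
# StatefulSet) to recreate it, and the fresh Pod re-attempts registration
# with the kubelet.
kubectl delete pod \
  --namespace kube-system \
  --selector app=my-csi-driver   # placeholder label selector
```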

After resizing a PersistentVolumeClaim (PVC), the PVC is sometimes left with a spurious RESIZING condition when expansion has already completed. The condition is spurious as long as the PVC's reported size is correct. If the value of pvc.spec.capacity['storage'] matches pvc.status.capacity['storage'], the condition is spurious and you can delete or ignore it.
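The size comparison can be scripted. A sketch, assuming the two values are fetched with kubectl (the jsonpath expressions and PVC name are illustrative):

```shell
# In a live cluster the two sizes would come from, e.g.:
#   spec=$(kubectl get pvc my-pvc -o jsonpath='{.spec.resources.requests.storage}')
#   reported=$(kubectl get pvc my-pvc -o jsonpath='{.status.capacity.storage}')
# The RESIZING condition is spurious once the two strings match.
resizing_condition_is_spurious() {
  # $1: requested size from spec, $2: size reported in status
  [ "$1" = "$2" ]
}

if resizing_condition_is_spurious "100Gi" "100Gi"; then
  echo "Expansion complete; the RESIZING condition can be deleted or ignored."
fi
```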

The CSI driver-registrar external sidecar container v1.0.0 has a known issue where it takes up to a minute to restart.

DaemonSets now use scheduling features that require kubelet version 1.11 or higher. Google will update kubelet to 1.11 before upgrading clusters to v1.13.x.

kubelets can no longer delete their Node API objects.

Use of the --node-labels flag to set labels under the kubernetes.io/ and k8s.io/ prefixes will be subject to restriction by the NodeRestriction admission plugin in future releases. See the admission plugin documentation for the list of allowed labels.

Rapid channel

1.14.1-gke.5
Note: This content is also available separately in the Rapid channel release notes.

GKE v1.14.1-gke.5 (alpha) is now available for testing and validation in the Rapid release channel. For more details, refer to the release notes for Kubernetes v1.14.

GKE v1.14.x has the following differences from Kubernetes 1.14.x.

You cannot yet create an alpha cluster running GKE v1.14.x. If you attempt to use the --enable-kubernetes-alpha flag, cluster creation fails.

Security bulletin

If you run untrusted code in your own multi-tenant services within Google Kubernetes Engine, we recommend that you disable Hyper-Threading to mitigate Microarchitectural Data Sampling (MDS) vulnerabilities announced by Intel. For more information, see the security bulletin.

New features

With GKE 1.13.5-gke.10, GKE 1.13 is now generally available for use in production. You can upgrade clusters running older v1.13.x versions manually.

GKE v1.13.x has the following differences from Kubernetes 1.13.

For information about upgrading from v1.12.x, see Upgrading to GKE v1.13.x in New versions available for upgrades and new clusters.

We are introducing Release channels, a new way to keep your GKE clusters up to date. The Rapid release channel is available, and includes v1.14.1-gke.5 (alpha). You can sign up to try release channels and preview GKE v1.14.x.

GKE Sandbox (Beta) is now available for clusters running v1.12.7-gke.17 and higher and v1.13.5-gke.15 and higher. You can use GKE Sandbox to isolate untrusted workloads in a sandbox to protect your nodes, other workloads, and cluster metadata from defective or malicious code.

Changes

For clusters running v1.12.x or higher and using nodes with less than 1 GB of memory, GKE reserves 255 MiB of memory. This is not a new change, but it was not previously noted. For more details about node resources, see Allocatable memory and CPU resources.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

April 29, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Scheduled automatic upgrades

Only masters with auto-upgrade enabled will be upgraded as follows:

Current version | Upgrade version
All 1.10.x versions, including v1.10.12-gke.14 (continuing) | 1.11.8-gke.6
1.13.4-gke.x | 1.13.5-gke.10
Note: Node auto-upgrade is currently disabled. You can continue to upgrade node pools manually. Node auto-upgrade will be re-enabled in the coming weeks.

Rollouts are phased across multiple weeks, to ensure cluster and fleet stability.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

  • 1.12.6-gke.11
    • Nodes continue to use Docker as the default runtime.
    • Fix a performance regression introduced in 1.12.6-gke.10. This regression caused delays when the kubelet reads the /sys/fs/cgroup/memory/memory.stat file to determine a node's memory usage.

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.9-gke.5
  • 1.12.7-gke.7
  • 1.13.4-gke.10
  • 1.13.5-gke.7

Fixed issues

A problem was fixed in the Stackdriver Kubernetes Monitoring (Beta) Metadata agent. This problem caused the agent to generate unnecessary log messages.

Changes

Alpha clusters running Kubernetes 1.13 and higher created with the Google Cloud CLI version 242.0.0 and higher have auto-upgrade and auto-repair disabled. Previously, you were required to disable these features manually.

Known issues

Under certain circumstances, Google-managed SSL certificates (Beta) are not being provisioned in regional clusters. If this happens, you are unable to create or update managed certificates. If you are experiencing this issue, contact Google Cloud support.

Node auto-upgrade is currently disabled. You can still upgrade node pools manually.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Node auto-upgrade will be re-enabled
  • etcd will be upgraded
  • Improvements to Vertical Pod Autoscaler
  • Improvements to Cluster Autoscaler
  • Improvements to Managed Certificates

April 26, 2019

Due to delays during the April 22 GKE release rollout, the release will not complete by April 26, 2019 as originally planned. Rollout is expected to complete by April 29, 2019 GMT.

April 25, 2019

Changes

Google Cloud Observability Kubernetes Monitoring users: Google Cloud Observability Kubernetes Monitoring logging label fields change when you upgrade your GKE clusters to GKE v1.12.6 or higher. The following changes were effective the week of March 26, 2019:

  • Kubernetes Pod labels, currently located in the metadata.userLabels field, are moved to the labels field in the LogEntry, and the label keys have a prefix of k8s-pod/. The filter expressions in your sinks, logs-based metrics, log exclusions, or queries might need to change.
  • Google Cloud Observability system labels that are in the metadata.systemLabels field are no longer available.

For detailed information about what changed, see the release guide for Google Cloud Observability Beta Monitoring and Logging, also known as Google Cloud Observability Kubernetes Monitoring (Beta).

April 22, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version | Upgrade version
All 1.10.x versions, including v1.10.12-gke.14 | 1.11.8-gke.6

This roll-out will be phased across multiple weeks.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

  • 1.11.9-gke.8

    • Node image for Container-Optimized OS updated to cos-69-10895-211-0
      • Fix a performance regression introduced in v1.11.x node images older than 1.11.9-gke.8. This regression caused delays when the kubelet reads the /sys/fs/cgroup/memory/memory.stat file to determine a node's memory usage.
    • Upgrade Node Problem Detector to 0.6.3
  • 1.12.7-gke.10

    • Node image for Container-Optimized OS updated to cos-69-10895-211-0
      • Fix a performance regression introduced in v1.12.x node images older than v1.12.6-gke.10. This regression caused delays when the kubelet reads the /sys/fs/cgroup/memory/memory.stat file to determine a node's memory usage.
    • Upgrade Node Problem Detector to 0.6.3
  • 1.13.5-gke.10 (Preview)

    • To create a cluster, use the following command, replacing my-alpha-cluster with the name of your cluster:

      gcloud container clusters create my-alpha-cluster \
        --cluster-version=1.13.5-gke.10 \
        --enable-kubernetes-alpha \
        --no-enable-autorepair
    • Upgrade Node Problem Detector to 0.6.3

The following versions are no longer available for new clusters or cluster upgrades:

  • All 1.10.x versions, including v1.10.12-gke.14

Fixed issues

A known issue in v1.12.6-gke.10 and older has been fixed in 1.12.7-gke.10. This issue causes node auto-repair to fail. Upgrading is recommended.

A known issue in 1.12.7-gke.7 and older has been fixed in 1.12.7-gke.10. The currentMetrics field now reports the correct value. The problem only affected reporting and did not impact the functionality of Horizontal Pod Autoscaler.

Deprecations

GKE v1.10.x has been deprecated, and is no longer available for new clusters, master upgrades, or node upgrades.

The Cluster.FIELDS.initial_node_count field has been deprecated in favor of nodePool.initial_node_count in the v1 and v1beta1 GKE APIs.

Rollout schedule

The rollout schedule is now included inUpgrades.

Coming soon

We expect the following changes in the coming weeks.This information is not a guarantee, but is provided to help you plan forupcoming changes.

  • etcd will be upgraded
  • Improvements to Vertical Pod Autoscaler
  • Improvements to Cluster Autoscaler
  • Improvements to Managed Certificates

April 19, 2019

You can now use Usage metering with GKE 1.12.x and 1.13.x clusters.

April 18, 2019

You can now run GKE clusters in region asia-northeast2 (Osaka, Japan) with zones asia-northeast2-a, asia-northeast2-b, and asia-northeast2-c.

The new region and zones will be included in future rollout schedules.

April 15, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New default version

The default version for new clusters has been updated to 1.11.8-gke.6 (previously 1.11.7-gke.12).

Scheduled automatic upgrades

Masters and nodes with auto-upgrade enabled will be upgraded:

Current version | Upgrade version
1.10.x versions 1.10.12-gke.13 and older | 1.10.12-gke.14
1.11.x versions 1.11.8-gke.5 and older | 1.11.8-gke.6
1.12.x versions 1.12.6-gke.9 and older | 1.12.6-gke.10
1.13.x versions 1.13.4-gke.9 and older | 1.13.4-gke.10 (Preview)

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

  • 1.11.9-gke.5
    • Node image for Container-Optimized OS updated to cos-69-10895-201-0
      • This release note previously stated, in error, that Docker was upgraded. However, the Docker version is still 17.03 in this image.
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Node image for Ubuntu updated to ubuntu-gke-1804-d1703-0-v20190409
      • This release note previously stated, in error, that Docker was upgraded. However, the Docker version is still 17.03 in this image.
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Upgrade Cloud Run on GKE to 0.5.0
    • Upgrade containerd to 1.1.7
  • 1.12.7-gke.7
    • Node image for Container-Optimized OS updated to cos-69-10895-201-0
      • This release note previously stated, in error, that Docker was upgraded. However, the Docker version is still 17.03 in this image.
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Node image for Ubuntu updated to ubuntu-gke-1804-d1703-0-v20190409
      • This release note previously stated, in error, that Docker was upgraded. However, the Docker version is still 17.03 in this image.
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Upgrade Cloud Run on GKE to 0.5.0
    • Upgrade containerd to 1.2.6
  • 1.13.5-gke.7 (Preview)

    • To create a cluster, use the following command, replacing my-alpha-cluster with the name of your cluster:

      gcloud container clusters create my-alpha-cluster \
        --cluster-version=1.13.5-gke.7 \
        --enable-kubernetes-alpha \
        --no-enable-autorepair
    • Node image for Container-Optimized OS updated to cos-u-73-11647-121-0

      • Upgrade Docker from 17.03 to 18.09
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Node image for Ubuntu updated to ubuntu-gke-1804-d1809-0-v20190402a

      • Upgrade Docker from 17.03 to 18.09
      • Apply a restart policy to the Docker daemon, so that it attempts to restart every 10 seconds if it is not running, with no maximum number of retries.
      • Apply security update for CVE-2019-8912
    • Upgrade Cloud Run on GKE to 0.5.0

    • Upgrade containerd to 1.2.6

    • Improvements to volume operation metrics

    • Cluster Autoscaler is now supported for GKE 1.13 clusters

    • Fix a problem that caused the currentMetrics field for Horizontal Pod Autoscaler with 'AverageValue' target to always report unknown. The problem only affected reporting and did not impact the functionality of Horizontal Pod Autoscaler.

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.10.12-gke.7
  • 1.10.12-gke.9
  • 1.11.6-gke.11
  • 1.11.6-gke.16
  • 1.11.7-gke.12
  • 1.11.7-gke.18
  • 1.11.8-gke.2
  • 1.11.8-gke.4
  • 1.11.8-gke.5
  • 1.12.5-gke.5
  • 1.12.6-gke.7
  • 1.13.4-gke.1
  • 1.13.4-gke.5

Changes

Improvements have been made to the automated rules for the add-on resizer. It now uses 5 nodes as the inflection point.

Known issues

GKE 1.12.7-gke.7 and older, and 1.13.4-gke.10 and older, have a known issue where the currentMetrics field for Horizontal Pod Autoscaler with AverageValue target always reports unknown. The problem only affects reporting and does not impact the functionality of Horizontal Pod Autoscaler.

This issue has already been fixed in GKE 1.13.5-gke.7.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Version 1.10.x will soon be unavailable for new clusters.
  • The known issue published this week about Horizontal Pod Autoscaler metrics will be fixed in GKE 1.12.x as well.
  • etcd will be upgraded.

April 2, 2019

Rollout schedule

The rollout schedule is now included in Upgrades.

March 26, 2019

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

  • 1.11.8-gke.5
    • Improvements to Cluster Autoscaler
    • Improvements to gVisor
  • 1.12.6-gke.7
    • Improvements to Cluster Autoscaler
    • Update Ingress controller to 1.5.1
    • Update containerd to 1.2.5
  • 1.13.4-gke.5 (public preview)

    • To create a cluster, use the following command, replacing my-alpha-cluster with the name of your cluster:
      gcloud container clusters create my-alpha-cluster \
        --cluster-version=1.13.4-gke.5 \
        --enable-kubernetes-alpha \
        --no-enable-autorepair
    • Improvements to Vertical Pod Autoscaler
    • Improvements to gVisor
    • Update Ingress controller to 1.5.1

    • Update containerd to 1.2.5

    • Cluster Autoscaler is not operational in this GKE version.

Rollout schedule

The rollout schedule is now included in Upgrades.

March 19, 2019

GKE 1.13 public preview

GKE 1.13.4-gke.1 is available for alpha clusters as a public preview. The preview period helps Google Cloud to improve the quality of the final GA release, and allows you to test the new version earlier.

To create a cluster using this version, use the following command, replacing my-alpha-cluster with the name of your cluster. Use the exact cluster version provided in the command. You can add other configuration options, but do not change any of the ones below.

gcloud container clusters create my-alpha-cluster \
  --cluster-version=1.13.4-gke.1 \
  --enable-kubernetes-alpha \
  --no-enable-autorepair
Note: Preview versions of GKE are not listed in the available cluster versions in Google Cloud console.

Alpha clusters become unavailable after 30 days.

Changes

Version updates

GKE cluster versions have been updated.

Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades and node upgrades for existing clusters. See these instructions for more information on the Kubernetes versioning scheme.

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.5-gke.5
  • 1.11.6-gke.2
  • 1.11.6-gke.3
  • 1.11.6-gke.6
  • 1.11.6-gke.8
  • 1.11.7-gke.4
  • 1.11.7-gke.6

GKE 1.12.5-gke.10 is no longer available for new clusters, master upgrades, or node upgrades.

Last week, we began to make GKE 1.12.5-gke.10 unavailable for new clusters or upgrades, due to increased error rates. That process completes this week.

If you have already upgraded to 1.12.5-gke.10 and are experiencing elevated error rates, you can contact support.

Automated master and node upgrades

The following versions will be updated for masters and nodes withauto-upgrade enabled. Automated upgrades are rolled out over multiple weeks toensure cluster stability.

  • 1.11.6 Masters and nodes with auto-upgrade enabled which are using versions 1.11.6-gke.10 or earlier will begin to be upgraded to 1.11.7-gke.12.
  • 1.11.7 Masters and nodes with auto-upgrade enabled which are using version 1.11.7-gke.11 or earlier will begin to be upgraded to 1.11.7-gke.12.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Nodes with auto-upgrade enabled and masters running 1.11.x will be upgraded to 1.11.7-gke.12
  • GKE 1.12.x masters will begin using the containerd runtime with an upcoming release.

March 14, 2019

GKE 1.12.5-gke.10 is no longer available for new clusters or master upgrades.

We have received reports of master nodes experiencing elevated error rates when upgrading to version 1.12.5-gke.10 in all regions. Therefore, we have begun the process of making it unavailable for new clusters or upgrades.

If you have already upgraded to 1.12.5-gke.10 and are experiencing elevated error rates, you can contact support.

March 11, 2019

You can now run GKE clusters in region europe-west6 (Zürich, Switzerland) with zones europe-west6-a, europe-west6-b, and europe-west6-c.

The new region and zones will be included in future rollout schedules.

March 5, 2019

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

Node image updates

Container-Optimized OS with containerd image for GKE 1.11 clusters

The Container-Optimized OS with containerd node image has been upgraded from cos-69-10895-138-0-c115 to cos-69-10895-138-0-c116 for clusters running Kubernetes 1.11+.

See COS image release notes and the containerd v1.1.5 to v1.1.6 changelog for more information.

Container-Optimized OS with containerd image for GKE 1.12 clusters

The Container-Optimized OS with containerd node image has been upgraded from cos-69-10895-138-0-c123 to cos-69-10895-138-0-c124 for clusters running Kubernetes 1.12.5-gke.10+ and alpha clusters running Kubernetes 1.13+.

cos-69-10895-138-0-c124 upgrades Docker to v18.09.0.

See COS image release notes and the containerd v1.2.3 to v1.2.4 changelog for more information.

Other Updates

  • GKE Ingress has been upgraded from v1.4.3 to v1.5.0 for clusters running 1.12.5-gke.10+. For details, see the detailed changelog and release notes.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Nodes with auto-upgrade enabled and masters running 1.11.x will be upgraded to 1.11.7-gke.12

February 27, 2019

GKE 1.12.5-gke.5 is generally available and includes Kubernetes 1.12. Kubernetes 1.12 provides faster auto-scaling, faster affinity scheduling, topology-aware dynamic provisioning of storage, and advanced audit logging. For more information, see Digging into Kubernetes 1.12 on the Google Cloud blog.

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

Rollout schedule

The rollout schedule is now included in Upgrades.

Known issues

A known issue in GKE 1.12.5-gke.5 and all 1.11.x versions below 1.11.6 can cause significant delays when the cluster autoscaler adds new nodes to the cluster, if the cluster has hundreds of unschedulable Pods due to resource starvation. It may take a few minutes before all Pods are scheduled, depending on the number of unschedulable Pods and the size of the cluster. The workaround is to add an adequate number of nodes manually. If adding nodes does not resolve the issue, contact support.

A known issue in GKE 1.12.5-gke.5 can cause unbounded memory usage. This is caused by a memory leak in ReflectorMetricsProvider. See this issue for further details. This will be fixed in an upcoming patch.

A known issue in GKE 1.12.5-gke.5 slows down or stops Pod scheduling in clusters with large numbers of terminated Pods. See this issue for further details. This will be fixed in an upcoming patch.
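The manual workaround of adding nodes can be done by resizing the affected node pool with gcloud. A sketch; the cluster name, pool name, zone, and node count below are placeholders:

```shell
# Add capacity by hand while the autoscaler is delayed. Choose a node count
# large enough to schedule the pending Pods.
gcloud container clusters resize my-cluster \
  --node-pool default-pool \
  --num-nodes 5 \
  --zone us-central1-a
```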

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Nodes with auto-upgrade enabled and masters running 1.10 will begin to be upgraded to 1.11

February 18, 2019

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New default version for new clusters

Kubernetes version 1.11.7-gke.4 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

Node image updates

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

GKE Ingress has been upgraded from v1.4.2 to v1.4.3 for clusters running 1.11.7-gke.6+. For details, see the detailed changelog and release notes.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • GKE 1.12 will be made generally available.
  • Nodes with auto-upgrade enabled and masters running 1.10 will begin to be upgraded to 1.11.7-gke.4.

February 11, 2019

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

The following Kubernetes versions will be available for new clusters and for opt-in master upgrades of existing clusters this week according to the rollout schedule:

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

Node image updates

The Ubuntu node image has been upgraded to ubuntu-gke-1604-d1703-0-v20190124 for clusters running 1.10.12-gke.7.

The Ubuntu node image has been upgraded to ubuntu-gke-1804-d1703-0-v20190124 for clusters running 1.11.6-gke.11, 1.11.7-gke.4, and 1.12.5-gke.5 (EAP).

Changes:

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

January 28, 2019

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See these instructions to get a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

For information about changes expected in the coming weeks, see Coming soon.

New default version for new clusters

GKE version 1.11.6-gke.2 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes Engine versions are available, according to this week's rollout schedule, for new clusters and for opt-in master upgrades for existing clusters:

  • 1.11.6-gke.6

GKE Ingress controller update

GKE Ingress has been upgraded from v1.4.1 to v1.4.2 for clusters running 1.11.6-gke.6+. For details, see the change log and the release notes.

Fixed Issues

A bug in versions 1.10.x and 1.11.x may lead to periodic persistent disk commit latency spikes exceeding one second. This may trigger master re-elections of GKE components and cause short (a few seconds) periods of unavailability in the cluster control plane. The issue is fixed in version 1.11.6-gke.6.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • 25% of the upgrades from 1.10 to 1.11.6-gke.2 will be complete.
  • Version 1.11.6-gke.8 will be made available.
  • Version 1.10 will be made unavailable.

January 21, 2019

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See these instructions to get a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

For information about changes expected in the coming weeks, see Coming soon.

New default version for new clusters

Kubernetes version 1.10.11-gke.1 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes Engine versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.12-gke.1
  • 1.11.6-gke.3

The following versions are no longer available for new clusters or clusterupgrades:

  • 1.10.6-gke.13
  • 1.10.7-gke.11
  • 1.10.7-gke.13
  • 1.10.9-gke.5
  • 1.10.9-gke.7
  • 1.11.2-gke.26
  • 1.11.3-gke.24
  • 1.11.4-gke.13

Scheduled master auto-upgrades

  • Cluster masters running 1.10.x will be upgraded to 1.10.11-gke.1.
  • Cluster masters running 1.11.2 through 1.11.4 will be upgraded to 1.11.5-gke.5.

Scheduled node auto-upgrades

Cluster nodes with auto-upgrade enabled will be upgraded:

  • 1.10.x nodes with auto-upgrade enabled will be upgraded to 1.10.11-gke.1.
  • 1.11.2 through 1.11.4 nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.5.

Changes

GKE will not set --max-nodes-total, because --max-nodes-total is inaccurate when the cluster uses Flexible Pod CIDR ranges. This will be gated in 1.11.7+.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • GKE 1.11.6-gke.6 will be available.
  • A new COS image will be available.

January 14, 2019

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See these instructions to get a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

For information about changes expected in the coming weeks, see Coming soon.

New versions available for upgrades and new clusters

The following Kubernetes Engine versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.12-gke.0
  • 1.11.6-gke.0
  • 1.11.6-gke.2

The following versions are no longer available for new clusters or clusterupgrades:

  • 1.11.2-gke.25
  • 1.11.3-gke.23
  • 1.11.4-gke.12
  • 1.11.5-gke.4

Scheduled master auto-upgrades

  • Cluster masters running 1.9.x will be upgraded to 1.10.9-gke.5.
  • Cluster masters running 1.11.2-gke.25 will be upgraded to 1.11.2-gke.26.
  • Cluster masters running 1.11.3-gke.23 will be upgraded to 1.11.3-gke.24.
  • Cluster masters running 1.11.4-gke.12 will be upgraded to 1.11.4-gke.13.
  • Cluster masters running 1.11.5-gke.4 will be upgraded to 1.11.5-gke.5.

Scheduled node auto-upgrades

Cluster nodes with auto-upgrade enabled will be upgraded:

  • 1.9.x nodes with auto-upgrade enabled will be upgraded to 1.10.9-gke.5.
  • 1.11.2-gke.25 nodes with auto-upgrade enabled will be upgraded to 1.11.2-gke.26.
  • 1.11.3-gke.23 nodes with auto-upgrade enabled will be upgraded to 1.11.3-gke.24.
  • 1.11.4-gke.12 nodes with auto-upgrade enabled will be upgraded to 1.11.4-gke.13.
  • 1.11.5-gke.4 nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.5.

GKE Ingress controller update

The GKE Ingress controller has been upgraded from v1.4.0 to v1.4.1 for clusters running 1.11.6-gke.2+. For details, see the change log and the release notes.

Fixed Issues

If you use Stackdriver Kubernetes Monitoring Beta with structured JSON logging, an issue with the parsing of structured JSON log entries was introduced in GKE v1.11.4-gke.12. See the release guide for Stackdriver Kubernetes Monitoring. This is fixed by upgrading your cluster:

  • 1.11.6-gke.2

Users of GKE 1.11.2.x, 1.11.3-gke.18, 1.11.4-gke.8, or 1.11.5-gke.2 on clusters that use Calico network policies may experience failures due to a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This is fixed by the automatic upgrades to masters and nodes that have auto-upgrade enabled.

A problem in Endpoints API object validation could prevent updates during an upgrade, leading to stale network information for Services. Symptoms of the problem include failed healthchecks with a502 status code or a message such asForbidden: Cannot change NodeName. This is fixed by the automatic upgrades to masters and nodes that have auto-upgrade enabled.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • All GKE 1.10.x masters will be upgraded to the latest 1.10 version.
  • All GKE 1.11.0 through 1.11.4 masters will be upgraded to the latest 1.11.5 version.

January 8, 2019

The rollout beginning January 8, 2019 has been paused after two days. This is being done as a caution, so that we can investigate an issue that will be fixed in next week's rollout. This is not a bug in any GKE version currently available or planned to be made available.

December 17, 2018

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

For information about changes expected in the coming weeks, see Coming soon.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.11.2-gke.26
  • 1.11.3-gke.24
  • 1.11.4-gke.13
  • 1.11.5-gke.5

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.11.2-gke.18
  • 1.11.2-gke.20
  • 1.11.3-gke.18
  • 1.11.4-gke.8

Scheduled master auto-upgrades

Remaining cluster masters running GKE 1.9.x will be upgraded to GKE 1.10.9-gke.5 in January 2019.

Scheduled node auto-upgrades

Cluster nodes with auto-upgrade enabled will be upgraded:

  • 1.11.2-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.2-gke.25
  • 1.11.3-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.3-gke.23
  • 1.11.4-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.4-gke.12
  • 1.11.5-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.4

Fixed Issues

Users upgrading to GKE 1.11.2.x, 1.11.3-gke.18, 1.11.4-gke.8, or 1.11.5-gke.2 on clusters that use Calico network policies may experience failures due to a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This problem does not affect newly-created clusters. This is fixed by upgrading your cluster to one of the following versions:

  • 1.11.2-gke.25
  • 1.11.3-gke.23
  • 1.11.4-gke.12
  • 1.11.5-gke.4

A problem in Endpoints API object validation could prevent updates during an upgrade, leading to stale network information for Services. Symptoms of the problem include failed health checks with a 502 status code or a message such as Forbidden: Cannot change NodeName. If you encounter this problem, upgrade your cluster to one of the following versions:

  • 1.11.2-gke.26
  • 1.11.3-gke.24
  • 1.11.4-gke.13
  • 1.11.5-gke.5

This problem can also affect earlier versions of GKE, but the fix is not yet available for those versions. If you are running an earlier version and encounter this issue, contact support.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • Remaining GKE 1.9.x masters are expected to be upgraded in January 2019.

December 10, 2018

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

For information about changes expected in the coming weeks, see Coming soon.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.11-gke.1
  • 1.11.2-gke.25
  • 1.11.3-gke.23
  • 1.11.4-gke.12
  • 1.11.5-gke.4

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.9.x
  • 1.10.6-gke.11

Scheduled master auto-upgrades

We will begin upgrading cluster masters running GKE 1.9.x to GKE 1.10.9-gke.5. The upgrade will be completed in January 2019.

Scheduled node auto-upgrades

Cluster nodes with auto-upgrade enabled will be upgraded:

  • 1.11.2-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.2-gke.25
  • 1.11.3-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.3-gke.23
  • 1.11.4-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.4-gke.12
  • 1.11.5-gke.x nodes with auto-upgrade enabled will be upgraded to 1.11.5-gke.4

Node image updates

Container-Optimized OS node image has been upgraded to cos-stable-69-10895-91-0 for clusters running Kubernetes 1.11.2, Kubernetes 1.11.3, Kubernetes 1.11.4, and Kubernetes 1.11.5.

Changes:

Fixed Issues

Users upgrading to GKE 1.11.3 on clusters that use Calico network policies may experience failures due to a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This problem does not affect newly-created clusters. This is fixed by upgrading your GKE 1.11.3 clusters to 1.11.3-gke.23.

Users modifying or upgrading existing GKE 1.11.x clusters that use Alias IP may experience network failures due to a mismatch between the new IP range assigned to the Pods and the alias IP address range for the nodes. This is fixed by upgrading your GKE 1.11.x clusters to one of the following versions:

  • 1.11.2-gke.25
  • 1.11.3-gke.23
  • 1.11.4-gke.12
  • 1.11.5-gke.4

Changes

Node Problem Detector (NPD) has been upgraded from 0.5.0 to 0.6.0 for clusters running GKE 1.10.11-gke.1+ and 1.11.5-gke.1+. For details, see the upstream pull request.

Known Issues

In GKE v1.11.4-gke.12 and later, if you use Stackdriver Kubernetes Monitoring Beta with structured JSON logging, there is an issue with the parsing of structured JSON log entries. As a workaround, you can downgrade to GKE 1.11.3. For more information, see the release guide for Stackdriver Kubernetes Monitoring.

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • All GKE 1.9.x masters will be upgraded to 1.10.9-gke.5.

December 4, 2018

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

For information about changes expected in the coming weeks, see Coming soon.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.11.4-gke.8

Node image updates

Ubuntu node image has been upgraded to ubuntu-gke-1804-d1703-0-v20181113.manifest for clusters running Kubernetes 1.11.4-gke.8.

Changes:
  • The following warning is now displayed to SSH clients that connect to Nodes using SSH or to run remote commands on Nodes over an SSH connection:
    WARNING: Any changes on the boot disk of the node must be made via DaemonSet in order to preserve them across node (re)creations. Node will be (re)created during manual-upgrade, auto-upgrade, auto-repair or auto-scaling.

New features

Changes

  • You can now drain node pools and delete Nodes in parallel.
  • GKE data in Cloud Asset Inventory and Search is now available in near-real-time. Previously, data was dumped at 6-hour intervals.

Fixed Issues

When upgrading to GKE 1.11.x versions prior to GKE 1.11.4-gke.8, a problem with provisioning the ExternalIP on one or more Nodes causes the kubectl command to fail. The following error is logged in the kube-apiserver log:

Failed to getAddresses: no preferred addresses found; known addresses

This issue is fixed in GKE 1.11.4-gke.8. If you can't upgrade to that version, you can work around this issue by following these steps:

  1. Determine which Nodes have no ExternalIP set:

    kubectl get nodes -o wide

    Look for entries where the last column is <none>.

  2. Restart affected nodes.

Known Issues

Users upgrading to GKE 1.11.3 on clusters that use Calico network policies may experience failures due to a problem recreating the BGPConfigurations.crd.projectcalico.org resource. This problem does not affect newly-created clusters. This is expected to be fixed in the coming weeks.

To work around this problem, you can create the BGPConfigurations.crd.projectcalico.org resource manually:

  1. Copy the following manifest into a file named bgp.yaml:

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: bgpconfigurations.crd.projectcalico.org
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    spec:
      scope: Cluster
      group: crd.projectcalico.org
      version: v1
      names:
        kind: BGPConfiguration
        plural: bgpconfigurations
        singular: bgpconfiguration

  2. Apply the change to the affected cluster using the following command:

    kubectl apply -f bgp.yaml

Users modifying or upgrading existing GKE 1.11.x clusters that use Alias IP may experience network failures due to a mismatch between the new IP range assigned to the Pods and the alias IP address range for the nodes. This is expected to be fixed in the coming weeks.

To work around this problem, follow these steps. Use the name of your node in place of [NODE_NAME], and use your cluster's zone in place of [ZONE].

  1. Cordon the node that has been affected:
    kubectl cordon [NODE_NAME]
  2. Drain the node of all workloads:
    kubectl drain [NODE_NAME]
  3. Delete the Node object from Kubernetes:
    kubectl delete nodes [NODE_NAME]
  4. Reboot the node. This is not optional.
    gcloud compute instances reset --zone [ZONE] [NODE_NAME]

Rollout schedule

The rollout schedule is now included in Upgrades.

Coming soon

We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.

  • We expect to begin upgrading cluster masters running GKE 1.9.x to 1.10.9-gke.5.
  • An updated Container-Optimized OS node image, including containerd 1.1.5
  • Support for enabling Node auto-upgrade and auto-repair when creating or modifying node pools for GKE 1.11 clusters running Ubuntu node images

November 26, 2018

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

Node image updates

Ubuntu node image has been upgraded to ubuntu-gke-1804-bionic-20180921 for clusters running Kubernetes 1.11.3.

Changes:
  • Add GPU support on Ubuntu

Known Issues

When upgrading to GKE 1.11.x versions prior to GKE 1.11.4-gke.8, a problem with provisioning the ExternalIP on one or more Nodes causes some kubectl commands to fail. The following error is logged in the kube-apiserver log:

Failed to getAddresses: no preferred addresses found; known addresses

You can work around this issue by following these steps:

  1. Determine which Nodes have no ExternalIP set:
    kubectl get nodes -o wide

    Look for entries where the last column is <none>.

  2. Restart affected nodes.

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

Vertical Pod Autoscaler (beta) is now available on 1.11.3-gke.11 and higher.

November 12, 2018

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New default version for new clusters

Kubernetes version 1.9.7-gke.11 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

Scheduled master auto-upgrades

Cluster masters will be auto-upgraded as described below:

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

Known Issues

Other Updates

Patch for Kubernetes vulnerability CVE-2018-1002105. See the security bulletin for more details.

November 5, 2018

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New default version for new clusters

Kubernetes version 1.9.7-gke.7 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

Scheduled master auto-upgrades

Cluster masters will be auto-upgraded as described below:

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

Other Updates

Patch 1 for Tigera Technical Advisory TTA-2018-001. See the security bulletin for further details. The November 12th release contains additional fixes that address TTA-2018-001, and we recommend customers upgrade to that release.

Rollout schedule

The rollout schedule is now included in Upgrades.

November 1, 2018

New Features

Node auto-provisioning is now available in beta.

October 30, 2018

Version updates

GKE cluster versions have been updated as detailed in the following sections. See supported versions for a full list of the Kubernetes versions you can run on your GKE masters and nodes.

New versions available for upgrades and new clusters

GKE 1.11.2-gke.9 is now generally available.

  • You can now select Container-Optimized OS with containerd images when creating, modifying, or upgrading a cluster to GKE v1.11. Visit Using Container-Optimized OS with containerd for details.

  • The CustomResourceDefinition API supports a versions list field (and deprecates the previous singular version field) that you can use to support multiple versions of custom resources you have developed and to indicate the stability of a given custom resource. All versions must currently use the same schema, so if you need to add a field, you must add it to all versions. Currently, versions only indicate the stability of your custom resource, and do not allow for any difference in functionality among versions. For more information, visit Versions of CustomResourceDefinitions.

  • Kubernetes 1.11 introduces beta support for increasing the size of an existing PersistentVolume. To increase the size of a PersistentVolume, edit the PersistentVolumeClaim (PVC) object. Kubernetes expands the file system automatically.

    Kubernetes 1.11 also includes alpha support for expanding an online PersistentVolume (one which is in use by a running deployment). To test this feature, use an alpha cluster.

    Shrinking persistent volumes is not supported. For more details, visit Resizing a volume containing a file system.

  • Subresources allow you to add capabilities to custom resources. You can enable /status and /scale REST endpoints for a given custom resource. You can access these endpoints to view or modify the behavior of the custom resource, using PUT, POST, or PATCH requests. Visit Subresources for details.
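As a sketch of the versions list described above, a CRD spec might look like the following. The group and resource names here are hypothetical, used for illustration only:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # Hypothetical resource name for illustration
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
    singular: widget
  # "versions" replaces the deprecated singular "version" field.
  # In 1.11, all versions must share the same schema; exactly one
  # version is marked as the storage version.
  versions:
  - name: v1beta1
    served: true
    storage: false
  - name: v1
    served: true
    storage: true
```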
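Expanding a volume, as described above, amounts to raising the storage request on the PVC object and re-applying it. A minimal sketch follows; the claim name, StorageClass, and sizes are hypothetical, and the underlying StorageClass must set allowVolumeExpansion: true:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-claim          # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard   # must allow volume expansion
  resources:
    requests:
      storage: 20Gi            # raised from 10Gi; only increases are allowed
```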
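A minimal sketch of enabling both subresources on a hypothetical custom resource; the JSONPath values under scale are assumptions about the resource's own layout:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com    # hypothetical resource
spec:
  group: example.com
  scope: Namespaced
  version: v1
  names:
    kind: Widget
    plural: widgets
    singular: widget
  subresources:
    # Enables PUT/PATCH against /status without touching spec
    status: {}
    # Enables the /scale endpoint, e.g. for kubectl scale
    scale:
      specReplicasPath: .spec.replicas
      statusReplicasPath: .status.replicas
```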

Also, 1.10.9-gke.0 is available.

Scheduled master auto-upgrades

  • Cluster masters running GKE 1.10.6 will be upgraded to 1.10.6-gke.6.
  • Cluster masters running GKE 1.10.7 will be upgraded to 1.10.7-gke.6.

Fixed Issues

GKE 1.10.7-gke.6 and 1.11.2-gke.9 fix an issue that is present in GKE 1.10.6-gke.2 and higher and 1.11.2-gke.4 and higher, where master component logs are missing from Stackdriver Logging.

Other Updates

Container-Optimized OS node image has been upgraded to cos-beta-69-10895-52-0 for clusters running Kubernetes 1.11.2-gke.9, 1.10.9-gke.0, or 1.10.7-gke.6. See COS image release notes for more information.

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

Cluster templates are now available when creating new GKE clusters in Google Cloud console.

Changes

The kubectl command on new nodes has been upgraded from version 1.9 to 1.10. The kubectl version is always one version behind the highest GKE version, to ensure compatibility with all supported versions.

Known Issues

In GKE 1.10.6-gke.2 and higher and 1.11.2-gke.4 and higher, master component logs are missing from Stackdriver Logging. This is due to an issue in the version of fluentd used in those versions of GKE.

Update: This issue is fixed in GKE 1.10.7-gke.6 and 1.11.2-gke.9, available from October 30, 2018.

October 22, 2018

Fixed

Kubernetes 1.11.0+: Fixes a bug in kube-dns where hostnames in SRV records were being incorrectly compressed.

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

Scheduled master auto-upgrades

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

Authorized networks are now generally available.

You can now run GKE clusters in region asia-east2 (Hong Kong) with zones asia-east2-a, asia-east2-b, and asia-east2-c.

October 18, 2018

Changes

Node auto-upgrades are enabled by default for clusters and node pools created with the Google Cloud console.

October 8, 2018

Known Issues

All GKE v1.10.6 releases include a problem with Ingress load balancing. The problem was first reported in the release notes for September 18, 2018.

The problem is fixed in GKE v1.10.7 and higher. However, it cannot be fixed in GKE v1.10.6. If your cluster uses Ingress, do not upgrade to v1.10.6. Do not use GKE v1.10.6 for new clusters. If your cluster does not use Ingress for load balancing and you cannot upgrade to GKE v1.10.7 or higher, you can still use GKE v1.10.6.

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See supported versions for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

  • 1.10.6-gke.6
  • 1.10.7-gke.6
  • 1.11.2-gke.9 (EAP version)

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.10.6-gke.4
  • 1.10.7-gke.2

Node image updates

Container-Optimized OS node image cos-dev-69-10895-23-0 is now available. See COS image release notes for more information.

Container-Optimized OS with containerd node image cos-b-69-10895-52-0-c110 is now available. See COS image release notes for more information.

Rollout schedule

The rollout schedule is now included in Upgrades.

October 2, 2018

New Features

Private clusters are now generally available.

September 21, 2018

New Features

Container-native load balancing is now available in beta.

September 18, 2018

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

Scheduled master auto-upgrades

20% of cluster masters running Kubernetes versions 1.9.x will be updated to Kubernetes 1.9.7-gke.6, according to this week's rollout schedule.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.9.6-gke.2
  • 1.9.7-gke.5
  • 1.10.6-gke.3
  • 1.10.7-gke.1
  • 1.11.2-gke.2 (EAP version)
  • 1.11.2-gke.3 (EAP version)

Rollout schedule

The rollout schedule is now included in Upgrades.

Fixes

September 5, 2018

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

Scheduled master auto-upgrades

Cluster masters running Kubernetes versions 1.10.x will be updated to Kubernetes 1.10.6-gke.2 according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.8.x will be updated to Kubernetes 1.9.7-gke.5 according to this week's rollout schedule.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

Rollout schedule

The rollout schedule is now included in Upgrades.

Fixes

  • 1.10.7-gke.1 fixes an issue where preempted GPU Pods would restart withoutproper GPU libraries.

August 20, 2018

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

Scheduled master auto-upgrades

Auto-upgrades of Kubernetes 1.8.x clusters to 1.9.7-gke.5 continue for the second week. You can always upgrade your Kubernetes 1.8 masters manually.

Node image updates

Container-Optimized OS node image has been upgraded from cos-stable-66-10452-109-0 to cos-dev-69-10895-23-0 for clusters running Kubernetes 1.10.6-gke.2 and Kubernetes 1.11.2-gke.3. See COS image release notes for more information.

Container-Optimized OS node image has been upgraded from cos-stable-65-10323-98-0-p2 to cos-stable-65-10323-99-0-p2 for clusters running Kubernetes 1.9.7-gke.6. See COS image release notes for more information.

These images contain a fix for an L1 Terminal Fault vulnerability.

Ubuntu node image has been upgraded from ubuntu-gke-1804-bionic-20180718 to ubuntu-gke-1804-bionic-20180814 for clusters running Kubernetes 1.11.2-gke.3.

Ubuntu node image has been upgraded from ubuntu-gke-1604-xenial-20180731-1 to ubuntu-gke-1604-xenial-20180814-1 for clusters running Kubernetes 1.10.6-gke.2 and Kubernetes 1.9.7-gke.6.

These images contain a fix for an L1 Terminal Fault vulnerability.

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

  • Cloud binary authorization is promoted to Beta for GKE clusters.
  • GCE-Ingress has been upgraded to version 1.3.0. HTTP/2 support for Ingress is promoted to Beta.
  • Private endpoints are promoted to Beta, for customers using private clusters. At cluster creation time, customers can now choose to use the Kubernetes master's private IP address as their API server endpoint.

Fixes

  • This week's releases address an L1 Terminal Fault vulnerability. Customers running containers from different customers on the same GKE Node, as well as customers using COS images, should prioritize updating those environments.

August 13, 2018

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:

Scheduled master auto-upgrades

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

Rollout schedule

The rollout schedule is now included in Upgrades.

New Features

  • Containerd integration on the Container-Optimized OS (COS) image is now beta. You can now create a cluster or a node pool with image type cos_containerd. Refer to Container-Optimized OS with containerd for details.

Fixes

August 6, 2018

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See supported versions for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

  • Kubernetes 1.9.7-gke.5 is now generally available for use with Kubernetes Engine clusters.

New default version for new clusters

Kubernetes version 1.9.7-gke.5 is the default version for new clusters, available according to this week's rollout schedule.

Scheduled master auto-upgrades

Cluster masters running Kubernetes version 1.8.10-gke.0 will be updated to Kubernetes 1.8.10-gke.2, according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.8.12-gke.1 and 1.8.12-gke.2 will be updated to Kubernetes 1.8.12-gke.3, according to this week's rollout schedule.

Cluster masters running Kubernetes version 1.9.6-gke.1 will be updated to Kubernetes 1.9.6-gke.2, according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.9.7-gke.0, 1.9.7-gke.1, 1.9.7-gke.3, and 1.9.7-gke.4 will be updated to Kubernetes 1.9.7-gke.5, according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.10.2-gke.0, 1.10.2-gke.1, and 1.10.2-gke.3 will be updated to Kubernetes 1.10.2-gke.4, according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.10.4-gke.0 and 1.10.4-gke.2 will be updated to Kubernetes 1.10.4-gke.3, according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.10.5-gke.0 and 1.10.5-gke.3 will be updated to Kubernetes 1.10.5-gke.4, according to this week's rollout schedule.

Rollout schedule

The rollout schedule is now included in Upgrades.

Fixes

A patch for Kubernetes vulnerability CVE-2018-5390 is now available according to this week's rollout schedule. We recommend that you manually upgrade your nodes as soon as the patch becomes available in your cluster's zone.

August 3, 2018

New Features

In a future release, all newly-created Google Kubernetes Engine clusters will be VPC-native by default.

July 30, 2018

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

July 12, 2018

New Features

Cloud TPU is now available with GKE in Beta. Run your machine learning workload in a Kubernetes cluster on Google Cloud, and let GKE manage and scale the Cloud TPU resources for you.

Version updates

GKE cluster versions have been updated. Note: Your clusters might not have these versions available. Rollouts begin on the day of the note and take four or more business days to be completed across all Google Cloud zones.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

Enabling or disabling network policy on already-created 1.11 clusters may not work properly.

Scheduled master auto-upgrades

Cluster masters running Kubernetes versions 1.8 will be updated to Kubernetes 1.8.10-gke.0 according to this week's rollout schedule.

Rollout schedule

The rollout schedule is now included in Upgrades.

July 10, 2018

New Features

You can now run GKE clusters in region us-west2 (Los Angeles) with zones us-west2-a, us-west2-b, and us-west2-c.

June 28, 2018

Version Updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

The following versions are now available according to this week's rollout schedule:

Kubernetes 1.10.5-gke.0 is now generally available for use with GKE clusters.

New default version for new clusters

Kubernetes version 1.9.7-gke.3 is the default version for new clusters, available according to this week's rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

  • 1.10.5-gke.0

Scheduled master auto-upgrades

Cluster masters running Kubernetes versions older than 1.8.10-gke.0 will be updated to Kubernetes 1.8.10-gke.0 according to this week's rollout schedule.

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.8.8-gke.0
  • 1.10.4-gke.0

Rollout schedule

The rollout schedule is now included in Upgrades.

Known Issues

Currently, OS Login is not fully compatible with Google Kubernetes Engine clusters running Kubernetes version 1.10.x. The following functionalities of kubectl might not work properly when OS Login is enabled: kubectl logs, proxy, exec, attach, and port-forward. Until OS Login is fully supported, project-level OS Login settings are ignored at the node level in Kubernetes Engine.

June 18, 2018

Version Updates

Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

The following versions are now available according to this week's rollout schedule:

Kubernetes 1.10.4-gke.2 is now generally available for use with GKE clusters.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

  • 1.10.4-gke.2

Versions no longer available

The following versions are no longer available for new clusters or cluster upgrades:

  • 1.9.7-gke.1
  • 1.10.2-gke.3

New Features

GPUs for Google Kubernetes Engine are now generally available.

Rollout schedule

The rollout schedule is now included in Upgrades.

    June 11, 2018

    Version Updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. Seeversioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.10.4-gke.0 is nowgenerally available for use with GKE clusters.

    The base image for this version iscos-stable-66-10452-101-0, which contains a fix for an issue that causes deadlock in the Linux kernel.

    New Features

    You can now run GKE clusters in regioneurope-north1 (Finland) with zoneseurope-north1-a,europe-north1-b, andeurope-north1-c.

    Refer to the rollout schedule below for the specific rollout dates in each zone.

    A new `cos_containerd` image is now available and set by default for trying out the containerd integration in the alpha clusters running Kubernetes 1.10.4-gke.0 and above. See the containerd runtime alpha user guide for more information, or learn about the containerd integration in the recent Kubernetes blog post.
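    As a sketch, creating an alpha cluster that uses the `cos_containerd` image might look like the following. The cluster name and zone are placeholders, and the exact flags are assumptions for the gcloud CLI of this era; check `gcloud alpha container clusters create --help` for your version.

```shell
# Sketch: alpha cluster on the cos_containerd image (flag names assumed).
# Alpha clusters expire and cannot be upgraded; use for evaluation only.
gcloud alpha container clusters create containerd-test \
    --zone us-central1-a \
    --enable-kubernetes-alpha \
    --cluster-version 1.10.4-gke.0 \
    --image-type cos_containerd
```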

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    June 04, 2018

    New versions available for upgrades and new clusters

    The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

    • 1.9.7-gke.3

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    May 22, 2018

    New versions available for upgrades and new clusters

    Kubernetes 1.10.2-gke.3 is now available for use with Kubernetes Engine clusters.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:
    • 1.8.12-gke.0
    • 1.9.7-gke.0
    • 1.10.2-gke.1

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    New Features

    Custom Boot Disks is now available in Beta.

    Alias IPs is now generally available.

    May 16, 2018

    New Features

    Kubernetes Engine Shared VPC is now available in Beta.

    May 15, 2018

    The rollout of the release has been delayed. Refer to the revised rollout schedule below.

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    Clusters running Kubernetes 1.9.0 - 1.9.6-gke.0 that have opted into automatic node upgrades will be upgraded to Kubernetes 1.9.6-gke.1 according to this week's rollout schedule.

    Kubernetes 1.10.2-gke.1 is now generally available for use with Google Kubernetes Engine clusters.

    Kubernetes 1.9.7-gke.1 is now generally available for use with Google Kubernetes Engine clusters.

    Kubernetes 1.8.12-gke.1 is now generally available for use with Google Kubernetes Engine clusters.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Kubernetes 1.8.10-gke.0 is now the default version for new clusters.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    New Features

    Load balancers and ingresses are now automatically deleted upon cluster deletion.

    Other Updates

    The base image has been changed to cos-stable-66-10452-89-0 for clusters running Kubernetes 1.10.2-gke.1.

    This image contains a fix for Linux kernel CVE-2018-1000199 and CVEs in ext4 (CVE-2018-1092, CVE-2018-1093, CVE-2018-1094, CVE-2018-1095).

    The base image has been changed to cos-stable-65-10323-85-0 for clusters running Kubernetes 1.8.12-gke.0 and Kubernetes 1.9.7-gke.1.

    This image contains a fix for Linux kernel CVE-2018-1000199.

    The base image has been changed to ubuntu-gke-1604-xenial-20180509-1 for clusters running Kubernetes 1.9.7-gke.1 and Kubernetes 1.10.2-gke.1.

    The base image has been changed to ubuntu-gke-1604-xenial-20180509 for clusters running Kubernetes 1.8.12-gke.1.

    These images contain a fix for Linux kernel CVE-2018-1000199. Refer to USN-3641-1 for more information.

    May 7, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    Scheduled master auto-upgrades

    100% of cluster masters running Kubernetes versions 1.7.0 and 1.7.12-gke.2 will be updated to Kubernetes 1.8.8-gke.0, according to this week's rollout schedule.

    100% of cluster masters running Kubernetes versions 1.7.14-gke.1 and 1.7.15-gke.0 will be updated to Kubernetes 1.8.10-gke.0, according to this week's rollout schedule.

    100% of cluster masters running Kubernetes versions 1.9.x will be updated to Kubernetes 1.9.6, according to this week's rollout schedule.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • 1.7.15-gke.0
    • 1.9.3-gke.0

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    Known Issues

    The Kubernetes Dashboard in version 1.8.8-gke.0 isn't compatible with nodes running versions 1.7.13 through 1.7.15.

    May 1, 2018

    Known Issues

    In Kubernetes versions 1.9.7, 1.10.0, and 1.10.2, if an NVIDIA GPU device plugin restarts but the associated kubelet does not, then the node allocatable for the GPU resource nvidia.com/gpu stays zero until the kubelet restarts. This prevents new pods from consuming GPU devices.

    The most likely scenario when this problem occurs is after a cluster is created or upgraded with Kubernetes 1.9.7, 1.10.0, or 1.10.2 and the cluster master is upgraded to a new version, which triggers an NVIDIA GPU device plugin DaemonSet upgrade. The DaemonSet upgrade causes the NVIDIA GPU device plugin to restart itself.

    If you use the GPU feature, do not create or upgrade your cluster with Kubernetes 1.9.7, 1.10.0, or 1.10.2. This issue will be addressed in an upcoming release.
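    If you need to confirm whether a node is affected, checking each node's allocatable GPU count is one way to do so. This is a sketch that assumes kubectl access to the cluster:

```shell
# Sketch: print each node's allocatable nvidia.com/gpu count.
# A node hit by this issue reports 0 (or empty) even though GPUs are
# attached; restarting the kubelet on that node restores the count.
kubectl get nodes \
    -o custom-columns='NODE:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
```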

    April 30, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.8.12-gke.0 is now generally available for use with Google Kubernetes Engine clusters.

    Kubernetes 1.9.7-gke.0 is now generally available for use with Google Kubernetes Engine clusters.

    Kubernetes 1.10.2-gke.0 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.10.2-gke.0 in Alpha Clusters.

    Scheduled master auto-upgrades

    100% of cluster masters running Kubernetes versions 1.7.x will be updated to Kubernetes 1.8.8-gke.0, according to this week's rollout schedule.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    Other Updates

    The base image has been changed to cos-stable-65-10323-75-0-p for clusters running Kubernetes 1.8.12-gke.0.

    The base image has been changed to cos-stable-65-10323-75-0-p2 for clusters running Kubernetes 1.9.7-gke.0.

    The base image has been changed to cos-stable-66-10452-74-0 for clusters running Kubernetes 1.10.2-gke.0.

    April 24, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    Scheduled master auto-upgrades

    • 10% of cluster masters running Kubernetes versions 1.7.x will be updated to Kubernetes 1.8.8-gke.0, according to this week's rollout schedule.
    • Cluster masters running Kubernetes versions 1.8.x will be updated to Kubernetes 1.8.8-gke.0, according to this week's rollout schedule.
    • Cluster masters running Kubernetes versions 1.9.x will be updated to Kubernetes 1.9.3-gke.0, according to this week's rollout schedule.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • 1.8.7-gke.1
    • 1.9.2-gke.1
    • 1.9.6-gke.0

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    April 16, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:
    • Kubernetes 1.7.14-gke.1
    • Kubernetes 1.8.9-gke.1
    • Kubernetes 1.9.4-gke.1

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    April 9, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.6-gke.1 is now generally available for use with Google Kubernetes Engine clusters.

    Kubernetes 1.10.0-gke.0 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.10.0-gke.0 in Alpha Clusters.

    Scheduled master auto-upgrades

    Cluster masters running Kubernetes versions 1.7.x will be updated to Kubernetes 1.7.12-gke.2, according to this week's rollout schedule.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:
    • Kubernetes 1.7.12-gke.1

    Other Updates

    Container-Optimized OS node image has been upgraded to cos-stable-65-10323-69-0-p2 for clusters running Kubernetes 1.9.6-gke.1. See COS image release notes for more information.

    Container-Optimized OS node image is using cos-beta-66-10452-28-0 for clusters running Kubernetes 1.10.0-gke.0. See COS image release notes for more information.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    March 30, 2018

    Note: The March 27, 2018 release has been rolled back, so this release supersedes the rollout schedule and cluster default version previously stated.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.6-gke.0, Kubernetes 1.8.10-gke.0, and Kubernetes 1.7.15-gke.0 are now generally available for use with Google Kubernetes Engine clusters.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Zonal clusters

    The default version has been reverted from the March 27, 2018 release. Kubernetes 1.8.8-gke.0 is now the default version for new zonal and regional clusters.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.9.2-gke.1
    • Kubernetes 1.7.12-gke.2

    Other Updates

    The following updates are the same as in the March 27, 2018 release. They have not been changed by the rollback.

    Ubuntu node image has been upgraded to ubuntu-gke-1604-xenial-v20180317-1 for clusters running Kubernetes 1.9.6-gke.0.

    Issues fixed:

    • In ubuntu-gke-1604-xenial-v20180207-1, used by Kubernetes 1.9.3-gke.0 and 1.9.4-gke.1, new pods could not be scheduled to a node after Docker was restarted on it.
    • Security fix for USN-3586-1

    Ubuntu node image has been upgraded to ubuntu-gke-1604-xenial-v20180308 for clusters running Kubernetes 1.8.10-gke.0 and 1.7.15-gke.0.

    Issue fixed:

    Container-Optimized OS node image has been upgraded to cos-beta-65-10323-12-0 for clusters running Kubernetes 1.7.15-gke.0. See COS image release notes for more information.

    March 27, 2018

    Note: This release has been rolled back. Refer to the March 30, 2018 release note for the rollout schedule and cluster default version.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.6-gke.0, Kubernetes 1.8.10-gke.0, and Kubernetes 1.7.15-gke.0 are now generally available for use with Google Kubernetes Engine clusters.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Zonal clusters

    Kubernetes 1.8.9-gke.1 is now the default version for new zonal and regional clusters.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.9.2-gke.1
    • Kubernetes 1.8.8-gke.0
    • Kubernetes 1.7.12-gke.2

    Other Updates

    Ubuntu node image has been upgraded to ubuntu-gke-1604-xenial-v20180317-1 for clusters running Kubernetes 1.9.6-gke.0.

    Issues fixed:

    • In ubuntu-gke-1604-xenial-v20180207-1, used by Kubernetes 1.9.3-gke.0 and 1.9.4-gke.1, new pods could not be scheduled to a node after Docker was restarted on it.
    • Security fix for USN-3586-1

    Ubuntu node image has been upgraded to ubuntu-gke-1604-xenial-v20180308 for clusters running Kubernetes 1.8.10-gke.0 and 1.7.15-gke.0.

    Issue fixed:

    Container-Optimized OS node image has been upgraded to cos-beta-65-10323-12-0 for clusters running Kubernetes 1.7.15-gke.0. See COS image release notes for more information.

    March 21, 2018

    New Features

    Private Clusters are now available in Beta.

    March 19, 2018

    Fixed

    Kubernetes 1.9.4+: Fixes a bug that prevented clusters with IP aliases from appearing.

    March 13, 2018

    Fixed

    A patch for Kubernetes vulnerabilities CVE-2017-1002101 and CVE-2017-1002102 is now available according to this week's rollout schedule. We recommend that you manually upgrade your nodes as soon as the patch becomes available in your cluster's zone.

    Issues

    Breaking Change: Do not upgrade your cluster if your application requires mounting a secret, configMap, downwardAPI, or projected volume with write access.

    To fix security vulnerability CVE-2017-1002102, Kubernetes 1.9.4-gke.1, Kubernetes 1.8.9-gke.1, and Kubernetes 1.7.14-gke.1 changed secret, configMap, downwardAPI, and projected volumes to mount read-only, instead of allowing applications to write data and then reverting it automatically. We recommend that you modify your application to accommodate these changes before you upgrade your cluster.

    If your cluster uses IP Aliases and was created with the --enable-ip-alias flag, upgrading the master to Kubernetes 1.9.4-gke.1 will prevent it from starting properly. This issue will be addressed in an upcoming release.

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.4-gke.1, Kubernetes 1.8.9-gke.1, and Kubernetes 1.7.14-gke.1 are now generally available for use with Google Kubernetes Engine clusters.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Zonal clusters

    Kubernetes 1.8.8-gke.0 is now the default version for new zonal and regional clusters.

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Regional clusters running Kubernetes 1.7.x will be upgraded to Kubernetes 1.8.7-gke.1.

    This upgrade applies to cluster masters.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.8.7-gke.1

    New Features

    You can now use version aliases with gcloud's --cluster-version option to specify Kubernetes versions. Version aliases allow you to specify the latest version or a specific version, without including the `-gke.0` version suffix. See versioning and upgrades for a complete overview of version aliases.
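    For example (cluster names and zone are placeholders), the alias forms might be used like this:

```shell
# Sketch: version aliases with --cluster-version.
# "latest" picks the newest supported version; a minor version such as
# "1.9" picks its newest patch release, with no -gke.N suffix needed.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --cluster-version latest

gcloud container clusters create my-cluster-19 \
    --zone us-central1-a \
    --cluster-version 1.9
```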

    March 12, 2018

    Issues

    A patch for Kubernetes vulnerabilities CVE-2017-1002101 and CVE-2017-1002102 will be available in the upcoming release. We recommend that you manually upgrade your nodes as soon as the patch becomes available.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    March 08, 2018

    New Features

    You can now easily debug your Kubernetes services from the Google Cloud console with port-forwarding and web preview.

    March 06, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.7.12-gke.1

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    February 27, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.3-gke.0, Kubernetes 1.8.8-gke.0, and Kubernetes 1.7.12-gke.2 are now generally available for use with Google Kubernetes Engine clusters.

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.8.x will be upgraded to Kubernetes 1.8.7-gke.1.
    • Regional clusters running Kubernetes 1.8.x will have etcd upgraded to etcd 3.1.11.

    This upgrade applies to cluster masters.

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.8.5-gke.0

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    New Features

    Beginning with Kubernetes version 1.9.3, you can enable metadata concealment to prevent user Pods from accessing certain VM metadata for your cluster's nodes. For more information, see Protecting Cluster Metadata.
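    A minimal sketch of enabling this at cluster creation; the `--workload-metadata-from-node=SECURE` flag is an assumption for the beta gcloud surface of this era, and the cluster name and zone are placeholders. See Protecting Cluster Metadata for the supported syntax.

```shell
# Sketch: create a cluster with metadata concealment enabled
# (flag name assumed; verify with `gcloud beta container clusters create --help`).
gcloud beta container clusters create concealed-metadata \
    --zone us-central1-a \
    --cluster-version 1.9.3-gke.0 \
    --workload-metadata-from-node=SECURE
```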

    Other Updates

    Ubuntu node image has been upgraded from ubuntu-gke-1604-xenial-v20180122 to ubuntu-gke-1604-xenial-v20180207 for clusters running Kubernetes 1.7.12-gke.2 and 1.8.8-gke.0.

    Ubuntu node image has been upgraded from ubuntu-gke-1604-xenial-v20180122 to ubuntu-gke-1604-xenial-v20180207-1 for clusters running Kubernetes 1.9.3-gke.0.

    • Security fix for USN-3548-2
    • Docker upgraded from 1.12 to 17.03 and default storage driver changed to overlay2
    • Known issue: When Docker gets restarted on a node, new pods cannot be scheduled on that node and will be stuck in `ContainerCreating` state.

    Container-Optimized OS node image has been upgraded from cos-stable-63-10032-71-0 to cos-beta-65-10323-12-0 for clusters running Kubernetes 1.9.3-gke.0 and 1.8.8-gke.0. See COS image release notes for more information.

    February 13, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Zonal clusters

    Kubernetes version 1.8.7-gke.1 is now the default version for new zonal and regional clusters.

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.6.13-gke.1 and 1.7.12-gke.0 will be upgraded to Kubernetes 1.7.12-gke.1.
    • Clusters running Kubernetes 1.9.1-gke.0 and 1.9.2-gke.0 will be upgraded to Kubernetes 1.9.2-gke.1.
    • Clusters running etcd 2.* will be upgraded to etcd 3.0.17-gke.2.

    This upgrade applies to cluster masters.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    February 8, 2018

    New Features

    GPUs on Kubernetes Engine are now available in Beta.

    February 5, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    Kubernetes 1.9.2-gke.1 is now generally available for use with Google Kubernetes Engine clusters.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Zonal clusters

    Kubernetes version 1.7.12-gke.1 is now the default version for new zonal clusters.

    Regional clusters

    Kubernetes version 1.8.7-gke.1 is now the default version for new regional clusters.

    The new cluster versions can be used with the latest Ubuntu node image version, ubuntu-gke-1604-xenial-v20180122.

    • Kernel upgraded from 4.4 to 4.13
    • Security fixes for Spectre and Meltdown
    • Support for Alias IPs

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.6.13-gke.1 and 1.7.x will be upgraded to Kubernetes 1.7.12-gke.0.

    This upgrade applies to cluster masters.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    New Features

    Beginning with Kubernetes version 1.9.x on Google Kubernetes Engine, you can now perform horizontal pod autoscaling based on custom metrics from Stackdriver Monitoring (in addition to the default scaling based on CPU utilization). For more information, see Scaling an Application and the custom metrics autoscaling tutorial.
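    A minimal sketch of such an autoscaler, assuming a Deployment named `my-app` and an exported Stackdriver metric (both are hypothetical placeholders); the autoscaling/v2beta1 API shown here is the version current in Kubernetes 1.9:

```shell
# Sketch: HorizontalPodAutoscaler scaling on a custom (Pods-type) metric.
# Deployment name and metric name are placeholders; Stackdriver custom
# metric names use "|" in place of "/" when referenced this way.
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metricName: custom.googleapis.com|my_metric
      targetAverageValue: "20"
EOF
```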

    Known Issues

    Beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Google Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network, but outside the cluster. This change was made for security reasons.

    You can replicate the behavior of older clusters (1.8.x and earlier) by setting a new firewall rule on your cluster.
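    A sketch of such a rule; the rule name, network name, and Pod IP source range are placeholders for your cluster's own values, and you should narrow the allowed protocols and ports to what your workloads actually need:

```shell
# Sketch: allow traffic from cluster Pods to other VMs on the same network.
# Replace NETWORK and the source range with your cluster's Pod CIDR.
gcloud compute firewall-rules create allow-from-gke-pods \
    --network NETWORK \
    --source-ranges 10.0.0.0/8 \
    --allow tcp,udp,icmp
```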

    January 31, 2018

    New Features

    PodSecurityPolicies are now available in Beta.

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New default version for new clusters

    The following versions are now default according to this week's rollout schedule:

    Kubernetes version 1.7.12-gke.0 is now the default version for new zonal clusters.

    Kubernetes version 1.8.6-gke.0 is now the default version for new regional clusters.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    • Kubernetes 1.8.7-gke.0
    • Kubernetes 1.9.2-gke.0 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.9.2-gke.0 in Alpha Clusters.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    January 16, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available according to this week's rollout schedule:

    • Kubernetes 1.9.1 clusters are now available for whitelisted early-access users. Non-whitelisted users can specify version 1.9.1 in Alpha Clusters.

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.6.x will be upgraded to 1.7.11-gke.1.

    This upgrade applies to cluster masters.

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    January 10, 2018

    New Features

    You can now run Container Engine clusters in region europe-west4 (Netherlands).

    You can now run Container Engine clusters in region northamerica-northeast1 (Montréal).

    January 9, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New default version for new clusters

    Kubernetes version 1.7.11-gke.1 is now the default version for new clusters, available according to this week's rollout schedule.

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.6.x will be upgraded to 1.6.13-gke.1.
    • Clusters running Kubernetes 1.7.x will be upgraded to 1.7.11-gke.1.
    • Clusters running Kubernetes 1.8.x will be upgraded to 1.8.5-gke.0.

    This upgrade applies to cluster masters and, if node auto-upgrades are enabled, all cluster nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.8.6-gke.0
    • Kubernetes 1.7.12-gke.0

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.8.4-gke.1

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    January 2, 2018

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New default version for new clusters

    Kubernetes version 1.7.11-gke.1 is now the default version for new clusters, available according to this week's rollout schedule.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.8.5-gke.0

    Versions no longer available

    The following versions are no longer available for new clusters or cluster upgrades:

    • Kubernetes 1.6.x (all versions)
    • Kubernetes 1.7.8
    • Kubernetes 1.7.9

    Rollout schedule

    The rollout schedule is now included in Upgrades.

    December 14, 2017

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.8.4-gke.1
    • Kubernetes 1.7.11-gke.1
    • Kubernetes 1.6.13-gke.1

    These version updates change the default node image for Kubernetes Engine nodes to Container-Optimized OS version cos-stable-63-10032-71-0-p.

    Versions no longer available

    The following versions are no longer available for new clusters or opt-in master and node upgrades:

    • Kubernetes 1.8.4-gke.0
    • Kubernetes 1.7.11-gke.0
    • Kubernetes 1.6.13-gke.0

    Rollout schedule

    Date: Available zones
    2017-12-14: europe-west2-a, us-east1-d
    2017-12-15: asia-east1-a, asia-northeast1-a, asia-south1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-12-18: asia-east1-c, asia-northeast1-b, asia-south1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-12-19: asia-east1-b, asia-northeast1-c, asia-south1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    December 5, 2017

    New Features

    Regional Clusters are now available in Beta.

    December 1, 2017

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New Features

    Audit Logging is now available in Beta.

    November 28, 2017

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.8.4-gke.0
    • Kubernetes 1.7.11-gke.0
    • Kubernetes 1.6.13-gke.0

    Rollout schedule

    Date: Available zones
    2017-11-28: europe-west2-a, us-east1-d
    2017-11-29: asia-east1-a, asia-northeast1-a, asia-south1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-11-30: asia-east1-c, asia-northeast1-b, asia-south1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-12-1: asia-east1-b, asia-northeast1-c, asia-south1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    Other Updates

    Container-Optimized OS version m63 is now available for use as a Google Kubernetes Engine node image.

    November 13, 2017

    Version updates

    Kubernetes Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Kubernetes Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.7.10-gke.0
    • Kubernetes 1.8.3-gke.0

    Other Updates

    Container Engine is now named Kubernetes Engine. See the Google Cloud blog post.

    Kubernetes Engine's kubectl version has been updated from 1.8.2 to 1.8.3.

    November 7, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.8.2-gke.0

    Rollout schedule

    Date | Available zones
    2017-11-07 | europe-west2-a, us-east1-d
    2017-11-08 | asia-east1-a, asia-northeast1-a, asia-south1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-11-09 | asia-east1-c, asia-northeast1-b, asia-south1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-11-10 | asia-east1-b, asia-northeast1-c, asia-south1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    Added an option to the gcloud container clusters create command: --enable-basic-auth. This option allows you to create a cluster with basic authentication enabled.

    Added options to the gcloud container clusters update command: --enable-basic-auth, --username, and --password. These options allow you to enable or disable basic authentication and change the username and password for an existing cluster.
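The new flags can be combined as in the following sketch; the cluster name, zone, and credentials are placeholders, and the flags are those listed above:

```shell
# Create a cluster with basic authentication enabled (placeholder names).
gcloud container clusters create demo-cluster \
    --zone us-central1-a \
    --enable-basic-auth

# Later, rotate the basic-auth credentials on the existing cluster.
gcloud container clusters update demo-cluster \
    --zone us-central1-a \
    --enable-basic-auth \
    --username admin \
    --password "new-strong-password"
```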

    October 31, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following versions are now available for new clusters and opt-in master and node upgrades according to this week's rollout schedule:

    • Kubernetes 1.7.9-gke.0

    Scheduled auto-upgrades

    Clusters running the following Kubernetes versions will be automatically upgraded as follows, according to the rollout schedule:

    • Clusters running Kubernetes 1.6.x will be upgraded to 1.6.11-gke.0.
    • Clusters running Kubernetes 1.7.x will be upgraded to 1.7.8-gke.0.
    • Clusters running Kubernetes 1.8.x will be upgraded to 1.8.1-gke.1.

    This upgrade applies to cluster masters and, if node auto-upgrades are enabled, all cluster nodes.

    New default version for new clusters

    Kubernetes version 1.7.8-gke.0 is now the default version for new clusters, available according to this week's rollout schedule.

    Rollout schedule

    Date | Available zones
    2017-10-31 | europe-west2-a, us-east1-d
    2017-11-01 | asia-east1-a, asia-northeast1-a, asia-south1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-11-02 | asia-east1-c, asia-northeast1-b, asia-south1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-11-03 | asia-east1-b, asia-northeast1-c, asia-south1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    You can now run Container Engine clusters in region asia-south1 (Mumbai).

    Fixes

    Clusters using the Container-Optimized OS node image version cos-stable-61 can be affected by Docker daemon crashes and restarts and become unable to schedule pods.

    To mitigate this issue, clusters running Kubernetes versions 1.6.x, 1.7.x, and 1.8.x are slated to automatically upgrade to versions 1.6.11-gke.0, 1.7.8-gke.0, and 1.8.1-gke.1 respectively. These versions have been remapped to use the cos-stable-60-9592-90-0 node image.

    Automatic upgrades must be enabled for this workaround to take effect. If your cluster does not have auto-upgrades enabled, you must manually upgrade your cluster to the appropriate version to employ the workaround.

    October 24, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    Kubernetes version 1.8.1 is now generally available, according to this week's rollout schedule. See the Google Cloud blog post on Container Engine 1.8 for more information on the Kubernetes capabilities highlighted in this release.

    Rollout schedule

    Date | Available zones
    2017-10-24 | europe-west2-a, us-east1-d
    2017-10-25 | asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-10-26 | asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-10-27 | asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    You can now run CronJobs on your Container Engine cluster. CronJob is a Beta feature in Kubernetes version 1.8.
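As a minimal sketch (the name, schedule, and image are placeholders), a Beta CronJob in Kubernetes 1.8 lives in the batch/v1beta1 API group:

```shell
# Create a CronJob that prints the date every five minutes (placeholder names).
kubectl apply -f - <<'EOF'
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date"]
          restartPolicy: OnFailure
EOF
```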

    You can now view the status of your cluster's nodes using the Google Cloud console.

    The Google Cloud console browser-integrated cloud shell can now automatically generate commands for the kubectl command-line interface.

    You can now edit your cluster's workloads when viewing them with the Google Cloud console.

    Known Issues

    Kubernetes Third-party Resources, previously deprecated, have been removed in version 1.8. These resources will cease to function on clusters upgrading to version 1.8.1 or later.

    Audit Logging, a beta feature in Kubernetes 1.8, is currently not enabled on Container Engine.

    Horizontal Pod Autoscaling with Custom Metrics, a beta feature in Kubernetes 1.8, is currently not enabled on Container Engine.

    Other Updates

    Beta features in the Container Engine API (and gcloud command-line interface) are now exposed via the new v1beta1 API surface. To use beta features on Container Engine, you must configure the gcloud command-line interface to use the Beta API surface to run gcloud beta container commands. See API organization for more information.

    October 10, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters, according to this week's rollout schedule:

    • 1.7.8
    • 1.6.11

    Clusters running Kubernetes version 1.6.11 can safely upgrade to Kubernetes versions 1.7.x.

    Rollout schedule

    Date | Available zones
    2017-10-10 | europe-west2-a, us-east1-d
    2017-10-11 | asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-10-12 | asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-10-13 | asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    Other Updates

    Clusters running Kubernetes versions 1.7.8 and 1.6.11 have upgraded the version of Container-Optimized OS running on cluster nodes from version cos-stable-60-9592-84-0 to cos-stable-61-9765-66-0. See the release notes for more details.

    This upgrade updates the node's Docker version from 1.13 to 17.03. See the Docker documentation for details on feature deprecations.

    October 3, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    Kubernetes version 1.8.0-gke.0 is now available for early access partners and alpha clusters only. To try out v1.8.0-gke.0, sign up for the early access program.

    Scheduled master auto-upgrades

    Cluster masters running Kubernetes versions 1.7.x will be automatically upgraded to Kubernetes v1.7.6-gke.1 according to this week's rollout schedule.

    Rollout schedule

    Date | Available zones
    2017-10-03 | europe-west2-a, us-east1-d
    2017-10-04 | asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-10-05 | asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-10-06 | asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    You can now rotate your username for basic authentication on existing clusters, or disable basic authentication by providing an empty username.

    Fixes

    Kubernetes 1.7.6-gke.1: Fixed a regression in fluentd.

    Kubernetes 1.7.6-gke.1: Updated the kube-dns add-on to patch dnsmasq vulnerabilities announced on October 2. For more information on the vulnerability, see the associated Kubernetes Security Announcement.

    Known Issues

    Kubernetes 1.8.0-gke.0 (early access and alpha clusters only): Clusters created with a subnetwork with an automatically-generated name that contains a hash (e.g. "default-38b01f54907a15a7") might encounter issues where their internal load balancers fail to sync.

    This issue also affects clusters that run legacy networks.

    Container Engine clusters can enter a bad state if you convert your automatically-configured network to a manually-configured one. In this state, internal load balancers might fail to sync, and node pool upgrades might fail.

    September 27, 2017

    New Features

    You can now configure a maintenance window for your Container Engine clusters. You can use the maintenance window feature to designate specific spans of time for scheduled maintenance and upgrades to your master and nodes. Maintenance window is a beta feature on Container Engine.

    Container Engine's node auto-upgrade feature is now generally available.

    The Ubuntu node image is now generally available for use on your Container Engine cluster nodes.

    September 25, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    Scheduled master auto-upgrades

    Cluster masters running Kubernetes versions 1.7.x will be automatically upgraded to Kubernetes v1.7.5 according to this week's rollout schedule.

    Cluster masters running Kubernetes versions 1.6.x will be automatically upgraded to Kubernetes v1.6.10 according to this week's rollout schedule.

    Rollout schedule

    Date | Available zones
    2017-09-25 | europe-west2-a, us-east1-d
    2017-09-26 | asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-09-27 | asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-09-28 | asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    Fixes

    Kubernetes v1.7.5: Fixed an issue with Kubernetes v1.7.0 to v1.7.4 in which controller-manager could become unhealthy and enter a repair loop.

    Kubernetes v1.6.10: Fixed an issue in which a Google Cloud Load Balancer could enter a persistently bad state if an API call failed while the ingress controller was starting.

    September 18, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New default version for new clusters

    Kubernetes v1.7.5 is the default version for new clusters, available according to this week's rollout schedule below.

    New versions available for upgrades and new clusters

    The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

    • 1.7.6
    • 1.6.10

    New versions available for node upgrades and downgrades

    The following Kubernetes versions are now available for node upgrades and downgrades:

    • 1.7.6
    • 1.6.10

    Rollout schedule

    Date | Available zones
    2017-09-19 | europe-west2-a, us-east1-d
    2017-09-20 | asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-09-21 | asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-09-22 | asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    Starting in Kubernetes version 1.7.6, the available resources on cluster nodes have been updated to account for the CPU and memory requirements of Kubernetes node daemons. See the Node documentation in the cluster architecture overview for more information.

    You can now set a cluster network policy on your Container Engine clusters running Kubernetes version 1.7.6 or later.
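A sketch of enabling and using this feature, assuming the beta --enable-network-policy flag of that era (cluster, zone, and label names are placeholders):

```shell
# Create a cluster with the network policy add-on enabled (placeholder names).
gcloud beta container clusters create np-cluster \
    --zone us-central1-a \
    --enable-network-policy

# Allow backend pods to receive traffic only from frontend pods.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF
```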

    Other Updates

    The deprecated container-vm node image type has been removed from the list of valid Container Engine node images. Existing clusters and node pools will continue to function, but you can no longer create new clusters and node pools that run the container-vm node image.

    Clusters that use the deprecated container-vm as a node image cannot be upgraded to Kubernetes v1.7.6 or later.

    September 12, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New versions available for upgrades and new clusters

    The following Kubernetes versions are now available for new clusters and opt-in master upgrades for existing clusters:

    • 1.7.5
    • 1.6.9
    • 1.6.7

    Scheduled master auto-upgrades

    Cluster masters running Kubernetes versions 1.6.x will be upgraded to Kubernetes v1.6.9 according to this week's rollout schedule.

    Rollout schedule

    Date | Available zones
    2017-09-12 | europe-west2-a, us-east1-d
    2017-09-13 | asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-09-14 | asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-09-17 | asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    New Features

    You can now use IP aliases with an existing subnetwork when creating a cluster. IP aliases are a Beta feature in Google Kubernetes Engine version 1.7.5.
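A sketch, assuming the beta IP-alias flags of that release (the network, subnet, and secondary-range names are placeholders):

```shell
# Create a cluster with IP aliases (Beta) on an existing subnetwork.
# The subnetwork must already have secondary ranges for Pods and Services.
gcloud beta container clusters create aliased-cluster \
    --zone us-central1-a \
    --enable-ip-alias \
    --network my-network \
    --subnetwork my-subnet \
    --cluster-secondary-range-name pods \
    --services-secondary-range-name services
```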

    September 05, 2017

    Version updates

    Container Engine cluster versions have been updated as detailed in the following sections. See versioning and upgrades for a full list of the Kubernetes versions you can run on your Container Engine masters and nodes.

    New default version for new clusters

    Kubernetes v1.6.9 is the default version for new clusters, available according to this week's rollout schedule.

    New versions available for upgrades and new clusters

    Kubernetes v1.7.5 is now available for new clusters and opt-in master upgrades.

    Versions no longer available

    The following Kubernetes versions are no longer available for new clusters or upgrades to existing cluster masters:

    • 1.7.3
    • 1.7.4

    Rollout schedule

    Date | Available zones
    2017-09-05 | europe-west2-a, us-east1-d
    2017-09-06 | asia-east1-a, asia-northeast1-a, asia-southeast1-a, australia-southeast1-a, europe-west1-c, europe-west3-a, southamerica-east1-a, us-central1-b, us-east4-b, us-west1-a
    2017-09-07 | asia-east1-c, asia-northeast1-b, asia-southeast1-b, australia-southeast1-b, europe-west1-b, europe-west2-b, europe-west3-b, southamerica-east1-b, us-central1-f, us-east1-c, us-east4-c, us-west1-b
    2017-09-08 | asia-east1-b, asia-northeast1-c, australia-southeast1-c, europe-west1-d, europe-west2-c, europe-west3-c, southamerica-east1-c, us-central1-a, us-central1-c, us-east1-b, us-east4-a, us-west1-c

    Other Updates

    Container Engine's kubectl version has been updated from 1.7.4 to 1.7.5.

    You can now run Container Engine clusters in region southamerica-east1 (São Paulo).

    August 28, 2017

    • Kubernetes v1.7.4 is available for new clusters and opt-in master upgrades.

    • Kubernetes v1.6.9 is available for new clusters and opt-in master upgrades.

    • Clusters with a master version of v1.6.7 and node auto-upgrades enabled will have nodes upgraded to v1.6.7.

    • Clusters with a master version of v1.7.3 and node auto-upgrades enabled will have nodes upgraded to v1.7.3.

    • Starting at version v1.7.4, when Cloud Monitoring is enabled for a cluster, container system metrics will start to be pushed by Heapster to the Stackdriver Monitoring API. The metrics remain free, though Stackdriver Monitoring API quota will be affected.

    • Clusters running Kubernetes v1.6.9 and v1.7.4 have updated node images:

      • The COS node image was upgraded from cos-stable-59-9460-73-0 to cos-stable-60-9592-84-0. Please see the COS image release notes for details.
        • The new COS image includes an upgrade of Docker, from v1.11.2 to v1.13.1. This Docker upgrade contains many stability and performance fixes. A full list of the Docker features that have been deprecated between v1.11.2 and v1.13.1 is available on Docker's website.
        • Three features in Docker v1.13.1 are disabled by default in the COS m60 image, but are planned to be enabled in a later node image release: live-restore, shared PID namespaces, and overlay2.
        • Known issue: Docker v1.13.1 supports HEALTHCHECK, which was previously ignored by Docker v1.11.2 on COS m59; see "Known Issues running Docker v1.13" below.
      • The Ubuntu node image was upgraded from ubuntu-gke-1604-xenial-v20170420-1 to ubuntu-gke-1604-xenial-v20170816-1.
        • This patch release is based on Ubuntu 16.04.3 LTS.
        • It includes a fix for the Stackdriver Logging issues in ubuntu-gke-1604-xenial-v20170420-1.
        • Known issue: Alias IPs are not supported.
    • Known Issues upgrading to v1.7:

    There is a known issue with StatefulSets in 1.7.X that causes StatefulSet pods to become unavailable in DNS upon upgrade. We are currently recommending that you not upgrade to 1.7.X if you are using DNS with StatefulSets. A fix is being prepared. Additional information can be found here: https://github.com/kubernetes/kubernetes/issues/48327

    • Known Issues running Docker v1.13:

    Docker v1.13.1 supports HEALTHCHECK, which was previously ignored by Docker v1.11.2 on COS m59. Kubernetes supports more powerful liveness/readiness checks for containers, and it currently does not surface or consume the HEALTHCHECK status reported by Docker. We encourage users to disable HEALTHCHECK in Docker images to reduce unnecessary overhead, especially if performance degradation is observed after node upgrade. Note that HEALTHCHECK could be inherited from the base image.
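One way to disable an inherited HEALTHCHECK is to rebuild the image with the Dockerfile `HEALTHCHECK NONE` instruction; a sketch (the base image name is a placeholder):

```shell
# Write a derived Dockerfile that disables any HEALTHCHECK inherited
# from the base image.
cat > Dockerfile <<'EOF'
FROM some-base-image:latest
HEALTHCHECK NONE
EOF
# Then rebuild and push, e.g.: docker build -t my-image:no-healthcheck .
```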

    August 21, 2017

    • When using IP aliases, you can now represent service CIDR blocks by using a secondary range instead of a subnetwork. This means you can use IP aliases without specifying the --create-subnetwork option.
    • Cluster etcd fragmentation/compaction fixes.

    • Known Issues upgrading to v1.7.3:

    There is a known issue with StatefulSets in 1.7.X regarding annotations, so we are currently recommending that you not upgrade to 1.7.X if you are using annotations with StatefulSets. A fix is being prepared. Additional information can be found here: https://github.com/kubernetes/kubernetes/issues/48327

    August 14, 2017

    • Cluster masters running Kubernetes versions 1.7.X will be upgraded to v1.7.3 according to the following schedule:

      • 2017-08-15: europe-west2-a us-east1-d
      • 2017-08-16: asia-east1-a asia-northeast1-a asia-southeast1-a australia-southeast1-a europe-west1-c europe-west3-a us-central1-b us-east4-b us-west1-a
      • 2017-08-17: asia-east1-c asia-northeast1-b asia-southeast1-b australia-southeast1-b europe-west1-b europe-west2-b europe-west3-b us-central1-f us-east1-c us-east4-c us-west1-b
      • 2017-08-18: asia-east1-b asia-northeast1-c australia-southeast1-c europe-west1-d europe-west2-c europe-west3-c us-central1-a us-central1-c us-east1-b us-east4-a us-west1-c
    • You can now specify a minimum CPU size/class for Alpha clusters by using the --min-cpu-platform flag with gcloud alpha container commands.

    • Cluster resize commands (gcloud alpha container clusters resize or gcloud beta container clusters resize) now safely drain nodes before removal.

    • Updated Google Container Engine's kubectl from version 1.7.2 to 1.7.3.

    • Added the --logging-service flag to gcloud beta container clusters update. This flag controls the enabling and disabling of Stackdriver Logging integration. Use --logging-service=logging.googleapis.com to enable and --logging-service=none to disable.

    • Modified the --scopes flag in gcloud beta container clusters create and gcloud beta container node-pools create commands to default to logging.write,monitoring and to support passing an empty list.
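The new --logging-service flag can be used as in this sketch (cluster name and zone are placeholders):

```shell
# Disable Stackdriver Logging integration on an existing cluster...
gcloud beta container clusters update demo-cluster \
    --zone us-central1-a \
    --logging-service none

# ...and re-enable it later.
gcloud beta container clusters update demo-cluster \
    --zone us-central1-a \
    --logging-service logging.googleapis.com
```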

    August 7, 2017

    • Kubernetes v1.7.3 is available for new clusters and opt-in master upgrades.

    • Kubernetes v1.6.8 is available for new clusters and opt-in master upgrades.

    • Cluster masters running Kubernetes version v1.6.6 or older will be upgraded to v1.6.7 according to the following schedule:

      • 2017-08-08: europe-west2-a us-east1-d
      • 2017-08-09: asia-east1-a asia-northeast1-a asia-southeast1-a australia-southeast1-a europe-west1-c europe-west3-a us-central1-b us-east4-b us-west1-a
      • 2017-08-10: asia-east1-c asia-northeast1-b asia-southeast1-b australia-southeast1-b europe-west1-b europe-west2-b europe-west3-b us-central1-f us-east1-c us-east4-c us-west1-b
      • 2017-08-11: asia-east1-b asia-northeast1-c australia-southeast1-c europe-west1-d europe-west2-c europe-west3-c us-central1-a us-central1-c us-east1-b us-east4-a us-west1-c
    • Node pools can now be created with an initial node count of 0.

    • Cloud monitoring can only be enabled in clusters that have monitoring scope enabled in all node pools.

    • Known Issues upgrading to v1.6.7:

      • Kubernetes 1.6.7 includes version 0.9.5 of the Google Cloud Ingress Controller. This version contains a fix for a bug that caused the controller to incorrectly synchronize Google Cloud URL Maps. Changes to the ingress resource may not have caused the Google Cloud URL Map to update. Using the fixed controller will ensure maps reflect the host and path rules. To avoid potential disruption, validate that all ingress objects contain the desired host or path rules.

    August 3, 2017

    • Users with access to Kubernetes Secret objects can no longer view the secrets' values in the Google Container Engine UI. The recommended way to access them is with the kubectl tool.

    August 1, 2017

    • The VM firewall rule (e.g. cluster-<hash>-vms) for non-legacy auto-mode networks now includes both the primary and reserved VM ranges (10.128/9) if the primary range lies outside of the reserved range.

    • You can now use the beta Ubuntu node image with clusters running Kubernetes version 1.6.4 or higher.

    • You can now run Container Engine clusters in region europe-west3 (Frankfurt).

    July 26, 2017

    July 25, 2017

    • Kubernetes v1.7.2 is available for new clusters and opt-in master upgrades.

    • Known Issues upgrading to v1.7.2:

    • Kubernetes v1.6.7 is the default version for new clusters, released according to the following schedule:

      • 2017-07-25: us-east1-d europe-west2-a
      • 2017-07-26: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b australia-southeast1-a
      • 2017-07-27: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c australia-southeast1-b europe-west2-b
      • 2017-07-28: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c australia-southeast1-c europe-west2-c asia-northeast1-c
    • gcloud beta container clusters create now supports enabling authorized networks for the Kubernetes master via the --enable-master-authorized-networks and --master-authorized-networks flags.

    • gcloud beta container clusters update now supports configuring authorized networks for the Kubernetes master via the --enable-master-authorized-networks, --no-enable-master-authorized-networks, and --master-authorized-networks flags.

    • gcloud container clusters create now allows the Kubernetes Dashboard to be disabled for a new cluster via the --disable-addons=KubernetesDashboard flag.

    • gcloud container clusters update now allows the Kubernetes Dashboard to be disabled on existing clusters via the --update-addons=KubernetesDashboard=DISABLED flag.
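Combining the new flags on an existing cluster might look like the following sketch (cluster name, zone, and CIDR are placeholders):

```shell
# Restrict master API access to a single trusted range.
gcloud beta container clusters update demo-cluster \
    --zone us-central1-a \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24

# Disable the Kubernetes Dashboard add-on.
gcloud container clusters update demo-cluster \
    --zone us-central1-a \
    --update-addons=KubernetesDashboard=DISABLED
```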

    July 18, 2017

    • Kubernetes v1.7.1 is available for new clusters and opt-in master upgrades.

    • Cluster masters running Kubernetes version v1.7.0 will be upgraded to v1.7.1 according to the following schedule:

      • 2017-07-18: us-east1-d europe-west2-a
      • 2017-07-19: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b australia-southeast1-a
      • 2017-07-20: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c australia-southeast1-b europe-west2-b
      • 2017-07-21: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c australia-southeast1-c europe-west2-c asia-northeast1-c
    • Container Engine now respects Kubernetes Pod Disruption Budgets, making stateful workloads more stable during upgrades. This also reduces disruptions during node auto-upgrades.

    • gcloud container clusters get-credentials now correctly respects the HOMEDRIVE/HOMEPATH and USERPROFILE environment variables when generating the kubectl config file on Windows.

    • Known Issues with v1.7.1:

      • Google Cloud Internal Load Balancers created through Kubernetes services (a Beta feature in 1.7) have an issue that causes health checks to fail, preventing them from functioning. This will be fixed in a future patch release.

      • Services of type=LoadBalancer in clusters that have nodes running Kubernetes v1.7 may fail Google Cloud Load Balancer health checks. However, the Load Balancers will continue to forward traffic to backends. This issue will be fixed in a future patch release and may require special upgrade actions.

    July 13, 2017

    • New views are available in the Google Container Engine UI, allowing cross-cluster overview and inspection of various Kubernetes objects. This new UI will be rolling out in the coming week:
      • Workloads: inspect and diagnose your pods and their controllers.
      • Discovery and load balancing: view details of your services, ingresses and load balancers.
      • Configuration: survey all config maps and secrets your containers are using.
      • Storage: browse all storage classes, persistent volumes and claims that your clusters use.

    July 11, 2017

    • Kubernetes v1.7.0 will be available for new clusters and opt-in master upgrades according to the following planned schedule:

      • 2017-07-11: europe-west2-a
      • 2017-07-12: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b australia-southeast1-a
      • 2017-07-13: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c australia-southeast1-b europe-west2-b
      • 2017-07-14: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c australia-southeast1-c europe-west2-c asia-northeast1-c
    • Kubernetes 1.7 is being made available as an optional version for clusters. Please see the release announcement for more details on new features.

    • You can now use HTTP re-encryption through Google Cloud Load Balancing to allow HTTPS access from the Google Cloud Load Balancer to your service backend. This feature ensures that your data is fully encrypted in all phases of transit, even after it enters Google's global network.

    • Support for all-private IP (RFC 1918) addresses is generally available. These addresses allow you to create clusters and access resources in all-private IP ranges, and extend your ability to use Container Engine clusters with existing networks.

    • Support for external source IP preservation is now generally available. This feature allows applications to be fully aware of client IP addresses for Kubernetes services you expose.

    • Cluster autoscaler now supports scaling node pools to 0 or 1, for when you don't need capacity.

    • Cluster autoscaler can now use a pricing-based expander, which applies additional cost-based constraints to let you use auto-scaling in the most cost-effective manner. This is the default as of 1.7.0 and is not user-configurable.

    • Cluster autoscaler now supports balanced scale-outs of similar node groups. This is useful for clusters that span multiple zones.

    • You can now use API Aggregation to extend the Kubernetes API with custom APIs.For example, you can now add existing API solutions such as service catalog,or build your own.

    • The following new features are available on Alpha clusters running Kubernetesversion 1.7:

      • Local storage
      • External webhook admission controllers
    • Known Issues with v1.7.0:

      • Kubelet certificate rotation is not enabled for Alpha clusters. This issue will be fixed in a future release.
      • Kubernetes services with network load balancers using static IP will cause the kube-controller-manager to crash loop, leading to multiple master repairs. See issue #48848 for more details. This issue will be fixed in a future release.

    July 10, 2017

    June 26, 2017

    • Known Issues with v1.6.6: A bug in the version of fluentd bundled with Kubernetes v1.6.6 causes JSON-formatted logs to be exported as plain text. This issue will be fixed in v1.6.7. Meanwhile, v1.6.6 will remain available as an optional version for new cluster creation and opt-in master upgrades, but will not be made the default. See issue #48018 for more details.
    • There will be no release for the week of July 3rd, since this is a holiday in the US. The next release is planned for the week of July 10th.

    June 20, 2017

    • You can now use v1.6.6 for creating new clusters.
    • The original plan to upgrade container cluster masters to 1.6 this week has been postponed due to a bug in the GLBC ingress controller that causes unintentional overwrites of manual health check edits (see known issues for v1.6.4). This bug is fixed in v1.6.6.
    • DeleteNodePool now drains all nodes in the pool before deletion.
    • You can now run Container Engine clusters in region australia-southeast1 (Sydney).

    June 13, 2017

    • v1.5.7 will no longer be available for new clusters and master upgrades.
    • All cluster masters will be upgraded to v1.6.4 in the week of 2017-06-19.

    June 5, 2017

    • Cluster masters running Kubernetes versions v1.6.0 - v1.6.3 will be upgraded to v1.6.4 according to the following schedule:
      • 2017-06-05: us-east1-d asia-northeast1-c
      • 2017-06-06: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b australia-southeast1-a europe-west2-a
      • 2017-06-07: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c australia-southeast1-b europe-west2-b
      • 2017-06-08: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c australia-southeast1-c europe-west2-c
    • After 2017-06-12, v1.5.7 will no longer be available for new clusters and master upgrades.
    • You can now run Container Engine clusters in region europe-west2 (London).

    June 1, 2017

    May 30, 2017

    • Kubernetes v1.6.4 is the default version for new clusters, released according to the following schedule:
      • 2017-05-30: us-east1-d asia-northeast1-c
      • 2017-05-31: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b australia-southeast1-a europe-west2-a
      • 2017-06-01: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c australia-southeast1-b europe-west2-b
      • 2017-06-02: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c australia-southeast1-c europe-west2-c

    May 24, 2017

    • Kubernetes v1.6.4 is available for new clusters and opt-in master upgrades.
    • v1.6.1 is no longer available for container cluster node upgrades/downgrades.
    • The default cluster version for new clusters will be changed to Kubernetes v1.6.4 in the week of May 29th.
    • Kubernetes v1.6.3 was skipped due to known issues that have been fixed in v1.6.4.

    May 17, 2017

    • You can now create clusters with more than 500 nodes in zones europe-west1-b and us-central1-a.
    • Fixed the known issue with Container Engine's IP Rotation feature where the cluster SSH firewall rule was not being updated.
    • Container Engine integration with Google Cloud Labels is now available in Beta. For more information, see Cluster Labeling.

    May 12, 2017

    May 10, 2017

    • Cluster masters running Kubernetes versions v1.5.6 and below will be upgraded to v1.5.7 according to the following schedule:
      • 2017-05-09: us-east1-d asia-northeast1-c
      • 2017-05-10: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b
      • 2017-05-11: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c
      • 2017-05-12: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c
    • v1.6.0 is no longer available for container cluster node upgrades/downgrades.

    Known Issues

    • A known issue with Container Engine's IP Rotation feature can cause it to break Kubernetes features that depend on the proxy endpoint (such as kubectl exec and kubectl logs), as well as cluster metrics exports into Stackdriver. This issue only affects your cluster if you ran CompleteIPRotation and have also disabled the default SSH firewall rule for cluster nodes. There is a simple manual fix; see IP Rotation known issues for details.

    May 3, 2017

    May 2, 2017

    • Kubernetes v1.5.7 is the default version for new clusters. This version will be available for new clusters and opt-in master upgrades according to the following planned schedule:
      • 2017-05-02: us-east1-d asia-northeast1-c
      • 2017-05-03: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a asia-southeast1-a us-east4-b
      • 2017-05-04: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b asia-southeast1-b us-east4-c
      • 2017-05-05: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b us-east4-a us-west1-c
    • Cluster masters running Kubernetes versions v1.6.0 and v1.6.1 will be upgraded to v1.6.2.

    April 26, 2017

    • Kubernetes v1.6.2 is available for new clusters and opt-in master upgrades.
    • You can create a cluster with HTTP basic authentication disabled by passing an empty username: gcloud container clusters create CLUSTER_NAME --username="". This feature only works with version 1.6.0 and later.
    • Fixed a bug where SetMasterAuth would fail silently on clusters below v1.6.0. SetMasterAuth is only allowed for clusters at v1.6.0 and above.
    • Fixed a bug for clusters at v1.6.0 and above where fluentd pods were mistakenly created on all nodes when logging was disabled.
    • The gcloud kubectl version is now 1.6.2 instead of 1.6.0.
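    The empty-username form above can be sketched as follows; the cluster name is hypothetical, and the command requires cluster version 1.6.0 or later:

    ```shell
    # Create a cluster with HTTP basic authentication disabled
    # by passing an empty username.
    gcloud container clusters create no-basic-auth-cluster --username ""
    ```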

    April 12, 2017

    • Kubernetes v1.6.1 will be available for new clusters and opt-in master upgrades according to the following planned schedule:
      • 2017-04-12: us-east1-d asia-northeast1-c
      • 2017-04-13: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a
      • 2017-04-14: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b
      • 2017-04-17: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b
    • Kubernetes v1.5.6 is still the default version for new clusters.
    • Container Engine hosted masters will be upgraded to v1.5.6 according to the planned schedule mentioned above.
    • Known issue:
      • gcloud container clusters update --set-password (or --generate-password), for setting or rotating your cluster admin password, does not work on clusters running Kubernetes version 1.5.x or earlier. Please use this method only on clusters running Kubernetes version 1.6.x or later.

    April 4, 2017

    • Kubernetes v1.6.0 will be available for new clusters and opt-in master upgrades according to the following planned schedule:
      • 2017-04-04: us-east1-d asia-northeast1-c
      • 2017-04-05: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a
      • 2017-04-06: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b
      • 2017-04-07: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b
    • Kubernetes v1.5.6 is still the default version for new clusters.
    • Container-Optimized OS is now generally available. You can create or upgrade clusters and node pools that use Container-Optimized OS by specifying imageType values of either COS or GCI.
    • A new system daemon, the node problem detector, is introduced in Kubernetes v1.6 on COS node images. It detects node problems (e.g. kernel, network, or container runtime issues) and reports them as node conditions and events.
    • Starting in 1.6, a default StorageClass instance with the gce-pd provisioner is installed. All unbound PVCs that don't specify a StorageClass will automatically use the default provisioner, which is different behavior from previous releases and can be disabled by modifying the default StorageClass and removing the storageclass.beta.kubernetes.io/is-default-class annotation. This feature replaces alpha dynamic provisioning, but the alpha annotation will still be allowed and will retain the same behavior.
    • gcloud container clusters create|get-credentials will now configure kubectl to use the credentials of the active gcloud account by default, instead of using application default credentials. This requires kubectl 1.6.0 or higher. You can update kubectl by running gcloud components update kubectl. If you prefer to use application default credentials to authenticate kubectl to Google Container Engine clusters, you can revert to the previous behavior by setting the container/use_application_default_credentials property:
      • gcloud config set container/use_application_default_credentials true
      • export CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS=true
    • Google Cloud CLI kubectl version updating to 1.6.0.
    • New clusters launched at 1.6.0 will use etcd3 in the master. Existing cluster masters will be automatically updated to use etcd3 in a future release.
    • Starting in 1.6, RBAC can be used to grant permissions for users and Service Accounts to the cluster's API. To help transition to using RBAC, the cluster's legacy authorization permissions are enabled by default, allowing Kubernetes Service Accounts full access to the API like they had in previous versions of Kubernetes. An option will be rolled out soon to allow the legacy authorization mode to be disabled in order to take full advantage of RBAC.
    • You can now use gcloud to set or rotate the admin password for Container Engine clusters by running
      • gcloud container clusters update --set-password
      • gcloud container clusters update --generate-password
    • During node upgrades, Container Engine will now verify and recreate the Managed Instance Group for a node pool (at size 0) if required.
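    The default-StorageClass behavior introduced in 1.6 can be inspected and opted out of with kubectl; a minimal sketch, where the StorageClass name "standard" is an assumption (substitute the class marked "(default)" in your cluster):

    ```shell
    # List StorageClasses; in 1.6 a default one backed by the
    # gce-pd provisioner is installed, and unbound PVCs without a
    # StorageClass use it automatically.
    kubectl get storageclass

    # Opt out by removing the is-default-class annotation
    # (the trailing "-" deletes an annotation).
    kubectl annotate storageclass standard \
        storageclass.beta.kubernetes.io/is-default-class-
    ```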

    March 29, 2017

    • Kubernetes v1.5.6 is the default version for new clusters. This version will be available for new clusters and opt-in master upgrades according to the following planned schedule:

      • 2017-03-29: us-east1-d asia-northeast1-c
      • 2017-03-30: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a
      • 2017-03-31: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b
      • 2017-04-03: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b
    • Cluster and node pool create requests will return a 4xx error (instead of 5xx) if an invalid service account is specified.

    • Return a more accurate error message for cluster requests if the Container API is not enabled.

    March 20, 2017

    • Update Google Container Engine's kubectl from version 1.5.3 to 1.5.4.
    • Container Engine hosted masters will be upgraded to v1.5.4 according to the following planned schedule:
      • 2017-03-23: us-east1-d asia-northeast1-c
      • 2017-03-24: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a
      • 2017-03-27: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b
      • 2017-03-28: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b

    March 16, 2017

    • Kubernetes v1.5.4 is the default version for new clusters.
    • Added the --enable-autorepair flag to gcloud beta container clusters create, gcloud beta container node-pools create, and gcloud beta container node-pools update.
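    A hedged sketch of the new flag; the cluster and pool names are hypothetical:

    ```shell
    # Create a node pool with automatic node repair enabled
    # (a beta feature at the time).
    gcloud beta container node-pools create repair-pool \
        --cluster my-cluster --enable-autorepair
    ```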

    March 6, 2017

    • Container Engine node auto-repair is now available in Beta. For more information, see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair
    • Google Cloud console now allows enabling automatic repair for new clusters and node pools.

    March 1, 2017

    • Container Engine hosted masters

      • running v1.4 will be upgraded to v1.4.9.
      • running v1.5 will be upgraded to v1.5.3.

      according to the following planned schedule:

      • 2017-03-02: us-east1-d asia-northeast1-c
      • 2017-03-03: europe-west1-c us-central1-b us-west1-a asia-east1-a asia-northeast1-a
      • 2017-03-06: us-central1-f europe-west1-b asia-east1-c us-east1-c us-west1-b asia-northeast1-b
      • 2017-03-07: us-central1-a us-central1-c europe-west1-d asia-east1-b us-east1-b

    February 23, 2017

    • Kubernetes v1.5.3 is the default version for new clusters.
    • Google Cloud CLI kubectl version updating to 1.5.3.

    February 14, 2017

    • It is no longer necessary to disable the HttpLoadBalancing add-on when you create a cluster without adding the compute read/write scope to nodes. Previously, when you created a cluster without adding the compute read/write scope, you were required to disable HttpLoadBalancing.

    January 31, 2017

    • Google Cloud CLI kubectl version updating to 1.5.2.

    January 26, 2017

    • Kubernetes v1.5.2 is the default version for new clusters.

    • The Google Cloud CLI and kubectl 1.5+ support using gcloud credentials for authentication. Currently, gcloud container clusters create and gcloud container clusters get-credentials configure kubectl to use Application Default Credentials to authenticate to container clusters. If these differ from the Identity and Access Management (IAM) role that the Google Cloud CLI is using, kubectl requests can fail authentication (#30617). With Google Cloud CLI 140.0.0 and kubectl 1.5+, the Google Cloud CLI can configure kubectl to use its own credentials. This means that if, for example, the gcloud command line is configured to use a service account, kubectl will authenticate as the same service account.

      To enable using the Google Cloud CLI's own credentials, set the container/use_application_default_credentials property to false:

      export CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS=false
      # or
      gcloud config set container/use_application_default_credentials false

      The current default behavior is to continue using application default credentials. The Google Cloud CLI credentials will be made the default for kubectl configuration (via gcloud container clusters create|get-credentials) in a future release.

    January 17, 2017

    January 10, 2017

    • Rollout of Kubernetes v1.5 as the default for new clusters is postponed until v1.5.2 to fix known issues with v1.5.1.

    • Fixed an issue where node upgrades would fail if one of the nodes was not registered with the master.

    • Google Cloud CLI kubectl version updating to 1.5.1.

    Known Issues with Kubernetes v1.5.1

    • #39680 Defining a pod with a resource request of 0 will cause the Controller Manager to crash loop.

    • #38322 Kubelet can evict or refuse to admit critical pods (kube-proxy, static pods) when under memory pressure.

    January 4, 2017

    • The default cluster version for new clusters will be changed to Kubernetes v1.5.1 in the week of January 9th.

    January 3, 2017

    • Google Cloud console now allows setting newly created clusters and node pools to automatically upgrade when a new Kubernetes version becomes available. See documentation for details.

    December 14, 2016

    • Kubernetes v1.4.7 is the default version for new clusters.
    • Kubernetes v1.5.1 is available for new clusters.
    • Node pools can now opt in to automatically upgrade when a new Kubernetes version becomes available. See documentation for details.
    • Node pool upgrades can now be rolled back using the gcloud alpha container node-pools rollback <pool-name> command. See gcloud alpha container node-pools rollback --help for more details.
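    A hedged sketch of the rollback command; pool, cluster, and zone names are hypothetical:

    ```shell
    # Roll back a node pool whose upgrade failed or was aborted midway.
    gcloud alpha container node-pools rollback my-pool \
        --cluster my-cluster --zone us-central1-a
    ```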

    December 7, 2016

    • Google Cloud console now allows choosing between Container-VM Image (GCI) and the deprecated container-vm when adding new node pools to existing clusters. To learn more about image types, click here.

    December 5, 2016

    • Container Engine hosted masters running v1.4 will be upgraded to v1.4.6.

    November 29, 2016

    • Increased the master disk size in large Google Container Engine clusters. This is needed because in large clusters etcd needs much more IOPS.

    • Changed the gcloud container images list-tags command to support user-specified filters on occurrences, and exposed a column summarizing vulnerability information.

    November 15, 2016

    November 8, 2016

    • Container Engine hosted masters running v1.4 have been upgraded to v1.4.5.

    • Container Engine hosted masters running v1.3 will be upgraded to v1.4.5 according to the following planned schedule:

      • 2016-11-09: us-east1-d
      • 2016-11-10: asia-east1-a, asia-northeast1-a, europe-west1-c, us-central1-b, us-west1-a
      • 2016-11-11: asia-east1-c, asia-northeast1-b, europe-west1-b, us-central1-f, us-east1-c, us-west1-b
      • 2016-11-14: asia-east1-b, asia-northeast1-c, europe-west1-d, us-central1-a, us-central1-c, us-east1-b

    November 7, 2016

    November 2, 2016

    November 1, 2016

    • Kubernetes v1.4.5 is the default version for new clusters.

    • Kubernetes v1.4.5 and v1.3.10 include fixes for CVE-2016-5195 (Dirty COW), which is a Linux kernel vulnerability that allows privilege escalation. If your clusters are running nodes with lower versions, we strongly encourage you to upgrade them to a version of Kubernetes that includes a node image that is not vulnerable, such as Kubernetes 1.3.10 or 1.4.5. To upgrade a cluster, see https://cloud.google.com/kubernetes-engine/docs/clusters/upgrade.

    • Upgrade operations can now be cancelled using gcloud alpha container operations cancel <operation_id>. See gcloud alpha container operations cancel --help for more details.

    October 17, 2016

    • Kubernetes v1.4.3 is the default version for new clusters.

    • Reminder that the base OS image for nodes has changed in the 1.4 release. A set of known issues have been identified and documented here. If you suspect that your application or workflow is having problems with new clusters, you may select the old ContainerVM by following the opt-out instructions documented here.

    • Rewrote the node upgrade logic to make it less disruptive, by waiting for each node to register with the Kubernetes master before upgrading the next node.

    • Added support for new clusters and node pools to use preemptible VM instances by using the --preemptible flag. See gcloud beta container clusters create --help and gcloud beta container node-pools create --help for more details.
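    A hedged sketch of the preemptible flag; the cluster and pool names are hypothetical:

    ```shell
    # Create a node pool of preemptible VMs for cost-sensitive,
    # interruption-tolerant workloads (a beta feature at the time).
    gcloud beta container node-pools create preemptible-pool \
        --cluster my-cluster --preemptible
    ```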

    October 10, 2016

    • Kubernetes v1.4.1 is becoming the default version for new clusters.

    • Reminder that the base OS image for nodes has changed in the 1.4 release. A set of known issues have been identified and documented here. If you suspect that your application or workflow is having problems with new clusters, you may select the old ContainerVM by following the opt-out instructions documented here.

    • Fix a bug in gcloud beta container images list-tags.

    • Add support for Kubernetes labels on new clusters and node pools by passing --node-labels=label1=value1,label2=value2.... See gcloud container clusters create --help and gcloud container node-pools create --help for more details and examples.

    • Update kubectl to version 1.4.1.
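    A hedged sketch of the --node-labels flag; the cluster name and labels are hypothetical:

    ```shell
    # Create a cluster whose nodes carry custom Kubernetes labels.
    gcloud container clusters create labeled-cluster \
        --node-labels=env=dev,team=web

    # The labels appear on the nodes and can drive scheduling
    # via nodeSelector.
    kubectl get nodes --show-labels
    ```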

    October 5, 2016

    • Can now specify the cluster-version when creating Google Container Engine clusters.

    • Update kubectl to version 1.4.0.

    • Introduce 1.3.8 as a valid cluster version. 1.3.8 fixes a log rotation leak on the master.

    September 27, 2016

    • Kubernetes v1.4.0 is becoming the default version for new clusters.

    • Container-VM Image (GCI), which was introduced earlier this year, is now the default ImageType for new clusters and node pools. The old container-vm is now deprecated; it will be supported for a limited time. To learn more about how to use GCI, click here.

    • Can now create temporary clusters with all Kubernetes alpha features enabled via

      gcloud alpha container clusters create --enable-kubernetes-alpha

      See documentation for details.

    • Can now add custom Kubernetes labels on new clusters and node pools via

      gcloud alpha container clusters create --node-labels=key1=value1,key2=value2...

      See gcloud alpha container clusters create --help for details.

    Known Issues with v1.4.0 masters and older nodes

    • init-containers are now supported on Container Engine, but only when the master and nodes are running 1.4.0 or higher. Other configurations are not supported.

    • Customers manually upgrading masters to 1.4 should be aware that the lowest node version supported with it is 1.2.

    September 20, 2016

    • Container Engine hosted masters will be upgraded to v1.3.7 in zones according to the following planned schedule:

      • 2016-09-21: us-east1-d
      • 2016-09-22: asia-east1-a, europe-west1-c, us-central1-b, us-west1-a
      • 2016-09-23: asia-east1-c, europe-west1-b, us-central1-f, us-east1-c, us-west1-b
      • 2016-09-26: asia-east1-b, europe-west1-d, us-central1-a, us-central1-c, us-east1-b
    • Google Cloud CLI kubectl version updated to v1.3.7

    September 15, 2016

    • Kubernetes v1.3.7 is the default version for new clusters.

    • Container Engine hosted masters have been upgraded to v1.3.6.

    • Known Issues with v1.3.6 fixed in v1.3.7

      • #32415 Fixes a bug in kubelet hostport logic which flushes the KUBE-MARK-MASQ iptables chain.

      • #30790 Fixes the panic that occurs in the federation controller manager when registering a Container Engine cluster to the federation.

    September 6, 2016

    • Cluster update to add node locations (API: rest/v1/projects.zones.clusters/update, CLI: gcloud beta container clusters update --additional-zones) will now wait for all nodes to be healthy before marking the operation completed (DONE).

    August 30, 2016

    • Kubernetes v1.3.5 is the default version for new clusters.

    • Known Issues with v1.3.5 fixed in v1.3.6

      • #27653 Volume manager should be more robust across restarts.

      • #29997 loadBalancerSourceRanges does not work on Container Engine.

    • Known Issues with older versions fixed in v1.3.6

      • #31219 Graceful termination fails if terminationGracePeriodSeconds > 2

      • #30828 Netsplit causes pods to get stuck in NotReady for < 1.2 nodes

      • #29358 Google Compute Engine PD Detach fails if node no longer exists.

    • cluster.master_auth.password is no longer required in a clusters.create request. If a password is not specified for a cluster, one will be generated.

    • Google Cloud CLI kubectl version updated to v1.3.5

    • Image type selection for gcloud container commands is now GA. Can now use gcloud container clusters create --image-type=... and gcloud container clusters upgrade --image-type=...

    August 17, 2016

    • Kubernetes v1.3.5 is the default version for new clusters.

    • Google Cloud CLI changed the container/use_client_certificate property default value to false. This makes the gcloud container clusters create and gcloud container clusters get-credentials commands configure kubectl to use Google OAuth2 credentials by default instead of the legacy client certificate.

    August 8, 2016

    • Kubernetes v1.3.4 is the default version for new clusters.

    • Google Cloud CLI kubectl version updated to v1.3.3.

    July 29, 2016

    July 22, 2016

    • Kubernetes v1.3.2 is the default version for new clusters.

    • 1.3.0 clusters have been upgraded to 1.3.2 to pick up the fix for bad route creation.

    • Fixed the issue where clusters with a non-default master auth username were unable to authenticate using HTTP Basic Auth.

    • The DNS Replica Auto-Sizer now creates a minimum of 2 replicas, except for single-node clusters.

    • Google Cloud CLI kubectl version updated to v1.2.5.

    • Google Cloud console now supports CIDR ranges with mask sizes from /8 to /19 on cluster creation.

    • Google Cloud console now supports specifying additional zones on cluster creation.

    • Google Cloud console now supports creating clusters with up to 2000 nodes (across multiple node pools).

    • Google Cloud console now supports specifying a local SSD count on cluster creation and while creating and editing node pools.

    • Known Issues

      • #29051 PVC volume not detached if pod deleted via namespace deletion.

      • #29358 Google Compute Engine PD Detach fails if node no longer exists.

      • #28616 Mounting (only 'default-token') volume takes a long time when creating a batch of pods (parallelization issue).

      • #28750 Error while tearing down pod, "device or resource busy" on service account secret.

    July 11, 2016

    • Kubernetes v1.3.0 is becoming the default version for new clusters.

    • Existing Google Container Engine cluster masters were upgraded to Kubernetes v1.2.5 over the previous week.

    • Improved error messages when a cluster is already being operated on.

    • Now supports creating clusters and node pools with local SSDs attached to nodes. See Container Cluster Operations for examples.

    • Cluster autoscaling is now available for clusters running v1.3.0. Autoscaling options can be specified on cluster create and update. See Container Cluster Operations for examples.

    • Existing single-zone clusters can now be updated to multi-zone clusters by running gcloud beta container clusters update --additional-zones. See Container Cluster Operations for examples.

    • Known issues:

      • Scaling v1.3.0 clusters after creation (including via cluster autoscaling) can cause bad routes to be created with colliding target CIDRs. Bad routes can be detected and manually fixed as follows:

        1. List routes with duplicate destination ranges:
           gcloud compute routes list --filter="name ~ gke-$CLUSTER_NAME" --format='value(destRange)' | uniq -d

           If the above returns any values, the bad routes can be fixed by deleting one of the target instances. A new one will be automatically recreated with a working route.

        2. Replace $DUPE_RANGE with a destination range from step 1:
           gcloud compute routes list --filter="destRange:$DUPE_RANGE"

        3. Delete one of the target instances listed by step 2:
           gcloud compute instances delete $TARGET_INSTANCE
      • kubectl authorization for v1.3.0 clusters fails if the cluster is created with a non-default master auth username (gcloud container clusters create --username ...). This can be worked around by authenticating with the cluster certificate instead, by running

        kubectl config unset users.gke_$PROJECT_$ZONE_$NAME.username

        on the machine from which you want to run kubectl, where $PROJECT, $ZONE, and $NAME are the cluster's project ID, zone, and name, respectively.

    July 1, 2016

    June 20, 2016

    • Google Cloud console supports creation and deletion of node pools.

    • (Breaking change) The --wait flag for the gcloud container clusters command group is now deprecated; please use the --async flag instead.

    June 13, 2016

    • Bug fixes.

    June 7, 2016

    • Fixed a bug where kubectl for the wrong architecture was installed on Windows. We now install both 32- and 64-bit versions.

    • Google Cloud console supports resizing and upgrading node pools.

    June 3, 2016

    • Bug fixes.

    May 27, 2016

    • The gcloud container clusters update command is now available for updating cluster settings of an existing container cluster.

    • The gcloud container node-pools commands are now available for creating, deleting, describing, and listing node pools of a cluster.

    • Google Cloud console supports listing node pools. Listed node pools can also be upgraded/downgraded to supported Kubernetes versions.

    May 18, 2016

    • gcloud alpha container commands (e.g. create) now support specifying alternate ImageTypes, such as the newly available Beta Container-VM Image. To try it out, update to the latest gcloud (gcloud components install alpha ; gcloud components update) and then create a new cluster: gcloud alpha container clusters create --image-type=GCI $NAME. Support for ImageTypes in Google Cloud console will follow at a later date.

    • The gcloud container clusters list command now sorts the clusters based on zone and then on cluster name.

    • The gcloud container clusters create command now allows specifying --max-nodes-per-pool (default 1000) to create multiple node pools for large clusters.

    May 16, 2016

    • Container Engine hosted masters have been upgraded to v1.2.4.

    • Google Cloud CLI kubectl version updated to v1.2.4.

    • CreateCluster calls now accept multiple NodePool objects.

    May 6, 2016

    • Container Engine hosted masters have been upgraded to v1.2.3.

    • Google Cloud CLI kubectl version updated to v1.2.3.

    April 29, 2016

    • Kubernetes v1.2.3 is the default version for new clusters.

    • gcloud container clusters resize now allows specifying a node pool via --node-pool.

    April 21, 2016

    • Can now create a multi-zone cluster, which is a cluster whose nodes span multiple zones, enabling higher availability of applications running in the cluster. More details on multi-zone clusters can be found at http://kubernetes.io/docs/admin/multiple-zones/. The ability to convert existing clusters to be multi-zone will be coming soon.

    • gcloud container clusters create now allows specifying multiple zones within a region for your cluster's nodes to be created in by using the --additional-zones flag.

    • Fixed a bug that caused the kubectl component to be missing from gcloud components list on Windows.

    • Google Cloud CLI kubectl version updated to v1.2.2.
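    A hedged sketch of the --additional-zones flag; the cluster name and zone choices are hypothetical:

    ```shell
    # Create a cluster in us-central1-b whose node pool is
    # replicated into two additional zones of the same region.
    gcloud container clusters create multi-zone-cluster \
        --zone us-central1-b \
        --additional-zones us-central1-c,us-central1-f
    ```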

    April 13, 2016

    • Known issue: the "bastion route" workaround for accessing services from outside of a Kubernetes cluster no longer works with 1.2.0 - 1.2.2 nodes, due to a change in kube-proxy. If you are using this workaround, we recommend not upgrading nodes to 1.2.x at this time. This will be addressed in a future patch release.

    April 11, 2016

    • Kubernetes v1.2.2 is the default version for new clusters.

    • gcloud alpha container clusters update now allows enabling/disabling addons for Container Engine clusters via the --update-addons flag.

    • gcloud container clusters create now supports disabling the HPA and Ingress controller addons via the --disable-addons flag.

    • Google Cloud console supports the "Google Kubernetes Engine master upgrade" option, which allows proactive upgrade of cluster masters. Note this is the same functionality available via gcloud container clusters upgrade --master.
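    A hedged sketch of the addon flags; the cluster name and the addon value spellings are assumptions (see gcloud container clusters create --help for the accepted names):

    ```shell
    # Create a cluster with the Ingress (HTTP load balancing) and
    # HPA addons disabled; addon names here are assumed.
    gcloud container clusters create lean-cluster \
        --disable-addons HttpLoadBalancing,HorizontalPodAutoscaling

    # Re-enable an addon later (an alpha command at the time).
    gcloud alpha container clusters update lean-cluster \
        --update-addons HttpLoadBalancing=ENABLED
    ```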

    April 4, 2016

    • Kubernetes v1.2.1 is the default version for new clusters.

    March 29, 2016

    • The API Discovery Doc and Client Libraries have been updated.

    • gcloud container clusters create|get-credentials will warn|fail, respectively, if the HOME env var isn't set. The variable is required to store kubectl credentials (kubeconfig).

    • The Google Cloud CLI kubectl component is now available for Windows.

    March 21, 2016

    • Kubernetes v1.2.0 is the default version for new clusters. This update contains significant changes from v1.1, described in detail at releases-1.2.0. Major changes include:

      • Increased cluster scale by 400% to 1000 nodes with 30,000 pods per cluster.
      • Kubelet supports 100 pods per node with 4x reduced system overhead.
      • Deployment and DaemonSet APIs are now Beta. Job and HorizontalPodAutoscaler APIs moved from Beta to GA.
      • Ingress supports HTTPS.
      • Kube-proxy now defaults to an iptables-based proxy.
      • Docker v1.9.1.
      • Dynamic configuration for applications via the ConfigMap API provides an alternative to baking in command-line flags when building a container.
      • New Kubernetes GUI that enables the same functionality as the CLI.
      • Graceful node shutdown via the kubectl drain command to gracefully evict pods from nodes.
    • Access scopes service.management and servicecontrol are now enabled by default for new Container Engine clusters.

    • Clusters created without compute read/write node scopes must also disable HttpLoadBalancing. Note that disabling compute read/write is only possible via the raw API, not the Google Cloud CLI or the Google Cloud console.

    • Cluster updates to clusters whose node scopes do not have compute read/write must also specify an AddonsConfig with HttpLoadBalancing disabled.

    • Google Cloud CLI kubectl version updated to 1.2.0.

    March 16, 2016

    • CreateCluster will now succeed if the kubernetes API reports at least 99% ofnodes have registered and are healthy within a startup deadline.

    • gcloud container clusters create prints a warning if cluster creation finished with at least 99% but less than 100% of nodes registered and healthy.

    March 2, 2016

    • Container Engine hosted master upgrades from v1.1.7 to v1.1.8 were completed this week.

    February 26, 2016

    • Kubernetes v1.1.8 is the default version for new clusters.

    • DeleteCluster will fail fast with an error if there are backend services that target the cluster's node group, as the existence of such services will block deletion of the nodes.

    • You can now self-initiate an upgrade of a cluster's hosted master to the latest supported Kubernetes version by running gcloud container clusters upgrade --master. This lets you access versions ahead of automatic Container Engine hosted master upgrades.
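    A typical invocation might look like the following; the cluster name and zone are placeholders:

    ```shell
    # List the versions the server currently supports, then upgrade the master.
    gcloud container get-server-config --zone us-central1-a
    gcloud container clusters upgrade my-cluster --master --zone us-central1-a
    ```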

    February 10, 2016

    • Container Engine hosted master upgrades from v1.1.3 and v1.1.4 to v1.1.7 were completed this week.

    • The Google Cloud CLI kubectl version is 1.1.7.

    January 28, 2016

    • Kubernetes v1.1.7 is the default version for new clusters.

    January 15, 2016

    • Kubernetes v1.1.4 is the default version for new clusters.

    • You can now run gcloud container clusters resize to resize Container Engine clusters.

    • gcloud container clusters describe and list now notify the user when a node upgrade is available.

    • The Google Cloud CLI kubectl version is 1.1.3.

    January 5, 2016

    • Fixed an issue where the Google Cloud console incorrectly prevented users from creating clusters with Cloud Monitoring enabled.

    • Fixed an issue where users could not create clusters in domain-scoped projects.

    December 8, 2015

    • Kubernetes v1.1.3 is the default version for new clusters.

    • Added support for custom machine types.

    • Create cluster now checks that the network for the cluster has a route to the default internet gateway. If no such route exists, the request returns an error immediately, instead of timing out waiting for the nodes to register.

    • gcloud container clusters upgrade now prompts for confirmation.

    December 3, 2015

    • The Google Container Engine v1beta1 API, which was previously deprecated, is now disabled.

    • Container Engine hosted masters were upgraded to v1.1.2 this week, except for clusters with nodes older than v1.0.1, which will be upgraded once v1.1.3 is available.

    November 30, 2015

    • Kubernetes v1.1.2 is the default version for new clusters.

    • Container Engine now supports manual-subnet networks. Subnetworks are an Alpha feature of Google Compute Engine and you must be whitelisted to use them. See the Subnetworks documentation for whitelist information.

      Once whitelisted, the subnetwork is specified in the cluster create request. In the REST API, this is specified as the value of the subnetwork field of the cluster object; when using gcloud container commands, pass a --subnetwork flag to gcloud container clusters create.
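    Assuming a whitelisted project, the gcloud form might look like this; the network, subnetwork, and cluster names are placeholders:

    ```shell
    # Create a cluster in an existing (Alpha) subnetwork.
    gcloud container clusters create my-cluster \
      --network my-network \
      --subnetwork my-subnetwork \
      --zone us-central1-a
    ```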

    • Improved reliability of cluster creation and deletion.

    November 18, 2015

    November 12, 2015

    The release documented below is being rolled out over the next few days.

    • Clusters can now be created with up to 250 nodes.

    • The Google Compute Engine load balancer controller addon is added by default to new clusters. Learn more.

    • Kubernetes v1.1.1 is the default version for new clusters.

      Important note: the packaged kubectl is version 1.0.7; consequently, new Kubernetes 1.1 APIs such as autoscaling will not be available via kubectl until next week's push of the kubectl binary.

      Users who want access before then can manually download a 1.1 kubectl from:

      and then run chmod a+x kubectl; cp kubectl $(which kubectl) to install it.

    • Kubernetes v0.19.3 and v0.21.4 are no longer supported for nodes.

    • New clusters using the f1-micro machine type must contain at least three nodes. This ensures that there is enough memory in the cluster to run more than just a couple of very small pods.

    • kubectl version is 1.0.7.

    November 4, 2015

    • Kubernetes v1.0.7 is the default version for new clusters.

    • Existing clusters will have their masters upgraded from v1.0.6 to v1.0.7 over the coming week.

    • Added support for subnetworks (Alpha).

    October 27, 2015

    • Added a detail field to operation objects to show progress details for long-running operations (such as cluster updates).

    • Better categorization of errors caused by projects not being fully initialized with the default service accounts.

    October 19, 2015

    • The --container-ipv4-cidr flag has been deprecated in favor of --cluster-ipv4-cidr.

    • The current node count of Container Engine clusters is available from the REST API.

    • Metrics in Cloud Monitoring are now available with a much shorter delay.

    • Cluster names now only need to be unique within each zone, not within the entire project.

    • Error messages involving regular expressions have more useful, human-readable hints.

    October 12, 2015

    • You can now specify custom metadata to be added to the nodes when creating a cluster with the REST API.

    September 25, 2015

    • Cluster self links now contain the project ID rather than the project number.

    • kubectl version is 1.0.6.

    September 18, 2015

    • Kubernetes v1.0.6 is the default version for new clusters.

    • Existing clusters will have their masters upgraded from v1.0.4 to v1.0.6 over the coming week.

    September 4, 2015

    • Fixed a bug where a CreateCluster request would be rejected if it contained a ClusterApiVersion. Since the field is output-only, it is now silently ignored.

    August 31, 2015

    • To avoid creating clusters without any space for non-system containers, there are now limits on clusters consisting of f1-micro instances:

      • A single-node f1-micro cluster must disable both logging and monitoring.
      • A two-node f1-micro cluster must disable at least one of logging and monitoring.

    August 26, 2015

    Google Container Engine is out of beta.

    • All gcloud beta container commands are now in the gcloud container command group instead.

    • You can now use the Google Container Engine API to enable or disable Google Cloud Monitoring on your cluster. Use the desiredMonitoringService field of the cluster update method. When updating this field, the Kubernetes apiserver will see a brief outage as the master is updated.

    August 14, 2015

    • Kubernetes v1.0.3 is the default version for new clusters.

    • The compute and devstorage.read_only auth scopes are no longer required and are no longer automatically added server-side to new clusters. The gcloud command and the Google Cloud console still add these scopes on the client side when creating new clusters; the REST API does not.

    • Listing container clusters in a non-existent zone now results in a 404: Not Found error instead of an empty list.

    • The get-credentials command has moved to gcloud beta container clusters get-credentials. Running gcloud beta container get-credentials prints an error redirecting to the new location.

    • The new gcloud beta container get-server-config command returns:

      • The default Kubernetes version currently used for new clusters.
      • The list of supported versions for node upgrades (via gcloud beta container clusters upgrade).
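    For example (the output shape may vary by SDK release; the zone is a placeholder):

    ```shell
    # Prints the default cluster version and the valid versions for node upgrades.
    gcloud beta container get-server-config --zone us-central1-a
    ```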

    August 4, 2015

    • Kubernetes v1.0.1 is the default version for new clusters.

    • kubectl version is 1.0.1.

    • Removed the v1beta1 API discovery doc in preparation for deprecation.

    • The gcloud alpha container commands target the Container Engine v1 API. The options for gcloud alpha container clusters create have been updated accordingly:

      • --user renamed to --username.
      • --cluster-api-version removed. The cluster version is not selectable in the v1 API; new clusters are always created at the latest supported version.
      • --image option removed. The source image is not selectable in the v1 API; clusters are always created with the latest supported ContainerVM image. Note that using an unsupported image (i.e. not ContainerVM) would result in an unusable cluster in most cases anyway.
      • Added --no-enable-cloud-monitoring to turn off cloud monitoring (on by default).
      • Added --disk-size option for specifying the boot disk size of node VMs.
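    Putting the updated options together, a create call against the v1 API might look like this; all names and values are placeholders:

    ```shell
    gcloud alpha container clusters create my-cluster \
      --username admin \
      --disk-size 100 \
      --no-enable-cloud-monitoring \
      --zone us-central1-a
    ```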

    July 27, 2015

    • A firewall rule is now created at the time of cluster creation to make node VMs accessible via SSH. This ensures that the Kubernetes proxy functionality works.

    • Updated the admission controllers list to match the recommended list for v1.0.

    • Disabled the --source-image option in the v1beta1 API. Attempting to run gcloud alpha container clusters create --source-image now returns an error.

    • Removed the option to create clusters in the 172.16.0.0/12 private IP block.

    July 24, 2015

    Upgrade to Kubernetes v1 - Action Required

    Users must upgrade their configuration files to the v1 Kubernetes API before August 5th, 2015. This applies to any Beta Container Engine cluster created before July 21st.

    Google Container Engine will upgrade container cluster masters beginning on August 5th to use the v1 Kubernetes API. If you'd like to upgrade earlier, please sign up for an early upgrade.

    This upgrade removes support for the v1beta3 API. All configuration files must be formatted according to the v1 specification to ensure that your cluster remains functional. The v1 API represents the production-ready set of APIs for Kubernetes and Container Engine.

    Some helpful resources are:

    If your configuration files already use the v1 specification, no action is required.

    July 15, 2015

    • Kubernetes v0.21.2 is the default version for new clusters.

    • Existing masters running versions 0.19.3 or higher will be upgraded to 0.21.2. Customers should upgrade their container clusters at their convenience. Clusters running versions older than 0.19.3 cannot be updated.

    • The kubectl version is now 0.20.2.

    July 10, 2015

    • Kubernetes v0.21.1 is the default version for new clusters.

    • The kubectl version is now 0.20.1.

    Known issue:

    • The rolling-update command will fail when using kubectl v0.20.1 with clusters running v0.19.3 of the Kubernetes API. To resolve the issue, specify --api-version=v1beta3 as a flag to the rolling-update command:

      kubectl rolling-update --api-version=v1beta3 --image=<foo> ...

      To find your version of kubectl:

      kubectl version

      To find your cluster version:

      gcloud container clusters describe CLUSTER_NAME

    June 25, 2015

    • The Google Container Engine REST API has been updated to v1.

    • The REST API returns a more accurate error message when the region is out of quota.

    • gcloud container clusters create supports specifying the disk size for nodes with the --disk-size flag.

    June 22, 2015

    • Google Container Engine is now in Beta.

    • Kubernetes master VMs are no longer created for new clusters. They are now run as a hosted service. There is no Compute Engine instance charge for the hosted master. Read more about pricing details.

    • Kubernetes v0.19.3 is the default version for new clusters.

    • For projects with the default regional Compute Engine CPU quota, container clusters are limited to 3 per region.

    • Documentation updated to use the gcloud beta command group.

    • Documentation updated to use apiVersion: v1 in all samples.

    Known issue:

    • kubectl exec is broken for cluster version 0.19.3.

    June 10, 2015

    • Documentation updated to use v1beta3.

    • Kubernetes v0.18.2 is the default version for new clusters.

    June 3, 2015

    • Kubernetes v0.18.0 is the default version for new clusters.

    • Clusters launched with 0.18.0 and above are deployed using Managed Instance Groups.

    • New clusters can no longer be created at v0.16.0.

    • Fixed a race condition that could cause routes to be leaked on cluster deletion.

    • Fail faster, and with a helpful message, if the project lacks the specific resource quota needed to create a functioning cluster.

    Google Cloud CLI:

    • The gcloud alpha container clusters create command always sets kubectl's current context to the newly created cluster.

    • The clusters create and get-credentials commands look for and write kubectl configuration to the location given by a KUBECONFIG environment variable. This matches the behavior of the kubectl config * commands.

    • The gcloud alpha container kubectl command is disabled. Use kubectl directly instead.

    May 22, 2015

    • Kubernetes v0.17.1 is the default version for new clusters.

    • Kubernetes v0.16.0 is still supported. However, new clusters can no longer be created at Kubernetes v0.17.0 due to the bug listed below.

    • Fixes a bug that was preventing containers from accessing the Google Compute Engine metadata service.

    • Kubernetes service DNS names are now suffixed with .<namespace>.svc.cluster.local instead of .<namespace>.kubernetes.local.

    kubectl 0.17.0 notes:

    • Updated kubectl cluster-info to show v1beta3 addresses.

    • Added kubectl log --previous support to view the last terminated container's log.

    • Added display of external IPs to kubectl cluster-info.

    • Print container statuses in kubectl get pods.

    • Added kubectl_label to custom functions in bash completion.

    • Changed IP to IP(S) in service columns for kubectl get.

    • Added a TerminationGracePeriod field to PodSpec and a grace-period flag to kubectl stop.

    May 13, 2015

    • Kubernetes v0.17.0 is the default version for new clusters.

    • New clusters can no longer be created at Kubernetes version 0.15.0.

    • Standalone kubectl works with Container Engine-created clusters without needing to set the KUBECONFIG env var.

    • gcloud alpha container kubectl is deprecated. The command still works, but prints a warning with directions for using kubectl directly.

    • Added a new command, gcloud alpha container get-credentials. The command fetches cluster auth and updates the local kubectl configuration.
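    A sketch of the workflow, assuming a configured default zone; the cluster name is a placeholder and flag spellings in this era's alpha CLI may differ:

    ```shell
    # Fetch credentials for the cluster, then talk to it with standalone kubectl.
    gcloud alpha container get-credentials --cluster my-cluster
    kubectl get pods
    ```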

    • gcloud alpha container kubectl and clusters delete|describe print more helpful error messages when the cluster cannot be found due to an incorrect zone flag or default.

    • gcloud alpha container clusters create exits with a non-zero return code if cluster creation succeeded but cert data could not be fetched.

    kubectl 0.16.1 notes:

    • Improvements to kubectl rolling-update.

    • Default global kubeconfig location changed to ~/.kube/config from ~/.kube/.kubeconfig.

    • kubectl delete now stops resources by default (deletes child resources, e.g. pods managed by a replication controller).

    • Flag word separators - and _ made equivalent.

    • Recognize the .yml extension for schema files.

    • kubectl get pods now prints container statuses.

    • Simplified loading rules for kubeconfig (see kubectl config --help for details).

    • --flatten and --minify options for kubectl config view.

    • Various bugfixes.

    May 8, 2015

    • Master VMs are now created with a persistent data disk to store important cluster data, leaving the boot disk for the OS and software.

    May 2, 2015

    • Kubernetes v0.16.0 is the default version for new clusters.

    • Clusters that don't have nginx will use bearer token auth instead of basic auth.

    • KUBE_PROXY_TOKEN added to kube-env metadata.

    April 22, 2015

    • A CIDR can now be requested during cluster creation when using the Google Cloud CLI or the REST API. For the Google Cloud CLI, use the --container-ipv4-cidr flag. If not set, the server will choose a CIDR for the cluster.
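    For example (the cluster name and CIDR value are placeholders; this flag was later deprecated in favor of --cluster-ipv4-cidr):

    ```shell
    gcloud alpha container clusters create my-cluster \
      --container-ipv4-cidr 10.100.0.0/16
    ```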

    • Standalone kubectl instructions are now available from gcloud alpha container kubectl --help.

    • When fetching cluster credentials after creating a cluster using the Google Cloud CLI, you'll never have to enter the passphrase for your SSH key more than once.

    • The gcloud alpha container clusters ... commands default to human-readable (table) output.

    April 16, 2015

    Container Engine:

    • Kubernetes v0.15.0 is the default version for new clusters. v0.14.2 is still supported.

    • The Kubernetes v1beta3 API is now enabled for new clusters.

    • New clusters can no longer be created at Kubernetes version 0.13.2.

    Google Cloud CLI:

    • The kubectl version is now v0.14.1.

    • The deprecated gcloud alpha container pods|services|replicationcontrollers commands have been removed. Use gcloud alpha container kubectl instead.

    April 9, 2015

    Container Engine:

    • Kubernetes v0.14.2 is the default version for new clusters.

    • New clusters can no longer be created at Kubernetes version 0.14.1.

    • Cluster creation is more reliable.

    • Clusters created via the Google Cloud console will pre-fill the cluster name with a project-unique name instead of a zone-unique name.

    • The API endpoint is no longer included in cluster list.

    April 2, 2015

    Container Engine:

    • Kubernetes v0.14.1 is the default version for new clusters.

    • New clusters can no longer be created at version 0.11.0.

    • Container Engine's cluster firewall no longer specifies target-tags. This allows pods to make outgoing connections by default (in the private network).

    Google Cloud CLI:

    • Clusters created by the Google Cloud CLI now automatically send logs to Google Cloud Logging unless this is explicitly disabled using the --no-enable-cloud-logging flag. Logs are visible in the logs section of the Google Cloud console once your project has enabled the Google Cloud Logging API.
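    For example (the cluster name is a placeholder):

    ```shell
    # Opt out of Cloud Logging at creation time.
    gcloud alpha container clusters create my-cluster --no-enable-cloud-logging
    ```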

    • You can now access Container Engine clusters with standalone kubectl (i.e. without gcloud alpha container) after setting an environment variable, which is printed after successful cluster creation and/or the first time a cluster is accessed with gcloud alpha container kubectl.

    • gcloud will always try to fetch certificate files for the cluster if they are missing. "WARNING: No certificate files found in..." will resolve itself on a subsequent gcloud alpha container kubectl command run if the cluster is healthy.

    • Known issue: container commands are included in the alpha component, but the Kubernetes client (kubectl) is still installed with the preview component, so users will need both.

    April 1, 2015

    • All Container Engine commands have moved from gcloud preview to gcloud alpha. Run gcloud components update alpha to install this command group. Documentation has been updated to use the alpha commands.

    March 25, 2015

    • Kubernetes v0.13.2 is the default version for new clusters.

    • The kubectl version is now v0.13.1.

    • Updated to container-vm-v20150317, which starts up more reliably.

    • The default boot disk size for cluster nodes has been increased from 10GB to 100GB.

    February 25, 2015

    Google Cloud CLI:

    • The kubectl wrapper commands (gcloud preview container pods|services|replicationcontrollers) have been deprecated in favor of using gcloud preview container kubectl directly. Calling the deprecated commands prints the equivalent kubectl command.

    • The kubectl version has been bumped to 0.11.0.

    • Fixed a bug that prevented kubectl update with --patch from working.

    • The kubectl command now automatically tries refetching the configuration if the command fails with a stale configuration error.

    February 19, 2015

    Google Container Engine:

    • Kubernetes v0.11.0 is the default version for new clusters.

    • Removed support for creating clusters at Kubernetes v0.9.2.

    • Nodes now use the container-vm-v20150129 image.

    Google Cloud CLI:

    • Pods created with gcloud preview container pods create no longer bind to a host port. As a result, the scheduler can assign more than one pod to each host.

    • The version of kubectl used by the gcloud preview container kubectl command is 0.10.1.

    February 12, 2015

    • Kubernetes v0.10.1 is the default version for new clusters.

    • Removed support for creating clusters at Kubernetes v0.10.0.

    • Improved the API enablement flow and error messages when first visiting the Container Engine page of the Google Cloud console.

    February 5, 2015

    Google Container Engine:

    • Kubernetes v0.10.0 is the default version for new clusters.

    • Removed support for creating clusters at Kubernetes version 0.8.1.

    Google Cloud CLI:

    • The gcloud preview container kubectl command is upgraded to version 0.9.1:

      • kubectl create handles bulk creation from a file or directory.
      • The createall command has been removed.
      • Added the kubectl rollingupdate command, which runs controlled updates of replicated pods.
      • Added the kubectl run-container command, which simplifies creation of an (optionally replicated) pod from an image.
      • Added the kubectl stop command to cleanly shut down a replication controller.
      • Added kubectl config ... commands for managing config for multiple clusters/users. (Note: this is not yet compatible with gcloud preview container kubectl.)

      Refer to the kubectl reference documentation for more details.

    January 29, 2015

    • Kubernetes v0.9.2 is the default version for new clusters.

    • Removed support for creating clusters at v0.7.1. Existing clusters at this version can still be used and deleted.

    • SkyDNS is supported for services on clusters using v0.9.2 onwards.

    January 21, 2015

    • Improved error messages during pod creation when the source image is invalid.

    • Fixed a bug affecting Compute Engine routes whose destRange fields are plain IP addresses.

    • Improved the reliability of cluster creation when provisioning is slow.

    January 15, 2015

    • Kubernetes v0.8.1 is the default version for newly created clusters. Our v0.8.1 support includes changes on the 0.8 branch at 0.8.1.

    • Removed support for creating clusters at Kubernetes v0.8.0. Existing clusters at this version can still be used and deleted.

    • Service accounts and auth scopes can be added to node instances at the time of creation for all pods to use.

    • The command-line interface now renders multiple error messages across newlines and tabs, instead of using a comma separator.

    • Machine type information has been fixed in the cluster details page of the Google Cloud console.

    January 8, 2015

    • Kubernetes v0.8.0 is the default version for newly created clusters. Kubernetes v0.7.1 is also supported. Refer to the Kubernetes release notes for information about each release. Our v0.7.1 support includes changes on the 0.7 branch at 0.7.1. Our v0.8.0 support includes changes in the 0.7.2 and 0.8.0 releases.

    • Removed support for creating clusters at Kubernetes v0.6.1 and v0.7.0. Existing clusters at these versions can still be used and deleted.

    • The pods|services|replicationcontrollers create commands now validate the resource type when creating with --config-file. This fixes the known issue in the December 12, 2014 release.

    December 19, 2014

    • Kubernetes v0.7.0 is the default version for newly created clusters.

    • Removed support for creating clusters at Kubernetes v0.4.4 and v0.5.5. Existing clusters at these versions can still be used and deleted.

    December 12, 2014

    Known issues:

    • The pods|services|replicationcontrollers create commands do not validate the resource type when creating with --config-file. The command creates the resource specified in the configuration file, regardless of the command group specified. For example, calling pods create and passing a service configuration file creates a service instead of failing.

    Updates:

    • Kubernetes v0.6.1 is the default version for newly created clusters.

    • Google Container Engine now reserves a /14 CIDR range for new clusters. Previously, a /16 was reserved.

    • New clusters created with Kubernetes v0.4.4 now use the backports-debian-7-wheezy-v20141108 image. This replaces the previous backports-debian-7-wheezy-v20141021 image.

    • New clusters created with Kubernetes v0.5.5 or v0.6.1 now use the container-vm image, instead of the Debian backports image.

    • The Service Operations documentation has been updated to describe the createExternalLoadBalancer option.

    • A new gcloud preview container kubectl command has been added to the CLI. This is a pass-through command that calls the native Kubernetes kubectl client with arbitrary commands, using the Google Cloud CLI to handle authentication.

    • The --cluster-name flag in all CLI commands has been renamed to --cluster.

    • New describe and list support for cluster operations.

    December 5, 2014

    • The syntax for creating a pod with the Google Container Engine command-line interface has changed. The name of the pod is now specified as the value of a --name flag. See the Pod Operations page for details.

    • Clusters and Operations returned by the API now include a selfLink field, and Operations also include a targetLink field, which contain the full URL of the given resource.

    • Added support for Kubernetes v0.4.4 and Kubernetes v0.5.5. The default version is now v0.4.4. Refer to the Kubernetes release notes for information about each release. Our v0.4.4 support includes changes on the 0.4 branch from 0.4.2 through 0.4.4. Our v0.5.5 support includes changes on the 0.5 branch through 0.5.5.

    • Removed support for creating clusters at Kubernetes v0.4.2. Existing clusters at this version can still be used and deleted.

    November 20, 2014

    Updates to the gcloud preview container commands:

    • New error message that catches cluster creation failure due to a missing default network.

    • Specify a default zone and cluster:

      gcloud config set compute/zone ZONE
      gcloud config set container/cluster CLUSTER_NAME

      There is currently a bug preventing the default cluster name from working if the local configuration cache is missing. If you see a stack trace when omitting --cluster-name, repeat the command once with the flag specified. Subsequent commands can omit the flag.

    • The default cluster name is set to the value of the new cluster when a cluster is successfully created.

    • The gcloud preview container clusters list command lists clusters across all zones if no --zone flag is specified. The list command ignores any default zone that may be set.

    Documentation updates:

    Google Cloud console updates:

    • Cluster error state information is available in the Google Cloud console.

    November 4, 2014

    (Updated November 10, 2014: Added two additional known issues with Google Container Engine.)

    Google Container Engine is a new service that creates and manages Kubernetes clusters for Google Cloud users.

    Container Engine is currently in Alpha state; it is suitable for experimentation and is intended to provide an early view of the production service, but customers are strongly encouraged not to run production workloads on it.

    The underlying open source Kubernetes project is being actively developed by the community and is not considered ready for production use. This version of Google Container Engine is based on Kubernetes public build v0.4.2. While the Kubernetes community is working hard to address community-reported issues, there are some known issues in the v0.4.2 release that will be addressed in v0.5 and incorporated into Google Container Engine in the coming days.

    Known issues with the Kubernetes 0.4.2 release

    1. (Issue #1730) External health checks that use in-container scripts (exec) do not work; they always report Unknown. This is a result of the transition to docker exec introduced in Docker version 1.3. Process-level, TCP socket, and HTTP health checks are functional. This has been addressed in v0.5 and will be available shortly.

    2. (Issue #1712) Pod update operations fail. In v0.4.2, pod update functionality is not implemented, and a call to the update API returns an unimplemented error. Pods must be updated by tearing them down and recreating them. This will be implemented in v0.5.

    3. (Issue #974) Silent failure on internal service port number collision: each Kubernetes service needs a unique network port assignment. Currently, if you try to create a second service with a port number that conflicts with an existing service, the operation succeeds but the second service will not receive network traffic. This has been fixed, and the fix will be available in v0.5.

    4. (Issue #1161) External service load balancing. The current Kubernetes design does a 1:1 mapping between an externally-exposed port number at the cluster level and a service. This means that only a single external service can exist on a given port. For now this is a hard limitation of the service.

    Known issues with Google Container Engine

    In addition to issues with the underlying Kubernetes project, there are some known issues with the Google Container Engine tools and API that will be addressed in subsequent releases.

    1. Kubecfg binary conflicts: during the Google Cloud SDK installation, kubecfg v0.4.1 is installed and placed on the path by the Google Cloud CLI. Depending on your $PATH variable, this version may conflict with other installed versions from the open source Kubernetes product.

    2. Containers are assigned private IPs in the range 10.40.0.0/16 to 10.239.0.0/16. If you have changed your default network settings from 10.240.0.0/16, clusters may create successfully but fail during operation.

    3. All Container Engine nodes are started with, and require, project-level read-write scope. This is temporarily required to support the dynamic mounting of PD-based volumes to nodes. In future releases, nodes will revert to the default read-only project scope.

    4. Windows is not currently supported. The gcloud preview container command is built on top of the Kubernetes client's kubecfg binary, which is not yet available on Windows.

    5. The default network is required. Container Engine relies on the existence of the default network, and tries to create routes that use it. If you don't have a default network, Container Engine cluster creation will fail.

      To recreate it:

      1. Go to the Networks page in the Google Cloud console and select your project.
      2. Click New network.
      3. Enter the following values:
        • Name: default
        • Address range: 10.240.0.0/16
        • Gateway: 10.240.0.1
      4. Click Create.

      Next, recreate the firewall rules:

      1. Click default in the All networks list.
      2. Click Create new next to Firewall rules.
      3. Enter the following values:
        • Name: default-allow-internal
        • Source IP ranges: 10.240.0.0/16
        • Protocols & ports: tcp:1-65535; udp:1-65535; icmp
      4. Click Create.
      5. Create a second firewall rule with the following values:
        • Name: default-allow-ssh
        • Source IP ranges: 0.0.0.0/0
        • Protocols & ports: tcp:22
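      The console steps above can also be approximated with gcloud. This is a hedged sketch: flag spellings for legacy-mode networks vary across SDK releases, so verify with gcloud compute networks create --help before use.

      ```shell
      # Recreate the legacy default network and its firewall rules.
      gcloud compute networks create default --range 10.240.0.0/16
      gcloud compute firewall-rules create default-allow-internal \
        --network default --source-ranges 10.240.0.0/16 \
        --allow tcp:1-65535,udp:1-65535,icmp
      gcloud compute firewall-rules create default-allow-ssh \
        --network default --source-ranges 0.0.0.0/0 --allow tcp:22
      ```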

    Except as otherwise noted, the content of this page is licensed under theCreative Commons Attribution 4.0 License, and code samples are licensed under theApache 2.0 License. For details, see theGoogle Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

    Last updated 2026-02-19 UTC.