Deploy a stateful MySQL cluster on GKE
This document is intended for database administrators, cloud architects, and operations professionals interested in deploying a highly available MySQL topology on Google Kubernetes Engine.
Follow this tutorial to learn how to deploy a MySQL InnoDB Cluster and a MySQL InnoDB ClusterSet, in addition to MySQL Router middleware on your GKE cluster, and how to perform upgrades.
Objectives
In this tutorial, you will learn how to:
- Create and deploy a stateful Kubernetes service.
- Deploy a MySQL InnoDB Cluster for high availability.
- Deploy Router middleware for database operation routing.
- Deploy a MySQL InnoDB ClusterSet for disaster tolerance.
- Simulate a MySQL cluster failover.
- Perform a MySQL version upgrade.
The following sections describe the architecture of the solution you will build in this tutorial.
MySQL InnoDB Cluster
In your regional GKE cluster, using a StatefulSet, you deploy a MySQL database instance with the necessary naming and configuration to create a MySQL InnoDB Cluster. To provide fault tolerance and high availability, you deploy three database instance Pods. This ensures that a majority of Pods in different zones are available at any given time for a successful primary election using a consensus protocol, and makes your MySQL InnoDB Cluster tolerant of single zonal failures.
Once deployed, you designate one Pod as the primary instance to serve both read and write operations. The other two Pods are secondary read-only replicas. If the primary instance experiences an infrastructure failure, you can promote one of these two replica Pods to become the primary.
In a separate namespace, you deploy three MySQL Router Pods to provide connection routing for improved resilience. Instead of directly connecting to the database service, your applications connect to MySQL Router Pods. Each Router Pod is aware of the status and purpose of each MySQL InnoDB Cluster Pod, and routes application operations to the respective healthy Pod. The routing state is cached in the Router Pods and updated from the cluster metadata stored on each node of the MySQL InnoDB Cluster. In the case of an instance failure, the Router adjusts the connection routing to a live instance.
MySQL InnoDB ClusterSet
You can create a MySQL InnoDB ClusterSet from an initial MySQL InnoDB Cluster. This lets you increase disaster tolerance if the primary cluster is no longer available.
If the MySQL InnoDB Cluster primary instance is no longer available, you can promote a replica cluster in the ClusterSet to primary. When using MySQL Router middleware, your application does not need to track the health of the primary database instance. Routing is adjusted to send connections to the new primary after the election has occurred. However, it is your responsibility to ensure that applications connecting to your MySQL Router middleware follow best practices for resilience, so that connections are retried if an error occurs during cluster failover.
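For illustration, client-side retry can be as simple as re-running a statement a few times with a short backoff. The following bash sketch is illustrative only and is not part of this tutorial; MYSQL_HOST, MYSQL_PORT, and MYSQL_ROOT_PASSWORD are placeholders for your environment, and production applications typically rely on the retry support built into their database driver or connection pool instead.

#!/usr/bin/env bash
# Illustrative only: retry a statement a few times so that a transient error during
# a ClusterSet failover does not surface to the caller.
set -u
STATEMENT=${1:-"SELECT 1"}
for attempt in 1 2 3 4 5; do
  if mysql -h "${MYSQL_HOST}" -P "${MYSQL_PORT}" -uroot -p"${MYSQL_ROOT_PASSWORD}" -e "${STATEMENT}"; then
    echo "statement succeeded on attempt ${attempt}"
    exit 0
  fi
  echo "attempt ${attempt} failed; retrying in 5 seconds..."
  sleep 5
done
echo "statement failed after 5 attempts" >&2
exit 1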
Costs
In this document, you use the following billable components of Google Cloud:
To generate a cost estimate based on your projected usage, use the pricing calculator.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin
Set up your project
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
In the Google Cloud console, on the project selector page, click Create project to begin creating a new Google Cloud project.
Roles required to create a project
To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
Verify that billing is enabled for your Google Cloud project.
Enable the GKE API.
Roles required to enable APIs
To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
Set up roles
Make sure that you have the following role or roles on the project: roles/storage.objectViewer, roles/logging.logWriter, roles/artifactregistry.admin, roles/container.clusterAdmin, roles/container.serviceAgent, roles/serviceusage.serviceUsageAdmin, roles/iam.serviceAccountAdmin
Check for the roles
In the Google Cloud console, go to the IAM page.
Go to IAM
- Select the project.
- In the Principal column, find all rows that identify you or a group that you're included in. To learn which groups you're included in, contact your administrator.
- For all rows that specify or include you, check the Role column to see whether the list of roles includes the required roles.
Grant the roles
In the Google Cloud console, go to the IAM page.
Go to IAM
- Select the project.
- Click Grant access.
- In the New principals field, enter your user identifier. This is typically the email address for a Google Account.
- In the Select a role list, select a role.
- To grant additional roles, click Add another role and add each additional role.
- Click Save.
Set up your environment
In this tutorial, you use Cloud Shell to manage resources hosted on Google Cloud. Cloud Shell comes preinstalled with Docker and the kubectl and gcloud command-line tools.
To use Cloud Shell to set up your environment:
Set environment variables.
export PROJECT_ID=PROJECT_ID
export CLUSTER_NAME=gkemulti-west
export CONTROL_PLANE_LOCATION=CONTROL_PLANE_LOCATION

Replace the following values:
- PROJECT_ID: your Google Cloud project ID.
- CONTROL_PLANE_LOCATION: the Compute Engine region of the control plane of your cluster. For this tutorial, the region is us-west1. Typically, you want a region that is close to you.
Set the default environment variables.
gcloud config set project PROJECT_ID
gcloud config set compute/region CONTROL_PLANE_LOCATION

Clone the code repository.

git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples

Change to the working directory.

cd kubernetes-engine-samples/databases/gke-stateful-mysql/kubernetes
Create a GKE cluster
In this section, you create a regional GKE cluster. Unlike a zonal cluster, a regional cluster's control plane is replicated into several zones, so an outage in a single zone doesn't make the control plane unavailable.
To create a GKE cluster, follow these steps:
Autopilot
In Cloud Shell, create a GKE Autopilot cluster in the us-west1 region.

gcloud container clusters create-auto $CLUSTER_NAME \
    --location=$CONTROL_PLANE_LOCATION

Get the GKE cluster credentials.

gcloud container clusters get-credentials $CLUSTER_NAME \
    --location=$CONTROL_PLANE_LOCATION

Deploy a Service across three zones. This tutorial uses a Kubernetes Deployment. A Deployment is a Kubernetes API object that lets you run multiple replicas of Pods that are distributed among the nodes in a cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prepare-three-zone-ha
  labels:
    app: prepare-three-zone-ha
spec:
  replicas: 3
  selector:
    matchLabels:
      app: prepare-three-zone-ha
  template:
    metadata:
      labels:
        app: prepare-three-zone-ha
    spec:
      affinity:
        # Tell Kubernetes to avoid scheduling a replica in a zone where there
        # is already a replica with the label "app: prepare-three-zone-ha"
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - prepare-three-zone-ha
            topologyKey: "topology.kubernetes.io/zone"
      containers:
      - name: prepare-three-zone-ha
        image: busybox:latest
        command:
        - "/bin/sh"
        - "-c"
        - "while true; do sleep 3600; done"
        resources:
          limits:
            cpu: "500m"
            ephemeral-storage: "10Mi"
            memory: "0.5Gi"
          requests:
            cpu: "500m"
            ephemeral-storage: "10Mi"
            memory: "0.5Gi"

kubectl apply -f prepare-for-ha.yaml

By default, Autopilot provisions resources in two zones. The Deployment defined in prepare-for-ha.yaml ensures that Autopilot provisions nodes across three zones in your cluster, by setting replicas: 3, podAntiAffinity with requiredDuringSchedulingIgnoredDuringExecution, and topologyKey: "topology.kubernetes.io/zone".

Check the status of the Deployment.
kubectl get deployment prepare-three-zone-ha --watch

When you see three Pods in the ready state, cancel this command with CTRL+C. The output is similar to the following:

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
prepare-three-zone-ha   0/3     3            0           9s
prepare-three-zone-ha   1/3     3            1           116s
prepare-three-zone-ha   2/3     3            2           119s
prepare-three-zone-ha   3/3     3            3           2m16s

Run this script to validate that your Pods have been deployed across three zones.

bash ../scripts/inspect_pod_node.sh default

Each line of the output corresponds to a Pod, and the second column indicates the zone. The output is similar to the following:

gk3-gkemulti-west1-default-pool-eb354e2d-z6mv us-west1-b prepare-three-zone-ha-7885d77d9c-8f7qb
gk3-gkemulti-west1-nap-25b73chq-739a9d40-4csr us-west1-c prepare-three-zone-ha-7885d77d9c-98fpn
gk3-gkemulti-west1-default-pool-160c3578-bmm2 us-west1-a prepare-three-zone-ha-7885d77d9c-phmhj
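The contents of inspect_pod_node.sh aren't reproduced in this tutorial. As a rough, illustrative equivalent of what the script reports (the repository script may differ), you can list each Pod with its node and the node's zone label yourself:

# Illustrative sketch only: print the node name, zone, and Pod name for each
# Pod in the given namespace.
NAMESPACE=default
for pod in $(kubectl get pods -n "${NAMESPACE}" -o jsonpath='{.items[*].metadata.name}'); do
  node=$(kubectl get pod "${pod}" -n "${NAMESPACE}" -o jsonpath='{.spec.nodeName}')
  zone=$(kubectl get node "${node}" -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}')
  echo "${node} ${zone} ${pod}"
done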
Standard
In Cloud Shell, create a GKE Standard cluster in the us-west1 region.

gcloud container clusters create $CLUSTER_NAME \
    --location=$CONTROL_PLANE_LOCATION \
    --machine-type="e2-standard-2" \
    --disk-type="pd-standard" \
    --num-nodes="5"

Get the GKE cluster credentials.

gcloud container clusters get-credentials $CLUSTER_NAME \
    --location=$CONTROL_PLANE_LOCATION
Deploy MySQL StatefulSets
In this section, you deploy one MySQL StatefulSet. A StatefulSet is a Kubernetes controller that maintains a persistent unique identity for each of its Pods.
Each StatefulSet consists of three MySQL replicas.
To deploy the MySQL StatefulSet, follow these steps:
Create a namespace for the StatefulSet.
kubectl create namespace mysql1

Create the MySQL secret.

apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  password: UGFzc3dvcmQkMTIzNDU2 # Password$123456
  admin-password: UGFzc3dvcmQkMTIzNDU2 # Password$123456

kubectl apply -n mysql1 -f secret.yaml

The password is deployed with each Pod, and is used by management scripts and commands for MySQL InnoDB Cluster and ClusterSet deployment in this tutorial.
Create the StorageClass.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storageclass
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  type: pd-balanced

kubectl apply -n mysql1 -f storageclass.yaml

This storage class uses the pd-balanced Persistent Disk type that balances performance and cost. The volumeBindingMode field is set to WaitForFirstConsumer, meaning that GKE delays provisioning of a PersistentVolume until the Pod is created. This setting ensures that the disk is provisioned in the same zone where the Pod is scheduled.

Deploy the StatefulSet of MySQL instance Pods.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dbc1
  labels:
    app: mysql
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: "topology.kubernetes.io/zone"
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: mysql
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - mysql
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: mysql
        image: mysql/mysql-server:8.0.28
        command:
        - /bin/bash
        args:
        - -c
        - >-
          /entrypoint.sh
          --server-id=$((20 + $(echo $HOSTNAME | grep -o '[^-]*$') + 1))
          --report-host=${HOSTNAME}.mysql.mysql1.svc.cluster.local
          --binlog-checksum=NONE
          --enforce-gtid-consistency=ON
          --gtid-mode=ON
          --default-authentication-plugin=mysql_native_password
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: password
        - name: MYSQL_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: admin-password
        - name: MYSQL_ROOT_HOST
          value: '%'
        ports:
        - name: mysql
          containerPort: 3306
        - name: mysqlx
          containerPort: 33060
        - name: xcom
          containerPort: 33061
        resources:
          limits:
            cpu: "500m"
            ephemeral-storage: "1Gi"
            memory: "1Gi"
          requests:
            cpu: "500m"
            ephemeral-storage: "1Gi"
            memory: "1Gi"
        volumeMounts:
        - name: mysql
          mountPath: /var/lib/mysql
          subPath: mysql
        readinessProbe:
          exec:
            command:
            - bash
            - "-c"
            - |
              mysql -h127.0.0.1 -uroot -p$MYSQL_ROOT_PASSWORD -e'SELECT 1'
          initialDelaySeconds: 30
          periodSeconds: 2
          timeoutSeconds: 1
        livenessProbe:
          exec:
            command:
            - bash
            - "-c"
            - |
              mysqladmin -uroot -p$MYSQL_ROOT_PASSWORD ping
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      storageClassName: fast-storageclass
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi

kubectl apply -n mysql1 -f c1-mysql.yaml

This command deploys the StatefulSet consisting of three replicas. In this tutorial, the primary MySQL cluster is deployed across three zones in us-west1. The output is similar to the following:

service/mysql created
statefulset.apps/dbc1 created

In this tutorial, the resource limits and requests are set to minimal values to save cost. When planning for a production workload, make sure to set these values appropriately for your organization's needs.
Verify the StatefulSet is created successfully.
kubectl get statefulset -n mysql1 --watch

It can take about 10 minutes for the StatefulSet to be ready.
When all three Pods are in a ready state, exit the command using Ctrl+C. If you see PodUnschedulable errors due to insufficient CPU or memory, wait a few minutes for the control plane to resize to accommodate the large workload.

The output is similar to the following:

NAME   READY   AGE
dbc1   1/3     39s
dbc1   2/3     50s
dbc1   3/3     73s

To inspect the placement of your Pods on the GKE cluster nodes, run this script:

bash ../scripts/inspect_pod_node.sh mysql1 mysql

The output shows the Pod name, the GKE node name, and the zone where the node is provisioned, and looks similar to the following:

gke-gkemulti-west-5-default-pool-4bcaca65-jch0 us-west1-b dbc1-0
gke-gkemulti-west-5-default-pool-1ac6e8b5-ddjx us-west1-c dbc1-1
gke-gkemulti-west-5-default-pool-1f5baa66-bf8t us-west1-a dbc1-2

The columns in the output represent the hostname, cloud zone, and Pod name, respectively.
The topologySpreadConstraints policy in the StatefulSet specification (c1-mysql.yaml) directs the scheduler to place the Pods evenly across the failure domain (topology.kubernetes.io/zone).
The podAntiAffinity policy enforces the constraint that Pods must not be placed on the same GKE cluster node (kubernetes.io/hostname). For the MySQL instance Pods, this policy results in the Pods being deployed evenly across the three zones in the Google Cloud region. This placement enables high availability of the MySQL InnoDB Cluster by placing each database instance in a separate failure domain.
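If you also want to confirm that each PersistentVolume was provisioned in the same zone as its Pod (the effect of the WaitForFirstConsumer binding mode), one way to check, shown here as an illustrative sketch rather than a tutorial step, is to read the topology value recorded in each PersistentVolume's node affinity:

# Illustrative only: print each PersistentVolume's claim and the zone value recorded
# in its node affinity. The exact topology key depends on the CSI driver, so this
# prints only the value.
kubectl get pv -o jsonpath='{range .items[*]}{.spec.claimRef.namespace}/{.spec.claimRef.name}{"\t"}{.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]}{"\n"}{end}'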
Prepare the primary MySQL InnoDB Cluster
To configure a MySQL InnoDB Cluster, follow these steps:
In the Cloud Shell terminal, set the group replication configurations for the MySQL instances to be added to your cluster.

bash ../scripts/c1-clustersetup.sh

POD_ORDINAL_START=${1:-0}
POD_ORDINAL_END=${2:-2}
for i in $(seq ${POD_ORDINAL_START} ${POD_ORDINAL_END}); do
  echo "Configuring pod mysql1/dbc1-${i}"
  cat <<'  EOF' | kubectl -n mysql1 exec -i dbc1-${i} -- bash -c 'mysql -uroot -proot --password=${MYSQL_ROOT_PASSWORD}'
  INSTALL PLUGIN group_replication SONAME 'group_replication.so';
  RESET PERSIST IF EXISTS group_replication_ip_allowlist;
  RESET PERSIST IF EXISTS binlog_transaction_dependency_tracking;
  SET @@PERSIST.group_replication_ip_allowlist = 'mysql.mysql1.svc.cluster.local';
  SET @@PERSIST.binlog_transaction_dependency_tracking = 'WRITESET';
  EOF
done

The script remotely connects to each of the three MySQL instances to set and persist the following system variables:
- group_replication_ip_allowlist: allows the instance within the cluster to connect to any instance in the group.
- binlog_transaction_dependency_tracking='WRITESET': allows parallelized transactions which won't conflict.

In MySQL versions earlier than 8.0.22, use group_replication_ip_whitelist instead of group_replication_ip_allowlist.

Open a second terminal, so that you do not need to create a shell for each Pod.
Connect to MySQL Shell on the Pod dbc1-0.

kubectl -n mysql1 exec -it dbc1-0 -- \
    /bin/bash \
    -c 'mysqlsh --uri="root:$MYSQL_ROOT_PASSWORD@dbc1-0.mysql.mysql1.svc.cluster.local"'

Verify the MySQL group replication allowlist for connecting to other instances.

\sql
SELECT @@group_replication_ip_allowlist;

The output is similar to the following:

+----------------------------------+
| @@group_replication_ip_allowlist |
+----------------------------------+
| mysql.mysql1.svc.cluster.local   |
+----------------------------------+

Verify the server-id is unique on each of the instances.

\sql
SELECT @@server_id;

The output is similar to the following:

+-------------+
| @@server_id |
+-------------+
|          21 |
+-------------+

Configure each instance for MySQL InnoDB Cluster usage and create an administrator account on each instance.
\js
dba.configureInstance('root@dbc1-0.mysql.mysql1.svc.cluster.local', {password: os.getenv("MYSQL_ROOT_PASSWORD"), clusterAdmin: 'icadmin', clusterAdminPassword: os.getenv("MYSQL_ADMIN_PASSWORD")});
dba.configureInstance('root@dbc1-1.mysql.mysql1.svc.cluster.local', {password: os.getenv("MYSQL_ROOT_PASSWORD"), clusterAdmin: 'icadmin', clusterAdminPassword: os.getenv("MYSQL_ADMIN_PASSWORD")});
dba.configureInstance('root@dbc1-2.mysql.mysql1.svc.cluster.local', {password: os.getenv("MYSQL_ROOT_PASSWORD"), clusterAdmin: 'icadmin', clusterAdminPassword: os.getenv("MYSQL_ADMIN_PASSWORD")});

All instances must have the same username and password in order for the MySQL InnoDB Cluster to function properly. Each command produces output similar to the following:

...
The instance 'dbc1-2.mysql:3306' is valid to be used in an InnoDB cluster.
Cluster admin user 'icadmin'@'%' created.
The instance 'dbc1-2.mysql.mysql1.svc.cluster.local:3306' is already ready to be used in an InnoDB cluster.
Successfully enabled parallel appliers.

Verify that the instance is ready to be used in a MySQL InnoDB Cluster.

dba.checkInstanceConfiguration()

The output is similar to the following:

...
The instance 'dbc1-0.mysql.mysql1.svc.cluster.local:3306' is valid to be used in an InnoDB cluster.
{
  "status": "ok"
}

Optionally, you can connect to each MySQL instance and repeat this command. For example, run this command to check the status on the dbc1-1 instance:

kubectl -n mysql1 exec -it dbc1-0 -- \
    /bin/bash \
    -c 'mysqlsh --uri="root:$MYSQL_ROOT_PASSWORD@dbc1-1.mysql.mysql1.svc.cluster.local" \
    --js --execute "dba.checkInstanceConfiguration()"'
Create the primary MySQL InnoDB Cluster
Next, create the MySQL InnoDB Cluster using the MySQL Admin createCluster command. Start with the dbc1-0 instance, which will be the primary instance for the cluster, then add two additional replicas to the cluster.
To initialize the MySQL InnoDB Cluster, follow these steps:
Create the MySQL InnoDB Cluster.
var cluster = dba.createCluster('mycluster');

Running the createCluster command triggers these operations:
- Deploy the metadata schema.
- Verify that the configuration is correct for Group Replication.
- Register it as the seed instance of the new cluster.
- Create necessary internal accounts, such as the replication user account.
- Start Group Replication.
This command initializes a MySQL InnoDB Cluster with the host dbc1-0 as the primary. The cluster reference is stored in the cluster variable.

The output looks similar to the following:

A new InnoDB cluster will be created on instance 'dbc1-0.mysql:3306'.
Validating instance configuration at dbc1-0.mysql:3306...
This instance reports its own address as dbc1-0.mysql.mysql1.svc.cluster.local:3306
Instance configuration is suitable.
NOTE: Group Replication will communicate with other instances using 'dbc1-0.mysql:33061'.
Use the localAddress option to override.
Creating InnoDB cluster 'mycluster' on 'dbc1-0.mysql.mysql1.svc.cluster.local:3306'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to one server failure.

Add the second instance to the cluster.

cluster.addInstance('icadmin@dbc1-1.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), recoveryMethod: 'clone'});

Add the remaining instance to the cluster.

cluster.addInstance('icadmin@dbc1-2.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), recoveryMethod: 'clone'});

The output is similar to the following:

...
The instance 'dbc1-2.mysql:3306' was successfully added to the cluster.

Verify the cluster's status.

cluster.status()

This command shows the status of the cluster. The topology consists of three hosts, one primary and two secondary instances. The output is similar to the following:

{
  "clusterName": "mysql1",
  "defaultReplicaSet": {
    "name": "default",
    "primary": "dbc1-0.mysql:3306",
    "ssl": "REQUIRED",
    "status": "OK",
    "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
    "topology": {
      "dbc1-0.mysql:3306": {
        "address": "dbc1-0.mysql:3306",
        "memberRole": "PRIMARY",
        "mode": "R/W",
        "readReplicas": {},
        "replicationLag": null,
        "role": "HA",
        "status": "ONLINE",
        "version": "8.0.28"
      },
      "dbc1-1.mysql:3306": {
        "address": "dbc1-1.mysql:3306",
        "memberRole": "SECONDARY",
        "mode": "R/O",
        "readReplicas": {},
        "replicationLag": null,
        "role": "HA",
        "status": "ONLINE",
        "version": "8.0.28"
      },
      "dbc1-2.mysql:3306": {
        "address": "dbc1-2.mysql:3306",
        "memberRole": "SECONDARY",
        "mode": "R/O",
        "readReplicas": {},
        "replicationLag": null,
        "role": "HA",
        "status": "ONLINE",
        "version": "8.0.28"
      }
    },
    "topologyMode": "Single-Primary"
  },
  "groupInformationSourceMember": "dbc1-0.mysql:3306"
}

Optionally, you can call cluster.status({extended:1}) to obtain additional status details.
Create a sample database
To create a sample database, follow these steps:
Create a database and load data into the database.
\sql
create database loanapplication;
use loanapplication
CREATE TABLE loan (loan_id INT unsigned AUTO_INCREMENT PRIMARY KEY, firstname VARCHAR(30) NOT NULL, lastname VARCHAR(30) NOT NULL, status VARCHAR(30));

Insert sample data into the database. To insert data, you must be connected to the primary instance of the cluster.

INSERT INTO loan (firstname, lastname, status) VALUES ('Fred', 'Flintstone', 'pending');
INSERT INTO loan (firstname, lastname, status) VALUES ('Betty', 'Rubble', 'approved');

Verify that the table contains the two rows inserted in the previous step.

SELECT * FROM loan;

The output is similar to the following:

+---------+-----------+------------+----------+
| loan_id | firstname | lastname   | status   |
+---------+-----------+------------+----------+
|       1 | Fred      | Flintstone | pending  |
|       2 | Betty     | Rubble     | approved |
+---------+-----------+------------+----------+
2 rows in set (0.0010 sec)
Create a MySQL InnoDB ClusterSet
You can create a MySQL InnoDB ClusterSet to manage replication from your primary cluster to replica clusters, using a dedicated ClusterSet replication channel.
A MySQL InnoDB ClusterSet provides disaster tolerance for MySQL InnoDB Cluster deployments by linking a primary MySQL InnoDB Cluster with one or more replicas of itself in alternate locations, such as multiple zones and multiple regions.
If you closed MySQL Shell, create a new shell by running this command in a new Cloud Shell terminal:

kubectl -n mysql1 exec -it dbc1-0 -- \
    /bin/bash -c 'mysqlsh \
    --uri="root:$MYSQL_ROOT_PASSWORD@dbc1-0.mysql.mysql1.svc.cluster.local"'

To create a MySQL InnoDB ClusterSet, follow these steps:
In your MySQL Shell terminal, obtain a cluster object.
\js
cluster = dba.getCluster()

The output is similar to the following:

<Cluster:mycluster>

Initialize a MySQL InnoDB ClusterSet with the existing MySQL InnoDB Cluster stored in the cluster object as the primary.

clusterset = cluster.createClusterSet('clusterset')

The output is similar to the following:

A new ClusterSet will be created based on the Cluster 'mycluster'.
* Validating Cluster 'mycluster' for ClusterSet compliance.
* Creating InnoDB ClusterSet 'clusterset' on 'mycluster'...
* Updating metadata...
ClusterSet successfully created. Use ClusterSet.createReplicaCluster() to add Replica Clusters to it.
<ClusterSet:clusterset>

Check the status of your MySQL InnoDB ClusterSet.

clusterset.status()

The output is similar to the following:

{
  "clusters": {
    "mycluster": {
      "clusterRole": "PRIMARY",
      "globalStatus": "OK",
      "primary": "dbc1-0.mysql:3306"
    }
  },
  "domainName": "clusterset",
  "globalPrimaryInstance": "dbc1-0.mysql:3306",
  "primaryCluster": "mycluster",
  "status": "HEALTHY",
  "statusText": "All Clusters available."
}

Optionally, you can call clusterset.status({extended:1}) to obtain additional status details, including information about the cluster.

Exit MySQL Shell.
\q
Deploy a MySQL Router
You can deploy a MySQL Router to direct client application traffic to the proper clusters. Routing is based on the connection port of the application issuing a database operation:
- Writes are routed to the primary Cluster instance in the primary ClusterSet.
- Reads can be routed to any instance in the primary Cluster.
When you start a MySQL Router, it is bootstrapped against the MySQL InnoDB ClusterSet deployment. The MySQL Router instances connected with the MySQL InnoDB ClusterSet are aware of any controlled switchovers or emergency failovers and direct traffic to the new primary cluster.
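As an illustration of what this looks like from the application side, the following command, a sketch rather than a tutorial step, opens a read-write session through the Router Service that you deploy in the next steps. It assumes that the mysql-router Service in the mysql1 namespace exposes the default Router ports (6446 for read-write, 6447 for read-only), which you can confirm later in the listRouters() output.

# Illustrative only: connect through the Router Service instead of a database Pod.
kubectl -n mysql1 run mysql-client --rm -it --image=mysql/mysql-server:8.0.28 --restart=Never -- \
    mysql -h mysql-router.mysql1.svc.cluster.local -P 6446 -uroot -p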
To deploy a MySQL Router, follow these steps:
In the Cloud Shell terminal, deploy the MySQL Router.
kubectl apply -n mysql1 -f c1-router.yaml

The output is similar to the following:

configmap/mysql-router-config created
service/mysql-router created
deployment.apps/mysql-router created

Check the readiness of the MySQL Router deployment.

kubectl -n mysql1 get deployment mysql-router --watch

When all three Pods are ready, the output is similar to the following:

NAME           READY   UP-TO-DATE   AVAILABLE   AGE
mysql-router   3/3     3            0           3m36s

If you see a PodUnschedulable error in the console, wait a minute or two while GKE provisions more nodes. Refresh, and you should see 3/3 OK.

Start MySQL Shell on any member of the existing cluster.
kubectl -n mysql1 exec -it dbc1-0 -- \
    /bin/bash -c 'mysqlsh --uri="root:$MYSQL_ROOT_PASSWORD@dbc1-0.mysql"'

This command connects to the dbc1-0 Pod, then starts a shell connected to the dbc1-0 MySQL instance.

Verify the router configuration.

clusterset = dba.getClusterSet()
clusterset.listRouters()

The output is similar to the following:

{
  "domainName": "clusterset",
  "routers": {
    "mysql-router-7cd8585fbc-74pkm::": {
      "hostname": "mysql-router-7cd8585fbc-74pkm",
      "lastCheckIn": "2022-09-22 23:26:26",
      "roPort": 6447,
      "roXPort": 6449,
      "rwPort": 6446,
      "rwXPort": 6448,
      "targetCluster": null,
      "version": "8.0.27"
    },
    "mysql-router-7cd8585fbc-824d4::": {
      ...
    },
    "mysql-router-7cd8585fbc-v2qxz::": {
      ...
    }
  }
}

Exit MySQL Shell.

\q

Run this script to inspect the placement of the MySQL Router Pods.
bash ../scripts/inspect_pod_node.sh mysql1 | sort

The script shows the node and Cloud Zone placement of all of the Pods in the mysql1 namespace, where the output is similar to the following:

gke-gkemulti-west-5-default-pool-1ac6e8b5-0h9v us-west1-c mysql-router-6654f985f5-df97q
gke-gkemulti-west-5-default-pool-1ac6e8b5-ddjx us-west1-c dbc1-1
gke-gkemulti-west-5-default-pool-1f5baa66-bf8t us-west1-a dbc1-2
gke-gkemulti-west-5-default-pool-1f5baa66-kt03 us-west1-a mysql-router-6654f985f5-qlfj9
gke-gkemulti-west-5-default-pool-4bcaca65-2l6s us-west1-b mysql-router-6654f985f5-5967d
gke-gkemulti-west-5-default-pool-4bcaca65-jch0 us-west1-b dbc1-0

You can observe that the MySQL Router Pods are distributed equally across the zones; that is, not placed on the same node as a MySQL Pod, or on the same node as another MySQL Router Pod.
Manage GKE and MySQL InnoDB Cluster upgrades
Updates for both MySQL and Kubernetes are released on a regular schedule. Follow operational best practices to update your software environment regularly. By default, GKE manages cluster and node pool upgrades for you. Kubernetes and GKE also provide additional features to facilitate MySQL software upgrades.
Plan for GKE upgrades
You can take proactive steps and set configurations to mitigate risk and facilitate a smoother cluster upgrade when you are running stateful services, including:

Standard clusters: Follow GKE best practices for upgrading clusters. Choose an appropriate upgrade strategy to ensure the upgrades happen during the maintenance window:
- Choose surge upgrades if cost optimization is important and if your workloads can tolerate a graceful shutdown in less than 60 minutes.
- Choose blue-green upgrades if your workloads are less tolerant of disruptions, and a temporary cost increase due to higher resource usage is acceptable.

To learn more, see Upgrade a cluster running a stateful workload. Autopilot clusters are automatically upgraded, based on the release channel you selected.

Use maintenance windows to ensure upgrades happen when you intend them. Before the maintenance window, ensure your database backups are successful.

Before allowing traffic to the upgraded MySQL nodes, use Readiness Probes and Liveness Probes to ensure they are ready for traffic.

Create Probes that assess whether replication is in sync before accepting traffic. This can be done through custom scripts, depending on the complexity and scale of your database; a minimal sketch is shown below.
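For example, a probe script along these lines (a sketch under the assumption that checking Group Replication membership is a sufficient readiness signal for your workload; it is not a manifest from this tutorial) succeeds only if the local instance reports itself as an ONLINE member of the replication group:

#!/usr/bin/env bash
# Sketch of a readiness check: succeed only if this instance is an ONLINE member of
# the Group Replication group. Adjust credentials and checks for your environment.
set -euo pipefail
STATE=$(mysql -h127.0.0.1 -uroot -p"${MYSQL_ROOT_PASSWORD}" -N -s -e \
  "SELECT MEMBER_STATE FROM performance_schema.replication_group_members WHERE MEMBER_ID = @@server_uuid;")
if [ "${STATE}" = "ONLINE" ]; then
  exit 0
fi
echo "group replication member state is '${STATE:-unknown}', not ONLINE" >&2
exit 1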
Set a Pod Disruption Budget (PDB) policy
When a MySQL InnoDB Cluster is running on GKE, there must be a sufficient number of instances running at any time to meet the quorum requirement.
In this tutorial, given a MySQL cluster of three instances, two instances must be available to form a quorum. A PodDisruptionBudget policy allows you to limit the number of Pods that can be terminated at any given time. This is useful for both steady state operations of your stateful services and for cluster upgrades.
To ensure that a limited number of Pods are concurrently disrupted, you set the PDB for your workload to maxUnavailable: 1. This ensures that at any point in the service operation, no more than one Pod is not running.
Alternatively, you can set a minAvailable value to ensure that a minimum number of Pods are running. However, if using minAvailable alone to guarantee cluster availability, make sure that the value is increased if the size of the cluster increases. In contrast, the maxUnavailable value provides quorum protection for the cluster without any changes; the tradeoff is that only one instance can be disrupted for upgrade at a time.

The following PodDisruptionBudget policy manifest sets the maximum unavailable Pods to one for your MySQL application.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: mysql-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: mysql

To apply the PDB policy to your cluster, follow these steps:
Apply the PDB policy using kubectl.

kubectl apply -n mysql1 -f mysql-pdb-maxunavailable.yaml

View the status of the PDB.

kubectl get poddisruptionbudgets -n mysql1 mysql-pdb -o yaml

In the status section of the output, see the currentHealthy and desiredHealthy Pods counts. The output is similar to the following:

status:
...
  currentHealthy: 3
  desiredHealthy: 2
  disruptionsAllowed: 1
  expectedPods: 3
...
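Optionally, on a Standard cluster you can see the PDB in action by draining a node that hosts a MySQL Pod; the eviction respects the budget, so at most one MySQL Pod is disrupted at a time. This is an illustrative check rather than a tutorial step, and NODE_NAME is a placeholder for a node shown in the first command's output.

# Illustrative only: observe the PodDisruptionBudget limiting evictions.
kubectl get pods -n mysql1 --selector=app=mysql -o wide
kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data
# Allow scheduling on the node again when you are done.
kubectl uncordon NODE_NAME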
Plan for MySQL binary upgrades
Kubernetes and GKE provide features to facilitate upgrades for the MySQL binary. However, you need to perform some operations to prepare for the upgrades.
Keep the following considerations in mind before you begin the upgrade process:
- Upgrades should first be carried out in a test environment. For production systems, you should perform further testing in a pre-production environment.
- For some binary releases, you cannot downgrade the version once an upgrade has been performed. Take the time to understand the implications of an upgrade.
- Replication sources can replicate to a newer version. However, copying from a newer to an older version is typically not supported.
- Make sure you have a complete database backup before deploying the upgraded version.
- Keep in mind the ephemeral nature of Kubernetes Pods. Any configuration state stored by the Pod that is not on the persistent volume will be lost when the Pod is redeployed.
- For MySQL binary upgrades, use the same PDB, node pool update strategy, and Probes as described earlier.
In a production environment, you should follow these best practices:
- Create a container image with the new version of MySQL.
- Persist the image build instructions in a source control repository.
- Use an automated image build and testing pipeline such as Cloud Build, and store the image binary in an image registry such as Artifact Registry.
To keep this tutorial simple, you will not build and persist a container image; instead, you use the public MySQL images.
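If you do adopt the production approach described above, the build step can be as small as a one-line Dockerfile submitted to Cloud Build. The following sketch is illustrative only; REGION and REPO are placeholders and assume you have already created an Artifact Registry Docker repository.

# Illustrative sketch of building and storing a MySQL image with Cloud Build.
cat > Dockerfile <<'EOF'
FROM mysql/mysql-server:8.0.30
# Add organization-specific configuration or tooling layers here.
EOF
gcloud builds submit --tag REGION-docker.pkg.dev/PROJECT_ID/REPO/mysql-server:8.0.30 .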
Deploy the upgraded MySQL binary
To perform the MySQL binary upgrade, you issue a declarative command that modifies the image version of the StatefulSet resource. GKE performs the necessary steps to stop the current Pod, deploy a new Pod with the upgraded binary, and attach the persistent disk to the new Pod.
Verify that the PDB was created.
kubectl get poddisruptionbudgets -n mysql1

Get the list of stateful sets.

kubectl get statefulsets -n mysql1

Get the list of running Pods using the app label.

kubectl get pods --selector=app=mysql -n mysql1

Update the MySQL image in the stateful set.

kubectl -n mysql1 \
    set image statefulset/dbc1 \
    mysql=mysql/mysql-server:8.0.30

The output is similar to the following:

statefulset.apps/mysql image updated

Check the status of the terminating Pods and new Pods.

kubectl get pods --selector=app=mysql -n mysql1
Validate the MySQL binary upgrade
During the upgrade, you can verify the status of the rollout, the new Pods, and the existing Service.
Confirm the upgrade by running the rollout status command.

kubectl rollout status statefulset/dbc1 -n mysql1

The output is similar to the following:

partitioned roll out complete: 3 new pods have been updated...

Confirm the image version by inspecting the stateful set.

kubectl get statefulsets -o wide -n mysql1

The output is similar to the following:

NAME   READY   AGE   CONTAINERS   IMAGES
dbc1   3/3     37m   mysql        mysql/mysql-server:8.0.30

Check the status of the cluster.

kubectl -n mysql1 \
    exec -it dbc1-0 -- \
    /bin/bash \
    -c 'mysqlsh \
    --uri="root:$MYSQL_ROOT_PASSWORD@dbc1-1.mysql.mysql1.svc.cluster.local" \
    --js \
    --execute "print(dba.getClusterSet().status({extended:1})); print(\"\\n\")"'

For each cluster instance, look for the status and version values in the output. The output is similar to the following:

...
"status": "ONLINE",
"version": "8.0.30"
...
Rollback the last app deployment rollout
Warning: Some binary versions cannot be downgraded. Understand the implications and constraints before performing a binary upgrade.
When you revert the deployment of an upgraded binary version, the rollout process is reversed and a new set of Pods is deployed with the previous image version.
To revert the deployment to the previous working version, use the rollout undo command:

kubectl rollout undo statefulset/dbc1 -n mysql1

The output is similar to the following:

statefulset.apps/dbc1 rolled back

Scale your database cluster horizontally
To scale your MySQL InnoDB Cluster horizontally, you add additional nodes to the GKE cluster node pool (only required if you are using Standard), deploy additional MySQL instances, then add each instance to the existing MySQL InnoDB Cluster.
Add nodes to your Standard cluster
This operation is not needed if you are using an Autopilot cluster.
To add nodes to your Standard cluster, follow the instructions below for Cloud Shell or the Google Cloud console. For detailed steps, see Resize a node pool.
gcloud
In Cloud Shell, resize the default node pool to eight instances in each managed instance group.

gcloud container clusters resize ${CLUSTER_NAME} \
    --node-pool default-pool \
    --num-nodes=8

Console
To add nodes to your Standard cluster:
- Open the gkemulti-west1 Cluster page in the Google Cloud console.
- Select Nodes, and click on default pool.
- Scroll down to Instance groups.
- For each instance group, resize the Number of nodes value from 5 to 8 nodes.
Add MySQL Pods to the primary cluster
To deploy additional MySQL Pods to scale your cluster horizontally, follow these steps:
In Cloud Shell, update the number of replicas in the MySQL StatefulSet from three replicas to five replicas.

kubectl scale -n mysql1 --replicas=5 -f c1-mysql.yaml

Verify the progress of the deployment.

kubectl -n mysql1 get pods --selector=app=mysql -o wide

To determine whether the Pods are ready, use the --watch flag to watch the deployment. If you are using Autopilot clusters and see Pod Unschedulable errors, this might indicate GKE is provisioning nodes to accommodate the additional Pods.

Configure the group replication settings for the new MySQL instances to add to the cluster.

bash ../scripts/c1-clustersetup.sh 3 4

The script submits the commands to the instances running on the Pods with ordinals 3 through 4.
Open MySQL Shell.
kubectl -n mysql1 \
    exec -it dbc1-0 -- \
    /bin/bash \
    -c 'mysqlsh \
    --uri="root:$MYSQL_ROOT_PASSWORD@dbc1-0.mysql"'

Configure the two new MySQL instances.

dba.configureInstance('root:$MYSQL_ROOT_PASSWORD@dbc1-3.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), clusterAdmin: 'icadmin', clusterAdminPassword: os.getenv("MYSQL_ADMIN_PASSWORD")});
dba.configureInstance('root:$MYSQL_ROOT_PASSWORD@dbc1-4.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), clusterAdmin: 'icadmin', clusterAdminPassword: os.getenv("MYSQL_ADMIN_PASSWORD")});

The commands check if the instance is configured properly for MySQL InnoDB Cluster usage and perform the necessary configuration changes.
Add one of the new instances to the primary cluster.
cluster = dba.getCluster()
cluster.addInstance('icadmin@dbc1-3.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), recoveryMethod: 'clone'});

Add a second new instance to the primary cluster.

cluster.addInstance('icadmin@dbc1-4.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), recoveryMethod: 'clone'});

Obtain the ClusterSet status, which also includes the Cluster status.

clusterset = dba.getClusterSet()
clusterset.status({extended: 1})

The output is similar to the following:

"domainName": "clusterset",
"globalPrimaryInstance": "dbc1-0.mysql:3306",
"metadataServer": "dbc1-0.mysql:3306",
"primaryCluster": "mycluster",
"status": "HEALTHY",
"statusText": "All Clusters available."

Exit MySQL Shell.
\q
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project
The easiest way to avoid billing is to delete the project you created for the tutorial.
Caution: Deleting a project has the following effects:
- Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project.
- Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.
If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.
What's next
- Learn more about how the Google Cloud Observability MySQL integration collects performance metrics related to InnoDB.
- Learn more about Backup for GKE, a service for backing up and restoring workloads in GKE.
- Explore Persistent Volumes in more detail.