Configuring a SQL Server failover cluster instance that uses Storage Spaces Direct


Microsoft SQL Server Always On Failover Cluster Instances (FCI) let you run a single SQL Server instance across multiple Windows Server Failover Cluster (WSFC) nodes. At any point in time, one of the cluster nodes actively hosts the SQL instance. In the event of a failure, WSFC automatically transfers ownership of the instance's resources to another node.

SQL Server FCI requires data to be located on shared storage so that it can be accessed across all WSFC nodes. This guide describes how you can deploy a SQL Server 2019 failover cluster instance that uses Storage Spaces Direct (S2D) for shared storage. S2D provides a software-based virtual SAN that can use Compute Engine VM data disks to store the SQL database.

The following diagram illustrates the deployment:

(Architecture diagram)

Implementing a hyper-converged architecture, the VM instances node-1 and node-2 serve as WSFC nodes and also host the shared storage. A third VM instance, witness, is used to achieve a quorum in a failover scenario. The three VM instances are distributed across three zones and share a common subnet.

Clients communicate with the SQL Server instance over an internal TCP load balancer. This load balancer uses a custom health check to determine which WSFC node is currently hosting the SQL instance and routes traffic to that instance.

This article assumes that you have already deployed Active Directory on Google Cloud and that you have basic knowledge of SQL Server, Active Directory, and Compute Engine.

Objectives

  • Deploy a WSFC comprising two SQL Server VM instances and a third VM instance that acts as a file share witness.
  • Deploy a SQL Server FCI on the WSFC.
  • Verify that the cluster is working by simulating a failover.

Costs

This tutorial uses billable components of Google Cloud.

Use the pricing calculator to generate a cost estimate based on your projected usage.

Before you begin

To complete this guide, you need the following:

  • An Active Directory domain with at least one domain controller. You can create an Active Directory domain by using Managed Microsoft AD. Alternatively, you can deploy a custom Active Directory environment on Compute Engine and set up a private DNS forwarding zone that forwards DNS queries to your domain controllers.
  • An Active Directory user that has permission to join computers to the domain and can log in by using RDP. If you're using Managed Microsoft AD, you can use the setupadmin user.
  • A Google Cloud project and VPC with connectivity to your Active Directory domain controllers.
  • A subnet to use for the WSFC VM instances.

To complete the guide, you also need a Google Cloud project:

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.


  3. Make sure that billing is enabled for your Google Cloud project.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Clean up.

Preparing the project and network

To prepare your Google Cloud project and VPC for the deployment of SQL Server FCI,do the following:

  1. In the Google Cloud console, open Cloud Shell by clicking the Activate Cloud Shell button.


  2. Initialize the following variables:

    VPC_NAME=VPC_NAME
    SUBNET_NAME=SUBNET_NAME

    Where:

    • VPC_NAME: name of your VPC
    • SUBNET_NAME: name of your subnet
  3. Set your default project ID:

    gcloud config set project PROJECT_ID

    Replace PROJECT_ID with the ID of your Google Cloud project.

  4. Set your default region:

    gcloud config set compute/region REGION

    Replace REGION with the ID of the region you want to deploy in.

Create firewall rules

To allow clients to connect to SQL Server, to allow communication between the WSFC nodes, and to enable the load balancer to perform health checks, you need to create several firewall rules. To simplify the creation of these firewall rules, you use network tags:

  • The two WSFC nodes are annotated with the wsfc-node tag.
  • All servers (including the witness) are annotated with the wsfc tag.

Create firewall rules that use these network tags:

  1. Return to your existing Cloud Shell session.
  2. Create firewall rules for the WSFC nodes:

    SUBNET_CIDR=$(gcloud compute networks subnets describe $SUBNET_NAME \
      --format=value\('ipCidrRange'\))

    gcloud compute firewall-rules create allow-all-between-wsfc-nodes \
      --direction=INGRESS \
      --action=allow \
      --rules=tcp,udp,icmp \
      --enable-logging \
      --source-tags=wsfc \
      --target-tags=wsfc \
      --network=$VPC_NAME \
      --priority 10000

    gcloud compute firewall-rules create allow-sql-to-wsfc-nodes \
      --direction=INGRESS \
      --action=allow \
      --rules=tcp:1433 \
      --enable-logging \
      --source-ranges=$SUBNET_CIDR \
      --target-tags=wsfc-node \
      --network=$VPC_NAME \
      --priority 10000
  3. Create a firewall rule that allows health checks from the IP ranges of the Google Cloud probers:

    gcloud compute firewall-rules create allow-health-check-to-wsfc-nodes \
      --direction=INGRESS \
      --action=allow \
      --rules=tcp \
      --source-ranges=130.211.0.0/22,35.191.0.0/16 \
      --target-tags=wsfc-node \
      --network=$VPC_NAME \
      --priority 10000
Note: Depending on how you've deployed Active Directory, you might need to create additional firewall rules to allow servers to join the domain. See Accessing Managed Microsoft AD from within your VPC for further details.
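
Optionally, you can list the rules you just created to confirm that they exist. This is a quick sanity check, not a required step; the name~wsfc filter is one possible way to match the three rule names used in this guide:

    gcloud compute firewall-rules list \
      --filter="name~wsfc" \
      --format="table(name,sourceRanges.list(),targetTags.list())"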

Create VM instances

You now deploy two VM instances for the failover cluster. At any point in time, only one of these VMs serves as the active FCI node while the other node serves as the failover node. The two VM instances must meet the following requirements:

  • They are located in the same region so that they can be accessed by an internal TCP load balancer.
  • Their guest agent is configured to use WSFC mode. In this mode, the guest agent ignores the IP addresses of internal load balancers when configuring the local network interface. This behavior is necessary to prevent IP address conflicts during WSFC failover events. A sketch of an alternative way to enable this mode follows this list.
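
In this guide, you enable WSFC mode by attaching the enable-wsfc=true metadata key when you create the VMs. As a minimal sketch of an alternative, you could instead set the flag directly on each node; the file path and the [wsfc] section name are assumptions based on the Windows guest environment's documented configuration file layout:

    # Sketch: enable WSFC mode through the guest agent's configuration file
    # instead of instance metadata. Path and section name are assumptions.
    $configFile = 'C:\Program Files\Google\Compute Engine\instance_configs.cfg'
    Add-Content $configFile "`r`n[wsfc]`r`nenable = true"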

You use a SQL Server premium image, which has SQL Server 2019 preinstalled.

Note: If you plan to bring your own licenses for SQL Server by using the license mobility program, select Windows Server base images for these nodes and install SQL Server by using your own product keys.

To provide a tie-breaking vote and achieve a quorum for the failover scenario, you deploy a third VM that serves as a file share witness.

  1. Return to your existing Cloud Shell session.
  2. Create a specialize script for the WSFC nodes. The script installs the necessary Windows features and creates firewall rules for WSFC and SQL Server:

    cat << "EOF" > specialize-node.ps1
    $ErrorActionPreference = "stop"

    # Install required Windows features
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
    Install-WindowsFeature RSAT-AD-PowerShell

    # Open firewall for WSFC
    netsh advfirewall firewall add rule name="Allow SQL Server health check" dir=in action=allow protocol=TCP localport=59997

    # Open firewall for SQL Server
    netsh advfirewall firewall add rule name="Allow SQL Server" dir=in action=allow protocol=TCP localport=1433
    EOF

  3. Create the VM instances. On the two VMs that serve as S2D and WSFC nodes, attach additional data disks and enable WSFC mode by setting the metadata key enable-wsfc to true:

    REGION=$(gcloud config get-value compute/region)
    PD_SIZE=50
    MACHINE_TYPE=n2-standard-8

    gcloud compute instances create node-1 \
      --zone $REGION-a \
      --machine-type $MACHINE_TYPE \
      --subnet $SUBNET_NAME \
      --image-family sql-ent-2019-win-2022 \
      --image-project windows-sql-cloud \
      --tags wsfc,wsfc-node \
      --boot-disk-size 50 \
      --boot-disk-type pd-ssd \
      --boot-disk-device-name "node-1" \
      --create-disk=name=node-1-datadisk-1,size=$PD_SIZE,type=pd-ssd,auto-delete=no \
      --create-disk=name=node-1-datadisk-2,size=$PD_SIZE,type=pd-ssd,auto-delete=no \
      --create-disk=name=node-1-datadisk-3,size=$PD_SIZE,type=pd-ssd,auto-delete=no \
      --create-disk=name=node-1-datadisk-4,size=$PD_SIZE,type=pd-ssd,auto-delete=no \
      --metadata enable-wsfc=true \
      --metadata-from-file=sysprep-specialize-script-ps1=specialize-node.ps1

    gcloud compute instances create node-2 \
      --zone $REGION-b \
      --machine-type $MACHINE_TYPE \
      --subnet $SUBNET_NAME \
      --image-family sql-ent-2019-win-2022 \
      --image-project windows-sql-cloud \
      --tags wsfc,wsfc-node \
      --boot-disk-size 50 \
      --boot-disk-type pd-ssd \
      --boot-disk-device-name "node-2" \
      --create-disk=name=node-2-datadisk-1,size=$PD_SIZE,type=pd-ssd,auto-delete=no \
      --create-disk=name=node-2-datadisk-2,size=$PD_SIZE,type=pd-ssd,auto-delete=no \
      --create-disk=name=node-2-datadisk-3,size=$PD_SIZE,type=pd-ssd,auto-delete=no \
      --create-disk=name=node-2-datadisk-4,size=$PD_SIZE,type=pd-ssd,auto-delete=no \
      --metadata enable-wsfc=true \
      --metadata-from-file=sysprep-specialize-script-ps1=specialize-node.ps1

    gcloud compute instances create "witness" \
      --zone $REGION-c \
      --machine-type n2-standard-2 \
      --subnet $SUBNET_NAME \
      --image-family=windows-2022 \
      --image-project=windows-cloud \
      --tags wsfc \
      --boot-disk-size 50 \
      --boot-disk-type pd-ssd \
      --metadata sysprep-specialize-script-ps1="add-windowsfeature FS-FileServer"
    Note: Depending on your performance requirements, consider using a machine type larger than n2-standard-8 for the WSFC nodes.

    Note: For the purpose of this tutorial, and to fit within the default regional SSD persistent disk quota, the size of the disks attached to each VM is smaller than it would be in a production environment. For better performance and to accommodate a larger database, you would increase the size of each disk. For more information about choosing S2D drives, refer to the S2D documentation.
  4. To join the three VM instances to Active Directory, do the following for each of them:

    1. Monitor the initialization process of the VM by viewing its serial port output:

      gcloud compute instances tail-serial-port-output NAME

      Replace NAME with the name of the VM instance.

      Wait for a few minutes until you see the output Instance setup finished, then press Ctrl+C. At this point, the VM instance is ready to be used.

    2. Create a username and password for the VM instance.

    3. Connect to the VM by using Remote Desktop and log in using the username and password created in the previous step.

    4. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).

    5. Confirm the elevation prompt by clicking Yes.

    6. Join the computer to your Active Directory domain and restart:

      Add-Computer -Domain DOMAIN -Restart

      Replace DOMAIN with the DNS name of your Active Directory domain.

      Wait for approximately 1 minute for the restart to complete.
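
      Optionally, after the restart, you can reconnect and confirm the domain join with a quick check:

      (Get-CimInstance Win32_ComputerSystem).Domain

      The command should print the DNS name of your domain.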

Reserve cluster IP addresses

You now reserve two static IP addresses in your VPC network. The two addresses serve different purposes:

  • Load balancer IP: This IP address is used by clients to connect to SQL Server.
  • Cluster IP: This IP address is only used internally by WSFC.

To reserve the static IP addresses, do the following:

  1. Reserve a static IP address for the internal load balancer and capture the address in a new environment variable named LOADBALANCER_ADDRESS:

    gcloud compute addresses create wsfc \
      --subnet $SUBNET_NAME \
      --region $(gcloud config get-value compute/region)

    LOADBALANCER_ADDRESS=$(gcloud compute addresses describe wsfc \
      --region $(gcloud config get-value compute/region) \
      --format=value\(address\)) && \
    echo "Load Balancer IP: $LOADBALANCER_ADDRESS"

    Note the IP address; you need it later.

  2. Reserve another static IP address to use as the cluster IP:

    gcloud compute addresses create wsfc-cluster \
      --subnet $SUBNET_NAME \
      --region $(gcloud config get-value compute/region) && \
    CLUSTER_ADDRESS=$(gcloud compute addresses describe wsfc-cluster \
      --region $(gcloud config get-value compute/region) \
      --format=value\(address\)) && \
    echo "Cluster IP: $CLUSTER_ADDRESS"

    Note the IP address; you need it later.

Your project and VPC are now ready for the deployment of the WSFC and SQL Server.

Create a witness file share

To prepare witness to serve as the file share witness, create a file share and grant yourself and the two WSFC nodes access to it:

  1. Connect to witness by using Remote Desktop. Log in with your domain user account.
  2. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
  3. Confirm the elevation prompt by clicking Yes.
  4. Create the witness folder and share the folder:

    New-Item "C:\QWitness" -Type directoryicacls C:\QWitness\ /grant 'node-1$:(OI)(CI)(M)'icacls C:\QWitness\ /grant 'node-2$:(OI)(CI)(M)'New-SmbShare `  -Name QWitness `  -Path "C:\QWitness" `  -Description "SQL File Share Witness" `  -FullAccess  $env:username,node-1$,node-2$

Deploying the failover cluster

You now use the VM instances to deploy a WSFC and SQL Server.

Deploy WSFC

You are now ready to create the failover cluster:

  1. Connect to node-1 by using Remote Desktop. Log in with your domain user account.
  2. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
  3. Confirm the elevation prompt by clicking Yes.
  4. Create a new cluster:

    New-Cluster `
      -Name windows-fci `
      -Node node-1,node-2 `
      -NoStorage `
      -StaticAddress CLUSTER_ADDRESS

    Replace CLUSTER_ADDRESS with the cluster IP address that you reserved earlier.

    The command creates a computer account windows-fci in your Active Directory domain.

  5. Return to the PowerShell session on witness and grant the computer account windows-fci permission to access the file share:

    icacls C:\QWitness\ /grant 'windows-fci$:(OI)(CI)(M)'

    Grant-SmbShareAccess `
      -Name QWitness `
      -AccountName 'windows-fci$' `
      -AccessRight Full `
      -Force

  6. Return to the PowerShell session on node-1 and configure the cluster to use the file share on witness as the cluster quorum witness:

    Set-ClusterQuorum -FileShareWitness \\witness\QWitness
  7. Verify that the cluster was created successfully:

    Test-Cluster

    You might see some warnings that can be safely ignored:

    WARNING: System Configuration - Validate All Drivers Signed: The test reported some warnings.
    WARNING: Network - Validate Network Communication: The test reported some warnings.
    WARNING:
    Test Result:
    HadUnselectedTests, ClusterConditionallyApproved
    Testing has completed for the tests you selected. You should review the warnings in the Report. A cluster solution is supported by Microsoft only if you run all cluster validation tests, and all tests succeed (with or without warnings).

    You can also launch the Failover Cluster Manager MMC snap-in to review the cluster's health by running cluadmin.msc.

  8. If you're using Managed Microsoft AD, add the computer account used by WSFC to the Cloud Service Domain Join Accounts group so that it can join computers to the domain:

    Install-WindowsFeature RSAT-ADDS

    Add-ADGroupMember `
      -Identity "Cloud Service Domain Join Accounts" `
      -Members windows-fci$

Enabling Storage Spaces Direct

You now enable S2D and create a cluster shared volume that combines the data disks that you created earlier:

  1. Return to the PowerShell session on node-1.
  2. Enable S2D:

    Enable-ClusterStorageSpacesDirect

    Optionally, if you want better disk performance, you can add SCSI local SSDs to your S2D nodes in addition to standard SSD persistent disks. The local SSDs can serve as the S2D caching layer. Make the number of capacity drives (SSD persistent disks in this case) a multiple of the number of local SSDs. To enable S2D with caching, run the following command instead:

    Enable-ClusterStorageSpacesDirect -CacheDeviceModel "EphemeralDisk"

    Accept the default when prompted to confirm. You might see some warnings that can be safely ignored:

    WARNING: 2021/04/08-13:12:26.159 Node node-1: No disks found to be used for cache
    WARNING: 2021/04/08-13:12:26.159 Node node-2: No disks found to be used for cache
  3. Optionally, set the Cluster Shared Volume (CSV) in-memory cache to 2048 MB for better read throughput:

    (Get-Cluster).BlockCacheSize = 2048
  4. Create a new volume that uses the Cluster Shared Volume version of ReFS and a 64 KB allocation unit size:

    New-Volume `
      -StoragePoolFriendlyName S2D* `
      -FriendlyName FciVolume `
      -FileSystem CSVFS_ReFS `
      -UseMaximumSize `
      -AllocationUnitSize 65536
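
    Optionally, confirm that the new volume is online and available as a Cluster Shared Volume; this is a quick check, not a required step:

    Get-ClusterSharedVolume

    The output should show a volume whose name contains FciVolume, with its state set to Online.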

Testing storage pool failover

Optionally, you can now test whether the storage pool failover works properly:

  1. Connect to node-2 by using Remote Desktop. Log in with your domain user account.
  2. Right-click the Start button (or press Win+X) and select Run.
  3. Enter cluadmin.msc and select OK.
  4. In the left window pane, navigate to Failover Cluster Manager > windows-fci > Storage > Pools.

    You should see a pool named Cluster Pool 1 with Owner node set to node-1.

  5. Return to Cloud Shell and reset the node-1 VM to simulate a failover:

    gcloud compute instances reset node-1 --zone $REGION-a
  6. Return to the Failover Cluster Manager on node-2.

  7. Observe the status of the storage pool by repeatedly pressing F5 to refresh the view.

    After about 30 seconds, the owner node should automatically switch to node-2.
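
    Instead of refreshing the MMC view, you can also check pool ownership from the PowerShell session on node-2. This is a minimal check:

    Get-ClusterResource |
      Where-Object { $_.ResourceType -eq "Storage Pool" } |
      Format-Table Name, State, OwnerNode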

Remove the default SQL Server installation

You now remove the default SQL Server installation from the two nodes and replace it with a new FCI configuration.

For each of the two WSFC nodes, node-1 and node-2, perform the following steps:

  1. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
  2. Confirm the elevation prompt by clicking Yes.
  3. Remove the default SQL Server instance:

    C:\sql_server_install\Setup.exe /Action=Uninstall /FEATURES=SQL,AS,IS,RS /INSTANCENAME=MSSQLSERVER /Q
  4. Remove the Microsoft OLE DB driver:

    Get-Package -Name "Microsoft OLE*" | Uninstall-Package -Force
  5. Remove the Microsoft ODBC driver:

    Get-Package -Name "Microsoft ODBC*" | Uninstall-Package -Force
  6. Restart the computer:

    Restart-Computer
  7. Wait for approximately 1 minute for the restart to complete.

Install SQL Server FCI

Before you install the new FCI configuration, verify that node-1 is the active node in the cluster:

  1. Reconnect to node-1 by using Remote Desktop and log in using your domain user account.
  2. Right-click the Start button (or press Win+X) and select Run.
  3. Enter cluadmin.msc and select OK.
  4. In the left window pane, navigate to Failover Cluster Manager > windows-fci.

    Verify that the current host server is set to node-1.

    If the current host server is set to node-2, right-click windows-fci in the left window pane, select More actions > Move core cluster resources > Select node… > node-1, and click OK.

  5. In the left window pane, navigate to Failover Cluster Manager > windows-fci > Storage > Pools.

    Verify that the owner node of Cluster Pool 1 is set to node-1.

    If the owner node is set to node-2, right-click the pool, select Move > Select Node > node-1, and click OK.

You now create a new SQL Server failover cluster installation on node-1:

  1. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
  2. Confirm the elevation prompt by clicking Yes.
  3. Create a domain user account for SQL Server and the SQL Server Agent, and assign a password:

    Active Directory

    $Credential = Get-Credential -UserName sql_server -Message 'Enter password'

    New-ADUser `
      -Name "sql_server" `
      -Description "SQL Agent and SQL Admin account." `
      -AccountPassword $Credential.Password `
      -Enabled $true -PasswordNeverExpires $true

    Managed Microsoft AD

    $Credential = Get-Credential -UserName sql_server -Message 'Enter password'

    New-ADUser `
      -Name "sql_server" `
      -Description "SQL Agent and SQL Admin account." `
      -AccountPassword $Credential.Password `
      -Enabled $true -PasswordNeverExpires $true `
      -Path "OU=Cloud,DOMAIN"

    Replace DOMAIN with the distinguished name of your domain, for example DC=example,DC=org.

  4. Start the SQL Server setup:

    & c:\sql_server_install\setup.exe
  5. In the menu on the left, select Installation.

  6. Select New SQL Server failover cluster installation.

  7. On the Microsoft Update page, select Next to start the installation.

  8. On the Install Failover Cluster Rules page, you might see the warnings MSCS cluster verification warnings and Windows firewall. You can ignore these warnings and select Next.

  9. On the Product Key page, keep the defaults and select Next.

  10. On the License Terms page, review the terms and, if you accept, select Next.

  11. On the Feature Selection page, select Database Engine Services and select Next.

  12. On the Instance Configuration page, enter sql for the network name and the named instance, and select Next.

  13. On the Cluster Resource Group page, keep the defaults and select Next.

  14. On the Cluster Disk Selection page, enable Cluster Virtual Disk (FciVolume) and disable all other disks. Select Next.

  15. On the Cluster Network Configuration page, configure the following settings, then select Next:

    • DHCP: clear
    • IP address: enter the IP address of the internal load balancer.
  16. On the Server Configuration page, configure the following settings for both SQL Server Agent and SQL Server Database Engine:

    • Account name: DOMAIN\sql_server, where DOMAIN is the NetBIOS name of your Active Directory domain
    • Password: Enter the password that you created earlier

  17. Select the Collation tab and select the collation that you want to use. Then click Next.

  18. On the Database Engine Configuration page, select Add current user to designate the current user as SQL Server administrator. Then select Next.

  19. On the Ready to Install page, review the settings, then select Install.

  20. After the installation completes, select Close.

Your Active Directory domain now contains a computer account sql that represents the SQL Server instance, and a corresponding DNS entry that points to the IP address of the internal load balancer.
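
Optionally, you can verify the DNS entry from a PowerShell session on either node. Assuming default DNS settings for domain-joined machines, the name should resolve to the load balancer's IP address:

    Resolve-DnsName sql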

Now add node-2 to the SQL Server failover cluster:

  1. Connect to node-2 by using Remote Desktop and log in using your domain user account.
  2. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
  3. Confirm the elevation prompt by clicking Yes.
  4. Start the SQL Server setup:

    & c:\sql_server_install\setup.exe
  5. In the menu on the left, select Installation.

  6. Select Add node to a SQL Server failover cluster.

  7. Follow the instructions of the installation wizard and accept the default settings until you reach the Service Accounts page.

  8. On the Service Accounts page, enter the password that you created earlier for both SQL Server Agent and SQL Server Database Engine. Then select Next.

  9. On the Ready to Install page, review the settings, then select Install.

  10. After the installation completes, select Close.

Configure health checks

As a final step, configure the cluster to expose a health check endpoint that can be used by an internal load balancer:

  1. Return to the PowerShell session on node-2.
  2. Initialize a variable with the IP address of the load balancer:

    $LoadBalancerIP = 'IP_ADDRESS'

    Replace IP_ADDRESS with the wsfc address that you reserved earlier.

  3. Configure the Failover Cluster to respond to the health check service:

    $SqlGroup = Get-ClusterGroup |
      Where-Object {$_.Name.StartsWith("SQL Server")}

    $SqlIpAddress = Get-ClusterResource |
      Where-Object {$_.Name.StartsWith("SQL IP Address")}

    $SqlIpAddress | Set-ClusterParameter -Multiple @{
      'Address'=$LoadBalancerIP;
      'ProbePort'=59997;
      'SubnetMask'='255.255.255.255';
      'Network'=(Get-ClusterNetwork).Name;
      'EnableDhcp'=0
    }
  4. Restart the cluster resource:

    $SqlIpAddress | Stop-ClusterResource
    $SqlIpAddress | Start-ClusterResource
  5. Restart the cluster group:

    $SqlGroup | Stop-ClusterGroup
    $SqlGroup | Start-ClusterGroup
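
    After the cluster group restarts, the node that currently hosts the SQL Server group should be listening on the probe port. Optionally, confirm this with a minimal check on the active node:

    Get-NetTCPConnection -LocalPort 59997 -State Listen

    If the command returns a listener, the health check endpoint is ready for the load balancer.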

Create an internal load balancer

To provide a single endpoint for SQL Server clients, you now deploy an internal load balancer. The load balancer uses a health check that ensures traffic is directed to the active node of the WSFC.

  1. Return to your existing Cloud Shell session.
  2. Create two unmanaged instance groups, one per zone, and add the two nodes to the groups:

    gcloud compute instance-groups unmanaged create wsfc-group-1 --zone $REGION-a

    gcloud compute instance-groups unmanaged add-instances wsfc-group-1 --zone $REGION-a \
      --instances node-1

    gcloud compute instance-groups unmanaged create wsfc-group-2 --zone $REGION-b

    gcloud compute instance-groups unmanaged add-instances wsfc-group-2 --zone $REGION-b \
      --instances node-2

  3. Create a health check that the load balancer can use to determine which node is the active node:

    gcloud compute health-checks create tcp wsfc-healthcheck \
      --check-interval="2s" \
      --healthy-threshold=1 \
      --unhealthy-threshold=2 \
      --port=59997 \
      --timeout="1s"

    The health check probes port 59997, which is the port you previously configured as ProbePort for the SQL Server IP address resource.

  4. Create a backend service and add the two instance groups:

    gcloud compute backend-services create wsfc-backend \
      --load-balancing-scheme internal \
      --region $(gcloud config get-value compute/region) \
      --health-checks wsfc-healthcheck \
      --protocol tcp

    gcloud compute backend-services add-backend wsfc-backend \
      --instance-group wsfc-group-1 \
      --instance-group-zone $REGION-a \
      --region $REGION

    gcloud compute backend-services add-backend wsfc-backend \
      --instance-group wsfc-group-2 \
      --instance-group-zone $REGION-b \
      --region $REGION
  5. Create the internal load balancer:

    gcloud compute forwarding-rules create wsfc-sql \
      --load-balancing-scheme internal \
      --address $LOADBALANCER_ADDRESS \
      --ports 1433 \
      --network $VPC_NAME \
      --subnet $SUBNET_NAME \
      --region $REGION \
      --backend-service wsfc-backend
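
    Optionally, verify that the load balancer sees exactly one healthy backend (the active WSFC node):

    gcloud compute backend-services get-health wsfc-backend \
      --region $REGION

    The node that currently hosts the SQL Server group should report HEALTHY; the other node reports UNHEALTHY by design.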

Testing the failover cluster

You've completed the installation of the failover cluster, but you still have to test whether the cluster works correctly.

Prepare a client

Create a new VM instance which you can use to connect to the failover cluster:

  1. Return to your existing Cloud Shell session.
  2. Create a new VM instance:

    gcloud compute instances create sqlclient \
      --zone $REGION-a \
      --machine-type n2-standard-2 \
      --subnet $SUBNET_NAME \
      --image-family sql-ent-2019-win-2022 \
      --image-project windows-sql-cloud \
      --boot-disk-size 50 \
      --boot-disk-type pd-ssd
  3. Monitor the initialization process of the VM by viewing its serial port output:

    gcloud compute instances tail-serial-port-output sqlclient

    Wait for a few minutes until you see the output Instance setup finished, then press Ctrl+C. At this point, the VM instance is ready to be used.

  4. Create a username and password for the VM instance.

  5. Connect to the VM by using Remote Desktop and log in using the username and password created in the previous step.

  6. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).

  7. Confirm the elevation prompt by clicking Yes.

  8. Join the computer to your Active Directory domain:

    Add-Computer -Domain DOMAIN

    Replace DOMAIN with the DNS name of your Active Directory domain.

  9. Restart the computer:

    Restart-Computer

    Wait for approximately 1 minute for the restart to complete.

Run the test

Use the sqlclient VM to test that you can connect to the failover cluster and to verify that the failover works correctly:

  1. Connect to sqlclient by using Remote Desktop and log in using your domain user account.
  2. Right-click the Start button (or press Win+X) and click Windows PowerShell.
  3. Connect to the SQL Server cluster by using TCP/IP and the DNS name sql, and query the sys.dm_os_cluster_nodes view:

    & "$env:ProgramFiles\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\SQLCMD.EXE" `   -S tcp:sql -E -Q "SELECT * FROM sys.dm_os_cluster_nodes"

    The output should look like this:

    NodeName                       status      status_description is_current_owner
    ------------------------------ ----------- ------------------ ----------------
    NODE-1                                   0 up                                1
    NODE-2                                   0 up                                0

    (2 rows affected)

    Notice that node-1 is the current owner of the SQL Server failover cluster resource.

  4. Return to Cloud Shell and stop the node-1 VM to test the failover scenario:

    gcloud compute instances stop node-1 --zone $REGION-a
  5. Repeat the query:

    & "$env:ProgramFiles\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\SQLCMD.EXE" `   -S tcp:sql -E -Q "SELECT * FROM sys.dm_os_cluster_nodes"

    The output should now look like this:

    NodeName                       status      status_description is_current_owner
    ------------------------------ ----------- ------------------ ----------------
    NODE-1                                   1 down                              0
    NODE-2                                   0 up                                1

    (2 rows affected)

    Notice that despite the loss of node-1, the query succeeds and shows that node-2 is now the current owner of the failover cluster.

Limitations

  • S2D is supported only on Windows Server 2016 and later.
  • With S2D, each disk contains only a partial view of the overall data, so taking a snapshot of an individual persistent disk isn't enough to back up your data. Use native SQL Server backups instead; a sketch follows this list.
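
As a sketch of such a backup, you could run a native backup against the clustered instance from a domain-joined client such as sqlclient. The database name and the target path below are placeholders; the target must be writable by the SQL Server service account:

    # Hypothetical example: run a native SQL Server backup against the
    # clustered instance. Replace DATABASE_NAME and the target path with
    # your own values.
    & "$env:ProgramFiles\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\SQLCMD.EXE" `
      -S tcp:sql -E `
      -Q "BACKUP DATABASE DATABASE_NAME TO DISK = N'C:\ClusterStorage\Volume1\DATABASE_NAME.bak'"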

Clean up

After you finish the tutorial, you can clean up the resources that you created so that they stop using quota and incurring charges. The following sections describe how to delete or turn off these resources.

Deleting the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

    Caution: Deleting a project has the following effects:
    • Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project.
    • Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.

    If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.

  1. In the Google Cloud console, go to the Manage resources page.


  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.
