Best practices for SQL Server instances
You can apply several best practices to optimize Compute Engine instances that run Microsoft SQL Server. To learn how to set up a high-performance SQL Server instance, read Creating a high-performance SQL Server instance.
Use Workload Manager to evaluate and deploy SQL Server
SQL Server evaluation in Workload Manager lets you scan your SQL Server deployments against a set of predefined Google Cloud recommendations for optimal performance, directly from the Google Cloud console. For more information, see Agent for SQL Server setup instructions.
The Guided Deployment Automation tool in Workload Manager lets you configure and deploy enterprise applications on Google Cloud. You can also use Guided Deployment Automation to configure a deployment for your workload, and then generate Terraform and Ansible infrastructure as code (IaC) that you can export for further customization or use in an existing deployment pipeline. For more information, see Guided Deployment Automation.
Configuring Windows
This section covers configuration topics about how to optimize the Microsoft Windows operating system for SQL Server performance when running on Compute Engine.
Setting up Windows firewall
Best practice: Use the Windows Server Advanced Firewall, and specify the IP addresses of your client computers.
The Windows Advanced Firewall is an important security component in Windows Server. When you set up your SQL Server environment so that it can connect to the database from other client machines, configure the firewall to allow incoming traffic:
netsh advfirewall firewall add rule name="SQL Access" ^
dir=in action=allow ^
program="%programfiles%\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Binn\sqlservr.exe" ^
remoteip=LOCAL_SUBNET
When you use this firewall rule, it is a good practice to specify the IP addresses of your client machines. Specify a comma-delimited list of IP addresses without blank spaces for the remoteip parameter in place of LOCAL_SUBNET. Also, note that the path for the program parameter might change depending on the version of SQL Server that you use.
The SQL Server application image includes a SQL Server Windows firewall rule. This rule is fairly unrestricted, so consider disabling it before your system goes to production.
Tuning network connections
Best practice: Use the operating system's default network settings.
The default network settings on most operating systems are configured for connections on small computers that are connected to moderately fast networks. Such settings are usually sufficient. Furthermore, conservative defaults make sure that network traffic doesn't overwhelm the network and connected computers.
On Compute Engine, virtual machine (VM) instances are attached to a network designed by Google that offers high capacity and performance. The physical servers running your Compute Engine instances are highly optimized to take advantage of this network capacity. The virtual network drivers in your instances are also optimized, which makes the default values sufficient for most use cases.
Installing antivirus
Best practice: Follow the Microsoft guidance for antivirus software.
If you are running Windows, you should be running some antivirus software. Malware and software viruses present a significant risk to any system connected to a network, and antivirus software is a simple mitigation step you can use to protect your data. However, if the antivirus software is not configured correctly, it can negatively impact your database performance. Microsoft provides advice about how to choose antivirus software.
Optimizing for performance and stability
This section provides information about how to optimize SQL Server performance on Compute Engine and describes operational activities to help keep it running smoothly.
Moving data files and log files to a new disk
Best practice: Use a separate SSD persistent disk for log and data files.
By default, the preconfigured image for SQL Server comes with everything installed on the boot persistent disk, which mounts as the `C:` drive. Consider attaching a secondary SSD persistent disk and moving the log files and data files to the new disk.
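As an illustration of the general approach, the following T-SQL sketch moves a database's files to a secondary disk mounted as D:. The database name YOUR_DB, the logical file names, and the D:\ paths are placeholders; substitute the values reported by sys.database_files for your own database:

-- Take the database offline so the physical files can be moved
ALTER DATABASE YOUR_DB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- Copy the .mdf and .ldf files to the new disk, then point SQL Server at the new paths
ALTER DATABASE YOUR_DB MODIFY FILE (NAME = YOUR_DB, FILENAME = 'D:\Data\YOUR_DB.mdf');
ALTER DATABASE YOUR_DB MODIFY FILE (NAME = YOUR_DB_log, FILENAME = 'D:\Logs\YOUR_DB_log.ldf');
ALTER DATABASE YOUR_DB SET ONLINE;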
Using a Local SSD to improve IOPS
Best practice: Create new SQL Server instances with one or more local SSDs to store the tempdb and Windows paging files.
The ephemeral nature of local SSD technology makes it a poor candidate for use with your critical databases and important files. However, the tempdb and Windows paging file are both temporary files, so both are great candidates to move to a local SSD. This offloads a significant number of I/O operations from your SSD persistent disks. For more information about setting this up, see Setting up TempDB.
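For illustration, the following T-SQL sketch moves tempdb to a local SSD mounted as T:. The drive letter and folder are assumptions; tempdev and templog are the default logical file names. The change takes effect after the SQL Server service restarts:

-- Point tempdb's data and log files at the local SSD
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\tempdb\templog.ldf');
-- Restart the SQL Server service; tempdb is recreated at the new location on startup.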
Parallel query processing
Best practice: Set the max degree of parallelism to 8.
The recommended default setting for max degree of parallelism is to match it to the number of CPUs on the server. However, there is a point where breaking a query into 16 or 32 chunks, executing them all on different vCPUs, and then consolidating it all back into a single result takes much more time than if a single vCPU had run the query. In practice, 8 works well as a default value.
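As a minimal sketch, you can apply this server-wide setting with sp_configure (the value 8 follows the guidance above; adjust it for your own workload):

-- Enable advanced options, then set max degree of parallelism to 8
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;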
Best practice: Monitor for CXPACKET waits and incrementally increase cost threshold for parallelism.
This setting goes hand in hand with max degree of parallelism. Each unit represents a combination of CPU and I/O work required to perform a query with a serial execution plan before it is considered for a parallel execution plan. The default value is 5. Although we give no specific recommendation to change the default value, it is worth keeping an eye on and, if necessary, increasing it incrementally by 5 during load testing. One key indicator that this value might need to be increased is the presence of CXPACKET waits. Although the presence of CXPACKET waits does not necessarily indicate that this setting should change, it's a good place to start.
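The following sketch shows one way to check accumulated CXPACKET waits and then raise the threshold by 5 from its default; the value 10 is only an example for a first load-testing iteration:

-- Check CXPACKET waits accumulated since the last service restart
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET';

-- Raise the threshold incrementally during load testing
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 10;
RECONFIGURE;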
Best practice: Monitor for different wait types, and adjust the global parallel processing settings or set them at the individual database level.
Individual databases can have different parallelism needs. You can set these settings globally, and set Max DOP at the individual database level. You should observe your unique workloads, monitor for waits, and then adjust the values accordingly.
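On SQL Server 2016 and later, you can override the server-wide value for a single database with a database-scoped configuration. A sketch, with 4 as an illustrative value:

-- Override the server-wide MAXDOP for one database
USE YOUR_DB;
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;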
The SQLSkills site offers a useful performance guide that covers wait statistics inside the database. Following this guide can help you understand what is waiting and how to mitigate the delays.
Handling transaction logs
Best practice: Monitor the growth of the transaction log on your system. Consider disabling autogrowth and setting your log file to a fixed size, based on your average daily log accumulation.
One of the most overlooked sources of performance loss and intermittent slowdowns is the unmanaged growth of the transaction log. When your database is configured to use the Full recovery model, you can perform a restore to any point in time, but your transaction logs fill up faster. By default, when the transaction log file is full, SQL Server increases the size of the file to add more empty space to write more transactions, and blocks all activity on the database until it finishes. SQL Server grows each log file based on its Maximum File Size and File Growth settings.
When the file has reached its maximum size limit and cannot grow, the system issues a 9002 error and puts the database into read-only mode. If the file can grow, SQL Server expands the file size and zeroes out the empty space. The setting for File Growth defaults to 10% of the log file's current size. This is not a good default setting for performance because the larger your file grows, the longer it takes to create the new, empty space.
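If you decide to disable autogrowth and set a fixed log size as described above, a sketch of the change looks like the following. The logical file name YOUR_DB_log and the 8 GB size are assumptions; size the log from your observed daily log accumulation plus headroom:

-- Set a fixed log size and disable autogrowth (FILEGROWTH = 0)
ALTER DATABASE YOUR_DB
MODIFY FILE (NAME = YOUR_DB_log, SIZE = 8192MB, FILEGROWTH = 0);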
Best practice: Schedule regular backups of the transaction log.
Regardless of the maximum size and growth settings, schedule regular transaction log backups, which, by default, truncate old log entries and let the system reuse existing file space. This simple maintenance task can help to avoid performance dips at your peak traffic times.
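A minimal example of such a backup, assuming a backup target path of D:\Backups; schedule it with SQL Server Agent or your existing job scheduler:

-- Back up the transaction log, which allows the log space to be reused
BACKUP LOG YOUR_DB
TO DISK = N'D:\Backups\YOUR_DB_log.trn';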
Optimizing Virtual Log Files
Best practice: Monitor Virtual Log File growth and take action to prevent log file fragmentation.
The physical transaction log file is segmented into Virtual Log Files (VLFs). New VLFs are created every time the physical transaction log file has to grow. If you did not disable auto-growth, and growth happens too frequently, too many VLFs are created. This activity can result in log file fragmentation, which is similar to disk fragmentation and can adversely affect performance.
SQL Server 2014 introduced a more efficient algorithm for determining how many VLFs to create during auto-growth. Generally, if the growth is less than 1/8 the size of the current log file, SQL Server creates one VLF within that new segment. Previously, it created 8 VLFs for growth between 64 MB and 1 GB, and 16 VLFs for growth over 1 GB. You can use the following T-SQL script to check how many VLFs your database currently has. If it has thousands of VLFs, consider manually shrinking and resizing your log file:
-- Check VLFs; substitute your database name below
USE YOUR_DB
DECLARE @vlf_count INT
DBCC LOGINFO
SET @vlf_count = @@ROWCOUNT
SELECT VLFs = @vlf_count
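If the count is in the thousands, one common remediation is to back up the log, shrink the log file, and then regrow it to its target size in one step so that it is rebuilt with a small number of large VLFs. A sketch, assuming the logical file name YOUR_DB_log and an 8 GB target size:

-- Shrink the log file, then regrow it to the target size in one operation
USE YOUR_DB;
DBCC SHRINKFILE (YOUR_DB_log, 1);
ALTER DATABASE YOUR_DB
MODIFY FILE (NAME = YOUR_DB_log, SIZE = 8192MB);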
You can read more about VLFs on Brent Ozar's website.
Avoiding index fragmentation
Best practice: Regularly defragment the indexes on your most heavily modified tables.
The indexes in your tables can become fragmented, which can lead to poor performance of any queries that use those indexes. A regular maintenance schedule should include reorganizing the indexes on your most heavily modified tables. You can run the following Transact-SQL script for your database to show the indexes and their fragmentation percentage. In the example results, you can see that the PK_STOCK index is 95% fragmented. In the following SELECT statement, replace YOUR_DB with the name of your database:
SELECT stats.index_id AS id, name, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(N'YOUR_DB'), NULL, NULL, NULL, NULL) AS stats
JOIN sys.indexes AS indx
  ON stats.object_id = indx.object_id
  AND stats.index_id = indx.index_id
  AND name IS NOT NULL;

RESULTS
-------------------------------
Id  name           avg_fragmentation_in_percent
-------------------------------
1   ORDERS_I1      0
2   ORDERS_I2      0
1   ORDER_LINE_I1  0.01
1   PK_STOCK       95.5529819557039
1   PK_WAREHOUSE   0.8
When your indexes are too fragmented, you can reorganize them by using a basic ALTER script. Here is an example script that prints the ALTER statements you can run for each of your tables' indexes:
SELECT 'ALTER INDEX ALL ON ' + table_name + ' REORGANIZE;GO'
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
  AND TABLE_CATALOG = 'YOUR_DB'
Choose the tables from the result set that have the highest fragmentation, and then execute those statements incrementally. Consider scheduling this or a similar script as one of your regular maintenance jobs.
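For the most heavily fragmented indexes, such as PK_STOCK in the example results, a full rebuild is often more effective than a reorganize. A sketch, assuming the index lives on a hypothetical table named dbo.STOCK; the ONLINE = ON option requires Enterprise Edition:

-- Rebuild a single heavily fragmented index without taking it offline
ALTER INDEX PK_STOCK ON dbo.STOCK REBUILD WITH (ONLINE = ON);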
If you are restoring a database from another location to Google Cloud, you should rebuild, or at least reorganize, the indexes, and then update usage. Replace YOUR_DB with the name of your database:

dbcc updateusage (YOUR_DB)
GO
USE YOUR_DB
GO
Formatting secondary disks
Best practice: Format secondary disks with a 64 KB allocation unit.
SQL Server stores data in units of storage called extents. Extents are 64 KB in size and are made up of eight contiguous 8 KB pages. Formatting a disk with a 64 KB allocation unit lets SQL Server read and write extents more efficiently, which increases I/O performance from the disk.
To format secondary disks with a 64 KB allocation unit, run the following PowerShell command, which searches for all new and uninitialized disks in a system and formats the disks with the 64 KB allocation unit:
Get-Disk | Where-Object {$_.PartitionStyle -eq 'RAW'} | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -Confirm:$FALSE

Backing up
Best practice: Back up your data regularly using Google's backup and disaster recovery solutions for optimal protection. We recommend backing up your data at least once a day.
Google's backup and disaster recovery solutions provide the following benefits for Microsoft SQL Server:
- Efficient incremental-forever backups with true point-in-time recovery, which take less time than conventional backups and reduce the impact on production servers. They also reduce bandwidth and storage consumption, for a low recovery point objective (RPO) and total cost of ownership (TCO).
- Mount and migrate (M&M) recoveries for backups stored in Cloud Storage, for a low recovery time objective (RTO).
- Comprehensive integration with SQL Server capabilities, including support for SQL Server availability group clusters and multiple recovery options across scenarios.
- A central management pane, including dedicated monitoring, alerting, and reporting capabilities for all of your backups.
Learn more:
- Backup and DR Service Product Overview
- Feature list for Backup and DR Service
- Protect and recover Microsoft SQL Server databases
- Backup and DR Service Pricing
- Pricing Calculator
Monitoring
Best practice: Use Cloud Monitoring.
You can install the Cloud Monitoring agent for Microsoft Windows to send several monitoring data points into the Cloud Monitoring system.
By using data collection capabilities, you can fine-tune the information that you want to monitor and send it to the built-in management data warehouse. The management data warehouse can run on the same server that you are monitoring, or the data can be streamed to another SQL Server instance running the warehouse.
Bulk-loading data
Best practice: Use a separate database to stage and transform bulk databefore moving it to production servers.
It's likely that you will need to load large amounts of data into your system at least once, if not regularly. This is a resource-intensive operation, and you might reach the persistent disk IOPS limit when you do bulk loads.
There is an easy way to cut down on the disk I/O and CPU consumption of bulk load operations, with the added benefit of speeding up the execution time of your batch jobs. The solution is to create a completely separate database that uses the Simple recovery model, and then use that database for staging and transforming the bulk dataset before you insert it into your production database. You can also put this new database on a local SSD drive, if you have enough space. Using a local SSD for the staging database reduces the resource consumption of your bulk operations and the time required to complete the jobs. The final benefit is that your backup job for the production data won't have to back up all those bulk operations in the transaction log, so it will be smaller and run faster.
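A minimal sketch of such a staging database, assuming a local SSD mounted as T: and placeholder database and file names:

-- Create a staging database on the local SSD and switch it to the Simple recovery model
CREATE DATABASE StagingDB
ON (NAME = StagingDB_data, FILENAME = 'T:\Staging\StagingDB.mdf')
LOG ON (NAME = StagingDB_log, FILENAME = 'T:\Staging\StagingDB_log.ldf');
ALTER DATABASE StagingDB SET RECOVERY SIMPLE;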
Validating your setup
Best practice: Test your configuration to validate that it performs as expected.
Whenever you set up a new system, you should plan on validating the configuration and running some performance tests. This stored procedure is a great resource for evaluating your SQL Server configuration. Take some time later to read about the configuration flags, and run the procedure.
Optimizing SQL Server Enterprise Edition
SQL Server Enterprise Edition has a long list of added capabilities over Standard Edition. If you are migrating an existing license to Google Cloud, there are some performance options that you might consider implementing.
Using compressed tables
Best practice: Enable table and index compression.
It might seem counterintuitive that compressing tables could make your system perform faster, but, in most cases, that's what happens. The tradeoff is using a small amount of CPU cycles to compress the data, which eliminates the extra disk I/O required to read and write the bigger blocks. Generally, the less disk I/O your system uses, the better its performance. Instructions for estimating and enabling table and index compression are on the MSDN website.
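As an illustration of the workflow, the following sketch estimates the savings for a hypothetical dbo.STOCK table and then enables page compression on it:

-- Estimate how much space PAGE compression would save
EXEC sp_estimate_data_compression_savings 'dbo', 'STOCK', NULL, NULL, 'PAGE';

-- Enable PAGE compression on the table
ALTER TABLE dbo.STOCK REBUILD WITH (DATA_COMPRESSION = PAGE);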
Note: The table and index compression options are also available for SQL Server Standard Edition 2016 SP1 and later.

Enabling the buffer pool extension
Best practice: Use the buffer pool extension to speed data access.
The buffer pool is where the system stores clean pages. In simple terms, it stores copies of your data, mirroring what it looks like on disk. When the data changes in memory, it's called a dirty page. Dirty pages must be flushed to disk to save the changes. When your database is larger than your available memory, that puts pressure on the buffer pool, and clean pages might be dropped. When the clean pages are dropped, the system must read from disk the next time it accesses the dropped data.
The buffer pool extension feature lets you push clean pages to a local SSD instead of dropping them. This works along the same lines as virtual memory, which is to say, by swapping, and gives you access to the clean pages on the local SSD, which is faster than going to the regular disk to fetch the data.
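A minimal sketch of enabling the feature, assuming a local SSD mounted as T: and an illustrative 32 GB extension file (size the file relative to your max server memory setting):

-- Enable the buffer pool extension on the local SSD
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
  (FILENAME = 'T:\SQLBPE\bufferpool.bpe', SIZE = 32 GB);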
This technique is not nearly as fast as having enough memory, but it can give you a modest increase in throughput when your available memory is low. You can read more about buffer pool extensions and review some benchmarking results on Brent Ozar's site.
Optimizing SQL Server Licensing
Simultaneous Multithreading (SMT)
Best practice: Set the number of threads per core to 1 for most SQL Server workloads.
Simultaneous multithreading (SMT), commonly known as Hyper-Threading Technology (HTT) on Intel processors, is a feature that lets a single CPU core be logically shared as two threads. On Compute Engine, SMT is enabled on most VMs by default, which means that each vCPU in the VM runs on a single thread and each physical CPU core is shared by two vCPUs.
On Compute Engine, you can configure the number of threads per core, which effectively turns SMT off. When the number of threads per core is set to 1, vCPUs do not share physical CPU cores. This configuration significantly impacts licensing costs for Windows Server and SQL Server. When the number of threads per core is set to 1, the number of vCPUs in a VM is halved, which also halves the number of Windows Server and SQL Server licenses required. This can significantly decrease the total workload cost.
However, configuring the number of threads per core also impacts workload performance. Applications that are written to be multi-threaded can take advantage of this feature by breaking up computing work into smaller parallelizable chunks that are scheduled across multiple logical cores. This parallelization of work often increases the overall system throughput by better utilizing the available core resources. For example, when one thread is stalled, the other thread can utilize the core.
The exact performance impact of SMT on SQL Server depends on workload characteristics and the hardware platform used, because the SMT implementation differs between hardware generations. Workloads with a high volume of small transactions, such as OLTP workloads, can often take advantage of SMT and benefit from a larger performance increase. In contrast, workloads that are less parallelizable, such as OLAP workloads, benefit less from SMT. Although these patterns hold generally, consider evaluating the performance impact of SMT on a per-workload basis to determine the impact of setting the number of threads per core to 1.
The most cost-effective configuration for the majority of SQL Server workloads involves setting the number of threads per core to 1. Any performance decline can be offset by using a larger VM. In most cases, the 50% decrease in licensing cost is greater than the increased cost of the larger VM.
Example: Consider a SQL Server instance deployed in the n2-standard-16 configuration.
By default, the number of cores visible in the operating system is 16, which means that 16 vCPUs of Windows Server licenses and 16 vCPUs of SQL Server licenses are required to run the server.
PS C:\> Get-WmiObject -Class Win32_processor | Select-Object NumberOfCores, @{Name="Thread(s) per core";Expression={$_.NumberOfLogicalProcessors/$_.NumberOfCores}}

NumberOfCores Thread(s) per core
------------- ------------------
            8                  2

After following the steps to disable SMT on the SQL Server VM, the new configuration is:
PS C:\> Get-WmiObject -Class Win32_processor | Select-Object NumberOfCores, @{Name="Thread(s) per core";Expression={$_.NumberOfLogicalProcessors/$_.NumberOfCores}}

NumberOfCores Thread(s) per core
------------- ------------------
            8                  1

Now that only 8 cores are visible in the operating system, the server only requires 8 vCPUs of Windows Server and SQL Server licenses to run.
What's next
- Creating a high-performance SQL Server instance
- Creating SQL Server instances
- Creating Windows instances