Identify where latency occurs
This page shows you how to identify and troubleshoot latency issues in your Spanner components. To learn more about possible latency points in a Spanner request, see Latency points in a Spanner request.
You can measure and compare the request latencies between different components and the database to determine which component is causing the latency. These latencies include end-to-end latency, Google Front End (GFE) latency, Spanner API request latency, and query latency.
Note: You can also use OpenTelemetry to capture and visualize end-to-end latency, GFE latency, and query latency. For more information, see Capture custom client-side metrics using OpenTelemetry.

In your client application that uses your service, confirm that end-to-end latency has increased. Check the following dimensions from your client-side metrics. For more information, see Client-side metrics descriptions.
- client_name: the client library name and version.
- location: the Google Cloud region where the client-side metrics are published. If your application is deployed outside Google Cloud, then the metrics are published to the global region.
- method: the RPC method name, for example, spanner.commit.
- status: the RPC status, for example, OK or INTERNAL.
Group by these dimensions to see if the issue is limited to a specific client, status, or method. For dual-region or multi-regional workloads, see if the issue is limited to a specific client or Spanner region.
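If you want to inspect these client-side metrics programmatically rather than in the Google Cloud console, you can query them through the Cloud Monitoring API and group by the same dimensions. The following Python sketch is illustrative only: it assumes the google-cloud-monitoring library is installed and uses a placeholder PROJECT_ID. It reads the p99 of the spanner.googleapis.com/client/operation_latencies distribution over the last hour, grouped by method and status (label names follow the dimensions listed above).

```python
import time

from google.cloud import monitoring_v3

PROJECT_ID = "your-project-id"  # placeholder

client = monitoring_v3.MetricServiceClient()
now = int(time.time())

results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        # End-to-end client latency distribution published by the client library.
        "filter": 'metric.type = "spanner.googleapis.com/client/operation_latencies"',
        "interval": monitoring_v3.TimeInterval(
            {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
        ),
        "aggregation": monitoring_v3.Aggregation(
            {
                "alignment_period": {"seconds": 300},
                # p99 per 5-minute window, one series per (method, status) pair.
                "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_PERCENTILE_99,
                "cross_series_reducer": monitoring_v3.Aggregation.Reducer.REDUCE_MEAN,
                "group_by_fields": ["metric.labels.method", "metric.labels.status"],
            }
        ),
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for ts in results:
    labels = dict(ts.metric.labels)
    latest_p99 = ts.points[0].value.double_value  # most recent point first
    print(f"method={labels.get('method')} status={labels.get('status')} "
          f"p99 ~ {latest_p99:.1f} ms")
```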
Check your client application health, especially the computing infrastructure on the client side (for example, VM, CPU, or memory utilization, connections, file descriptors, and so on).
Check latency in Spanner components by viewing the client-side metrics:

a. Check end-to-end latency using the spanner.googleapis.com/client/operation_latencies metric.

b. Check Google Front End (GFE) latency using the spanner.googleapis.com/client/gfe_latencies metric.

Check the following dimensions for Spanner metrics:
- database: the Spanner database name.
- method: the RPC method name, for example, spanner.commit.
- status: the RPC status, for example, OK or INTERNAL.
Group by these dimensions to see if the issue is limited to a specific database, status, or method. For dual-region or multi-regional workloads, check to see if the issue is limited to a specific region.
Check Spanner API request latency using the spanner.googleapis.com/api/request_latencies metric. For more information, see Spanner metrics.

If you have high end-to-end latency, but low GFE latency and low Spanner API request latency, the application code might have an issue. It could also indicate a networking issue between the client and the regional GFE. If your application has a performance issue that causes some code paths to be slow, then the end-to-end latency for each API request might increase. There might also be an issue in the client computing infrastructure that was not detected in the previous step.
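To compare where time is being spent, you can read the p99 of each latency metric side by side. This is a sketch under the same assumptions as the earlier example (google-cloud-monitoring installed, PROJECT_ID as a placeholder); the metric names are the ones listed in this section.

```python
import time

from google.cloud import monitoring_v3

PROJECT_ID = "your-project-id"  # placeholder


def latest_p99_ms(metric_type: str, window_s: int = 3600) -> float:
    """Return the most recent aggregated p99 (in ms) for a Spanner latency metric."""
    client = monitoring_v3.MetricServiceClient()
    now = int(time.time())
    series = list(client.list_time_series(
        request={
            "name": f"projects/{PROJECT_ID}",
            "filter": f'metric.type = "{metric_type}"',
            "interval": monitoring_v3.TimeInterval(
                {"end_time": {"seconds": now}, "start_time": {"seconds": now - window_s}}
            ),
            "aggregation": monitoring_v3.Aggregation(
                {
                    "alignment_period": {"seconds": 300},
                    "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_PERCENTILE_99,
                    # Collapse all series (methods, statuses, databases) into one.
                    "cross_series_reducer": monitoring_v3.Aggregation.Reducer.REDUCE_MEAN,
                }
            ),
        }
    ))
    return series[0].points[0].value.double_value if series else float("nan")


for name, metric in [
    ("end-to-end", "spanner.googleapis.com/client/operation_latencies"),
    ("GFE", "spanner.googleapis.com/client/gfe_latencies"),
    ("Spanner API", "spanner.googleapis.com/api/request_latencies"),
]:
    print(f"{name:<12} p99 ~ {latest_p99_ms(metric):.1f} ms")
```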
If you have high GFE latency but low Spanner API request latency, it might have one of the following causes:

- Accessing a database from another region. This can lead to high GFE latency and low Spanner API request latency. For example, traffic from a client in the us-east1 region to an instance in the us-central1 region might show high GFE latency but lower Spanner API request latency.
- There's an issue at the GFE layer. Check the Google Cloud Status Dashboard to see if there are any ongoing networking issues in your region. If there aren't any issues, then open a support case and include this information so that support engineers can help troubleshoot the GFE.
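As an illustration only, the interpretation rules above can be written as a small decision helper. The threshold and the example values are assumptions for the sketch; in practice, compare the measured latencies against your own baseline.

```python
def suspect_component(end_to_end_ms: float, gfe_ms: float, api_ms: float,
                      high_ms: float = 100.0) -> str:
    """Map measured p99 latencies to the likely latency source.

    `high_ms` is a hypothetical threshold; derive a real one from your baseline.
    """
    if end_to_end_ms >= high_ms and gfe_ms < high_ms and api_ms < high_ms:
        return ("High end-to-end, low GFE, low API: check the application code and "
                "the network path between the client and the regional GFE.")
    if gfe_ms >= high_ms and api_ms < high_ms:
        return ("High GFE, low API: check for cross-region access or a GFE-layer "
                "issue (Google Cloud Status Dashboard).")
    return "Otherwise: continue with the Spanner-side checks (CPU, hotspots, queries)."


# Example with made-up numbers:
print(suspect_component(end_to_end_ms=250.0, gfe_ms=40.0, api_ms=35.0))
```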
Check the CPU utilization of the instance. If the CPU utilization of the instance is above the recommended level, you should manually add more nodes or set up autoscaling. For more information, see Autoscaling overview.
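A quick way to check instance CPU utilization outside the console is to query the spanner.googleapis.com/instance/cpu/utilization metric. This sketch makes the same assumptions as the earlier ones (google-cloud-monitoring installed, placeholder project and instance IDs); check the metric's documented unit before interpreting the values, which are typically reported as a fraction of provisioned CPU.

```python
import time

from google.cloud import monitoring_v3

PROJECT_ID = "your-project-id"    # placeholder
INSTANCE_ID = "your-instance-id"  # placeholder

client = monitoring_v3.MetricServiceClient()
now = int(time.time())

results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": (
            'metric.type = "spanner.googleapis.com/instance/cpu/utilization" '
            f'AND resource.labels.instance_id = "{INSTANCE_ID}"'
        ),
        "interval": monitoring_v3.TimeInterval(
            {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
        ),
        "aggregation": monitoring_v3.Aggregation(
            {
                "alignment_period": {"seconds": 300},
                "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_MEAN,
            }
        ),
    }
)

for ts in results:
    latest = ts.points[0].value.double_value
    print(f"{dict(ts.metric.labels)}: mean CPU utilization in last window = {latest:.3f}")
```

Compare the result against the recommended maximum CPU utilization for your instance configuration, as described in the Spanner documentation.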
Observe and troubleshoot potential hotspots or unbalanced access patterns using Key Visualizer, and try to roll back any application code changes that strongly correlate with the issue timeframe.
Note: We recommend that you follow Schema design best practices to ensure that your access is balanced across Spanner computing resources.

Check for any traffic pattern changes.
Check Query insights and Transaction insights to see if there might be any query or transaction performance bottlenecks.
Use the procedures in Oldest active queries to identify any expensive queries that might cause a performance bottleneck, and cancel those queries as needed.
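If you prefer to list the oldest active queries from code instead of from the console, a minimal sketch using the Spanner Python client is shown below. The SPANNER_SYS.OLDEST_ACTIVE_QUERIES table comes from the Oldest active queries topic; the exact column set shown here (START_TIME, SESSION_ID, TEXT) is an assumption, so verify it against that topic, and follow the documented procedure to cancel a query.

```python
from google.cloud import spanner

# Placeholders: substitute your own project, instance, and database IDs.
client = spanner.Client(project="your-project-id")
database = client.instance("your-instance-id").database("your-database-id")

with database.snapshot() as snapshot:
    # Assumed columns; check the Oldest active queries documentation for your release.
    rows = snapshot.execute_sql(
        "SELECT start_time, session_id, text "
        "FROM spanner_sys.oldest_active_queries "
        "ORDER BY start_time"
    )
    for start_time, session_id, text in rows:
        print(f"{start_time}  session={session_id}")
        print(f"  {text[:120]}")
```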
Use the procedures in the troubleshooting sections of the following topics to investigate the issue further with Spanner introspection tools:
What's next
- Now that you've identified the component that contains the latency, explore the problem further using the built-in client-side metrics.
- Learn how to use metrics to diagnose latency.
- Learn how to troubleshoot Spanner deadline exceeded errors.