Improve query time with custom indexing
This document describes how to add indexed LogEntry fields to your Cloud Logging buckets to make querying your logs data faster.
Overview
Query performance is critical to any logging solution. As workloads scale up and the corresponding log volumes increase, indexing your most-used logs data can reduce query time.
To improve query performance, Logging automatically indexes the following LogEntry fields:
- resource.type
- resource.labels.*
- logName
- severity
- timestamp
- insertId
- operation.id
- trace
- httpRequest.status
- labels.*
- split.uid
Besides the fields that Logging automatically indexes, you can also direct a log bucket to index other LogEntry fields by creating a custom index for the bucket.
For example, suppose your query expressions often include the field jsonPayload.request.status. You could configure a custom index for a bucket that includes jsonPayload.request.status; any subsequent query on that bucket's data would reference the indexed jsonPayload.request.status data if the query expression includes that field.
By using the Google Cloud CLI or the Logging API, you can add custom indexes to existing or new log buckets. As you select additional fields to include in the custom index, note the following limitations:
- You can add up to 20 fields per custom index.
- After you configure or update a bucket's custom index, you must wait an hour for the changes to apply to your queries. This latency ensures query-result correctness and accounts for logs that are written with timestamps in the past.
- Logging applies custom indexing to data that is stored in log buckets after the index was created or changed; changes to custom indexes don't apply to logs retroactively.
Before you begin
Before you start configuring a custom index, do the following:
Verify that you're using the latest version of the gcloud CLI. For more information, see Managing Google Cloud CLI components.
Note: The gcloud CLI examples in this document assume that the gcloud CLI is configured to update the correct project. If you want to specify the project in the command, then include the --project flag and specify a Google Cloud project ID.

Verify that you have an Identity and Access Management role with the following permissions:
For details about these roles, see Access control with IAM.
Define the custom index
For each field that you add to a bucket's custom index, you define two attributes: a field path and a field type:

- fieldPath: Describes the specific path to the LogEntry field in your log entries. For example, jsonPayload.req_status.
- type: Indicates whether the field is of the string or integer type. The possible values are INDEX_TYPE_STRING and INDEX_TYPE_INTEGER.
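The two attributes above can be modeled as a small helper that builds one entry of a bucket's custom index. This is an illustrative sketch: the dictionary shape mirrors the fieldPath and type attributes described above, and the helper name is hypothetical.

```python
# Illustrative sketch: build one custom-index entry with the two
# attributes a custom index defines (fieldPath and type).
def make_index_config(field_path, index_type):
    """Return a dict describing one indexed field.

    index_type must be INDEX_TYPE_STRING or INDEX_TYPE_INTEGER.
    """
    allowed = {"INDEX_TYPE_STRING", "INDEX_TYPE_INTEGER"}
    if index_type not in allowed:
        raise ValueError(f"unsupported index type: {index_type}")
    return {"fieldPath": field_path, "type": index_type}

print(make_index_config("jsonPayload.req_status", "INDEX_TYPE_INTEGER"))
```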
A custom index can be added either by creating a new bucket or by updating an existing bucket. For more information about configuring buckets, see Configure log buckets.
To configure a custom index when creating a bucket, do the following:
gcloud
Use the gcloud logging buckets create command and set the --index flag:

gcloud logging buckets create BUCKET_NAME \
  --location=LOCATION \
  --description="DESCRIPTION" \
  --index=fieldPath=INDEX_FIELD_NAME,type=INDEX_TYPE
Example command:
gcloud logging buckets create int_index_test_bucket \
  --location=global \
  --description="Bucket with integer index" \
  --index=fieldPath=jsonPayload.req_status,type=INDEX_TYPE_INTEGER
API
To create a bucket, use projects.locations.buckets.create in the Logging API. Prepare the arguments to the method as follows:
1. Set the parent parameter to the resource in which to create the bucket:

   projects/PROJECT_ID/locations/LOCATION

   The variable LOCATION refers to the region in which you want your logs to be stored.

   For example, if you want to create a bucket for project my-project in the asia-east2 region, your parent parameter would look like this:

   projects/my-project/locations/asia-east2

2. Set the bucketId parameter; for example, my-bucket.
3. In the LogBucket request body, configure the IndexConfig object to create the custom index.
4. Call projects.locations.buckets.create to create the bucket.
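The steps above can be sketched as a helper that assembles the arguments for the create call. This is an assumption-laden illustration, not client-library code: the body key indexConfigs mirrors the plural index-configuration field on the LogBucket resource, but verify the exact field names against the Logging API reference.

```python
# Illustrative sketch: assemble the arguments for
# projects.locations.buckets.create. The "indexConfigs" key is assumed
# to be the LogBucket field that holds the custom index entries.
def build_create_request(project_id, location, bucket_id, indexes):
    return {
        "parent": f"projects/{project_id}/locations/{location}",
        "bucketId": bucket_id,
        "body": {"indexConfigs": list(indexes)},
    }

request = build_create_request(
    "my-project", "asia-east2", "my-bucket",
    [{"fieldPath": "jsonPayload.req_status", "type": "INDEX_TYPE_INTEGER"}],
)
print(request["parent"])  # projects/my-project/locations/asia-east2
```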
To update an existing bucket to include a custom index, do the following:
gcloud
Use the gcloud logging buckets update command and set the --add-index flag:

gcloud logging buckets update BUCKET_NAME \
  --location=LOCATION \
  --add-index=fieldPath=INDEX_FIELD_NAME,type=INDEX_TYPE
Example command:
gcloud logging buckets update int_index_test_bucket \
  --location=global \
  --add-index=fieldPath=jsonPayload.req_status,type=INDEX_TYPE_INTEGER
API
Use projects.locations.buckets.patch in the Logging API. In the LogBucket request body, configure the IndexConfig object to include the LogEntry fields that you want to index.
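A patch request for this update can be sketched as follows. This is an illustration under assumptions: the field name indexConfigs and the updateMask value are modeled on the LogBucket resource, and because a patch replaces the masked field wholesale, existing index entries are re-sent alongside the new one.

```python
# Illustrative sketch: a patch that adds one field to a bucket's
# custom index. updateMask limits the patch to the index configuration;
# the existing entries are included because the masked field is replaced
# in full. (Field names are assumptions; check the API reference.)
def build_add_index_patch(bucket_name, existing_indexes, new_index):
    return {
        "name": bucket_name,
        "updateMask": "indexConfigs",
        "body": {"indexConfigs": list(existing_indexes) + [new_index]},
    }

patch = build_add_index_patch(
    "projects/my-project/locations/global/buckets/int_index_test_bucket",
    [],
    {"fieldPath": "jsonPayload.req_status", "type": "INDEX_TYPE_INTEGER"},
)
print(patch["updateMask"])  # indexConfigs
```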
Delete a custom indexed field
To delete a field from a bucket's custom index, do the following:
gcloud
Use the gcloud logging buckets update command and set the --remove-indexes flag:

gcloud logging buckets update BUCKET_NAME \
  --location=LOCATION \
  --remove-indexes=INDEX_FIELD_NAME
Example command:
gcloud logging buckets update int_index_test_bucket \
  --location=global \
  --remove-indexes=jsonPayload.req_status
API
Use projects.locations.buckets.patch in the Logging API. In the LogBucket request body, remove LogEntry fields from the IndexConfig object.
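Removing a field amounts to filtering it out of the index list before sending the patch. The helper below is a hypothetical illustration of that step, not an API call:

```python
# Illustrative sketch: drop one field from a bucket's list of custom-index
# entries before re-sending the list in a patch request.
def remove_index_field(index_configs, field_path):
    return [ic for ic in index_configs if ic["fieldPath"] != field_path]

configs = [
    {"fieldPath": "jsonPayload.req_status", "type": "INDEX_TYPE_INTEGER"},
    {"fieldPath": "jsonPayload.name", "type": "INDEX_TYPE_STRING"},
]
print(remove_index_field(configs, "jsonPayload.req_status"))
```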
Update the custom indexed field's data type
If you need to fix the data type of a custom indexed field, do the following:
gcloud
Use the gcloud logging buckets update command and set the --update-index flag:

gcloud logging buckets update BUCKET_NAME \
  --location=LOCATION \
  --update-index=fieldPath=INDEX_FIELD_NAME,type=INDEX_TYPE
Example command:
gcloud logging buckets update int_index_test_bucket \
  --location=global \
  --update-index=fieldPath=jsonPayload.req_status,type=INDEX_TYPE_INTEGER
API
Use projects.locations.buckets.patch in the Logging API. In the LogBucket request body, update the IndexConfig object to provide the correct data type for a LogEntry field.
Update a custom indexed field's path
If you need to fix the field path of a custom indexed field, do the following:
gcloud
Use the gcloud logging buckets update command and set the --remove-indexes and --add-index flags:

gcloud logging buckets update BUCKET_NAME \
  --location=LOCATION \
  --remove-indexes=OLD_INDEX_FIELD_NAME \
  --add-index=fieldPath=NEW_INDEX_FIELD_NAME,type=INDEX_TYPE
Example command:
gcloud logging buckets update int_index_test_bucket \
  --location=global \
  --remove-indexes=jsonPayload.req_status_old_path \
  --add-index=fieldPath=jsonPayload.req_status_new_path,type=INDEX_TYPE_INTEGER
API
Use projects.locations.buckets.patch in the Logging API. In the LogBucket request body, update the IndexConfig object to provide the correct field path for a LogEntry field.
List all indexed fields for a bucket
To list a bucket's details, including its custom indexed fields, do thefollowing:
gcloud
Use the gcloud logging buckets describe command:

gcloud logging buckets describe BUCKET_NAME \
  --location=LOCATION
Example command:
gcloud logging buckets describe indexed-bucket \
  --location=global
API
Use projects.locations.buckets.get in the Logging API.
Clear custom indexed fields
To remove all custom indexed fields from a bucket, do the following:
gcloud
Use the gcloud logging buckets update command and add the --clear-indexes flag:

gcloud logging buckets update BUCKET_NAME \
  --location=LOCATION \
  --clear-indexes
Example command:
gcloud logging buckets update int_index_test_bucket \
  --location=global \
  --clear-indexes
API
Use projects.locations.buckets.patch in the Logging API. In the LogBucket request body, delete the IndexConfig object.
Query and view indexed data
To query the data included in custom indexed fields, restrict the scope of your query to the bucket that contains the custom indexed fields and specify the appropriate log view:
gcloud
To read logs from a log bucket, use the gcloud logging read command and add a LOG_FILTER to include your indexed data:

gcloud logging read LOG_FILTER --bucket=BUCKET_ID --location=LOCATION --view=LOG_VIEW_ID
API
To read logs from a log bucket, use the entries.list method. Set resourceNames to specify the appropriate bucket and log view, and set filter to select your indexed data.
For detailed information about the filtering syntax, see Logging query language.
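An entries.list request scoped to a bucket's log view can be sketched as below. The view resource-name format follows the path shown in the Logging API; the _AllLogs view ID and the filter expression are example values, not prescriptions.

```python
# Illustrative sketch: build an entries.list request scoped to one
# bucket's log view, so queries on the filter can use the bucket's
# custom index. The view ID and filter are example values.
def build_list_request(project_id, location, bucket_id, view_id, log_filter):
    view = (f"projects/{project_id}/locations/{location}"
            f"/buckets/{bucket_id}/views/{view_id}")
    return {"resourceNames": [view], "filter": log_filter}

req = build_list_request("my-project", "global", "int_index_test_bucket",
                         "_AllLogs", "jsonPayload.req_status>=500")
print(req["resourceNames"][0])
```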
Indexing and field types
How you configure custom field indexing can affect how logs are stored in log buckets and how queries are processed.
At write time
Logging attempts to use the custom index on data that is stored in log buckets after the index was created.
Indexed fields are typed, which affects how field data is recorded in the index. When a log entry is stored in the log bucket, the log field is evaluated against the index type by using these rules:
- If a field's type is the same as the index's type, then the data is added to the index verbatim.
- If the field's type is different from the index's type, then Logging attempts to coerce it into the index's type (for example, integer to string).
- If type coercion succeeds, the data is indexed; if coercion fails, the data isn't indexed.
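The coercion rules above can be modeled with a short sketch. This is an illustrative model of the behavior described in this section, not the actual Logging implementation:

```python
# Illustrative model of the write-time rules: a value is kept verbatim
# when it matches the index type, coerced when possible, and left
# unindexed (None here) when coercion fails.
def index_value(value, index_type):
    if index_type == "INDEX_TYPE_INTEGER":
        try:
            return int(value)        # verbatim, or a coercible string like "3"
        except (TypeError, ValueError):
            return None              # coercion failed: not indexed
    if index_type == "INDEX_TYPE_STRING":
        return str(value)            # integers coerce cleanly to strings
    raise ValueError(f"unsupported index type: {index_type}")

print(index_value(12345, "INDEX_TYPE_INTEGER"))    # 12345 (verbatim)
print(index_value("3", "INDEX_TYPE_INTEGER"))      # 3 (coerced)
print(index_value("hello", "INDEX_TYPE_INTEGER"))  # None (not indexed)
```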
At query time
Enabling an index on a field changes how you must query that field. By default, Logging applies filter constraints to fields based on the type of the data in each log entry that is being evaluated. When indexing is enabled, filter constraints on a field are applied based on the type of the index. Adding an index on a field imposes a schema on that field.
When a custom index is configured for a bucket, schema-matching behaviors differ when both of these conditions are met:
- The source data type for a field doesn't match the index type for that field.
- The user applies a constraint on that field.
Consider the following JSON payloads:
{"jsonPayload": {"name": "A", "value": 12345}}
{"jsonPayload": {"name": "B", "value": "3"}}

Now apply this filter to each:
jsonPayload.value > 20
If the jsonPayload.value field lacks custom indexing, then Logging applies flexible-type matching:
- For "A", Logging observes that the value of the "value" key is actually an integer, and that the constraint, "20", can be converted to an integer. Logging then evaluates 12345 > 20 and returns "true" because this is the case numerically.
- For "B", Logging observes that the value of the "value" key is actually a string. It then evaluates "3" > "20" and returns "true", since this is the case alphanumerically.
If the field jsonPayload.value is included in the custom index, then Logging evaluates this constraint using the index instead of the usual Logging logic. The behavior changes:
- If the index is string-typed, then all comparisons are string comparisons. The "A" entry doesn't match, since "12345" isn't greater than "20" alphanumerically. The "B" entry matches, since the string "3" is greater than "20".
- If the index is integer-typed, then all comparisons are integer comparisons. The "B" entry doesn't match, since "3" isn't greater than "20" numerically. The "A" entry matches, since "12345" is greater than "20".
This behavior difference is subtle and should be considered when defining andusing custom indexes.
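The difference can be made concrete with the two example payloads. This sketch models the indexed comparison behavior described above; it is an illustration, not the actual query engine:

```python
# Illustrative model: with a custom index, both the field value and the
# filter constraint are compared under the index's type.
entries = {"A": 12345, "B": "3"}

def matches(value, constraint, index_type):
    if index_type == "INDEX_TYPE_STRING":
        return str(value) > str(constraint)   # alphanumeric comparison
    return int(value) > int(constraint)       # integer comparison

# jsonPayload.value > 20 with a string-typed index:
print({k: matches(v, 20, "INDEX_TYPE_STRING") for k, v in entries.items()})
# {'A': False, 'B': True}

# jsonPayload.value > 20 with an integer-typed index:
print({k: matches(v, 20, "INDEX_TYPE_INTEGER") for k, v in entries.items()})
# {'A': True, 'B': False}
```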
Filtering edge case
For the jsonPayload.value integer-type index, suppose a string value is filtered:
jsonPayload.value = "hello"
If the query value can't be coerced to the index type, the index is ignored.
However, suppose for a string-type index, you pass an integer value:
jsonPayload.value > 50
Neither A nor B matches, as neither "12345" nor "3" is alphanumerically greaterthan "50".
Last updated 2026-02-18 UTC.