Configure the Ops Agent
This document provides details about the Ops Agent's default and custom configurations. Read this document if any of the following applies to you:
You want to change the configuration of the Ops Agent to achieve the following goals:
Turn off the built-in logging or metrics ingestion:
- To turn off logging ingestion, see Example logging service configurations.
- To turn off host-metrics ingestion, see Example metrics service configurations.
Customize the file path of the log files that the agent collects logs from; see Logging receivers.
Customize the structured log format that the agent uses to process the log entries, by parsing the JSON or by using regular expressions; see Logging processors.
Change the sampling rate for metrics; see Metrics receivers.
Customize which group or groups of metrics to enable. The agent collects all system metrics, like cpu and memory, by default; see Metrics processors.
Customize how the agent rotates logs; see Log-rotation configuration.
Collect metrics and logs from supported third-party applications. See Monitor third-party applications for the list of supported applications.
Use the Prometheus receiver to collect custom metrics.
Use the OpenTelemetry Protocol (OTLP) receiver to collect custom metrics and traces.
You're interested in learning the technical details of the Ops Agent's configuration.
Configuration model
The Ops Agent uses a built-in default configuration; you can't directly modify this built-in configuration. Instead, you create a file of overrides that are merged with the built-in configuration when the agent restarts.
The building blocks of the configuration are as follows:
- receivers: This element describes what is collected by the agent.
- processors: This element describes how the agent can modify the collected information.
- service: This element links receivers and processors together to create data flows, called pipelines. The service element contains a pipelines element, which can contain multiple pipelines.
The built-in configuration is made up of these elements, and you use the same elements to override that built-in configuration.
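Both the logging and metrics sections use these same building blocks. The following is a minimal sketch of the overall shape of a user configuration; the receiver, processor, and pipeline IDs are placeholders that you choose, and the ellipses stand for type-specific options:

logging:
  receivers:
    RECEIVER_ID:
      type: files
      ...
  processors:
    PROCESSOR_ID:
      type: parse_json
      ...
  service:
    pipelines:
      PIPELINE_ID:
        receivers: [RECEIVER_ID]
        processors: [PROCESSOR_ID]
metrics:
  receivers:
    ...
  processors:
    ...
  service:
    pipelines:
      ...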
Note: The Ops Agent sends logs to Cloud Logging and metrics to Cloud Monitoring. You can't configure the agent to export logs or metrics to other services. You can, however, configure Cloud Logging to export logs; for more information, see Route logs to supported destinations.

Built-in configuration
The built-in configuration for the Ops Agent defines the default collection for logs and metrics. The following shows the built-in configuration for Linux and for Windows:
Linux
By default, the Ops Agent collects file-based syslog logs and host metrics.
For more information about the metrics collected, see Metrics ingested by the receivers.
logging:
  receivers:
    syslog:
      type: files
      include_paths:
      - /var/log/messages
      - /var/log/syslog
  service:
    pipelines:
      default_pipeline:
        receivers: [syslog]
metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 60s
  processors:
    metrics_filter:
      type: exclude_metrics
      metrics_pattern: []
  service:
    pipelines:
      default_pipeline:
        receivers: [hostmetrics]
        processors: [metrics_filter]

Windows
By default, the Ops Agent collects Windows event logs from the System, Application, and Security channels, as well as host metrics, IIS metrics, and SQL Server metrics.
For more information about the metrics collected, see Metrics ingested by the receivers.
logging:
  receivers:
    windows_event_log:
      type: windows_event_log
      channels: [System, Application, Security]
  service:
    pipelines:
      default_pipeline:
        receivers: [windows_event_log]
metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 60s
    iis:
      type: iis
      collection_interval: 60s
    mssql:
      type: mssql
      collection_interval: 60s
  processors:
    metrics_filter:
      type: exclude_metrics
      metrics_pattern: []
  service:
    pipelines:
      default_pipeline:
        receivers: [hostmetrics, iis, mssql]
        processors: [metrics_filter]

These configurations are discussed in more detail in Logging configuration and Metrics configuration.
User-specified configuration
To override the built-in configuration, you add new configuration elements to the user configuration file. Put your configuration for the Ops Agent in the following files:
- For Linux: /etc/google-cloud-ops-agent/config.yaml
- For Windows: C:\Program Files\Google\Cloud Operations\Ops Agent\config\config.yaml
Any user-specified configuration is merged with the built-in configuration when the agent restarts.
Note: If you make any configuration changes, then you must restart the agent to apply the updated configurations.

To override a built-in receiver, processor, or pipeline, redefine it in your config.yaml file by declaring it with the same identifier. Starting with Ops Agent version 2.31.0, you can also configure the agent's log-rotation feature; for more information, see Configure log rotation in the Ops Agent.
For example, the built-in configuration for metrics includes a hostmetrics receiver that specifies a 60-second collection interval. To change the collection interval for host metrics to 30 seconds, include a metrics receiver called hostmetrics in your config.yaml file that sets the collection_interval value to 30 seconds, as shown in the following example:
metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 30s

For other examples of changing the built-in configurations, see Logging configuration and Metrics configuration. You can also turn off the collection of logging or metric data. These changes are described in the example logging service configurations and metrics service configurations.
You can use this file to prevent the agent from collecting self logs and sending those logs to Cloud Logging. For more information, see Collection of self logs.
You also configure the agent's log-rotation feature by using this file; for more information, see Configure log rotation in the Ops Agent.
You can't configure the Ops Agent to export logs or metrics to services other than Cloud Logging and Cloud Monitoring.
Logging configurations
The logging configuration uses the configuration model described previously:
- receivers: This element describes the data to collect from log files; this data is mapped into a <timestamp, record> model.
- processors: This optional element describes how the agent can modify the collected information.
- service: This element links receivers and processors together to create data flows, called pipelines. The service element contains a pipelines element, which can include multiple pipeline definitions.
Each receiver and each processor can be used in multiple pipelines.
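For example, the following sketch shares one parse_json processor between two pipelines that read from different file receivers; the receiver, processor, and pipeline IDs and paths are illustrative only:

logging:
  receivers:
    app_logs:
      type: files
      include_paths: [/var/log/app/*.json]
    web_logs:
      type: files
      include_paths: [/var/log/web/*.json]
  processors:
    parse_json_records:
      type: parse_json
  service:
    pipelines:
      app_pipeline:
        receivers: [app_logs]
        processors: [parse_json_records]
      web_pipeline:
        receivers: [web_logs]
        processors: [parse_json_records]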
The following sections describe each of these elements.
The Ops Agent sends logs to Cloud Logging. You can't configure it to export logs to other services. You can, however, configure Cloud Logging to export logs; for more information, see Route logs to supported destinations.
Logging receivers
The receivers element contains a set of receivers, each identified by a RECEIVER_ID. A receiver describes how to retrieve the logs; for example, by tailing files, by using a TCP port, or from the Windows Event Log.
Structure of logging receivers
Each receiver must have an identifier, RECEIVER_ID, and include a type element. The valid types are:
- files: Collect logs by tailing files on disk.
- fluent_forward (Ops Agent versions 2.12.0 and later): Collect logs sent via the Fluent Forward protocol over TCP.
- tcp (Ops Agent versions 2.3.0 and later): Collect logs in JSON format by listening to a TCP port.
- Linux only:
  - syslog: Collect Syslog messages over TCP or UDP.
  - systemd_journald (Ops Agent versions 2.4.0 and later): Collect systemd journal logs from the systemd-journald service.
- Windows only:
  - windows_event_log: Collect Windows Event Logs using the Windows Event Log API.
- Third-party application log receivers
The receivers structure looks like the following:
receivers:
  RECEIVER_ID:
    type: files
    ...
  RECEIVER_ID_2:
    type: syslog
    ...
Depending on the value of the type element, there might be other configuration options, as follows:
files receivers:
- include_paths: Required. A list of filesystem paths to read by tailing each file. A wildcard (*) can be used in the paths; for example, /var/log/*.log (Linux) or C:\logs\*.log (Windows). Note: When specifying wildcards on Windows, you must use \ as a separator. For a list of common Linux application log files, see Common Linux log files.
- exclude_paths: Optional. A list of filesystem path patterns to exclude from the set matched by include_paths.
- record_log_file_path: Optional. If set to true, then the path to the specific file from which the log record was obtained appears in the output log entry as the value of the agent.googleapis.com/log_file_path label. When using a wildcard, only the path of the file from which the record was obtained is recorded.
- wildcard_refresh_interval: Optional. The interval at which wildcard file paths in include_paths are refreshed. Given as a time duration, for example, 30s, 2m. This property might be useful under high logging throughputs where log files are rotated faster than the default interval. If not specified, the default interval is 60 seconds.
fluent_forward receivers:
- listen_host: Optional. An IP address to listen on. The default value is 127.0.0.1.
- listen_port: Optional. A port to listen on. The default value is 24224.
syslog receivers (for Linux only):
- transport_protocol: Supported values: tcp, udp.
- listen_host: An IP address to listen on.
- listen_port: A port to listen on.
tcp receivers:
- format: Required. Log format. Supported value: json.
- listen_host: Optional. An IP address to listen on. The default value is 127.0.0.1.
- listen_port: Optional. A port to listen on. The default value is 5170.
windows_event_log receivers (for Windows only):
- channels: Required. A list of Windows Event Log channels from which to read logs.
- receiver_version: Optional. Controls which Windows Event Log API to use. Supported values are 1 and 2. The default value is 1. To read from channels under the "Applications and Services" category in Event Viewer, you must use version 2. Caution: Version 1 is supported for backwards compatibility only. When configuring a new receiver, use version 2.
- render_as_xml: Optional. If set to true, then all Event Log fields, except for jsonPayload.Message and jsonPayload.StringInserts, are rendered as an XML document in a string field named jsonPayload.raw_xml. The default value is false. This cannot be set to true when receiver_version is 1.
Examples of logging receivers
Sample files receiver:
receivers:
  RECEIVER_ID:
    type: files
    include_paths: [/var/log/*.log]
    exclude_paths: [/var/log/not-this-one.log]
    record_log_file_path: true

Sample fluent_forward receiver; the tag of each forwarded record is appended to the logName field as a dot-separated suffix (logName = "projects/PROJECT_ID/logs/RECEIVER_ID.TAG"):

receivers:
  RECEIVER_ID:
    type: fluent_forward
    listen_host: 127.0.0.1
    listen_port: 24224

Sample syslog receiver (Linux only):
receivers:
  RECEIVER_ID:
    type: syslog
    transport_protocol: tcp
    listen_host: 0.0.0.0
    listen_port: 5140

Sample tcp receiver:
receivers:
  RECEIVER_ID:
    type: tcp
    format: json
    listen_host: 127.0.0.1
    listen_port: 5170

Sample windows_event_log receiver (Windows only):
receivers:
  RECEIVER_ID:
    type: windows_event_log
    channels: [System, Application, Security]

Sample windows_event_log receiver that overrides the built-in receiver to use version 2:
receivers:
  windows_event_log:
    type: windows_event_log
    channels: [System, Application, Security]
    receiver_version: 2

Sample systemd_journald receiver:
receivers:
  RECEIVER_ID:
    type: systemd_journald

Special fields in structured payloads
For processors and receivers that can ingest structured data (the fluent_forward and tcp receivers and the parse_json processor), you can set special fields in the input that will map to specific fields in the LogEntry object that the agent writes to the Logging API.
When the Ops Agent receives external structured log data, it places top-level fields into the LogEntry's jsonPayload field unless the field name is listed in the following table:
| Record field | LogEntry field |
|---|---|
| timestamp (Option 1: a {"seconds": ..., "nanos": ...} struct; Option 2: an RFC 3339 string) | timestamp |
| receiver_id (not a record field) | logName |
| logging.googleapis.com/httpRequest (HttpRequest) | httpRequest |
| logging.googleapis.com/severity (string) | severity |
| logging.googleapis.com/labels (struct of string:string) | labels |
| logging.googleapis.com/operation (struct) | operation |
| logging.googleapis.com/sourceLocation (struct) | sourceLocation |
| logging.googleapis.com/trace (string) | trace |
| logging.googleapis.com/spanId (string) | spanId |
Any remaining structured record fields remain part of the jsonPayload structure.
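For example, if a tcp receiver ingests the following hypothetical JSON record:

{
  "logging.googleapis.com/severity": "ERROR",
  "logging.googleapis.com/labels": {"env": "prod"},
  "message": "connection refused"
}

then the agent maps the severity and labels values to the LogEntry severity and labels fields, and message remains in jsonPayload.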
Common Linux log files
The following table lists common log files for frequently used Linux applications:
| Application | Common log files |
|---|---|
| apache | For information about Apache log files, see Monitoring third-party applications: Apache Web Server. |
| cassandra | For information about Cassandra log files, see Monitoring third-party applications: Cassandra. |
| chef | /var/log/chef-server/bookshelf/current |
| gitlab | /home/git/gitlab/log/application.log |
| jenkins | /var/log/jenkins/jenkins.log |
| jetty | /var/log/jetty/out.log |
| joomla | /var/www/joomla/logs/*.log |
| magento | /var/www/magento/var/log/exception.log |
| mediawiki | /var/log/mediawiki/*.log |
| memcached | For information about Memcached log files, see Monitoring third-party applications: Memcached. |
| mongodb | For information about MongoDB log files, see Monitoring third-party applications: MongoDB. |
| mysql | For information about MySQL log files, see Monitoring third-party applications: MySQL. |
| nginx | For information about nginx log files, see Monitoring third-party applications: nginx. |
| postgres | For information about PostgreSQL log files, see Monitoring third-party applications: PostgreSQL. |
| puppet | /var/log/puppet/http.log |
| puppet-enterprise | /var/log/pe-activemq/activemq.log |
| rabbitmq | For information about RabbitMQ log files, see Monitoring third-party applications: RabbitMQ. |
| redis | For information about Redis log files, see Monitoring third-party applications: Redis. |
| redmine | /var/log/redmine/*.log |
| salt | /var/log/salt/key |
| solr | For information about Apache Solr log files, see Monitoring third-party applications: Apache Solr. |
| sugarcrm | /var/www/*/sugarcrm.log |
| syslog | /var/log/syslog |
| tomcat | For information about Apache Tomcat log files, see Monitoring third-party applications: Apache Tomcat. |
| zookeeper | For information about Apache ZooKeeper log files, see Monitoring third-party applications: Apache ZooKeeper. |
Default ingested labels
Logs can contain the following labels by default in the LogEntry:
| Field | Sample Value | Description |
|---|---|---|
labels."compute.googleapis.com/resource_name" | test_vm | The name of the virtual machine from which this log originates. Written for all logs. |
labels."logging.googleapis.com/instrumentation_source" | agent.googleapis.com/apache_access | The value of the receivertype from which thus log originates, prefixed byagent.googleapis.com/. Written only by receivers from third-party integrations. |
Logging processors
The optional processors element contains a set of processing directives, each identified by a PROCESSOR_ID. A processor describes how to manipulate the information collected by a receiver.
Each processor must have a unique identifier and include a type element. The valid types are:
- parse_json: Parse JSON-formatted structured logs.
- parse_multiline: Parse multiline logs (Linux only).
- parse_regex: Parse text-formatted logs via regex patterns to turn them into JSON-formatted structured logs.
- exclude_logs: Exclude logs that match specified rules (starting in 2.9.0).
- modify_fields: Set/transform fields in log entries (starting in 2.14.0).
The processors structure looks like the following:
processors:
  PROCESSOR_ID:
    type: parse_json
    ...
  PROCESSOR_ID_2:
    type: parse_regex
    ...
Depending on the value of the type element, there are other configuration options, as follows.
parse_json processor
Configuration structure
processors:
  PROCESSOR_ID:
    type: parse_json
    time_key: <field name within jsonPayload>
    time_format: <strptime format string>

The parse_json processor parses the input JSON into the jsonPayload field of the LogEntry. Other parts of the LogEntry can be parsed by setting certain special top-level fields.
- time_key: Optional. If the log entry provides a field with a timestamp, this option specifies the name of that field. The extracted value is used to set the timestamp field of the resulting LogEntry and is removed from the payload. If the time_key option is specified, you must also specify the following:
  - time_format: Required if time_key is used. This option specifies the format of the time_key field so it can be recognized and analyzed properly. For details of the format, see the strptime(3) guide.
Example configuration
processors:
  PROCESSOR_ID:
    type: parse_json
    time_key: time
    time_format: "%Y-%m-%dT%H:%M:%S.%L%Z"

parse_multiline processor
Configuration structure
processors:
  PROCESSOR_ID:
    type: parse_multiline
    match_any:
    - type: <type of the exceptions>
      language: <language name>

- match_any: Required. A list of one or more rules.
  - type: Required. Only a single value is supported:
    - language_exceptions: Allows the processor to concatenate exceptions into one LogEntry, based on the value of the language option.
  - language: Required. Only a single value is supported:
    - java: Concatenates common Java exceptions into one LogEntry.
    - python: Concatenates common Python exceptions into one LogEntry.
    - go: Concatenates common Go exceptions into one LogEntry.
Example configuration
logging:
  receivers:
    custom_file1:
      type: files
      include_paths:
      - /tmp/test-multiline28
  processors:
    parse_java_multiline:
      type: parse_multiline
      match_any:
      - type: language_exceptions
        language: java
    extract_structure:
      type: parse_regex
      field: message
      regex: "^(?<time>[\d-]*T[\d:.Z]*) (?<severity>[^ ]*) (?<file>[^:]*):(?<line>[\d]*) - (?<message>(.|\\n)*)$"
      time_key: time
      time_format: "%Y-%m-%dT%H:%M:%S.%L"
    move_severity:
      type: modify_fields
      fields:
        severity:
          move_from: jsonPayload.severity
  service:
    pipelines:
      pipeline1:
        receivers: [custom_file1]
        processors: [parse_java_multiline, extract_structure, move_severity]

In the extract_structure processor, the field: message statement means that the regular expression is applied to the log entry's jsonPayload.message field. By default, the files receiver places each line of the log file into a log entry with a single payload field called jsonPayload.message.
The extract_structure processor places extracted fields into subfields of the LogEntry.jsonPayload field. Other statements in the YAML file cause two of the extracted fields, time and severity, to be moved. The time_key: time statement pulls the LogEntry.jsonPayload.time field, parses the timestamp, and then adds the LogEntry.timestamp field. The move_severity processor moves the severity field from the LogEntry.jsonPayload.severity field to the LogEntry.severity field.
Example log file:
2022-10-17T22:00:00.187512963Z ERROR HelloWorld:16 - javax.servlet.ServletException: Something bad happened
 at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:60)
 at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
 at com.example.myproject.ExceptionHandlerFilter.doFilter(ExceptionHandlerFilter.java:28)
 at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
 at com.example.myproject.OutputBufferFilter.doFilter(OutputBufferFilter.java:33)
Caused by: com.example.myproject.MyProjectServletException
 at com.example.myproject.MyServlet.doPost(MyServlet.java:169)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
 at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:30)
 ... 27 common frames omitted

The agent ingests each line from the log file into Cloud Logging in the following format:
{ "insertId": "...", "jsonPayload": { "line": "16", "message": "javax.servlet.ServletException: Something bad happened\n at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:60)\n at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)\n at com.example.myproject.ExceptionHandlerFilter.doFilter(ExceptionHandlerFilter.java:28)\n at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)\n at com.example.myproject.OutputBufferFilter.doFilter(OutputBufferFilter.java:33)\nCaused by: com.example.myproject.MyProjectServletException\n at com.example.myproject.MyServlet.doPost(MyServlet.java:169)\n at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)\n at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)\n at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)\n at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)\n at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:30)\n ... 27 common frames omitted\n", "file": "HelloWorld" }, "resource": { "type": "gce_instance", "labels": { "instance_id": "...", "project_id": "...", "zone": "..." } }, "timestamp": "2022-10-17T22:00:00.187512963Z", "severity": "ERROR", "labels": { "compute.googleapis.com/resource_name": "..." }, "logName": "projects/.../logs/custom_file", "receiveTimestamp": "2022-10-18T03:12:38.430364391Z"}parse_regex processor
Configuration structure
processors:
  PROCESSOR_ID:
    type: parse_regex
    regex: <regular expression>
    time_key: <field name within jsonPayload>
    time_format: <format string>

- time_key: Optional. If the log entry provides a field with a timestamp, this option specifies the name of that field. The extracted value is used to set the timestamp field of the resulting LogEntry and is removed from the payload. If the time_key option is specified, you must also specify the following:
  - time_format: Required if time_key is used. This option specifies the format of the time_key field so it can be recognized and analyzed properly. For details of the format, see the strptime(3) guide.
- regex: Required. The regular expression for parsing the field. The expression must include key names for the matched subexpressions; for example, "^(?<time>[^ ]*) (?<severity>[^ ]*) (?<msg>.*)$". The text matched by named capture groups will be placed into fields in the LogEntry's jsonPayload field. To add additional structure to your logs, use the modify_fields processor. For a set of regular expressions for extracting information from common Linux application log files, see Common Linux log files.
Example configuration
processors:PROCESSOR_ID:type:parse_regexregex:"^(?<time>[^]*)(?<severity>[^]*)(?<msg>.*)$"time_key:timetime_format:"%Y-%m-%dT%H:%M:%S.%L%Z"exclude_logs processor
Configuration structure:
type: exclude_logs
match_any:
- <filter>
- <filter>

The top-level configuration for this processor contains a single field, match_any, which contains a list of filter rules.
- match_any: Required. A list of one or more rules. If a log entry matches any rule, then the Ops Agent doesn't ingest that entry. The logs that are ingested by the Ops Agent follow the LogEntry structure. Field names are case-sensitive. You can only specify rules based on the following fields and their subfields:
  - httpRequest
  - jsonPayload
  - labels
  - operation
  - severity
  - sourceLocation
  - trace
  - spanId
The following example rule uses a regular expression to exclude all DEBUG and INFO level logs:

severity =~ "(DEBUG|INFO)"

Rules follow the Cloud Logging query language syntax but only support a subset of the features that the Logging query language supports:
- Comparison operators: =, !=, :, =~, !~. Only string comparisons are supported.
- Navigation operator: . (for example, jsonPayload.message).
- Boolean operators: AND, OR, NOT.
- Grouping expressions with ().
Note: The exclude_logs processor has performance implications, so we recommend that you avoid using this processor if you can exclude logs from the source or if you can set up exclusion filters. If you need to use the exclude_logs processor, we recommend the following best practices:
- Minimize the number of exclude_logs processors.
- Use exact matches whenever possible instead of regular-expression matches.
Example configuration
processors:
  PROCESSOR_ID:
    type: exclude_logs
    match_any:
    - '(jsonPayload.message =~ "logspam1" OR jsonPayload.message =~ "logspam2") AND severity = "ERROR"'
    - 'jsonPayload.application = "foo" AND severity = "INFO"'

modify_fields processor
The modify_fields processor allows customization of the structure and contents of log entries.
Configuration structure
type: modify_fields
fields:
  <destination field>:
    # Source
    move_from: <source field>
    copy_from: <source field>
    static_value: <string>
    # Mutation
    default_value: <string>
    map_values:
      <old value>: <new value>
    type: {integer|float}
    omit_if: <filter>

The top-level configuration for this processor contains a single field, fields, which contains a map of output field names and corresponding translations. For each output field, an optional source and zero or more mutation operations are applied.
All field names use the dot-separated syntax from the Cloud Logging query language. Filters use the Cloud Logging query language.
All transformations are applied in parallel, which means that sources and filters operate on the original input log entry and therefore cannot reference the new value of any other fields being modified by the same processor.
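For example, because each source reads the original entry, a hypothetical sketch like the following swaps two jsonPayload fields (assuming both fields exist in the incoming entry) rather than chaining one assignment into the other:

processors:
  swap_fields:
    type: modify_fields
    fields:
      jsonPayload.first:
        move_from: jsonPayload.second
      jsonPayload.second:
        move_from: jsonPayload.first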
Source options: At most one source may be specified.
- No source specified: If no source value is specified, the existing value in the destination field will be modified.
- move_from: <source field>: The value from <source field> will be used as the source for the destination field. Additionally, <source field> will be removed from the log entry. If a source field is referenced by both move_from and copy_from, the source field will still be removed.
- copy_from: <source field>: The value from <source field> will be used as the source for the destination field. <source field> will not be removed from the log entry unless it is also referenced by a move_from operation or otherwise modified.
- static_value: <string>: The static string <string> will be used as the source for the destination field.
Mutation options: Zero or more mutation operators may be applied to a single field. If multiple operators are supplied, they will always be applied in the following order.
1. default_value: <string>: If the source field did not exist, the output value will be set to <string>. If the source field already exists (even if it contains an empty string), the original value is unmodified.
2. map_values: <map>: If the input value matches one of the keys in <map>, the output value will be replaced with the corresponding value from the map (see the sketch after this list).
3. map_values_exclusive: {true|false}: If the <source field> value does not match any keys specified in the map_values pairs, the destination field will be forcefully unset if map_values_exclusive is true, or left untouched if map_values_exclusive is false.
4. type: {integer|float}: The input value will be converted to an integer or a float. If the string cannot be converted to a number, the output value will be unset. If the string contains a float but the type is specified as integer, the number will be truncated to an integer. Note that the Cloud Logging API uses JSON and therefore does not support a full 64-bit integer; if a 64-bit (or larger) integer is needed, it must be stored as a string in the log entry.
5. omit_if: <filter>: If the filter matches the input log record, the output field will be unset. This can be used to remove placeholder values, such as:

   httpRequest.referer:
     move_from: jsonPayload.referer
     omit_if: httpRequest.referer = "-"
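The following hypothetical sketch shows map_values and map_values_exclusive together; the field name and values are made up for illustration. With no source specified, the existing jsonPayload.level value is rewritten in place, and it is left untouched when it doesn't match a key:

processors:
  normalize_level:
    type: modify_fields
    fields:
      jsonPayload.level:
        map_values:
          dbg: DEBUG
          err: ERROR
        map_values_exclusive: false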
Sample configurations
The parse_json processor would transform a JSON file containing

{"http_status":"400","path":"/index.html","referer":"-"}

into a LogEntry structure that looks like this:
{"jsonPayload":{"http_status":"400","path":"/index.html","referer":"-"}}This could then be transformed withmodify_fields into thisLogEntry:
{"httpRequest":{"status":400,"requestUrl":"/index.html",}}by using this Ops Agent configuration:
logging:
  receivers:
    in:
      type: files
      include_paths:
      - /var/log/http.json
  processors:
    parse_json:
      type: parse_json
    set_http_request:
      type: modify_fields
      fields:
        httpRequest.status:
          move_from: jsonPayload.http_status
          type: integer
        httpRequest.requestUrl:
          move_from: jsonPayload.path
        httpRequest.referer:
          move_from: jsonPayload.referer
          omit_if: jsonPayload.referer = "-"
  service:
    pipelines:
      pipeline:
        receivers: [in]
        processors: [parse_json, set_http_request]

This configuration reads JSON-formatted logs from /var/log/http.json and populates part of the httpRequest structure from fields in the logs.
Logging service
The logging service customizes verbosity for the Ops Agent's own logs, and links logging receivers and processors together into pipelines. The service section has the following elements:
- log_level
- pipelines
Log verbosity level
The log_level field, available with Ops Agent versions 2.6.0 and later, customizes verbosity for the Ops Agent logging submodule's own logs. The default is info. Available options are: error, warn, info, debug, trace.
Warning: Setting log_level to debug (or trace) triggers a feedback loop in the logging sub-agent, resulting in a continuous stream of logs. This is a known issue and is being addressed. In the meantime, do not set log_level to anything above info.

The following configuration customizes log verbosity for the logging submodule to be debug instead:
logging:
  service:
    log_level: debug

Logging pipelines
The pipelines field can contain multiple pipeline IDs and definitions. Each pipeline value consists of the following elements:
- receivers: Required for new pipelines. A list of receiver IDs, as described in Logging receivers. The order of the receiver IDs in the list doesn't matter. The pipeline collects data from all of the listed receivers.
- processors: Optional. A list of processor IDs, as described in Logging processors. The order of the processor IDs in the list matters. Each record is run through the processors in the listed order.
Example logging service configurations
A service configuration has the following structure:
service:
  log_level: CUSTOM_LOG_LEVEL
  pipelines:
    PIPELINE_ID:
      receivers: [...]
      processors: [...]
    PIPELINE_ID_2:
      receivers: [...]
      processors: [...]
To stop the agent from collecting and sending either /var/log/messages or /var/log/syslog entries, redefine the default pipeline with an empty receivers list and no processors. This configuration does not stop the agent's logging subcomponent, because the agent must be able to collect logs for the monitoring subcomponent. The entire empty logging configuration looks like the following:
logging:
  service:
    pipelines:
      default_pipeline:
        receivers: []

The following service configuration defines a pipeline with the ID custom_pipeline:
logging:
  service:
    pipelines:
      custom_pipeline:
        receivers:
        - RECEIVER_ID
        processors:
        - PROCESSOR_ID

Metrics configurations
The metrics configuration uses the configuration model described previously:
- receivers: A list of receiver definitions. A receiver describes the source of the metrics; for example, system metrics like cpu or memory. The receivers in this list can be shared among multiple pipelines.
- processors: A list of processor definitions. A processor describes how to modify the metrics collected by a receiver.
- service: Contains a pipelines section that is a list of pipeline definitions. A pipeline connects a list of receivers and a list of processors to form the data flow.
The following sections describe each of these elements.
The Ops Agent sends metrics to Cloud Monitoring. You can't configure it to export metrics to other services.
Metrics receivers
The receivers element contains a set of receiver definitions. A receiver describes from where to retrieve the metrics, such as cpu and memory. A receiver can be shared among multiple pipelines.
Structure of metrics receivers
Each receiver must have an identifier, RECEIVER_ID, and include a type element. Valid built-in types are:
- hostmetrics
- iis (Windows only)
- mssql (Windows only)
A receiver can also specify the operation collection_interval option. The value is in the format of a duration, for example, 30s or 2m. The default value is 60s.
Each of these receiver types collects a set of metrics; for information about the specific metrics included, see Metrics ingested by the receivers.
You can create only one receiver for each type. For example, you can't define two receivers of type hostmetrics.
Changing the collection interval in the metrics receivers
Some critical workloads might require fast alerting. By reducing the collection interval for the metrics, you can configure more sensitive alerts. For information on how alerts are evaluated, see Behavior of metric-based alerting policies.
For example, the following receiver changes the collection interval for host metrics (the receiver ID is hostmetrics) from the default of 60 seconds to 10 seconds:
metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 10s

You can also override the collection interval for the Windows iis and mssql metrics receivers by using the same technique.
Metrics ingested by the receivers
The metrics ingested by the Ops Agent have identifiers that begin with the following pattern: agent.googleapis.com/GROUP. The GROUP component identifies a set of related metrics; it has values like cpu, network, and others.
The hostmetrics receiver
The hostmetrics receiver ingests the following metric groups. For more information, see the linked section for each group on the Ops Agent metrics page.
| Group | Metric |
|---|---|
| cpu | CPU load at 1 minute intervals<br>CPU load at 5 minute intervals<br>CPU load at 15 minute intervals<br>CPU usage, with labels for CPU number and CPU state<br>CPU usage percent, with labels for CPU number and CPU state |
| disk | Disk bytes read, with label for device<br>Disk bytes written, with label for device<br>Disk I/O time, with label for device<br>Disk weighted I/O time, with label for device<br>Disk pending operations, with label for device<br>Disk merged operations, with labels for device and direction<br>Disk operations, with labels for device and direction<br>Disk operation time, with labels for device and direction<br>Disk usage, with labels for device and state<br>Disk utilization, with labels for device and state |
| gpu<br>Linux only; see About the gpu metrics for other important information. | Current number of GPU memory bytes used, by state<br>Maximum amount of GPU memory, in bytes, that has been allocated by the process<br>Percentage of time in the process lifetime that one or more kernels has been running on the GPU<br>Percentage of time, since last sample, the GPU has been active |
| interface<br>Linux only | Total count of network errors<br>Total count of packets sent over the network<br>Total number of bytes sent over the network |
| memory | Memory usage, with label for state (buffered, cached, free, slab, used)<br>Memory usage percent, with label for state (buffered, cached, free, slab, used) |
| network | TCP connection count, with labels for port and TCP state |
| swap | Swap I/O operations, with label for direction<br>Swap bytes used, with labels for device and state<br>Swap percent used, with labels for device and state |
| pagefile<br>Windows only | Current percentage of pagefile used by state |
| processes | Processes count, with label for state<br>Processes forked count<br>Per-process disk read I/O, with labels for process name + others<br>Per-process disk write I/O, with labels for process name + others<br>Per-process RSS usage, with labels for process name + others<br>Per-process VM usage, with labels for process name + others |
About the gpu metrics: The hostmetrics receiver collects metrics reported by the NVIDIA Management Library (NVML) as agent.googleapis.com/gpu metrics.
To collect these metrics, you must create your VM with attached GPUs and install the GPU driver. The hostmetrics receiver doesn't collect these metrics on VMs with no attached GPUs.
Only Ops Agent version 2.38.0 or versions 2.41.0 or higher are compatible with GPU monitoring. Do not install Ops Agent versions 2.39.0 and 2.40.0 on VMs with attached GPUs. For more information, see Agent crashes and report mentions NVIDIA.
You can install or upgrade the NVIDIA GPU driver by using package managers or local installation scripts. When using local installation scripts, the Ops Agent service must be stopped before the driver installation can proceed. To stop the agent, run the following command:
sudo systemctl stop google-cloud-ops-agent
You must also reboot the VM after installing or upgrading an NVIDIAGPU driver.
The iis receiver (Windows only)
The iis receiver (Windows only) ingests metrics of the iis group. For more information, see the Agent metrics page.
| Group | Metric |
|---|---|
| iis<br>Windows only | Currently open connections to IIS<br>Network bytes transferred by IIS<br>Connections opened to IIS<br>Requests made to IIS |
The mssql receiver (Windows only)
The mssql receiver (Windows only) ingests metrics of the mssql group. For more information, see the Ops Agent metrics page.
| Group | Metric |
|---|---|
| mssql<br>Windows only | Currently open connections to SQL server<br>SQL server total transactions per second<br>SQL server write transactions per second |
Metrics processors
The processors element contains a set of processor definitions. A processor describes which metrics from the receiver to exclude. The only supported type is exclude_metrics, which takes a metrics_pattern option. The value is a list of globs that match the Ops Agent metric types you want to exclude from the group collected by a receiver. For example:
- To exclude all agent CPU metrics, specify agent.googleapis.com/cpu/*.
- To exclude the agent CPU utilization metric, specify agent.googleapis.com/cpu/utilization.
- To exclude the client-side request-count metric from the metrics collected by the Apache Cassandra third-party integration, specify workloads.googleapis.com/cassandra.client.request.count.
- To exclude all client-side metrics from the metrics collected by the Apache Cassandra third-party integration, specify workloads.googleapis.com/cassandra.client.*.
Sample metrics processor
The following example shows the exclude_metrics processor supplied in the built-in configurations. This processor supplies an empty metrics_pattern value, so it doesn't exclude any metrics.
processors:
  metrics_filter:
    type: exclude_metrics
    metrics_pattern: []

To disable the collection of all process metrics by the Ops Agent, add the following to your config.yaml file:
metrics:
  processors:
    metrics_filter:
      type: exclude_metrics
      metrics_pattern:
      - agent.googleapis.com/processes/*
This excludes process metrics from collection because the metrics_filter processor applies to the default pipeline in the metrics service.
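The metrics_pattern list can contain more than one glob. For example, a hypothetical filter that drops both the agent CPU metrics and the Cassandra client-side metrics mentioned earlier might look like the following:

metrics:
  processors:
    metrics_filter:
      type: exclude_metrics
      metrics_pattern:
      - agent.googleapis.com/cpu/*
      - workloads.googleapis.com/cassandra.client.*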
Metrics service
The metrics service customizes verbosity for the Ops Agent metrics module's own logs and links metrics receivers and processors together into pipelines. The service section has two elements: log_level and pipelines.
Metrics verbosity level
log_level, available with Ops Agent versions 2.6.0 and later, customizes verbosity for the Ops Agent metrics submodule's own logs. The default is info. Available options are: error, warn, info, debug.
Metrics pipelines
The service section has a single element, pipelines, which can contain multiple pipeline IDs and definitions. Each pipeline definition consists of the following elements:
- receivers: Required for new pipelines. A list of receiver IDs, as described in Metrics receivers. The order of the receiver IDs in the list doesn't matter. The pipeline collects data from all of the listed receivers.
- processors: Optional. A list of processor IDs, as described in Metrics processors. The order of the processor IDs in the list does matter. Each metric point is run through the processors in the listed order.
Example metrics service configurations
A service configuration has the following structure:
service:
  log_level: CUSTOM_LOG_LEVEL
  pipelines:
    PIPELINE_ID:
      receivers: [...]
      processors: [...]
    PIPELINE_ID_2:
      receivers: [...]
      processors: [...]
To turn off the built-in ingestion of host metrics, redefine the default pipeline with an empty receivers list and no processors. The entire metrics configuration looks like the following:
metrics:
  service:
    pipelines:
      default_pipeline:
        receivers: []

The following example shows the built-in service configuration for Windows:
metrics:
  service:
    pipelines:
      default_pipeline:
        receivers:
        - hostmetrics
        - iis
        - mssql
        processors:
        - metrics_filter

The following service configuration customizes log verbosity for the metrics submodule to be debug instead:
metrics:
  service:
    log_level: debug

Collection of self logs
By default, the Ops Agent's Fluent Bit self logs are sent to Cloud Logging. These logs can include a lot of information, and the additional volume might increase your costs to use Cloud Logging.
You can disable the collection of these self logs, starting with Ops Agent version 2.44.0, by using the default_self_log_file_collection option.
To disable self-log collection, add a global section to your user-specified configuration file and set the default_self_log_file_collection option to the value false:
logging:
  ...
metrics:
  ...
global:
  default_self_log_file_collection: false
Log-rotation configuration
Starting with Ops Agent version 2.31.0, you can also set up the agent's log-rotation feature by using the configuration files. For more information, see Configure log rotation in the Ops Agent.