AI Protection overview

Premium and Enterprise service tiers (requires organization-level activation)

Preview — AI Protection for Security Command Center Premium

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

AI Protection helps you manage the security posture of your AI workloads by detecting threats and helping you to mitigate risks to your AI asset inventory. This document provides a general overview of AI Protection, including its benefits and several key concepts.

Capabilities of AI Protection

AI Protection provides several capabilities to help you manage threats and risks to your AI systems, including the following:

  • Assess your AI inventory: Assess and understand your AI systems and AI assets, including your models, datasets, and endpoints.
  • Manage risks and compliance: Proactively manage risks to your AI assets and verify that your AI deployments adhere to relevant security standards.
  • Mitigate legal and financial risks: Reduce the financial, reputational, and legal risks associated with security breaches and regulatory noncompliance.
  • Detect and manage threats: Detect and respond to potential threats to your AI systems and assets in a timely manner.
  • View one dashboard: Manage all of your AI-related risks and threats from one centralized dashboard.

Use cases for AI Protection

AI Protection helps organizations enhance their security by identifying and mitigating threats and risks related to AI systems and sensitive data. The following use cases are examples of how AI Protection can be used in different organizations:

  • Financial services institution: customer financial data

    A large financial services institution uses AI models that process sensitive financial data.

    • Challenge: Processing highly sensitive financial data with AI models entails several risks, including the risk of data breaches, data exfiltration during training or inference, and vulnerabilities in the underlying AI infrastructure.
    • Use case: AI Protection continuously monitors AI workflows for suspicious activity, works to detect unauthorized data access and anomalous model behavior, performs sensitive data classification, and aids in improving your compliance with regulations such as PCI DSS and GDPR.
  • Healthcare provider: patient privacy and compliance

    A major healthcare provider manages electronic health records and uses AI for diagnostics and treatment planning, dealing with Protected Health Information (PHI).

    • Challenge: PHI analyzed by AI models is subject to strict regulations like HIPAA. Risks include accidental PHI exposure through misconfigurations or malicious attacks that target AI systems for patient data.
    • Use case: AI Protection identifies and alerts on potential HIPAA violations, detects unauthorized PHI access by models or users, flags vulnerable and potentially misconfigured AI services, and monitors for data leakage.
  • Manufacturing and robotics company: proprietary intellectual property

    A manufacturing company specializing in advanced robotics and automation relies heavily on AI for optimizing production lines and robotic control, with vital intellectual property (IP) embedded within its AI algorithms and manufacturing data.

    • Challenge: Proprietary AI algorithms and sensitive operational data are vulnerable to theft from insider threats or external adversaries, potentially leading to competitive disadvantage or operational disruption.
    • Use case: AI Protection monitors for unauthorized access to AI models and code repositories, detects attempts to exfiltrate trained models and unusual data access patterns, and flags vulnerabilities in AI development environments to prevent IP theft.

Event Threat Detection rules for Vertex AI assets

The following Event Threat Detection rules run detections on Vertex AI assets:

  • Persistence: New AI API Method
  • Persistence: New Geography for AI Service
  • Privilege Escalation: Anomalous Impersonation of Service Account for AI Admin Activity
  • Privilege Escalation: Anomalous Service Account Impersonator for AI Data Access
  • Privilege Escalation: Anomalous Multistep Service Account Delegation for AI Admin Activity
  • Privilege Escalation: Anomalous Multistep Service Account Delegation for AI Data Access
  • Privilege Escalation: Anomalous Service Account Impersonator for AI Admin Activity
  • Initial Access: Dormant Service Account Activity in AI Service

For more information about Event Threat Detection, see Event Threat Detection overview.
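Rules such as Persistence: New Geography for AI Service compare each call against a baseline of previously observed activity. As a rough illustration of that idea only (the class, names, and logic below are a hypothetical sketch, not the actual Event Threat Detection implementation), a detector can track the geographies already seen per principal and flag any first-time sighting:

```python
from collections import defaultdict

class NewGeographyDetector:
    """Hypothetical sketch: flag an AI service call from a geography not
    previously observed for that principal."""

    def __init__(self):
        # principal -> set of geographies already observed
        self._seen = defaultdict(set)

    def observe(self, principal: str, geography: str) -> bool:
        """Record the call; return True if it should raise a finding."""
        is_new = geography not in self._seen[principal]
        self._seen[principal].add(geography)
        return is_new

detector = NewGeographyDetector()
detector.observe("sa-train@example.iam.gserviceaccount.com", "us-central1")  # first sighting seeds the baseline
print(detector.observe("sa-train@example.iam.gserviceaccount.com", "us-central1"))  # False: known geography
print(detector.observe("sa-train@example.iam.gserviceaccount.com", "europe-west4"))  # True: new geography
```

A production detector would also apply a learning period before alerting, so that the initial baseline does not itself generate findings.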

AI Protection framework

AI Protection uses a framework that includes specific cloud controls that are deployed automatically in detective mode. Detective mode means that the cloud control is applied to the defined resources for monitoring purposes: any violations are detected and alerts are generated. You use frameworks and cloud controls to define your AI Protection requirements and apply those requirements to your Google Cloud environment.

AI Protection includes the Default framework, which defines recommended baseline controls for AI Protection. When you enable AI Protection, the default framework is automatically applied to the Google Cloud organization in detective mode.

If required, you can make copies of the framework to create custom AI Protection frameworks. You can add cloud controls to your custom frameworks and apply the custom frameworks to the organization, folders, or projects. For example, you can create custom frameworks that apply specific jurisdictional controls to specific folders to ensure that data within those folders stays within a particular geographical region.
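The relationship between frameworks, controls, detective mode, and scope can be sketched in a few lines. This is an illustrative model only: the classes, fields, and control logic below are hypothetical and are not the Security Command Center API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class CloudControl:
    name: str
    check: Callable[[Dict], bool]  # returns True when the resource is compliant

@dataclass
class Framework:
    name: str
    controls: List[CloudControl]

    def copy(self, new_name: str) -> "Framework":
        # A custom framework starts as a copy of an existing framework.
        return Framework(new_name, list(self.controls))

def apply_detective(framework: Framework, resources: List[Dict]) -> List[Tuple[str, str]]:
    """Detective mode: evaluate every control, report violations, enforce nothing."""
    alerts = []
    for resource in resources:
        for control in framework.controls:
            if not control.check(resource):
                alerts.append((control.name, resource["name"]))
    return alerts

default_framework = Framework("Default", [
    CloudControl("Block Public IP Address for Vertex AI Workbench Instances",
                 lambda r: not r.get("public_ip", False)),
])

# Copy the default framework, add a jurisdictional control, and apply the
# custom framework to the resources in one folder.
custom = default_framework.copy("eu-data-residency")
custom.controls.append(CloudControl("Keep data in europe-west4",
                                    lambda r: r.get("region") == "europe-west4"))

folder_resources = [
    {"name": "workbench-1", "public_ip": True, "region": "europe-west4"},
    {"name": "workbench-2", "public_ip": False, "region": "us-central1"},
]
for control_name, resource_name in apply_detective(custom, folder_resources):
    print(f"ALERT: {resource_name} violates '{control_name}'")
```

Note that `apply_detective` only returns alerts; nothing stops the non-compliant resources, which is the defining property of detective mode.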

Note: You can't assign AI Protection frameworks to applications.

Cloud controls in the default AI Protection framework

The following cloud controls are part of the default AI Protection framework.

  • Block Default VPC Network for Vertex AI Workbench Instances: Don't create Workbench instances in the default VPC network to help prevent the use of its over-permissive default firewall rules.
  • Block File Downloading in JupyterLab Console: Don't permit file downloading from the JupyterLab console in Workbench instances to reduce data exfiltration risks and help prevent malware distribution.
  • Block Internet Access for Vertex AI Runtime Templates: Don't permit internet access in Colab Enterprise runtime templates to reduce the external attack surface and help prevent potential data exfiltration.
  • Block Public IP Address for Vertex AI Workbench Instances: Don't permit external IP addresses for Workbench instances to reduce exposure to the internet and minimize the risk of unauthorized access.
  • Block Root Access on Vertex AI Workbench Instances: Don't permit root access on Workbench instances to help prevent unauthorized modification of critical system files or installation of malicious software.
  • Enable Automatic Upgrades for Vertex AI Workbench Instances: Enable automatic upgrades for Workbench instances to ensure access to the latest features, framework updates, and security patches.
  • Enable CMEK for Vertex AI Custom Jobs: Require customer-managed encryption keys (CMEK) on Vertex AI custom training jobs to gain more control over the encryption of job inputs and outputs.
  • Enable CMEK for Vertex AI Datasets: Require CMEK for Vertex AI datasets to gain more control over data encryption and key management.
  • Enable CMEK for Vertex AI Endpoints: Require CMEK for Vertex AI endpoints to gain more control over the encryption of deployed models and control data access.
  • Enable CMEK for Vertex AI Featurestore: Require CMEK for Vertex AI featurestore to gain more control over data encryption and access.
  • Enable CMEK for Vertex AI Hyperparameter Tuning Jobs: Require CMEK on hyperparameter tuning jobs to gain more control over the encryption of model training data and job configuration.
  • Enable CMEK for Vertex AI Metadata Stores: Require CMEK for Vertex AI metadata stores to gain more control over the encryption of metadata and control access.
  • Enable CMEK for Vertex AI Models: Require CMEK for Vertex AI models to gain more control over data encryption and key management.
  • Enable CMEK for Vertex AI Notebook Runtime Templates: Require CMEK for Colab Enterprise runtime templates to help secure runtime environments and associated data.
  • Enable CMEK for Vertex AI TensorBoard: Require CMEK for Vertex AI TensorBoard to gain more control over the encryption of experiment data and model visualizations.
  • Enable CMEK for Vertex AI Training Pipelines: Require CMEK on Vertex AI training pipelines to gain more control over the encryption of training data and resulting artifacts.
  • Enable CMEK for Vertex AI Workbench Instances: Require CMEK for Vertex AI Workbench instances to gain more control over data encryption.
  • Enable Delete to Trash Feature for Vertex AI Workbench Instances: Enable the Delete to Trash metadata feature for Workbench instances to provide a recovery safety net and help prevent accidental data loss.
  • Enable Idle Shutdown for Vertex AI Runtime Templates: Enable automatic idle shutdown in Colab Enterprise runtime templates to optimize cloud costs, improve resource management, and enhance security.
  • Enable Integrity Monitoring for Vertex AI Workbench Instances: Enable integrity monitoring on Workbench instances to continuously attest the boot integrity of your VMs against a trusted baseline.
  • Enable Secure Boot for Vertex AI Runtime Templates: Enable secure boot in Colab Enterprise runtime templates to help prevent unauthorized code execution and help protect operating system integrity.
  • Enable Secure Boot for Vertex AI Workbench Instances: Enable secure boot for Workbench instances to help prevent unauthorized or malicious software from running during the boot process.
  • Enable vTPM on Vertex AI Workbench Instances: Enable the virtual Trusted Platform Module (vTPM) on Workbench instances to safeguard the boot process and gain more control over encryption.
  • Restrict Use of Default Service Account for Vertex AI Workbench Instances: Restrict the use of the highly permissive default service account for Workbench instances to reduce the risk of unauthorized access to Google Cloud services.
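The many CMEK controls in the list reduce to one property: the resource must declare a customer-managed key in its encryption settings. As a sketch (the field name `encryptionSpec.kmsKeyName` matches how Vertex AI resources express CMEK in their REST representation, but the check function itself is a hypothetical illustration, not the control's published logic):

```python
def violates_cmek_control(resource: dict) -> bool:
    """Return True when no customer-managed key is configured, in which
    case Google-managed encryption is in effect and the control would
    raise a finding. Hypothetical sketch, not the real control logic."""
    kms_key = resource.get("encryptionSpec", {}).get("kmsKeyName")
    return not kms_key

dataset_with_cmek = {
    "name": "projects/p/locations/us-central1/datasets/1",
    "encryptionSpec": {
        "kmsKeyName": "projects/p/locations/us-central1/keyRings/r/cryptoKeys/k"
    },
}
dataset_without_cmek = {"name": "projects/p/locations/us-central1/datasets/2"}

print(violates_cmek_control(dataset_with_cmek))     # False: CMEK configured
print(violates_cmek_control(dataset_without_cmek))  # True: finding raised
```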

Supported functional areas for AI Protection

This section defines functional areas that AI Protection can help secure.

Use the AI Security dashboard

The AI Security dashboard provides a comprehensive view of your organization's AI asset inventory and proposes potential mitigations for enhanced risk and threat management.

Access the AI Security dashboard

To access the AI Security dashboard, go to the Risk overview > AI security page in the Google Cloud console.

Note: For the dashboard to populate with data, you need one of the IAM roles in the Required roles section.

For more information, see AI Security dashboard.

Understand risk management for AI systems

This section provides information about potential risks that are associated with AI systems. You can view the top risks in your AI inventory.

You can click any issue to open a details pane that provides a visualization of the issue.

View AI threats

This section provides insights into threats associated with AI systems. You can view the top five recent threats associated with your AI resources.

On this page, you can do the following:

  • Click View all to see threats that are associated with your AI resources.
  • Click any threat to see further details about the threat.

Visualize your AI inventory

You can view a visualization of your AI inventory on the dashboard that provides a summary of the projects that involve generative AI, the first-party and third-party models in active use, and the datasets that are used in training the third-party models.

On this page, you can do the following:

  • To view the inventory details page, click any of the nodes in the visualization.
  • To view a detailed listing of individual assets (such as foundational models and custom-built models), click the tooltip.
  • To open a detailed view of a model, click the model. This view displays details such as the endpoints where the model is hosted and the dataset used to train the model. If Sensitive Data Protection is enabled, the datasets view also displays whether the dataset contains any sensitive data.

Review AI framework findings summary

This section helps you assess and manage the findings generated by AI framework and data security policies. It includes the following:

  • Findings: This section displays a summary of findings generated by AI security policies and data security policies. Click View all findings, or click the count against each finding category, to view details about the findings. Click a finding to display additional information about that finding.
  • Sensitive data in Vertex AI datasets: This section displays a summary of the findings based on sensitive data in datasets, as reported by Sensitive Data Protection. For more information, see Introduction to Vertex AI.

Examine Model Armor findings

A graph shows the total number of prompts or responses scanned by Model Armor and the number of issues that Model Armor detected. In addition, it displays summary statistics for various types of detected issues, such as prompt injection, jailbreak detection, and sensitive data detection.

This information is populated based on the metrics that Model Armor publishes to Cloud Monitoring. For more information, see Model Armor overview.
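The dashboard's summary amounts to a simple aggregation over per-scan records. The sample shape and issue-type labels below are hypothetical stand-ins, not Model Armor's published Cloud Monitoring metric schema; the sketch only shows the kind of rollup the graph displays.

```python
from collections import Counter

# Each record represents one scanned prompt or response, labeled with the
# issue types detected in it (empty list = clean). Field names are invented
# for this illustration.
samples = [
    {"kind": "prompt",   "issues": ["prompt_injection"]},
    {"kind": "prompt",   "issues": []},
    {"kind": "response", "issues": ["sensitive_data"]},
    {"kind": "prompt",   "issues": ["jailbreak", "sensitive_data"]},
]

total_scanned = len(samples)
flagged = sum(1 for s in samples if s["issues"])
by_type = Counter(issue for s in samples for issue in s["issues"])

print(f"scanned={total_scanned} flagged={flagged}")  # scanned=4 flagged=3
print(dict(by_type))  # {'prompt_injection': 1, 'sensitive_data': 2, 'jailbreak': 1}
```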


Last updated 2026-02-20 UTC.