Use AI securely and responsibly

Last reviewed 2025-02-05 UTC

This principle in the security pillar of the Google Cloud Well-Architected Framework provides recommendations to help you secure your AI systems. These recommendations are aligned with Google's Secure AI Framework (SAIF), which provides a practical approach to address the security and risk concerns of AI systems. SAIF is a conceptual framework that aims to provide industry-wide standards for building and deploying AI responsibly.

Principle overview

To help ensure that your AI systems meet your security, privacy, and compliance requirements, you must adopt a holistic strategy that starts with the initial design and extends to deployment and operations. You can implement this holistic strategy by applying the six core elements of SAIF.

Google uses AI to enhance security measures, such as identifying threats, automating security tasks, and improving detection capabilities, while keeping humans in the loop for critical decisions.

Google emphasizes a collaborative approach to advancing AI security. This approach involves partnering with customers, industries, and governments to enhance the SAIF guidelines and offer practical, actionable resources.

The recommendations to implement this principle are grouped within the following sections:

  • Recommendations to use AI securely
  • Recommendations for AI governance

Recommendations to use AI securely

To use AI securely, you need both foundational security controls and AI-specific security controls. This section provides an overview of recommendations to ensure that your AI and ML deployments meet the security, privacy, and compliance requirements of your organization. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Well-Architected Framework.

Define clear goals and requirements for AI usage

This recommendation is relevant to the following focus areas:

  • Cloud governance, risk, and compliance
  • AI and ML security

This recommendation aligns with the SAIF element about contextualizing AI system risks in the surrounding business processes. When you design and evolve AI systems, it's important to understand your specific business goals, risks, and compliance requirements.

Keep data secure and prevent loss or mishandling

This recommendation is relevant to the following focus areas:

  • Infrastructure security
  • Identity and access management
  • Data security
  • Application security
  • AI and ML security

This recommendation aligns with the following SAIF elements:

  • Expand strong security foundations to the AI ecosystem. This element includes data collection, storage, access control, and protection against data poisoning.
  • Contextualize AI system risks. Emphasize data security to support business objectives and compliance.

Keep AI pipelines secure and robust against tampering

This recommendation is relevant to the following focus areas:

  • Infrastructure security
  • Identity and access management
  • Data security
  • Application security
  • AI and ML security

This recommendation aligns with the following SAIF elements:

  • Expand strong security foundations to the AI ecosystem. As a key element of establishing a secure AI system, secure your code and model artifacts.
  • Adapt controls for faster feedback loops. Because it's important for mitigation and incident response, track your assets and pipeline runs.
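To illustrate the artifact-tracking idea behind these elements, the following sketch records a SHA-256 digest for each model artifact in a manifest at build time and re-verifies the digests before deployment. The function names and manifest format are illustrative, not part of any Google Cloud API:

```python
import hashlib
import json
from pathlib import Path


def digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def record_manifest(artifacts: list[Path], manifest: Path) -> None:
    """Write a manifest mapping each artifact path to its digest."""
    manifest.write_text(json.dumps({str(p): digest(p) for p in artifacts}))


def verify_manifest(manifest: Path) -> bool:
    """Re-hash every listed artifact and compare against the recorded digest."""
    recorded = json.loads(manifest.read_text())
    return all(digest(Path(p)) == d for p, d in recorded.items())
```

In a real pipeline, the manifest itself would be signed and stored in a system the build environment cannot modify, so that a tampered artifact fails verification at deployment time.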

Deploy apps on secure systems using secure tools and artifacts

This recommendation is relevant to the following focus areas:

  • Infrastructure security
  • Identity and access management
  • Data security
  • Application security
  • AI and ML security

Using secure systems and validated tools and artifacts in AI-based applications aligns with the SAIF element about expanding strong security foundations to the AI ecosystem and supply chain.

Protect and monitor inputs

This recommendation is relevant to the following focus areas:

  • Logging, auditing, and monitoring
  • Security operations
  • AI and ML security

This recommendation aligns with the SAIF element about extending detection and response to bring AI into an organization's threat universe. To prevent issues, it's critical to manage prompts for generative AI systems, monitor inputs, and control user access.
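A minimal sketch of input screening for a generative AI system is shown below. The length limit and denylist patterns are illustrative assumptions only; production systems need policy-driven, model-based screening rather than a static pattern list:

```python
import re

# Illustrative patterns only; a real deployment needs managed,
# continuously updated screening, not a static denylist.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]
MAX_PROMPT_CHARS = 4000  # assumed limit for this sketch


def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an incoming prompt."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "prompt exceeds length limit"
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            return False, f"matched pattern: {pattern.pattern}"
    return True, "ok"
```

Each rejection reason can be logged, which feeds the detection-and-response loop this SAIF element describes.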

Recommendations for AI governance

All of the recommendations in this section are relevant to the following focus area: Cloud governance, risk, and compliance.

Google Cloud offers a robust set of tools and services that you can use to build responsible and ethical AI systems. We also offer a framework of policies, procedures, and ethical considerations that can guide the development, deployment, and use of AI systems.

As reflected in our recommendations, Google's approach for AI governance is guided by the following principles:

  • Fairness
  • Transparency
  • Accountability
  • Privacy
  • Security

Use fairness indicators

Vertex AI can detect bias during the data collection or post-training evaluation process. Vertex AI provides model evaluation metrics like data bias and model bias to help you evaluate your model for bias.

These metrics are related to fairness across different categories like race, gender, and class. However, interpreting statistical deviations isn't a straightforward exercise, because differences across categories might not be a result of bias or a signal of harm.
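Outside of Vertex AI, the underlying calculation can be sketched as a demographic parity check: compare the positive-prediction rate across groups. This is a simplified, illustrative metric, not the exact definition Vertex AI's bias metrics use, and as noted above a nonzero gap is not by itself proof of harm:

```python
from collections import defaultdict


def positive_rate_by_group(groups, predictions):
    """Fraction of positive (1) predictions for each group value."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += int(p)
    return {g: positives[g] / totals[g] for g in totals}


def parity_gap(groups, predictions):
    """Largest difference in positive rates across groups; 0 means parity."""
    rates = positive_rate_by_group(groups, predictions).values()
    return max(rates) - min(rates)
```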

Use Vertex Explainable AI

To understand how the AI models make decisions, use Vertex Explainable AI. This feature helps you to identify potential biases that might be hidden in the model's logic.

This explainability feature is integrated with BigQuery ML and Vertex AI, which provide feature-based explanations. You can either perform explainability in BigQuery ML or register your model in Vertex AI and perform explainability in Vertex AI.
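As a model-agnostic illustration of what a feature-based explanation measures, the following sketch estimates permutation importance: how much a metric drops when one feature's values are shuffled. This is not the algorithm Vertex Explainable AI uses (its attributions are based on methods such as Shapley values); it is only a way to see the idea with any callable model:

```python
import random


def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)


def permutation_importance(model, X, y, metric, seed=0):
    """Per-feature drop in the metric when that feature column is shuffled.

    model: callable taking a list of rows (lists) and returning predictions.
    """
    rng = random.Random(seed)
    baseline = metric(y, model(X))
    importances = []
    for j in range(len(X[0])):
        shuffled_col = [row[j] for row in X]
        rng.shuffle(shuffled_col)
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, shuffled_col)]
        importances.append(baseline - metric(y, model(X_perm)))
    return importances
```

A feature whose shuffling leaves the metric unchanged contributes nothing to the model's decisions, which is exactly the kind of signal explanations surface when you hunt for hidden bias.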

Track data lineage

Track the origin and transformation of data that's used in your AI systems. This tracking helps you understand the data's journey and identify potential sources of bias or error.

Data lineage is a Dataplex Universal Catalog feature that lets you track how data moves through your systems: where it comes from, where it's passed to, and what transformations are applied to it.
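The kind of record such a feature maintains can be sketched as a graph of datasets, each pointing at its upstream sources and the transformation that produced it. The class and field names below are illustrative, not the Dataplex data model:

```python
from dataclasses import dataclass, field


@dataclass
class LineageNode:
    """A dataset plus the upstream datasets and transformation that produced it."""
    name: str
    sources: list = field(default_factory=list)  # upstream LineageNode objects
    transformation: str = "source"               # how this dataset was derived


def upstream_sources(node: LineageNode) -> set[str]:
    """All root datasets a node ultimately derives from."""
    if not node.sources:
        return {node.name}
    roots = set()
    for src in node.sources:
        roots |= upstream_sources(src)
    return roots
```

Walking the graph from a training dataset back to its roots is how you answer "which raw sources could have introduced this bias?"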

Establish accountability

Establish clear responsibility for the development, deployment, and outcomes of your AI systems.

Use Cloud Logging to log key events and decisions made by your AI systems. The logs provide an audit trail to help you understand how the system is performing and identify areas for improvement.

Use Error Reporting to systematically analyze errors made by the AI systems. This analysis can reveal patterns that point to underlying biases or areas where the model needs further refinement.
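The audit trail both services build on amounts to one structured record per model decision. A minimal sketch using only the standard library is shown below; the field names are illustrative assumptions, and Cloud Logging and Error Reporting are the managed equivalents:

```python
import json
import logging

logger = logging.getLogger("model_audit")


def log_decision(model_id: str, request_id: str, prediction, confidence: float):
    """Emit one structured audit record per model decision and return it."""
    record = {
        "model_id": model_id,
        "request_id": request_id,
        "prediction": prediction,
        "confidence": confidence,
    }
    # JSON-encoded records can be parsed later to look for error patterns.
    logger.info(json.dumps(record))
    return record
```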

Implement differential privacy

During model training, add noise to the data in order to make it difficult to identify individual data points but still enable the model to learn effectively. With SQL in BigQuery, you can transform the results of a query with differentially private aggregations.
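The noise-addition idea can be sketched with the Laplace mechanism applied to a count query. The epsilon value and function name are illustrative; BigQuery's differentially private aggregations are the managed version of this technique:

```python
import random


def dp_count(values, predicate, epsilon: float, seed=None) -> float:
    """Differentially private count using the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one record changes
    the result by at most 1), so Laplace noise with scale = 1/epsilon
    gives epsilon-differential privacy.
    """
    rng = random.Random(seed)
    true_count = sum(1 for v in values if predicate(v))
    scale = 1.0 / epsilon
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_count + noise
```

Smaller epsilon means more noise and stronger privacy; larger epsilon means answers closer to the true count but weaker privacy, which is the trade-off any differentially private aggregation exposes.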
