
Google’s Secure AI Framework (SAIF)

The potential of AI, especially generative AI, is immense. As innovation moves forward, the industry needs security standards for building and deploying AI responsibly. That’s why we introduced the Secure AI Framework (SAIF), a conceptual framework to secure AI systems.

Six core elements of SAIF

SAIF is designed to address top-of-mind concerns for security professionals, such as AI/ML model risk management, security, and privacy — helping to ensure that when AI models are implemented, they are secure by default.

  • Expand strong security foundations to the AI ecosystem

  • Extend detection and response to bring AI into an organization’s threat universe (see the sketch after this list)

  • Automate defenses to keep pace with existing and new threats

  • Harmonize platform-level controls to ensure consistent security across the organization

  • Adapt controls to adjust mitigations and create faster feedback loops for AI deployment

  • Contextualize AI system risks in surrounding business processes
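
To make the second element concrete, here is a minimal, hypothetical sketch of extending detection and response to AI: model calls are wrapped so prompts and outputs flow into the same logging pipeline an organization already monitors. Every name in it (call_model, monitored_completion, SUSPICIOUS_MARKERS) is an illustrative placeholder, not part of SAIF itself.

```python
# A minimal sketch of SAIF's "extend detection and response" element:
# wrap model calls so prompts and outputs feed the logging pipeline the
# organization already monitors. All names here are hypothetical.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
security_log = logging.getLogger("ai.security")  # route to a SIEM in practice

# Naive prompt-injection markers; real deployments would use richer detectors.
SUSPICIOUS_MARKERS = ("ignore previous instructions", "reveal the system prompt")

def call_model(prompt: str) -> str:
    """Stand-in for a real model invocation."""
    return "model output"

def monitored_completion(prompt: str, user_id: str) -> str:
    """Call the model and emit a structured security event for the call."""
    response = call_model(prompt)
    event = {
        "ts": time.time(),
        "user": user_id,
        "prompt_len": len(prompt),
        "response_len": len(response),
        # Flag crude injection markers so existing detection rules can
        # alert on AI traffic like any other security signal.
        "flags": [m for m in SUSPICIOUS_MARKERS if m in prompt.lower()],
    }
    security_log.info(json.dumps(event))
    return response

print(monitored_completion("Summarize this ticket", user_id="u123"))
```

The point of the sketch is that AI telemetry should be a first-class input to existing detection tooling, not a separate silo.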

Enabling a safer ecosystem

We’re excited to share the first steps in our journey to build a SAIF ecosystem across governments, businesses, and organizations to advance a framework for secure AI deployment that works for all.


Introducing SAIF.Google: Secure AI starts here

SAIF.Google is a resource hub to help security professionals navigate the evolving landscape of AI security. It provides a collection of AI security risks and controls, including a “Risk Self-Assessment Report” to guide practitioners in understanding the risks that could affect them and how to implement SAIF in their organizations. These resources will help address the critical need to build and deploy secure AI systems in a rapidly evolving world.

Bringing SAIF to governments and organizations

We collaborate with governments and organizations to help mitigate AI security risks. Our work with policymakers and standards organizations such as NIST contributes to evolving regulatory frameworks. We recently highlighted SAIF’s role in securing AI systems, aligning with the White House’s AI commitments.

Coalition for Secure AI: Expanding SAIF with industry allies

We are advancing this work and fostering industry support by forming the Coalition for Secure AI (CoSAI), with founding members like Anthropic, Cisco, GenLab, IBM, Intel, Nvidia, and PayPal, to address critical challenges in implementing secure AI systems.

Additional resources

Enhancing AI security: Google’s AI Red Team

Explore how Google’s AI Red Team, armed with cutting-edge tactics, enhances security for AI systems. Discover key insights and lessons in our latest report.

Learn more

Securing AI systems with Mandiant

Mandiant urges proactive security integration in AI systems, aligning with SAIF for robust protection.

Learn more

Android: Secure development guidelines

Secure your organization with Android’s real-time vulnerability alerts and follow secure development guidelines for machine learning code.

Download PDF

Securing AI with Google Cloud

Google Cloud offers resources essential for boards of directors that focus on cybersecurity, deployment of AI systems, risk governance, and secure transformation.

Learn more

Securing the AI software supply chain

This white paper addresses AI supply chain security using provenance information, outlines the risks of traditional and AI software, and offers practical solutions for organizations (a brief provenance-checking sketch follows).

Learn more
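
The provenance approach can be made concrete with a small, hypothetical sketch: an SLSA v0.2-style attestation records which build system produced an artifact (in its predicate.builder.id field), and a consumer checks that builder against an allowlist before deploying the model. The trusted-builder URL and file path below are assumptions for illustration, not taken from the white paper.

```python
# Hypothetical provenance gate for a model artifact, assuming an SLSA
# v0.2-style attestation where predicate.builder.id names the build system.
import json

TRUSTED_BUILDERS = {"https://builder.example.com/trusted-training-pipeline"}

def provenance_is_trusted(attestation_path: str) -> bool:
    """Return True only if the artifact was built by an allowlisted system."""
    try:
        with open(attestation_path) as f:
            statement = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        # Missing or malformed provenance is treated as untrusted.
        return False
    builder_id = statement.get("predicate", {}).get("builder", {}).get("id", "")
    return builder_id in TRUSTED_BUILDERS

# Refuse to deploy a model whose provenance cannot be verified.
if not provenance_is_trusted("model.provenance.json"):
    raise SystemExit("Untrusted or missing provenance; refusing to deploy.")
```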

Common questions about SAIF

How are SAIF and Responsible AI related?

Google has an imperative to build AI responsibly, and to empower others to do the same. Our AI Principles, published in 2018, describe our commitment to developing technology responsibly and in a manner that is built for safety, enables accountability, and upholds high standards of scientific excellence. Responsible AI is our overarching approach, which has several dimensions, such as “Fairness,” “Interpretability,” “Security,” and “Privacy,” that guide all of Google’s AI product development.

SAIF is our framework for creating a standardized and holistic approach to integrating security and privacy measures into ML-powered applications. It is aligned with the “Security” and “Privacy” dimensions of building AI responsibly. SAIF ensures that ML-powered applications are developed in a responsible manner, taking into account the evolving threat landscape and user expectations.

How is Google putting SAIF into action?

Google has a long history of driving responsible AI and cybersecurity development, and we have been mapping security best practices to AI innovation for many years. Our Secure AI Framework is distilled from the body of experience and best practices we’ve developed and implemented, and reflects Google’s approach to building ML- and generative-AI-powered apps with responsive, sustainable, and scalable protections for security and privacy. We will continue to evolve and build SAIF to address new risks, changing landscapes, and advancements in AI.

How can practitioners implement the framework?

See our quick guide to implementing the SAIF framework:

  • Step 1 - Understand the use
    • Understanding the specific business problem AI will solve and the data needed to train the model will help drive the policy, protocols, and controls that need to be implemented as part of SAIF.
  • Step 2 - Assemble the team
    • Developing and deploying AI systems, just like traditional systems, are multidisciplinary efforts.
    • AI systems are often complex and opaque, include large numbers of moving parts, rely on large amounts of data, are resource intensive, can be used to apply judgment-based decisions, and can generate novel content that may be offensive or harmful, or may perpetuate stereotypes and social biases.
    • Establish the right cross-functional team to ensure that security, privacy, risk, and compliance considerations are included from the start.
  • Step 3 - Level set with an AI primer

    • As teams embark on evaluating the business use of AI, and the various and evolving complexities, risks, and security controls that apply, it is critical that all parties involved understand the basics of the AI model development life cycle and the design and logic of the model methodologies, including their capabilities, merits, and limitations.
  • Step 4 - Apply the six core elements of SAIF

    • These elements are not intended to be applied in chronological order; the sketch after this list shows one way to track their adoption.
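
As a minimal illustration of step 4, the sketch below tracks which of the six core elements have been applied for a given use case. The element names come from the framework above; the data structure, its methods, and the example use case are hypothetical.

```python
# Hypothetical tracker for SAIF adoption: steps 1 and 2 supply the use
# case and team, and step 4 marks elements as applied in any order.
from dataclasses import dataclass, field

SAIF_ELEMENTS = [
    "Expand strong security foundations",
    "Extend detection and response",
    "Automate defenses",
    "Harmonize platform-level controls",
    "Adapt controls and feedback loops",
    "Contextualize AI system risks",
]

@dataclass
class SaifAssessment:
    use_case: str                      # Step 1: the business problem AI solves
    team: list[str]                    # Step 2: cross-functional owners
    applied: set[str] = field(default_factory=set)

    def mark_applied(self, element: str) -> None:
        if element not in SAIF_ELEMENTS:
            raise ValueError(f"Unknown SAIF element: {element}")
        self.applied.add(element)

    def gaps(self) -> list[str]:
        # The elements are not chronological, so remaining order is arbitrary.
        return [e for e in SAIF_ELEMENTS if e not in self.applied]

assessment = SaifAssessment(
    use_case="Support-ticket summarization",
    team=["security", "privacy", "risk", "compliance", "ML"],
)
assessment.mark_applied("Automate defenses")
print(assessment.gaps())  # the five elements still to be addressed
```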

Where can I find more information about SAIF and how to apply it to my business or entity?

Stay tuned! Google will continue to build and share Secure AI Framework resources, guidance, and tools, along with other best practices in AI application development.

Why we support a secure AI community for everyone

As one of the first companies to articulate AI principles, we’ve set the standard for responsible AI, which guides our product development for safety. We’ve advocated for and developed industry frameworks to raise the security bar, and we’ve learned that building a community to advance this work is essential for long-term success. That’s why we’re excited to build a SAIF community for all.

