Use AI for security
This principle in the security pillar of the Google Cloud Well-Architected Framework provides recommendations to use AI to help you improve the security of your cloud workloads.
Because of the increasing number and sophistication of cyber attacks, it's important to take advantage of AI's potential to help improve security. AI can help to reduce the number of threats, reduce the manual effort required by security professionals, and help compensate for the scarcity of experts in the cyber-security domain.
Principle overview
Use AI capabilities to improve your existing security systems and processes. You can use Gemini in Security as well as the intrinsic AI capabilities that are built into Google Cloud services.
These AI capabilities can transform security by providing assistance across every stage of the security lifecycle. For example, you can use AI to do the following:
- Analyze and explain potentially malicious code without reverse engineering.
- Reduce repetitive work for cyber-security practitioners.
- Use natural language to generate queries and interact with security event data (see the sketch after this list).
- Surface contextual information.
- Offer recommendations for quick responses.
- Aid in the remediation of events.
- Summarize high-priority alerts for misconfigurations and vulnerabilities, highlight potential impacts, and recommend mitigations.
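For example, the following minimal sketch uses the Vertex AI SDK for Python to turn an analyst's plain-language question into a SQL query over security event data. The project ID, table name, and schema are hypothetical placeholders, and the generated query should be reviewed before it's run:

```python
# Sketch: translate a natural-language question about security events
# into a SQL query with Gemini on Vertex AI. The table and schema below
# are illustrative placeholders, not a real product schema.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

SCHEMA = """security_events(
    event_time TIMESTAMP, principal_email STRING, action STRING,
    source_ip STRING, resource_name STRING)"""

question = (
    "Which principals deleted resources from previously unseen "
    "IP addresses in the last 24 hours?"
)

response = model.generate_content(
    "You assist security analysts. Given this table schema:\n"
    f"{SCHEMA}\n"
    f"Write one BigQuery SQL query that answers: {question}\n"
    "Return only the SQL."
)
print(response.text)  # An analyst reviews the SQL before running it.
```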
Levels of security autonomy
AI and automation can help you achieve better security outcomes when you're dealing with ever-evolving cyber-security threats. By using AI for security, you can achieve greater levels of autonomy to detect and prevent threats and improve your overall security posture. Google defines four levels of autonomy when you use AI for security, and they outline the increasing role of AI in assisting and eventually leading security tasks:
- Manual: Humans run all of the security tasks (prevent, detect, prioritize, and respond) across the entire security lifecycle.
- Assisted: AI tools, like Gemini, boost human productivity by summarizing information, generating insights, and making recommendations.
- Semi-autonomous: AI takes primary responsibility for many security tasks and delegates to humans only when required.
- Autonomous: AI acts as a trusted assistant that drives the security lifecycle based on your organization's goals and preferences, with minimal human intervention.
Recommendations
The following sections describe the recommendations for using AI for security. The sections also indicate how the recommendations align with Google's Secure AI Framework (SAIF) core elements and how they're relevant to the levels of security autonomy.
- Enhance threat detection and response with AI
- Simplify security for experts and non-experts
- Automate time-consuming security tasks with AI
- Incorporate AI into risk management and governance processes
- Implement secure development practices for AI systems
Enhance threat detection and response with AI
This recommendation is relevant to the following focus areas:
- Security operations (SecOps)
- Logging, auditing, and monitoring
AI can analyze large volumes of security data, offer insights into threat actor behavior, and automate the analysis of potentially malicious code. This recommendation is aligned with the following SAIF elements:
- Extend detection and response to bring AI into your organization's threat universe.
- Automate defenses to keep pace with existing and new threats.
Depending on your implementation, this recommendation can be relevant to the following levels of autonomy:
- Assisted: AI helps with threat analysis and detection.
- Semi-autonomous: AI takes on more responsibility for the security task.
Google Threat Intelligence, which uses AI to analyze threat actor behavior and malicious code, can help you implement this recommendation.
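As an illustration of AI-assisted code analysis, the following hedged sketch asks Gemini (through the Vertex AI SDK, reusing the initialization from the earlier example) to explain a suspicious script without running it. The encoded command is a truncated placeholder, and Google Threat Intelligence provides its own interfaces for this kind of analysis; this only shows the general pattern:

```python
# Sketch: ask Gemini to explain what a suspicious script appears to do,
# without executing or reverse engineering it. The encoded command is a
# truncated placeholder standing in for code found during triage.
from vertexai.generative_models import GenerativeModel

SUSPICIOUS_SNIPPET = r"""
powershell -NoProfile -WindowStyle Hidden -enc SQBFAFgAIAAoAE4A...
"""

model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "Explain, step by step, what this command appears to do and which "
    "behaviors are suspicious. Do not produce a working variant:\n"
    + SUSPICIOUS_SNIPPET
)
print(response.text)
```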
Simplify security for experts and non-experts
This recommendation is relevant to the following focus areas:
- Security operations (SecOps)
- Cloud governance, risk, and compliance
AI-powered tools can summarize alerts and recommend mitigations, and these capabilities can make security more accessible to a wider range of personnel. This recommendation is aligned with the following SAIF elements:
- Automate defenses to keep pace with existing and new threats.
- Harmonize platform-level controls to ensure consistent security across the organization.
Depending on your implementation, this recommendation can be relevant to the following levels of autonomy:
- Assisted: AI helps you to improve the accessibility of security information.
- Semi-autonomous: AI helps to make security practices more effective for all users.
Gemini in Security Command Center can provide summaries of alerts for misconfigurations and vulnerabilities.
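As a rough illustration of the summarize-and-recommend pattern, the following sketch pulls active high-severity findings with the Security Command Center client library and asks Gemini for a plain-language summary. The organization ID is a placeholder, and this is not how Gemini in Security Command Center works internally; it only sketches the pattern:

```python
# Sketch: summarize active high-severity Security Command Center
# findings with Gemini. Assumes google-cloud-securitycenter is installed
# and vertexai.init() was called as in the earlier examples.
from google.cloud import securitycenter
from vertexai.generative_models import GenerativeModel

client = securitycenter.SecurityCenterClient()
parent = "organizations/123456789012/sources/-"  # placeholder org ID

results = client.list_findings(
    request={
        "parent": parent,
        "filter": 'state="ACTIVE" AND severity="HIGH"',
        "page_size": 20,
    }
)
details = "\n".join(
    f"{r.finding.category}: {r.finding.resource_name}" for r in results
)

model = GenerativeModel("gemini-1.5-pro")
summary = model.generate_content(
    "Summarize these security findings for an on-call engineer, note "
    "the likely impact, and suggest mitigations:\n" + details
)
print(summary.text)
```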
Automate time-consuming security tasks with AI
This recommendation is relevant to the following focus areas:
- Infrastructure security
- Security operations (SecOps)
- Application security
AI can automate tasks such as analyzing malware, generating security rules, and identifying misconfigurations. These capabilities can help to reduce the workload on security teams and accelerate response times. This recommendation is aligned with the SAIF element about automating defenses to keep pace with existing and new threats.
Depending on your implementation, this recommendation can be relevant to the following levels of autonomy:
- Assisted: AI helps you to automate tasks.
- Semi-autonomous: AI takes primary responsibility for security tasks, and only requests human assistance when needed.
Gemini in Google SecOps can help to automate high-toil tasks by assisting analysts, retrieving relevant context, and making recommendations for next steps.
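To make the rule-generation idea concrete, the following sketch drafts a detection rule from a plain-language description. Google SecOps uses the YARA-L 2.0 language for detection rules; treat the model output as a draft that an analyst must review and test, not a deployable artifact:

```python
# Sketch: draft a YARA-L 2.0 detection rule from a plain-language
# description of the behavior to detect. The output is a starting point
# for an analyst, not a validated rule.
from vertexai.generative_models import GenerativeModel

description = (
    "Alert when a service account downloads more than 100 objects "
    "from Cloud Storage within 10 minutes."
)

model = GenerativeModel("gemini-1.5-pro")
draft = model.generate_content(
    "Draft a YARA-L 2.0 detection rule for the following behavior, "
    "with comments that explain each section:\n" + description
)
print(draft.text)  # Review and test the rule before deploying it.
```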
Incorporate AI into risk management and governance processes
This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance.
You can use AI to build a model inventory and risk profiles. You can also use AI to implement policies for data privacy, cyber risk, and third-party risk. This recommendation is aligned with the SAIF element about contextualizing AI system risks in surrounding business processes.
Depending on your implementation, this recommendation can be relevant to the semi-autonomous level of autonomy. At this level, AI can orchestrate security agents that run processes to achieve your custom security goals.
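As a starting point, a model inventory can be as simple as a structured record per model with a basic risk profile. The field names in this sketch are illustrative, not a Google-defined schema:

```python
# Sketch: a minimal AI model inventory entry with a basic risk profile.
# Field names are illustrative; adapt them to your governance process.
from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    name: str
    owner: str
    use_case: str
    training_data_sources: list[str]
    handles_personal_data: bool
    third_party_dependencies: list[str] = field(default_factory=list)
    risk_tier: str = "unassessed"  # for example: low, medium, high

inventory = [
    ModelInventoryEntry(
        name="fraud-scorer-v3",
        owner="payments-ml@example.com",
        use_case="Transaction fraud scoring",
        training_data_sources=["bq://payments.transactions_2024"],
        handles_personal_data=True,
        risk_tier="high",
    ),
]
```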
Implement secure development practices for AI systems
This recommendation is relevant to the following focus areas:
- Application security
- AI and ML security
You can use AI for secure coding, cleaning training data, and validating tools and artifacts. This recommendation is aligned with the SAIF element about expanding strong security foundations to the AI ecosystem.
This recommendation can be relevant to all levels of security autonomy, because a secure AI system needs to be in place before AI can be used effectively for security. The recommendation is most relevant to the assisted level, where security practices are augmented by AI.
To implement this recommendation, follow the Supply-chain Levels for Software Artifacts (SLSA) guidelines for AI artifacts and use validated container images.
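One small, concrete check on the way to validated images is making sure that container images are pinned by digest rather than by mutable tags. The following sketch scans a Kubernetes manifest for unpinned images; it assumes PyYAML and Deployment-style manifests, and signature verification (for example, with Binary Authorization or cosign) would be a separate step:

```python
# Sketch: flag container images in a Kubernetes manifest that are not
# pinned by a sha256 digest. Assumes PyYAML and Deployment-style docs.
import sys
import yaml

def unpinned_images(manifest_path: str) -> list[str]:
    """Return image references that lack an immutable digest."""
    flagged = []
    with open(manifest_path) as f:
        for doc in yaml.safe_load_all(f):
            if not doc:
                continue
            pod_spec = doc.get("spec", {}).get("template", {}).get("spec", {})
            for container in pod_spec.get("containers", []):
                image = container.get("image", "")
                if image and "@sha256:" not in image:
                    flagged.append(image)
    return flagged

if __name__ == "__main__":
    for image in unpinned_images(sys.argv[1]):
        print(f"Not pinned by digest: {image}")
```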