Security for Microsoft 365 Copilot

  • Applies to: ✅ Microsoft 365 Copilot

Security is foundational to our approach at Microsoft; it safeguards customer data, supports system integrity, and includes user safety features. This commitment aligns with our broader principles of privacy, compliance, and trust. This article outlines our approach to securing Microsoft 365 Copilot and provides guidance you can use to strengthen your AI security posture.

Note

Learn about new features and capabilities in Microsoft Security products for AI.

What Microsoft does to secure Microsoft 365 Copilot

Microsoft applies a multi-layered, defense-in-depth strategy to secure Microsoft 365 Copilot at every level, grounded in enterprise security, privacy, and compliance standards. This means that if one layer is breached, others still provide protection. Microsoft's approach is guided by Responsible AI principles and is reinforced by the recently expanded Secure Future Initiative.

Our comprehensive security posture for AI includes:

  • Secure engineering and development practices
  • Threat intelligence and risk mitigation
  • Privacy and compliance by design

Together, these elements form a safer digital ecosystem in which you can confidently adopt AI features and tools.

In addition, Microsoft embeds governance based on its Responsible AI principles across the entire AI lifecycle to help ensure that systems are developed and deployed ethically and securely. This strategy helps ensure AI behaves in ways that are trustworthy, responsible, and inclusive. A core part of the Responsible AI program identifies potential risks, measures how likely they are to occur, and builds mitigations to manage them, as outlined in the Application card: Microsoft 365 Copilot.

Secure engineering and development practices

Security is integrated from the ground up through our Security Development Lifecycle (SDL). This integration helps ensure that vulnerabilities are identified and mitigated early in the development process. Microsoft also provides tailored security guidance and best practices for developers, engineers, and security professionals working with Microsoft AI technologies. See Build a strong security posture for AI.

Assessments and testing

Microsoft conducts internal red teaming and commissions third-party assessments that include penetration testing. These assessments help evaluate Microsoft 365 Copilot implementations against traditional vulnerabilities and the Open Web Application Security Project (OWASP) Top 10 for LLMs. To see the assessments, visit the Service Trust Portal.

Execution controls

Microsoft 365 Copilot enforces secure coding and architectural safeguards to prevent misuse, including ransomware generation and remote code execution. Malicious patterns are blocked through prompt inspection and content filtering, while sandboxing helps ensure that Microsoft 365 Copilot operates within constrained execution boundaries. For more information, see Microsoft 365 Copilot architecture and how it works.
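As a conceptual illustration only (the patterns and function below are hypothetical, not Microsoft's actual implementation), an output-side content filter of the kind described above can be sketched as:

```python
import re

# Hypothetical sketch: scan generated output for disallowed patterns
# before returning it. The patterns here are illustrative examples only.
BLOCKED_OUTPUT_PATTERNS = [
    re.compile(r"vssadmin\s+delete\s+shadows", re.IGNORECASE),  # common ransomware prep command
    re.compile(r"cipher\s+/w:", re.IGNORECASE),                 # disk-wiping style command
]

def filter_generated_output(text: str) -> str:
    """Return the text unchanged, or a refusal if it matches a blocked pattern."""
    for pattern in BLOCKED_OUTPUT_PATTERNS:
        if pattern.search(text):
            return "[blocked: generated content matched a disallowed pattern]"
    return text
```

In practice, production filtering combines many signals rather than a short pattern list; this sketch only shows where such a check sits in the flow.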

Threat intelligence and risk mitigation

Microsoft 365 Copilot is protected by a multi-layered defense strategy that combines threat intelligence, AI-specific detection, and architectural containment. Microsoft uses global threat intelligence to monitor adversarial attacks, model manipulation, and data leakage. To see the latest findings, visit the Microsoft Security: Threat intelligence blog.

Key practices include:

  • Internal red teaming and third-party penetration testing
  • Proactive identification and blocking of malicious inputs
  • Machine learning classifiers
  • Metaprompting
  • Content filtering to detect prompt injection attempts, including jailbreaks, eXternalized Prompt Injection Attacks (XPIAs), and agentic vulnerabilities

To view the reports, whitepapers, and other resources, visit the Service Trust Portal.

Microsoft 365 Copilot mitigates XPIA and agentic vulnerabilities through layered defenses, including markdown sanitization, malicious prompt classifiers, session hardening, and content security policies. These protections help prevent unauthorized actions and data exfiltration across Microsoft 365 Copilot surfaces, and are deployed automatically through Microsoft's cloud infrastructure without customer action required. This methodology also includes continuous testing and containment strategies.
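To make one of these layers concrete, here is a minimal, hypothetical sketch of markdown image sanitization; the allowlist, host names, and function are assumptions for illustration, not the production mechanism:

```python
import re

# Hypothetical sketch of markdown image sanitization: drop image links
# whose hosts aren't on a trusted allowlist, so an injected prompt can't
# exfiltrate data through an auto-fetched image URL.
ALLOWED_IMAGE_HOSTS = {"contoso.sharepoint.com"}  # assumed trusted host

IMAGE_PATTERN = re.compile(r"!\[([^\]]*)\]\((https?://[^\s)]+)\)")

def sanitize_markdown_images(markdown: str) -> str:
    """Keep images from allowlisted hosts; reduce others to their alt text."""
    def replace(match: re.Match) -> str:
        alt_text, url = match.group(1), match.group(2)
        host = url.split("/")[2].lower()
        if host in ALLOWED_IMAGE_HOSTS:
            return match.group(0)  # trusted image: keep as-is
        return alt_text            # untrusted image: keep only the alt text
    return IMAGE_PATTERN.sub(replace, markdown)
```

The point of the sketch is the failure mode it closes: a rendered image is fetched automatically, so an attacker-controlled image URL can carry data out without any user click.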

Containment by design

In the event of a successful injection attempt, Microsoft 365 Copilot's architecture helps ensure containment by design. Microsoft 365 Copilot operates within the user's identity and access context, limiting the blast radius of any potential compromise.

  • Microsoft 365 Copilot operates within the user's identity and tenant context
  • Microsoft 365 Copilot only accesses data the user is authorized to view
  • All interactions are scoped to existing permissions, preventing lateral movement or unauthorized data access
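These scoping rules can be modeled with a small sketch (all names are illustrative; the real enforcement happens inside Microsoft's service, not in client code). Every retrieval is filtered by the caller's tenant and access control list, so a compromised prompt cannot widen the accessible data set:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    tenant_id: str
    allowed_users: frozenset

def scoped_search(index: list, user_id: str, tenant_id: str, query: str) -> list:
    """Return only documents in the caller's tenant that the caller may view."""
    return [
        doc for doc in index
        if doc.tenant_id == tenant_id                 # tenant isolation
        and user_id in doc.allowed_users              # existing permissions only
        and query.lower() in doc.doc_id.lower()       # stand-in for semantic matching
    ]
```

Because the tenant and user filters are applied before any content reaches the model, lateral movement through a manipulated prompt fails at the retrieval layer.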

Prompt injection defenses

Microsoft employs a multi-layered defense strategy across the Microsoft 365 Copilot prompt flow to mitigate risks of prompt injection. Here are some examples of protection features that are active by default and don't require setup:

  • The user-in-the-loop design enables users to review, modify, or reject AI-generated content.
  • Spam, scam, and suspicious content filtering help block malicious instructions, phishing attempts, and fraudulent material in prompts.
  • Microsoft 365 Copilot ignores junk email and untrusted Microsoft Teams chats, including chats from external contacts.
  • Microsoft 365 Copilot honors Bing web blocking to filter out adult, low-authority, and malicious sites during web search.
  • Microsoft 365 Copilot operates using a stateless LLM architecture. Requests are processed in real time using tenant-scoped semantic indexing to help ensure data access and relevance are strictly limited to the user's organizational context.
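For intuition, a toy pattern-based screen for injection-style prompts might look like the following. Microsoft's production classifiers are machine-learning based and far more sophisticated, so every pattern and name here is purely illustrative:

```python
import re

# Toy sketch only: a handful of well-known injection phrasings.
# Real classifiers learn from adversarial examples rather than fixed regexes.
INJECTION_HINTS = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.IGNORECASE),
    re.compile(r"you\s+are\s+now\s+in\s+developer\s+mode", re.IGNORECASE),
    re.compile(r"reveal\s+your\s+system\s+prompt", re.IGNORECASE),
]

def looks_like_injection(text: str) -> bool:
    """Flag text that matches any known injection-style phrasing."""
    return any(pattern.search(text) for pattern in INJECTION_HINTS)
```

A fixed list like this is easy to evade, which is why the article describes layered defenses (classifiers, metaprompting, content filtering) rather than any single check.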

For more information about how Microsoft safeguards data, enforces privacy controls, and secures AI operations, see Data, Privacy, and Security for Microsoft 365 Copilot.

Data exfiltration prevention

Microsoft 365 Copilot's layered security model addresses traditional and emerging threats, including scenarios with the potential for data exfiltration, like these:

  • Unauthenticated image URLs, where a user generates an image containing sensitive data, extracts the URL using browser tools, and then shares the image externally. If the image is accessible without authentication, it could bypass enterprise controls, such as Conditional Access or sensitivity labels.
  • Malicious images, such as QR codes, where a user in one tenant generates a malicious image and shares an anonymous URL with users in another tenant. If a URL isn't protected by authentication, access controls might not be enforced.

To help mitigate such scenarios, Microsoft applies its defense-in-depth strategy. This strategy includes continuous monitoring for data leakage vectors, adversarial misuse, and unauthorized access patterns.

Content generated by Microsoft 365 Copilot is governed by the same access controls and compliance policies as other Microsoft 365 content. This means that user permissions, sensitivity labels, and Conditional Access policies are enforced at the point of content generation and access.
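As an illustration of enforcing access controls at the point of access (the proxy endpoint and function name below are assumptions, not a documented Microsoft API), generated-image links could be routed through an authentication-enforcing endpoint instead of being served anonymously:

```python
from urllib.parse import urlparse, quote

# Hypothetical mitigation sketch: rewrite image links so they can only be
# fetched through an endpoint that enforces sign-in, letting controls such
# as Conditional Access apply to every fetch. Endpoint is illustrative.
AUTH_PROXY = "https://contoso.example/secure-media?src="

def require_authenticated_access(image_url: str) -> str:
    """Rewrite an image URL to pass through an authentication-enforcing proxy."""
    parsed = urlparse(image_url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError("unsupported scheme")
    return AUTH_PROXY + quote(image_url, safe="")
```

The design point is that the anonymous URL never leaves the service, so sharing the link across tenants yields a sign-in challenge rather than the raw image.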

Privacy and compliance by design

Microsoft 365 Copilot adheres to the privacy and compliance standards described in Data, Privacy, and Security for Microsoft 365 Copilot. Protections that are enforced through security controls include:

  • Data access enforcement
  • Encryption and isolation
  • Compliance tooling
  • Safeguarding data across the AI lifecycle
  • EU data boundary
  • Cross-cloud governance for AI workloads
  • Policy integration and enforcement

For more information, see Enterprise data protection in Microsoft 365 Copilot and Microsoft 365 Copilot Chat.

Data access enforcement

Microsoft 365 Copilot respects Microsoft Entra ID permissions and Microsoft Purview policies. Microsoft 365 Copilot only surfaces organizational data to which individual users have at least view permissions. Policies are enforced by Microsoft Entra ID, Microsoft Purview, and Conditional Access.

Microsoft 365 Copilot connectors enhance the value of Microsoft 365 Copilot while maintaining enterprise protections.

Encryption and isolation

Data is encrypted in transit and at rest using FIPS 140-2–compliant technologies, with tenant-level isolation. Double Key Encryption (DKE) helps to ensure that Microsoft can't access protected content without the customer's key, and the content isn't accessible to Microsoft 365 Copilot.

When you have data that's encrypted by Microsoft Purview Information Protection, Microsoft 365 Copilot honors the usage rights granted to the user. Encryption can be applied by sensitivity labels or by restricted permissions in apps in Microsoft 365 by using Information Rights Management (IRM).
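A simplified sketch of honoring usage rights follows; the helper and the exact rights set are illustrative assumptions (consult the Microsoft Purview documentation for the authoritative requirements):

```python
# Hypothetical illustration: before encrypted content can be surfaced in a
# response, the requesting user's usage rights are checked. The specific
# rights listed here are assumptions for this sketch.
REQUIRED_RIGHTS = {"VIEW", "EXTRACT"}

def copilot_may_use(user_rights: set) -> bool:
    """True only if the user holds every right required to surface content."""
    return REQUIRED_RIGHTS.issubset(user_rights)
```

The key behavior is asymmetry: a user who can open a document but lacks broader rights doesn't automatically get its content repeated in generated output.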

For more information about using Purview with Microsoft 365 Copilot, see Microsoft Purview data security and compliance protections for generative AI apps.

Cross-cloud governance for AI workloads

Microsoft Purview helps you govern AI across hybrid and multicloud environments like Azure, AWS, and Google Cloud. If you have Microsoft Security Copilot, you get more AI insights and threat detection capabilities.

  • Purview enables consistent data classification, labeling, and policy enforcement across clouds, with visibility into how data flows into AI models and plugins.
  • Security Copilot detects AI-related risks across platforms, correlates threats, and surfaces posture insights from Cloud security posture management.

Policy integration and enforcement

Microsoft 365 Copilot is part of Microsoft's enterprise compliance program and benefits from a range of certifications and assessments. These standards include (but aren't limited to):

  • FedRAMP
  • HITRUST
  • SOC 2 Type 1
  • ISO/IEC 27001, 27701, 22301, 27018, and 42001

Microsoft Entra ID, Microsoft Purview, and Microsoft 365 for business enforce Conditional Access, sensitivity labels, and information barriers.

For more information, see the following resources:

Secure your data for Microsoft 365 Copilot

Securing your data for AI tools like Microsoft 365 Copilot is a shared responsibility. In addition to what Microsoft does to secure Microsoft 365 Copilot, there are certain tasks your organization must do to manage your data and help ensure you're using AI safely and securely. See the AI shared responsibility model.

Learn about Microsoft Purview tools and capabilities

Microsoft Purview provides tools to help you secure and govern your data for use in Microsoft 365 Copilot and AI tools. See the following articles:

Note

Security Copilot is an AI-powered security solution that provides real-time assistance in threat detection, incident response, and risk assessment. In the coming months, Security Copilot will be included in Microsoft 365 E5. As you make agentic AI a part of your daily workflows, you can use Security Copilot to manage agents and security across your organization. Learn about Security Copilot inclusion in the Microsoft 365 E5 subscription.

Download scenario-based deployment models and guides

Download and review our scenario-based deployment models, presentations, and guides. These resources describe how to rapidly implement a secure-by-default configuration, address oversharing concerns, and prevent data leaks to shadow AI. See Notes from engineering: Microsoft Purview deployment models.

Security dashboard

Microsoft 365 Copilot includes built-in security controls from Microsoft Purview. The Copilot security dashboard provides additional insights and controls.

To view the dashboard in the Microsoft 365 admin center, select Copilot > Overview > Security. To display the Security section, you need the Global Reader role. To make changes, the AI administrator role is required.
