Configure DSPM for AI in Microsoft Purview to discover AI data flows, classify sensitive training data, monitor AI interactions with labeled content, and enforce responsible AI compliance policies.
This lab introduces Data Security Posture Management (DSPM) for AI, a Microsoft Purview capability that extends data classification, protection, and governance to AI workloads. DSPM for AI gives security teams visibility into how sensitive data flows through AI services, including Microsoft 365 Copilot, Azure OpenAI, and third-party AI applications discovered by Defender for Cloud Apps. You will learn how to discover AI interactions across the organization, apply sensitivity labels to AI training data and outputs, configure DLP policies for AI-generated content, monitor Copilot interactions with classified documents, and build a comprehensive AI data governance framework that satisfies regulatory and responsible AI requirements.
A global pharmaceutical company with 30,000 employees has rapidly adopted Microsoft 365 Copilot and several third-party AI services. Research teams use AI to analyze drug trial data, marketing uses AI for content generation, and legal uses AI for contract review. The CISO needs visibility into what sensitive data flows through AI systems, whether proprietary research data is being exposed, and how to enforce classification policies when employees interact with AI tools.
Navigate to the Microsoft Purview compliance portal. Go to Data Security Posture Management > AI security. Enable DSPM for AI to begin discovering AI interactions across the organization.
Review the AI data discovery dashboard showing all AI interactions detected: Copilot queries, Azure OpenAI API calls, and third-party AI usage discovered by Defender for Cloud Apps. Examine data flow maps showing which sensitivity labels are most frequently accessed by AI services.
Create AI-specific sensitivity labels: AI Training Approved, AI Restricted, Copilot Safe, and AI Generated. Configure auto-labeling rules to identify AI-generated content and training datasets.
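The four labels above can be created in Security & Compliance PowerShell rather than the portal. This is a minimal sketch: the label names come from this lab, but the `-Name` identifiers, the policy name, and the omission of encryption and content-marking settings are assumptions to adapt to your taxonomy. Auto-labeling rules are configured separately (for example with auto-labeling policies) and are not shown here.

```powershell
# Connect to Security & Compliance PowerShell
Connect-IPPSSession

# Create the four AI-specific sensitivity labels named in this lab.
# -Name is the immutable identifier; -DisplayName is what users see.
# Advanced settings (encryption, content marking) are omitted for brevity.
"AI Training Approved", "AI Restricted", "Copilot Safe", "AI Generated" | ForEach-Object {
    New-Label -Name ($_ -replace ' ', '-') `
        -DisplayName $_ `
        -Tooltip "AI data governance label: $_" `
        -Comment "Created for the DSPM for AI lab"
}

# Publish the labels so they become available to users and policies
New-LabelPolicy -Name "AI-Labels-Policy" `
    -Labels "AI-Training-Approved", "AI-Restricted", "Copilot-Safe", "AI-Generated" `
    -ExchangeLocation All
```

Label publishing can take up to 24 hours to propagate to client applications, so create these labels before the DLP exercises that reference them.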
Use the AI activity explorer to monitor employee interactions with AI services. Configure alerts for high-risk activities such as copying AI-Restricted content into prompts or sharing AI-generated summaries of confidential documents.
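As a starting point for the alert on AI Restricted content reaching prompts, a hunting query in the style of the later sections could look like the sketch below. The `InteractWithCopilot` action type and the `RawEventData` field names are assumptions: validate them against the actual audit events in your tenant before wiring this into an alert rule.

```kusto
// Sketch: Copilot prompt interactions touching "AI Restricted" content (last 24h)
// ASSUMPTION: the ActionType and RawEventData field names below are placeholders -
// confirm the real event shape in your tenant before alerting on this query.
CloudAppEvents
| where TimeGenerated > ago(1d)
| where Application == "Microsoft 365 Copilot"
| where ActionType == "InteractWithCopilot"   // assumed prompt-interaction event type
| extend SensitivityLabel = tostring(RawEventData["SensitivityLabel"])
| where SensitivityLabel == "AI Restricted"
| project TimeGenerated, AccountId, ActionType, SensitivityLabel
```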
Building on your sensitivity labels, create DLP policies that prevent AI-generated content with restricted labels from being shared externally or uploaded to unauthorized locations.
# Connect to Security & Compliance PowerShell
# WHY: Required session for DLP policy management cmdlets
Connect-IPPSSession
# Create DLP policy for AI-generated content protection
# WHAT: Deploys a DLP policy that monitors AI-generated content across all M365 workloads
# WHY: Prevents AI-generated summaries or outputs containing confidential data from
# being shared externally. AI can inadvertently surface sensitive data from
# restricted documents in its responses.
# -Mode Enable: Active enforcement (not simulation) - blocks sharing immediately
# LOCATIONS: Exchange, SharePoint, OneDrive, and Teams for comprehensive coverage
New-DlpCompliancePolicy -Name "DLP-AI-Content-Protection" `
-Comment "Prevent external sharing of AI-generated confidential content" `
-ExchangeLocation All `
-SharePointLocation All `
-OneDriveLocation All `
-TeamsLocation All `
-Mode Enable
# Create a rule matching AI-specific sensitivity labels
# WHAT: Triggers when content with "AI Restricted" or "AI Generated - Confidential"
# labels is shared outside the organisation
# -ContentContainsSensitivityLabels: Matches specific AI-related sensitivity labels
# -ExternalSharingRuleAction Block: Prevents external sharing entirely
# -NotifyUser Owner: Sends a notification to the document owner explaining the block
# -NotifyPolicyTipCustomText: Clear, actionable message shown to the user
# WHY: AI outputs may contain summaries of restricted data that users don't realise
# are sensitive. This policy catches the exposure before it leaves the organisation.
New-DlpComplianceRule -Name "Block AI Restricted External Sharing" `
-Policy "DLP-AI-Content-Protection" `
-ContentContainsSensitivityLabels "AI Restricted","AI Generated - Confidential" `
-ExternalSharingRuleAction Block `
-NotifyUser Owner `
-NotifyPolicyTipCustomText "This AI-generated content contains restricted data and cannot be shared externally."

Set up alerts that trigger when AI services access data with restricted sensitivity labels or when unusual AI interaction patterns are detected.
// Detect Copilot interactions with highly confidential content
// WHAT: Finds every instance where Microsoft 365 Copilot accessed files
// with restricted sensitivity labels in the last 7 days
// WHY: Copilot inherits user permissions - if a user has access to overshared
// restricted content, Copilot can surface it in summaries and responses.
// This query identifies which restricted documents Copilot is accessing.
// OUTPUT: Timestamp, user account, file name, sensitivity label, and action type
// CONCERN: Files labeled "Highly Confidential" or "AI Restricted" being accessed
// by Copilot may indicate oversharing that needs remediation
// THRESHOLD: Any hits warrant investigation - Copilot should not routinely access
// highly restricted content
CloudAppEvents
| where TimeGenerated > ago(7d)
| where Application == "Microsoft 365 Copilot"
| where ActionType == "FileAccessed" or ActionType == "ContentViewed"
| extend SensitivityLabel = tostring(RawEventData["SensitivityLabel"])
| where SensitivityLabel in ("Highly Confidential", "AI Restricted")
| project TimeGenerated, AccountId, FileName=tostring(RawEventData["FileName"]),
SensitivityLabel, ActionType
| order by TimeGenerated desc

Connect DSPM for AI signals to Insider Risk Management to identify users whose AI interactions create data security risks.
Enable Adaptive Protection to dynamically adjust DLP and access policies based on a user’s insider risk level from AI-related activities.
Use the DSPM for AI dashboard to monitor how employees interact with Microsoft 365 Copilot and which sensitivity-labeled documents Copilot accesses.
// Copilot interactions grouped by sensitivity label (30-day summary)
// WHAT: Aggregates all Copilot file access events by the sensitivity label
// of the documents accessed, showing total interaction volume
// WHY: Reveals which sensitivity levels Copilot most frequently accesses.
// Helps identify if Copilot is routinely accessing restricted data.
// OUTPUT: Label name, total interactions, unique users, and unique files accessed
// CONCERN: High interaction counts on "Highly Confidential" labels may indicate
// widespread oversharing that Copilot is amplifying
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application == "Microsoft 365 Copilot"
| extend SensitivityLabel = tostring(RawEventData["SensitivityLabel"])
| where isnotempty(SensitivityLabel)
| summarize Interactions=count(), UniqueUsers=dcount(AccountId),
UniqueFiles=dcount(tostring(RawEventData["FileName"]))
by SensitivityLabel
| order by Interactions desc
// Users with highest Copilot access to restricted content (top 20)
// WHAT: Ranks users by how often Copilot accessed restricted documents on their behalf
// WHY: Identifies users who may have overly broad access to restricted content.
// These users' permissions should be reviewed and right-sized.
// OUTPUT: User account ID and count of restricted document accesses
// THRESHOLD: Users with >50 restricted accesses/month warrant a permissions review
// ACTION: Review SharePoint site permissions and OneDrive sharing for flagged users
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application == "Microsoft 365 Copilot"
| extend SensitivityLabel = tostring(RawEventData["SensitivityLabel"])
| where SensitivityLabel in ("Highly Confidential", "AI Restricted", "Board Eyes Only")
| summarize RestrictedAccess=count() by AccountId
| top 20 by RestrictedAccess

Create dashboards that show how sensitive data flows through AI services, which departments are the heaviest AI users, and where data governance gaps exist.
// Tile 1: AI interactions by application (30-day breakdown)
// WHAT: Counts total interactions and unique users per AI application
// WHY: Shows which AI services are most used in the organisation
// OUTPUT: Application name, total interactions, and unique user count
// USE: Identifies shadow AI usage (ChatGPT, Claude) alongside sanctioned services
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Microsoft 365 Copilot", "Azure OpenAI", "ChatGPT", "Claude")
| summarize Interactions=count(), UniqueUsers=dcount(AccountId) by Application
| order by Interactions desc
// Tile 2: Sensitivity label exposure via AI (30 days)
// WHAT: Shows which sensitivity labels are most frequently accessed by AI services
// WHY: Identifies data classification tiers most exposed to AI processing.
// Labels like "Highly Confidential" appearing here warrant immediate review.
// OUTPUT: Sensitivity label name and total AI interaction count
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Microsoft 365 Copilot", "Azure OpenAI")
| extend Label = tostring(RawEventData["SensitivityLabel"])
| where isnotempty(Label)
| summarize Interactions=count() by Label
| order by Interactions desc
// Tile 3: Daily AI interaction trend over 30 days
// WHAT: Charts daily AI usage volume to identify trends and anomalies
// WHY: Rapid increases may indicate new AI tool adoption or shadow AI growth.
// Sudden drops may indicate policy blocks taking effect.
// OUTPUT: Daily counts suitable for a time-series chart in Power BI or Sentinel
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Microsoft 365 Copilot", "Azure OpenAI", "ChatGPT")
| summarize DailyInteractions=count() by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
// Tile 4: DLP policy matches in AI context (30 days)
// WHAT: Counts how many times DLP policies triggered for AI-related activities
// WHY: Measures AI DLP policy effectiveness and identifies which policies
// are catching the most violations. High match counts may mean policies
// are too broad; zero matches may mean policies are misconfigured.
// OUTPUT: Policy name, severity level, and match count
DLPPolicyMatchEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Copilot", "OpenAI", "ChatGPT")
| summarize Matches=count() by PolicyName, Severity
| order by Matches desc

Use Purview Compliance Manager to build assessment reports that map AI data governance controls to regulatory frameworks such as the EU AI Act, the NIST AI RMF, and ISO/IEC 42001.
Create executive-level reports that summarize the organization's AI data security posture, risk trends, and compliance status.
// Executive AI data security summary - single-row KPI report
// WHAT: Generates a consolidated executive-level summary of AI data security metrics
// WHY: Provides the CISO and board with a single view of AI data risk exposure
// Step 1: Calculate total AI activity metrics across all AI applications
// OUTPUT: TotalInteractions, UniqueUsers, UniqueApps for the last 30 days
let ai_activity = CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Microsoft 365 Copilot", "Azure OpenAI", "ChatGPT", "Claude")
| summarize TotalInteractions=count(), UniqueUsers=dcount(AccountId),
UniqueApps=dcount(Application);
// Step 2: Count AI interactions with restricted/highly confidential content
// OUTPUT: RestrictedInteractions - total AI accesses to sensitive data
let restricted_access = CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Microsoft 365 Copilot", "Azure OpenAI")
| extend Label = tostring(RawEventData["SensitivityLabel"])
| where Label in ("Highly Confidential", "AI Restricted", "Board Eyes Only")
| summarize RestrictedInteractions=count();
// Step 3: Join both datasets and calculate the restricted access percentage
// OUTPUT: Single row with all KPIs plus RestrictedPct (target: <5%)
// THRESHOLD: RestrictedPct > 5% indicates excessive AI exposure to sensitive data
// and warrants a permissions review and oversharing remediation
ai_activity | extend d=1
| join kind=inner (restricted_access | extend d=1) on d
| project TotalInteractions, UniqueUsers, UniqueApps, RestrictedInteractions,
RestrictedPct=round(RestrictedInteractions*100.0/TotalInteractions, 2)

Extend DSPM for AI monitoring to third-party AI services discovered by Defender for Cloud Apps, covering data flows to ChatGPT, Claude, Gemini, and other external AI platforms.
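For the third-party services, a query like the following sketch flags file uploads to external AI platforms. The application display names and the `FileUploaded` action type are assumptions: verify how Defender for Cloud Apps names these services and events in your tenant before relying on the results.

```kusto
// Sketch: file uploads to third-party AI platforms over the last 30 days
// ASSUMPTION: the application names and the "FileUploaded" ActionType match how
// Defender for Cloud Apps surfaces these services in your environment.
// WHY: uploads to external AI platforms are the highest-risk shadow AI data flow -
// a document uploaded to a consumer AI service leaves organisational control entirely.
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("ChatGPT", "Claude", "Gemini")
| where ActionType == "FileUploaded"
| summarize Uploads=count(), UniqueUsers=dcount(AccountId) by Application
| order by Uploads desc
```

Users surfacing in this query are candidates for the session controls and data flow policies described in the next step.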
Configure granular data flow controls that govern how sensitive data moves between organisational repositories and AI services.
Synthesise all DSPM for AI controls into a comprehensive responsible AI framework covering data governance, privacy, transparency, and accountability.
Define the operational cadence and responsibilities for maintaining AI data security posture over time.
| Resource | Description |
|---|---|
| Purview AI Hub | AI data security and governance in Microsoft Purview |
| DSPM for AI Considerations | Requirements, limitations, and deployment considerations for DSPM for AI |
| Sensitivity Labels | Create and configure data classification labels for AI workloads |
| Data Loss Prevention | DLP policies for protecting sensitive data in AI interactions |
| Insider Risk Management | Detect risky AI usage patterns and insider threats |
| Audit Log Activities | Track and monitor AI interaction events in the unified audit log |