Advanced ⏱ 130 min 📄 16 Steps

Data Security Posture Management for AI Workloads

Configure DSPM for AI in Microsoft Purview to discover AI data flows, classify sensitive training data, monitor AI interactions with labeled content, and enforce responsible AI compliance policies.

📋 Overview

About This Lab

This lab introduces Data Security Posture Management (DSPM) for AI, a Microsoft Purview capability that extends data classification, protection, and governance to AI workloads. DSPM for AI gives security teams visibility into how sensitive data flows through AI services, including Microsoft 365 Copilot, Azure OpenAI, and third-party AI applications discovered by Defender for Cloud Apps. You will learn how to discover AI interactions across the organization, apply sensitivity labels to AI training data and outputs, configure DLP policies for AI-generated content, monitor Copilot interactions with classified documents, and build a comprehensive AI data governance framework that satisfies regulatory and responsible AI requirements.

🎯 What You Will Learn

  1. Understand DSPM for AI and how it extends Purview to AI workloads
  2. Enable and configure DSPM for AI in the Purview compliance portal
  3. Discover AI applications consuming organizational data
  4. Map data flows between AI services and sensitive information stores
  5. Create sensitivity labels for AI training data and model outputs
  6. Configure policies to detect labeled data used in AI prompts
  7. Monitor AI interactions for data governance compliance
  8. Set up alerts for unauthorized AI access to restricted data
  9. Build dashboards tracking AI data consumption patterns
  10. Integrate DSPM for AI with Insider Risk Management
  11. Create responsible AI compliance assessment reports
  12. Configure Adaptive Protection for AI-related activities
  13. Monitor Copilot interactions with sensitivity-labeled documents
  14. Set up DLP policies for AI-generated content
  15. Build an AI data governance framework
  16. Generate executive reports on AI data security posture

🏢 Enterprise Scenario

A global pharmaceutical company with 30,000 employees has rapidly adopted Microsoft 365 Copilot and several third-party AI services. Research teams use AI to analyze drug trial data, marketing uses AI for content generation, and legal uses AI for contract review. The CISO needs visibility into what sensitive data flows through AI systems, whether proprietary research data is being exposed, and how to enforce classification policies when employees interact with AI tools.

โš™๏ธ Prerequisites

  • Microsoft 365 E5 or E5 Compliance licence
  • Microsoft Purview DSPM for AI enabled in the compliance portal
  • Sensitivity labels configured (from Lab 01)
  • Defender for Cloud Apps integration active for third-party AI discovery
  • Insider Risk Management configured (from Lab 03)

Step 1. Enable DSPM for AI

Navigate to the Microsoft Purview compliance portal. Go to Data Security Posture Management > AI security. Enable DSPM for AI to begin discovering AI interactions across the organization.

💡 Key Concept: DSPM for AI extends Purview data classification and protection to AI workloads. It monitors how sensitivity-labeled data is accessed, processed, and generated by AI services including Microsoft 365 Copilot, Azure OpenAI, and third-party LLMs.

Step 2. Discover AI Data Flows

Review the AI data discovery dashboard showing all AI interactions detected: Copilot queries, Azure OpenAI API calls, and third-party AI usage discovered by Defender for Cloud Apps. Examine data flow maps showing which sensitivity labels are most frequently accessed by AI services.

Step 3. Configure AI Sensitivity Labels

Create AI-specific sensitivity labels: AI Training Approved, AI Restricted, Copilot Safe, and AI Generated. Configure auto-labeling rules to identify AI-generated content and training datasets.
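The same labels can be created from Security & Compliance PowerShell. A minimal sketch, assuming a Connect-IPPSSession session is already established; the internal names, tooltips, and policy name below are illustrative and should be aligned with your existing label taxonomy:

```powershell
# Create the AI-specific sensitivity labels defined in this step
# ASSUMPTION: internal names and tooltips are illustrative, not a fixed convention
"AI Training Approved", "AI Restricted", "Copilot Safe", "AI Generated" | ForEach-Object {
    New-Label -Name $_.Replace(" ", "-") `
        -DisplayName $_ `
        -Tooltip "AI governance label: $_"
}

# Publish the labels so they appear in Office apps and are honoured by Copilot
New-LabelPolicy -Name "AI-Governance-Labels" `
    -Labels "AI-Training-Approved", "AI-Restricted", "Copilot-Safe", "AI-Generated" `
    -ExchangeLocation All
```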

Step 4. Monitor and Alert

Use the AI activity explorer to monitor employee interactions with AI services. Configure alerts for high-risk activities such as copying AI-Restricted content into prompts or sharing AI-generated summaries of confidential documents.
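The high-risk pattern above can be prototyped as a hunting query before committing to an alert policy. A sketch reusing the CloudAppEvents fields from the later steps; the label name and the per-user threshold are assumptions to tune for your tenant:

```kql
// Sketch: users who touched "AI Restricted" content via AI services in the last 24h
// ASSUMPTION: the label name and the >3 threshold are examples - tune per tenant
CloudAppEvents
| where TimeGenerated > ago(1d)
| where Application in ("Microsoft 365 Copilot", "Azure OpenAI")
| extend SensitivityLabel = tostring(RawEventData["SensitivityLabel"])
| where SensitivityLabel == "AI Restricted"
| summarize RestrictedTouches=count() by AccountId
| where RestrictedTouches > 3
| order by RestrictedTouches desc
```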

Step 5. Create DLP Policies for AI-Generated Content

Building on your sensitivity labels, create DLP policies that prevent AI-generated content with restricted labels from being shared externally or uploaded to unauthorized locations.

Portal Instructions

  1. Navigate to Microsoft Purview > Data loss prevention > Policies > + Create policy
  2. Template: Custom policy
  3. Name: DLP-AI-Content-Protection
  4. Locations: Exchange, SharePoint, OneDrive, Teams, Defender for Cloud Apps, Endpoint devices
  5. Conditions:
    • Content contains sensitivity label: AI Restricted or AI Generated – Confidential
    • Content is shared with: People outside my organization
  6. Actions: Block content sharing, send notification to user and compliance team
  7. User override: allow with business justification for medium-confidence matches

PowerShell: Create AI DLP Policy

# Connect to Security & Compliance PowerShell
# WHY: Required session for DLP policy management cmdlets
Connect-IPPSSession

# Create DLP policy for AI-generated content protection
# WHAT: Deploys a DLP policy that monitors AI-generated content across all M365 workloads
# WHY: Prevents AI-generated summaries or outputs containing confidential data from
#      being shared externally. AI can inadvertently surface sensitive data from
#      restricted documents in its responses.
# -Mode Enable: Active enforcement (not simulation) - blocks sharing immediately
# LOCATIONS: Exchange, SharePoint, OneDrive, and Teams for comprehensive coverage
New-DlpCompliancePolicy -Name "DLP-AI-Content-Protection" `
  -Comment "Prevent external sharing of AI-generated confidential content" `
  -ExchangeLocation All `
  -SharePointLocation All `
  -OneDriveLocation All `
  -TeamsLocation All `
  -Mode Enable

# Create a rule matching AI-specific sensitivity labels
# WHAT: Triggers when content with "AI Restricted" or "AI Generated - Confidential"
#       labels is shared outside the organisation
# -ContentContainsSensitivityLabels: Matches specific AI-related sensitivity labels
# -ExternalSharingRuleAction Block: Prevents external sharing entirely
# -NotifyUser Owner: Sends a notification to the document owner explaining the block
# -NotifyPolicyTipCustomText: Clear, actionable message shown to the user
# WHY: AI outputs may contain summaries of restricted data that users don't realise
#      are sensitive. This policy catches the exposure before it leaves the organisation.
New-DlpComplianceRule -Name "Block AI Restricted External Sharing" `
  -Policy "DLP-AI-Content-Protection" `
  -ContentContainsSensitivityLabels "AI Restricted","AI Generated - Confidential" `
  -ExternalSharingRuleAction Block `
  -NotifyUser Owner `
  -NotifyPolicyTipCustomText "This AI-generated content contains restricted data and cannot be shared externally."

Step 6. Configure Alerts for Unauthorized AI Data Access

Set up alerts that trigger when AI services access data with restricted sensitivity labels or when unusual AI interaction patterns are detected.

Portal Instructions

  1. Navigate to Microsoft Purview > Data Security Posture Management > AI security
  2. Go to Alerts > + Create alert policy
  3. Alert conditions:
    • AI service accessed content with label Highly Confidential or AI Restricted
    • User shared AI-generated summary of a restricted document
    • Copilot referenced more than 5 restricted documents in a single session
  4. Severity: High
  5. Notification: email to Security Operations and Compliance Officer
  6. Automated action: generate an incident in Microsoft Defender XDR

KQL: AI Access to Restricted Content

// Detect Copilot interactions with highly confidential content
// WHAT: Finds every instance where Microsoft 365 Copilot accessed files
//       with restricted sensitivity labels in the last 7 days
// WHY: Copilot inherits user permissions - if a user has access to overshared
//      restricted content, Copilot can surface it in summaries and responses.
//      This query identifies which restricted documents Copilot is accessing.
// OUTPUT: Timestamp, user account, file name, sensitivity label, and action type
// CONCERN: Files labeled "Highly Confidential" or "AI Restricted" being accessed
//          by Copilot may indicate oversharing that needs remediation
// THRESHOLD: Any hits warrant investigation - Copilot should not routinely access
//            highly restricted content
CloudAppEvents
| where TimeGenerated > ago(7d)
| where Application == "Microsoft 365 Copilot"
| where ActionType == "FileAccessed" or ActionType == "ContentViewed"
| extend SensitivityLabel = tostring(RawEventData["SensitivityLabel"])
| where SensitivityLabel in ("Highly Confidential", "AI Restricted")
| project TimeGenerated, AccountId, FileName=tostring(RawEventData["FileName"]),
    SensitivityLabel, ActionType
| order by TimeGenerated desc

Step 7. Integrate with Insider Risk Management

Connect DSPM for AI signals to Insider Risk Management to identify users whose AI interactions create data security risks.

Configuration Steps

  1. Navigate to Microsoft Purview > Insider Risk Management > Settings > Intelligent detections
  2. Enable AI interaction indicators:
    • User shares restricted documents via Copilot summaries
    • User copies AI-generated content containing proprietary data to personal locations
    • User uses AI services to process sensitive data from multiple departments
    • User accesses AI tools from unmanaged devices
  3. Create a policy using template: Data leaks by risky users
  4. Add AI-specific triggering events under Policy indicators
  5. Set risk level weights: AI interactions with restricted data = High
💡 Pro Tip: DSPM for AI provides unique visibility into data flows that traditional DLP cannot capture. When a user asks Copilot to “summarise all Q4 revenue projections,” the AI accesses dozens of restricted documents. Insider Risk Management can flag this pattern even though no single file was externally shared.
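The Pro Tip pattern, many restricted documents touched in one short window, can be approximated in KQL. CloudAppEvents exposes no session identifier, so this sketch bins activity into one-hour windows as a proxy; the window size and threshold are assumptions:

```kql
// Sketch: Copilot touching many distinct restricted files within a single hour
// ASSUMPTION: 1-hour bins approximate a "session"; the >=10 threshold is illustrative
CloudAppEvents
| where TimeGenerated > ago(7d)
| where Application == "Microsoft 365 Copilot"
| extend SensitivityLabel = tostring(RawEventData["SensitivityLabel"])
| where SensitivityLabel in ("Highly Confidential", "AI Restricted")
| summarize RestrictedFiles=dcount(tostring(RawEventData["FileName"]))
    by AccountId, Window=bin(TimeGenerated, 1h)
| where RestrictedFiles >= 10
| order by RestrictedFiles desc
```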

Step 8. Configure Adaptive Protection for AI Activities

Enable Adaptive Protection to dynamically adjust DLP and access policies based on a user’s insider risk level from AI-related activities.

Portal Instructions

  1. Navigate to Microsoft Purview > Insider Risk Management > Adaptive Protection
  2. Enable Adaptive Protection if not already active
  3. Configure risk level conditions:
    • Elevated risk: user has 3+ AI policy violations in 7 days → warn on AI data sharing
    • High risk: user has 5+ violations or bulk AI data access → block AI external sharing
    • Critical risk: user matches departing employee + AI data access → block AI and restrict Copilot
  4. Create corresponding DLP policies that reference Adaptive Protection risk levels
  5. Test: verify that a simulated high-risk user triggers the dynamic policy enforcement
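To sanity-check where users would land in the risk tiers above, count recent AI-related DLP violations per user. This sketch reuses the DLPPolicyMatchEvents table from the dashboard step; the column names are assumptions to verify against your workspace schema:

```kql
// Sketch: per-user AI DLP violation counts mapped to the Adaptive Protection tiers
// ASSUMPTION: DLPPolicyMatchEvents schema (AccountId column) - verify in your workspace
DLPPolicyMatchEvents
| where TimeGenerated > ago(7d)
| where Application has_any ("Copilot", "OpenAI", "ChatGPT")
| summarize Violations=count() by AccountId
| extend RiskTier = case(Violations >= 5, "High",
                         Violations >= 3, "Elevated",
                         "Minor")
| order by Violations desc
```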

Step 9. Monitor Microsoft 365 Copilot Interactions

Use the DSPM for AI dashboard to monitor how employees interact with Microsoft 365 Copilot and which sensitivity-labeled documents Copilot accesses.

Portal Instructions

  1. Navigate to Microsoft Purview > Data Security Posture Management > AI security > Activity explorer
  2. Filter by application: Microsoft 365 Copilot
  3. Review interactions by type: document summarisation, email drafting, presentation creation, data analysis
  4. Identify which sensitivity labels are most frequently accessed by Copilot
  5. Check for Copilot accessing labels it should not (e.g., Board Eyes Only, Restricted – Legal Privilege)

KQL: Copilot Label Access Patterns

// Copilot interactions grouped by sensitivity label (30-day summary)
// WHAT: Aggregates all Copilot file access events by the sensitivity label
//       of the documents accessed, showing total interaction volume
// WHY: Reveals which sensitivity levels Copilot most frequently accesses.
//      Helps identify if Copilot is routinely accessing restricted data.
// OUTPUT: Label name, total interactions, unique users, and unique files accessed
// CONCERN: High interaction counts on "Highly Confidential" labels may indicate
//          widespread oversharing that Copilot is amplifying
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application == "Microsoft 365 Copilot"
| extend SensitivityLabel = tostring(RawEventData["SensitivityLabel"])
| where isnotempty(SensitivityLabel)
| summarize Interactions=count(), UniqueUsers=dcount(AccountId),
    UniqueFiles=dcount(tostring(RawEventData["FileName"]))
    by SensitivityLabel
| order by Interactions desc

// Users with highest Copilot access to restricted content (top 20)
// WHAT: Ranks users by how often Copilot accessed restricted documents on their behalf
// WHY: Identifies users who may have overly broad access to restricted content.
//      These users' permissions should be reviewed and right-sized.
// OUTPUT: User account ID and count of restricted document accesses
// THRESHOLD: Users with >50 restricted accesses/month warrant a permissions review
// ACTION: Review SharePoint site permissions and OneDrive sharing for flagged users
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application == "Microsoft 365 Copilot"
| extend SensitivityLabel = tostring(RawEventData["SensitivityLabel"])
| where SensitivityLabel in ("Highly Confidential", "AI Restricted", "Board Eyes Only")
| summarize RestrictedAccess=count() by AccountId
| top 20 by RestrictedAccess
⚠️ Important: Copilot inherits the user’s permissions. If a user has access to overshared SharePoint sites with sensitive data, Copilot can surface that data in summaries and responses. Use DSPM for AI findings to identify and remediate oversharing before Copilot amplifies the exposure.

Step 10. Build AI Data Consumption Dashboards

Create dashboards that show how sensitive data flows through AI services, which departments are heaviest AI users, and where data governance gaps exist.

Dashboard KQL Tiles

// Tile 1: AI interactions by application (30-day breakdown)
// WHAT: Counts total interactions and unique users per AI application
// WHY: Shows which AI services are most used in the organisation
// OUTPUT: Application name, total interactions, and unique user count
// USE: Identifies shadow AI usage (ChatGPT, Claude) alongside sanctioned services
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Microsoft 365 Copilot", "Azure OpenAI", "ChatGPT", "Claude")
| summarize Interactions=count(), UniqueUsers=dcount(AccountId) by Application
| order by Interactions desc

// Tile 2: Sensitivity label exposure via AI (30 days)
// WHAT: Shows which sensitivity labels are most frequently accessed by AI services
// WHY: Identifies data classification tiers most exposed to AI processing.
//      Labels like "Highly Confidential" appearing here warrant immediate review.
// OUTPUT: Sensitivity label name and total AI interaction count
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Microsoft 365 Copilot", "Azure OpenAI")
| extend Label = tostring(RawEventData["SensitivityLabel"])
| where isnotempty(Label)
| summarize Interactions=count() by Label
| order by Interactions desc

// Tile 3: Daily AI interaction trend over 30 days
// WHAT: Charts daily AI usage volume to identify trends and anomalies
// WHY: Rapid increases may indicate new AI tool adoption or shadow AI growth.
//      Sudden drops may indicate policy blocks taking effect.
// OUTPUT: Daily counts suitable for a time-series chart in Power BI or Sentinel
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Microsoft 365 Copilot", "Azure OpenAI", "ChatGPT")
| summarize DailyInteractions=count() by bin(TimeGenerated, 1d)
| order by TimeGenerated asc

// Tile 4: DLP policy matches in AI context (30 days)
// WHAT: Counts how many times DLP policies triggered for AI-related activities
// WHY: Measures AI DLP policy effectiveness and identifies which policies
//      are catching the most violations. High match counts may mean policies
//      are too broad; zero matches may mean policies are misconfigured.
// OUTPUT: Policy name, severity level, and match count
DLPPolicyMatchEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Copilot", "OpenAI", "ChatGPT")
| summarize Matches=count() by PolicyName, Severity
| order by Matches desc

Step 11. Create Responsible AI Compliance Assessments

Use Purview Compliance Manager to build assessment reports that map AI data governance controls to regulatory frameworks like the EU AI Act, NIST AI RMF, and ISO 42001.

Portal Instructions

  1. Navigate to Microsoft Purview > Compliance Manager > Assessments
  2. Click + Add assessment
  3. Select regulations relevant to AI governance:
    • NIST AI Risk Management Framework
    • ISO/IEC 42001:2023 (AI management systems)
    • EU AI Act (where applicable)
  4. Map improvement actions to the controls implemented in this lab:
    • Data classification for AI training data → NIST MAP function
    • DLP policies for AI content → NIST GOVERN function
    • Monitoring and alerting → NIST MEASURE function
    • Insider Risk integration → NIST MANAGE function
  5. Assign improvement actions to responsible teams with deadlines
  6. Export the compliance score and assessment for executive reporting

Step 12. Generate Executive AI Security Reports

Create executive-level reports that summarise the organisation’s AI data security posture, risk trends, and compliance status.

Report Components

  1. AI Adoption Summary: total AI interactions, unique users, applications, and growth trend
  2. Data Exposure Risk: sensitivity labels accessed by AI, DLP policy matches, data volume processed
  3. Policy Effectiveness: violations blocked, user override rates, false positive rates
  4. Insider Risk Signals: users flagged for AI-related risk, investigation outcomes
  5. Compliance Score: AI governance compliance against selected frameworks
  6. Recommendations: top actions to improve AI data security posture

Executive Summary KQL

// Executive AI data security summary - single-row KPI report
// WHAT: Generates a consolidated executive-level summary of AI data security metrics
// WHY: Provides the CISO and board with a single view of AI data risk exposure

// Step 1: Calculate total AI activity metrics across all AI applications
// OUTPUT: TotalInteractions, UniqueUsers, UniqueApps for the last 30 days
let ai_activity = CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Microsoft 365 Copilot", "Azure OpenAI", "ChatGPT", "Claude")
| summarize TotalInteractions=count(), UniqueUsers=dcount(AccountId),
    UniqueApps=dcount(Application);

// Step 2: Count AI interactions with restricted/highly confidential content
// OUTPUT: RestrictedInteractions - total AI accesses to sensitive data
let restricted_access = CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Microsoft 365 Copilot", "Azure OpenAI")
| extend Label = tostring(RawEventData["SensitivityLabel"])
| where Label in ("Highly Confidential", "AI Restricted", "Board Eyes Only")
| summarize RestrictedInteractions=count();

// Step 3: Join both datasets and calculate the restricted access percentage
// OUTPUT: Single row with all KPIs plus RestrictedPct (target: <5%)
// THRESHOLD: RestrictedPct > 5% indicates excessive AI exposure to sensitive data
//            and warrants a permissions review and oversharing remediation
ai_activity | extend d=1
| join kind=inner (restricted_access | extend d=1) on d
| project TotalInteractions, UniqueUsers, UniqueApps, RestrictedInteractions,
    RestrictedPct=round(RestrictedInteractions*100.0/TotalInteractions, 2)

Step 13. Monitor Third-Party AI Services

Extend DSPM for AI monitoring to third-party AI services discovered by Defender for Cloud Apps, covering data flows to ChatGPT, Claude, Gemini, and other external AI platforms.

Configuration Steps

  1. Verify Defender for Cloud Apps discovers third-party AI applications via Cloud Discovery
  2. In DSPM for AI settings, enable monitoring for Third-party AI applications
  3. Map third-party AI data flows: which users, what data types, what volumes
  4. Create sensitivity-aware policies:
    • Block sharing of Highly Confidential labeled content with any non-Microsoft AI service
    • Warn when Confidential content is pasted into external AI chats
    • Alert when users access multiple external AI services in a single session
  5. Review third-party AI app data residency and training data policies through the Cloud App Catalog
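The “multiple external AI services in a single session” condition above can be prototyped as a hunting query; the application list and the one-hour window are assumptions to extend with your Cloud Discovery data:

```kql
// Sketch: users hitting 2+ distinct non-Microsoft AI services within one hour
// ASSUMPTION: app names and the 1h window are examples - extend per your environment
CloudAppEvents
| where TimeGenerated > ago(7d)
| where Application in ("ChatGPT", "Claude", "Gemini")
| summarize ExternalAIApps=dcount(Application), Apps=make_set(Application)
    by AccountId, Window=bin(TimeGenerated, 1h)
| where ExternalAIApps >= 2
| order by ExternalAIApps desc
```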

Step 14. Implement AI Data Flow Controls

Configure granular data flow controls that govern how sensitive data moves between organisational repositories and AI services.

Data Flow Policies

  1. Copilot → External sharing: Block Copilot-generated summaries of restricted documents from being shared externally via email or Teams
  2. SharePoint → Azure OpenAI: Monitor API calls that pull SharePoint data into custom AI applications
  3. Endpoint → External AI: Use Endpoint DLP to block copy/paste of labeled content into browser-based AI tools
  4. Email → AI processing: Alert when emails with restricted labels are forwarded to AI processing workflows
  5. Training data → Model: Ensure datasets used for fine-tuning are scanned and classified before use
💡 Pro Tip: Map your AI data flows visually: document every path from data source to AI service to output destination. This data flow map is essential for both security posture management and regulatory compliance, especially under the EU AI Act, which requires documentation of data lineage for high-risk AI systems.

Step 15. Build a Responsible AI Compliance Framework

Synthesise all DSPM for AI controls into a comprehensive responsible AI framework covering data governance, privacy, transparency, and accountability.

Framework Pillars

  1. Data Discovery: DSPM for AI auto-discovers all AI interactions across Microsoft 365, Azure, and third-party services
  2. Classification: sensitivity labels applied to AI training data, prompts, and outputs
  3. Protection: DLP policies, session controls, and Endpoint DLP for AI data flows
  4. Monitoring: AI activity explorer, Copilot interaction dashboards, and anomaly detection
  5. Risk Management: Insider Risk integration, Adaptive Protection, and risk-based access controls
  6. Compliance: Compliance Manager assessments mapped to NIST AI RMF, ISO 42001, and EU AI Act
  7. Governance: AI Governance Committee, review cadence, policy update process, exception management

Step 16. Establish Ongoing AI Data Governance

Define the operational cadence and responsibilities for maintaining AI data security posture over time.

Governance Cadence

  1. Daily: review DSPM for AI alerts and DLP policy matches from the SOC queue
  2. Weekly: review AI activity explorer trends, investigate anomalous patterns, update sensitivity labels as needed
  3. Monthly: generate executive AI data security reports, review Insider Risk AI indicators, tune DLP policies
  4. Quarterly: full compliance assessment update, AI governance committee meeting, framework policy review
  5. Annually: responsible AI compliance audit, cross-functional AI risk assessment, framework version update

Key Metrics to Track

  • Percentage of AI interactions involving restricted data (< 5% target)
  • DLP policy match rate and false positive rate (< 10% target)
  • Mean time to investigate AI-related Insider Risk alerts (< 48 hours target)
  • AI compliance score in Compliance Manager (> 80% target)
  • Percentage of AI applications in the sanctioned catalogue vs. shadow AI (> 90% target)
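The sanctioned-vs-shadow metric can be computed with the same CloudAppEvents pattern used in the dashboards. Which applications count as sanctioned is an assumption here; replace the list with your approved catalogue:

```kql
// Sketch: share of AI interactions going to sanctioned apps (target: >90%)
// ASSUMPTION: the sanctioned list is illustrative - use your approved AI catalogue
let sanctioned = dynamic(["Microsoft 365 Copilot", "Azure OpenAI"]);
CloudAppEvents
| where TimeGenerated > ago(30d)
| where Application has_any ("Microsoft 365 Copilot", "Azure OpenAI", "ChatGPT", "Claude", "Gemini")
| summarize Total=count(), Sanctioned=countif(Application in (sanctioned))
| extend SanctionedPct=round(Sanctioned * 100.0 / Total, 2)
```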
💡 Pro Tip: The most effective AI governance programmes treat data security as a shared responsibility. Train employees on responsible AI usage, publish clear guidelines on what data can and cannot be shared with AI tools, and celebrate teams that adopt AI safely rather than purely restricting access.

Summary

What You Accomplished

  • Enabled DSPM for AI and discovered AI data flows across the organisation
  • Created AI-specific sensitivity labels and auto-labeling policies
  • Configured DLP policies for AI-generated content protection
  • Set up alerts for unauthorized AI access to restricted data
  • Integrated DSPM for AI with Insider Risk Management and Adaptive Protection
  • Monitored Microsoft 365 Copilot interactions with sensitivity-labeled documents
  • Created responsible AI compliance assessments mapped to regulatory frameworks
  • Built executive AI data security dashboards and reports
  • Designed a comprehensive responsible AI data governance framework

Next Steps

📚 Documentation Resources

  • Purview AI Hub: AI data security and governance in Microsoft Purview
  • DSPM for AI Considerations: requirements, limitations, and deployment considerations for DSPM for AI
  • Sensitivity Labels: create and configure data classification labels for AI workloads
  • Data Loss Prevention: DLP policies for protecting sensitive data in AI interactions
  • Insider Risk Management: detect risky AI usage patterns and insider threats
  • Audit Log Activities: track and monitor AI interaction events in the unified audit log
โ† Lab 06 All Labs โ†’