Intermediate ⏱ 120 min 📋 10 Steps

Investigate DLP Incidents in the Unified XDR Portal

Investigate DLP violations correlated with endpoint, identity, and cloud app signals in Defender XDR. Trace data exfiltration, perform impact assessment, create custom detection rules, and build automated response workflows.

📋 Overview

About This Lab

The Defender XDR portal provides a unified view of DLP incidents correlated with endpoint, identity, email, and cloud app signals. This integration enables investigators to see DLP violations in the context of broader security events: a user copying sensitive data to USB might also have a risky sign-in, a compromised mailbox, or suspicious cloud app activity. This lab covers end-to-end DLP incident investigation in the unified portal.

🏢 Enterprise Use Case

A DLP alert fires when an employee copies 500 customer records to a USB drive. Investigation in Defender XDR reveals the same user had a high-risk sign-in from an anonymous IP 30 minutes earlier, followed by mailbox rule creation forwarding emails to a personal account. The unified incident view connects these dots: this is not an accidental policy violation; it is a credential compromise leading to data exfiltration.

🎯 What You Will Learn

  1. Navigate DLP incidents in the Defender XDR portal
  2. Correlate DLP alerts with identity and endpoint signals
  3. Use the unified attack story to trace data exfiltration
  4. Investigate DLP violations using advanced hunting
  5. Contain data exfiltration in progress
  6. Perform impact assessment for DLP incidents
  7. Create custom detection rules for DLP-related attacks
  8. Build DLP investigation playbooks
  9. Generate incident reports for compliance and legal
  10. Configure automated response for critical DLP incidents

🔑 Why This Matters

Most data breaches involve multiple signals across different security layers. Investigating DLP violations in isolation misses the attack context. Defender XDR's unified DLP integration reveals whether a DLP violation is an honest mistake or part of a coordinated attack, fundamentally changing the response strategy.

⚙️ Prerequisites

  • Licensing: Microsoft 365 E5, E5 Compliance, or E5 Information Protection & Governance add-on
  • Portal Access: Security Administrator or Security Operator role in security.microsoft.com
  • DLP Integration: DLP alerts must be streaming to the Defender XDR portal (enabled in Purview > Settings > Endpoints > Advanced features)
  • Advanced Hunting: Access to Advanced Hunting in Defender XDR with tables: AADSignInEventsBeta, DeviceEvents, CloudAppEvents, EmailEvents
  • MDE Onboarding: Target devices onboarded to Microsoft Defender for Endpoint with sensor health verified
  • PowerShell: Microsoft Graph PowerShell SDK (Microsoft.Graph) for containment actions and Exchange Online (ExchangeOnlineManagement)
  • Entra ID: User Administrator or Privileged Authentication Administrator role for account disable/session revoke actions
  • Sentinel (Optional): Microsoft Sentinel workspace connected to Defender XDR for automated playbook response
⚠️ Important: DLP incidents in Defender XDR require the DLP-to-XDR integration to be enabled. Without this, DLP alerts remain isolated in the Purview compliance portal and cannot be correlated with endpoint, identity, or cloud app signals.

Step 1 · Navigate DLP Incidents in Defender XDR

The Defender XDR portal unifies DLP alerts alongside endpoint, identity, email, and cloud app signals into correlated incidents. This means a single DLP policy match is no longer investigated in isolation - it is automatically grouped with related security events such as risky sign-ins, suspicious mailbox rules, or lateral movement. The first step is learning how to filter, triage, and prioritise DLP incidents in this unified view.

  1. Navigate to Incidents & alerts in the Defender XDR portal
  2. Filter by Service source: Microsoft Data Loss Prevention
  3. Review incident titles: look for patterns like “DLP policy violation”, “Sensitive data exfiltration”
  4. Note which incidents have correlated alerts from other services (MDE, Entra ID, MDA) - multi-source incidents indicate higher severity
  5. Click on a multi-service incident to see the full attack story and examine the correlation graph
  6. Review the Evidence and response tab to see affected entities: users, devices, mailboxes, and files
💡 Pro Tip: Sort incidents by “Highest severity” and look for multi-service incidents first - a DLP alert correlated with an identity alert is almost always more urgent than a standalone DLP match. Use the “Assigned to” filter to track your active investigations.
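
The same triage can also be scripted in advanced hunting. A minimal sketch, assuming the standard AlertInfo schema: it lists recent DLP-sourced alerts grouped by title and severity, so you can spot recurring patterns before opening individual incidents.

KQL - List Recent DLP-Sourced Alerts for Triage

```kql
// WHAT: Enumerate DLP-sourced alerts from the last 7 days for triage.
// NOTE: Illustrative sketch - adjust the lookback window to your incident volume.
AlertInfo
| where Timestamp > ago(7d)
| where ServiceSource == "Microsoft Data Loss Prevention"
| summarize AlertCount = count(), LatestAlert = max(Timestamp)
    by Title, Severity
| sort by AlertCount desc
```

Recurring titles with high counts often indicate a noisy policy worth tuning; a single high-severity title appearing for one user is a candidate for immediate investigation.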

Step 2 · Correlate DLP with Identity Signals

A DLP violation that follows a risky sign-in is fundamentally different from an accidental policy match. When an attacker compromises credentials and then begins exfiltrating sensitive data, the DLP alert is your last line of defence. This step uses KQL to correlate DLP alerts with identity risk signals from Entra ID, revealing whether the user behind the DLP violation may be operating under compromised credentials.

  1. In the incident, check if the user had risky sign-in activity before the DLP violation
  2. Look for: anonymous IP, impossible travel, leaked credentials, password spray
  3. If identity compromise precedes DLP violation, escalate to identity incident response
  4. Check if MFA was bypassed or legacy authentication was used
  5. Use the KQL query below to programmatically correlate these signals at scale

KQL - Correlate DLP Alerts with Risky Sign-Ins

// WHAT: Join DLP alert events with risky sign-ins from Entra ID to find
//       users whose DLP violations occurred within 2 hours of a risky login.
// WHY:  A DLP violation after a risky sign-in suggests credential compromise
//       leading to data exfiltration - not an accidental policy match.
// HOW:  Pull DLP-sourced alerts, extract the impacted user, then inner-join
//       against AADSignInEventsBeta for medium/high-risk logins within ±2h.
// TABLE: AlertInfo + AlertEvidence - unified alert data in Defender XDR
// TABLE: AADSignInEventsBeta - Entra ID sign-in events with risk scoring
let dlpAlerts = AlertInfo
| where Timestamp > ago(7d)
| where ServiceSource == "Microsoft Data Loss Prevention"
| join kind=inner (
    AlertEvidence
    | where Timestamp > ago(7d)
    | where EntityType == "User"
    | project AlertId, AccountObjectId, AccountUpn = AccountName
) on AlertId
| project DlpTime = Timestamp, AlertId, AlertTitle = Title,
          Severity, AccountObjectId, AccountUpn;
// Join with risky sign-ins - look for risk activity within 2h before DLP alert
dlpAlerts
| join kind=inner (
    AADSignInEventsBeta
    | where Timestamp > ago(7d)
    | where RiskLevelDuringSignIn in ("medium", "high")
    | where ErrorCode == 0   // successful sign-in only
    | project SignInTime = Timestamp, AccountObjectId,
             RiskLevel = RiskLevelDuringSignIn,
             RiskDetail = RiskState,
             IPAddress, Country, Application
) on AccountObjectId
| where SignInTime between (ago(7d) .. DlpTime)
| where (DlpTime - SignInTime) between (0min .. 2h)
| project DlpTime, AlertTitle, Severity, AccountUpn,
          SignInTime, RiskLevel, IPAddress, Country, Application
| sort by DlpTime desc
💡 Pro Tip: If this query returns results, treat the incident as a potential credential compromise - not just a DLP violation. The 2-hour correlation window catches attackers who sign in and immediately begin data collection. Extend the window to 24h for slow-and-low exfiltration scenarios.

Step 3 · Trace Data Exfiltration with Attack Story

The Defender XDR attack story provides a visual timeline of all correlated events. However, to reconstruct the full exfiltration kill chain - from initial access through data staging to actual exfiltration - you need a unified timeline from multiple tables. This KQL query stitches together cloud app activity, endpoint file operations, and email events into a single chronological attack narrative for a specific user.

  1. Use the Attack story timeline to trace the sequence of events visually
  2. Map the data exfiltration path: compromise > access > collection > exfiltration
  3. Identify what data was accessed, collected, and where it was sent
  4. Check for staging behaviour: did the user aggregate files before exfiltration?
  5. Run the KQL query below to build a comprehensive cross-table timeline

KQL - Reconstruct Attack Story Timeline

// WHAT: Build a unified exfiltration timeline by combining signals from
//       cloud apps, endpoint devices, and email for a specific user.
// WHY:  The attack story graph shows correlated alerts, but hunting
//       across raw tables reveals the FULL sequence - including benign
//       access that escalated into exfiltration (staging, lateral file
//       copies, archive creation, then upload to personal cloud storage).
// HOW:  Union three key tables, filter for a target user, and tag each
//       event with its source for easy reading.
// USAGE: Replace "targetuser@contoso.com" with the compromised account.
let targetUser = "targetuser@contoso.com";
let investigationWindow = 24h;
// Cloud app events: SharePoint downloads, OneDrive moves, 3rd-party uploads
let cloudEvents = CloudAppEvents
| where Timestamp > ago(investigationWindow)
| where AccountId has targetUser or RawEventData has targetUser
| where ActionType in ("FileDownloaded", "FileUploaded", "FileCopied",
                       "FileDeleted", "FileMoved", "FileRenamed")
| extend Source = "CloudApp"
| project Timestamp, Source, ActionType,
          Detail = tostring(RawEventData.SourceFileName),
          Destination = tostring(RawEventData.ObjectName);
// Endpoint device events: USB copy, print, clipboard, archive creation
let deviceEvents = DeviceEvents
| where Timestamp > ago(investigationWindow)
// AccountName holds the bare username, so match on the UPN's alias portion
| where AccountName =~ tostring(split(targetUser, "@")[0])
    or InitiatingProcessAccountUpn =~ targetUser
| where ActionType in ("SensitiveFileRead", "RemovableMediaAccess",
                       "FileCreated", "FileModified",
                       "ClipboardDataRead", "PrintJobCreated")
| extend Source = "Endpoint"
| project Timestamp, Source, ActionType,
          Detail = FileName,
          Destination = coalesce(RemoteUrl, FolderPath);
// Email events: forwarding rules, attachments sent externally
let emailEvents = EmailEvents
| where Timestamp > ago(investigationWindow)
| where SenderFromAddress == targetUser
| where EmailDirection == "Outbound"
| extend Source = "Email"
| project Timestamp, Source, ActionType = "EmailSent",
          Detail = Subject,
          Destination = RecipientEmailAddress;
// Unified timeline
union cloudEvents, deviceEvents, emailEvents
| sort by Timestamp asc
| project Timestamp, Source, ActionType, Detail, Destination
💡 Pro Tip: Look for the “staging” pattern: a burst of FileDownloaded events from SharePoint, followed by FileCreated (archive/zip) on the endpoint, and then a single FileUploaded to a personal cloud service. This pattern is the hallmark of intentional data exfiltration rather than accidental policy violation.

Step 4 · Hunt for DLP-Related Activity with KQL

Advanced hunting lets you proactively search for DLP-relevant activity before an alert ever fires. The query below starts from users with recent risky sign-ins, then hunts for sensitive file operations by those same users across all onboarded endpoints.

KQL - Hunt for Endpoint DLP Activity by Risky Users

// WHAT: Correlate DLP violations with risky identity activity to detect
//       credential compromise leading to data exfiltration
// WHY: A DLP violation AFTER a risky sign-in is likely an attacker exfiltrating data
//      using stolen credentials, not an accidental employee policy violation.
//      This correlation upgrades the incident from "DLP alert" to "active breach".
// HOW: First identifies users with medium/high-risk sign-ins, then searches
//      for sensitive file operations by those same users on endpoints.
// TABLE: AADSignInEventsBeta - used to identify users with risky sign-ins
// TABLE: DeviceEvents - endpoint activity events
// KEY ActionType values for DLP-relevant endpoint activity:
//   SensitiveFileRead           - sensitive file was opened/read on the device
//   RemovableMediaAccess        - file copied to USB drive or external media
//   CloudAppUnSanctionedAccess  - file uploaded to an unsanctioned cloud app
// OUTPUT: Timeline of suspicious endpoint activities by users with risky sign-ins
let riskyUsers = AADSignInEventsBeta
| where Timestamp > ago(7d)
| where RiskLevelDuringSignIn in ("medium", "high")
| distinct AccountObjectId;
DeviceEvents
| where Timestamp > ago(7d)
| where ActionType in ("SensitiveFileRead", "RemovableMediaAccess", "CloudAppUnSanctionedAccess")
| where AccountObjectId in (riskyUsers)
| project Timestamp, DeviceName, AccountName, ActionType, FileName, RemoteUrl
💡 Pro Tip: Save this query as a custom detection rule with a 1-hour frequency for continuous monitoring. The riskyUsers variable dynamically updates, so any new risky sign-in immediately triggers DLP-correlated hunting across all endpoints. Consider adding CloudAppEvents to the second query for broader exfiltration coverage.

Step 5 · Contain Active Data Exfiltration

When evidence confirms active data exfiltration - especially when correlated with identity compromise - containment must be immediate. The priority order is: (1) isolate the device to stop ongoing transfers, (2) disable the user account to revoke access, (3) invalidate all sessions and tokens to prevent lateral movement, and (4) block known external destinations. This PowerShell script automates the critical containment steps.

  1. If exfiltration is in progress: isolate the device using MDE response actions
  2. Disable the user account in Entra ID
  3. Revoke all sessions and tokens
  4. Block the external destination (URL, IP) if identifiable
  5. Preserve evidence: collect investigation package from the device

PowerShell - Automated Containment Actions

# -------------------------------------------------------------------
# WHAT: Automated containment for active DLP exfiltration incidents.
# WHY:  Every minute of delay during active exfiltration increases the
#       volume of compromised data. Automating containment reduces
#       Mean Time to Contain (MTTC) from hours to seconds.
# PREREQ: Microsoft.Graph module, MDE API access, appropriate admin roles.
# -------------------------------------------------------------------

# Connect to Microsoft Graph with required scopes
Connect-MgGraph -Scopes "User.ReadWrite.All","Directory.ReadWrite.All"

# --- Step 1: Disable the compromised user account ---
# This immediately prevents new sign-ins (existing sessions remain
# valid until revoked in Step 2)
$userId = "compromised-user@contoso.com"
Update-MgUser -UserId $userId -AccountEnabled:$false
Write-Host "[CONTAINMENT] Account disabled: $userId" -ForegroundColor Red

# --- Step 2: Revoke all active sessions and refresh tokens ---
# Forces re-authentication on ALL devices; combined with the disabled
# account above, this locks the attacker out completely
Revoke-MgUserSignInSession -UserId $userId
Write-Host "[CONTAINMENT] All sessions revoked: $userId" -ForegroundColor Red

# --- Step 3: Isolate the endpoint via MDE API ---
# Network isolation cuts the device off from everything except the MDE
# cloud service, stopping any in-progress file uploads or transfers
$machineId = "<MDE-Machine-ID-from-investigation>"
$isolateBody = @{
    Comment  = "DLP exfiltration containment - Incident #INC-XXXX"
    IsolationType = "Full"
} | ConvertTo-Json

# MDE API call to isolate the device
# NOTE: $accessToken must hold a valid MDE API token acquired beforehand
#       (e.g. via an app registration using the client-credentials flow)
$headers = @{ Authorization = "Bearer $accessToken" }
Invoke-RestMethod `
    -Uri "https://api.securitycenter.microsoft.com/api/machines/$machineId/isolate" `
    -Method Post -Body $isolateBody -ContentType "application/json" `
    -Headers $headers
Write-Host "[CONTAINMENT] Device isolated: $machineId" -ForegroundColor Red

# --- Step 4: Collect investigation package for forensic evidence ---
$collectBody = @{
    Comment = "Evidence collection for DLP incident - Incident #INC-XXXX"
} | ConvertTo-Json
Invoke-RestMethod `
    -Uri "https://api.securitycenter.microsoft.com/api/machines/$machineId/collectInvestigationPackage" `
    -Method Post -Body $collectBody -ContentType "application/json" `
    -Headers $headers
Write-Host "[EVIDENCE] Investigation package collection initiated" -ForegroundColor Yellow

# --- Step 5: Document containment actions with timestamp ---
$containmentLog = @{
    Timestamp   = (Get-Date -Format "o")
    User        = $userId
    Machine     = $machineId
    Actions     = @("AccountDisabled", "SessionsRevoked",
                    "DeviceIsolated", "EvidenceCollected")
    Operator    = (Get-MgContext).Account
}
$containmentLog | ConvertTo-Json | Out-File "DLP-Containment-$(Get-Date -Format 'yyyyMMdd-HHmmss').json"
Write-Host "[LOG] Containment actions documented" -ForegroundColor Green
💡 Pro Tip: Always document containment actions with timestamps and the operator’s identity - this creates the audit trail needed for legal proceedings and regulatory notifications. The containment log JSON file should be attached to the incident ticket and preserved in your evidence repository.

Step 6 · Perform Impact Assessment

After containment, the next critical question is: how much data was compromised? Impact assessment quantifies the volume of files accessed or exfiltrated, the sensitivity classifications involved, and the number of affected data subjects. This information drives regulatory notification decisions (GDPR 72-hour window, state breach notification laws) and determines the severity classification for executive reporting.

  1. Determine the volume and type of data exfiltrated
  2. Classify data sensitivity: PII, PCI, PHI, confidential business data
  3. Identify affected data subjects (customers, employees, partners)
  4. Assess regulatory notification requirements: GDPR, HIPAA, state breach laws
  5. Calculate potential impact: financial, reputational, regulatory

KQL - Impact Assessment: Files and Sensitive Data Types

// WHAT: Quantify the scope of data exfiltration by counting files accessed,
//       sensitive data types matched, and exfiltration channels used.
// WHY:  Regulatory notification decisions (GDPR Art. 33, HIPAA Breach Rule)
//       require knowing WHAT data was compromised, HOW MUCH, and WHERE it went.
//       This query produces the evidence needed for legal and compliance teams.
// USAGE: Replace targetUser and adjust the time window to match the incident.
let targetUser = "compromised-user@contoso.com";
let incidentStart = ago(48h);
// --- File volume and types accessed on endpoint ---
DeviceFileEvents
| where Timestamp > incidentStart
// InitiatingProcessAccountName holds the bare username; match on the UPN instead
| where InitiatingProcessAccountUpn =~ targetUser
| where ActionType in ("FileCreated", "FileModified", "FileRenamed")
| summarize
    TotalFilesAccessed = dcount(FileName),
    UniqueFileTypes = make_set(tostring(split(FileName, ".")[-1])),
    FirstAccess = min(Timestamp),
    LastAccess = max(Timestamp),
    DevicesUsed = dcount(DeviceName)
| extend DurationMinutes = datetime_diff('minute', LastAccess, FirstAccess)
// --- Sensitive information types detected ---
// NOTE: Run this second summary as a separate query - advanced hunting
//       returns results for only one tabular expression at a time
CloudAppEvents
| where Timestamp > incidentStart
| where AccountId has targetUser
| where ActionType in ("FileDownloaded", "FileUploaded", "FileCopied")
| summarize
    FilesDownloaded = countif(ActionType == "FileDownloaded"),
    FilesUploaded = countif(ActionType == "FileUploaded"),
    ExfilDestinations = make_set(tostring(RawEventData.TargetAppName)),
    FileNames = make_set(tostring(RawEventData.SourceFileName), 50)
| extend ExfilChannels = array_length(ExfilDestinations)
💡 Pro Tip: Save the impact assessment results as a JSON report and attach it to the incident. For GDPR breaches, the 72-hour notification clock starts when the controller becomes “aware” of the breach - having rapid, accurate impact data lets you make a notification decision faster and with confidence.

Step 7 · Create Custom DLP Detection Rules

Custom detection rules in Defender XDR let you codify lessons learned from investigations into automated detections. The most valuable rules correlate DLP signals with other security events - catching attacks that no single product would detect on its own. Below are three high-value rules: DLP + risky sign-in correlation, bulk USB exfiltration after mailbox rule creation, and excessive DLP override abuse.

  1. Create a custom detection rule: DLP violation + risky sign-in within 1 hour
  2. Create a rule: bulk file copy to USB following mailbox forwarding rule creation
  3. Create a rule: DLP override used more than 3 times by same user in 24 hours
  4. Set automated response actions: isolate device, disable user, create high-priority incident

KQL - Custom Detection Rule: DLP + Risky Sign-In

// WHAT: Custom detection rule that fires when a user triggers a DLP
//       violation within 1 hour of a medium/high-risk sign-in.
// WHY:  This cross-signal correlation catches credential compromise
//       leading to data exfiltration - the #1 pattern in insider
//       threat and advanced persistent threat (APT) data theft.
// HOW TO DEPLOY:
//   1. Go to Defender XDR > Hunting > Custom detection rules
//   2. Click "+ Create detection rule"
//   3. Paste this query and set frequency to 1 hour
//   4. Configure automated actions: High severity, isolate device
// OUTPUT COLUMNS: Required columns for custom detection rule entities
//   - Timestamp, ReportId: required by every custom detection rule
//   - AccountObjectId, AccountUpn: maps to User entity
//   - DeviceName, DeviceId: maps to Device entity
let riskySignIns = AADSignInEventsBeta
| where Timestamp > ago(1h)
| where RiskLevelDuringSignIn in ("medium", "high")
| where ErrorCode == 0
| project SignInTime = Timestamp, AccountObjectId,
          RiskLevel = RiskLevelDuringSignIn, IPAddress;
DeviceEvents
| where Timestamp > ago(1h)
| where ActionType in ("SensitiveFileRead", "RemovableMediaAccess",
                       "CloudAppUnSanctionedAccess")
| join kind=inner riskySignIns on AccountObjectId
| where Timestamp > SignInTime
| where (Timestamp - SignInTime) <= 1h
| project Timestamp, ReportId, DeviceName, DeviceId,
          AccountName, AccountObjectId,
          AccountUpn = AccountName,
          ActionType, FileName, RiskLevel, IPAddress
| extend AlertTitle = strcat("DLP violation after risky sign-in: ",
                             AccountName, " - ", ActionType)

KQL - Custom Detection Rule: Excessive DLP Overrides

// WHAT: Detect users who override DLP policy blocks more than 3 times
//       within a 24-hour window - a strong indicator of intentional
//       data exfiltration or policy circumvention.
// WHY:  Occasional overrides are expected (legitimate business needs),
//       but repeated overrides suggest a user deliberately bypassing
//       DLP controls to move sensitive data.
// DEPLOY: Custom detection rule with 24h frequency, Medium severity.
CloudAppEvents
| where Timestamp > ago(24h)
| where ActionType == "DlpRuleMatch"
| extend PolicyAction = tostring(RawEventData.PolicyAction)
| where PolicyAction == "Override"
| summarize
    OverrideCount = count(),
    // Custom detections need real Timestamp and ReportId columns -
    // take the latest event as the representative record
    Timestamp = max(Timestamp),
    ReportId = max(ReportId),
    Policies = make_set(tostring(RawEventData.PolicyName)),
    Files = make_set(tostring(RawEventData.SourceFileName), 10)
    by AccountObjectId = tostring(RawEventData.UserId),
       AccountName = tostring(RawEventData.UserPrincipalName)
| where OverrideCount > 3
| project Timestamp, ReportId, AccountObjectId, AccountName,
          OverrideCount, Policies, Files
| extend AlertTitle = strcat("Excessive DLP overrides: ",
                             AccountName, " (", OverrideCount, "x)")
💡 Pro Tip: When deploying custom detection rules, start with “Audit only” mode for 2 weeks to validate the query produces accurate results. Once confident in the signal quality, enable automated response actions. Always include the AccountObjectId and DeviceId columns so Defender XDR can properly map entities in the generated incident.

Step 8 · Build DLP Investigation Playbooks

Consistent investigation quality requires documented playbooks that define triage criteria, escalation paths, and response actions for each severity level. Without playbooks, the same DLP incident might be handled differently depending on which analyst is on shift. Define clear SLAs and decision trees so every DLP incident gets the appropriate level of investigation.

  1. Define investigation steps for each DLP incident severity
  2. Low: audit only - review in weekly DLP report, no immediate action required
  3. Medium: investigate within 4 hours - contact user, verify intent, check for identity compromise
  4. High: investigate immediately - full incident response, containment if needed, evidence preservation
  5. Critical: automated containment + immediate SOC escalation + executive notification within 1 hour
💡 Pro Tip: Integrate DLP severity with your overall incident classification scheme. A “High” DLP incident that correlates with identity compromise should automatically escalate to “Critical” and trigger the full cyber incident response plan. Include this escalation logic in your playbook’s decision tree.
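
The escalation logic above can be codified so every analyst applies it identically. A minimal PowerShell sketch, where the function name, SLA values, and action labels are all hypothetical placeholders to adapt to your own IR tooling:

```powershell
# Hypothetical helper encoding the severity/SLA decision tree above.
# All names are illustrative - not part of any Microsoft module.
function Get-DlpResponsePlan {
    param(
        [ValidateSet("Low", "Medium", "High", "Critical")]
        [string]$Severity,
        [bool]$HasIdentityCorrelation = $false
    )
    # Playbook rule: High severity + identity compromise auto-escalates to Critical
    if ($Severity -eq "High" -and $HasIdentityCorrelation) {
        $Severity = "Critical"
    }
    switch ($Severity) {
        "Low"      { @{ SlaHours = 168; Actions = @("WeeklyDlpReportReview") } }
        "Medium"   { @{ SlaHours = 4;   Actions = @("ContactUser", "VerifyIntent", "CheckIdentityRisk") } }
        "High"     { @{ SlaHours = 0;   Actions = @("FullIncidentResponse", "Containment", "PreserveEvidence") } }
        "Critical" { @{ SlaHours = 0;   Actions = @("AutomatedContainment", "SocEscalation", "ExecNotifyWithin1h") } }
    }
}

# Example: a High-severity DLP incident with a correlated identity alert
$plan = Get-DlpResponsePlan -Severity "High" -HasIdentityCorrelation $true
# $plan follows the Critical path per the escalation rule above
```

Encoding the decision tree as code also makes it testable: you can assert that every severity/correlation combination maps to the intended SLA before rolling the playbook out to the SOC.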

Step 9 · Generate Compliance and Legal Reports

DLP incident reports serve multiple audiences: the SOC needs technical details, legal counsel needs evidence chains, compliance needs regulatory notification assessments, and executives need business impact summaries. Structure your reports to address all stakeholders with a single document that has audience-specific sections.

  1. Create incident reports documenting: what data, how much, where it went, who was affected
  2. Include investigation timeline and remediation actions with timestamps
  3. Document evidence chain for potential legal proceedings (chain of custody)
  4. Prepare regulatory notification if required (include template for ICO, HHS, state AG)
  5. Include the KQL query results from Steps 4–6 as appendices to the report
💡 Pro Tip: Pre-build report templates for each regulatory framework you operate under. A GDPR breach notification template should include: nature of the breach, categories and approximate numbers of data subjects, categories of personal data records, likely consequences, and measures taken or proposed. Having this template ready reduces notification preparation from days to hours.
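
As a sketch of the template idea, the skeleton below assembles the GDPR Article 33 fields listed above into a JSON report. The field names and sample values are illustrative assumptions, not a standard schema:

```powershell
# Illustrative GDPR Art. 33 notification skeleton - field names are assumptions
# based on the required notification content, not an official format.
$breachReport = [ordered]@{
    IncidentId            = "INC-XXXX"       # from your ticketing system
    NatureOfBreach        = "Exfiltration of customer records via USB and personal cloud storage"
    DataSubjectCategories = @("Customers")
    ApproxDataSubjects    = 500
    DataRecordCategories  = @("PII")         # e.g. names, emails, addresses
    LikelyConsequences    = "Elevated risk of phishing and identity theft for affected customers"
    MeasuresTaken         = @("AccountDisabled", "SessionsRevoked", "DeviceIsolated", "EvidenceCollected")
    AwarenessTimestamp    = (Get-Date -Format "o")
    NotificationDeadline  = (Get-Date).AddHours(72).ToString("o")   # GDPR 72-hour window
}
$breachReport | ConvertTo-Json | Out-File "DLP-Breach-Notification-Draft.json"
```

Pre-populating the static fields per regulatory framework leaves only the incident-specific values to fill in during a live response.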

Step 10 · Configure Automated DLP Response

Manual DLP response does not scale. When you handle hundreds of DLP incidents per month, automated triage and response workflows are essential. Microsoft Sentinel playbooks (Logic Apps) can automatically perform containment, notification, and ticketing based on incident severity and correlation signals. The script below creates a Sentinel playbook stub that responds to DLP incidents with Teams notifications and ServiceNow ticket creation.

  1. Create Logic App / Sentinel playbooks for automated DLP incident response
  2. Auto-actions: send Teams alert to SOC, create ServiceNow ticket, isolate device
  3. Configure automated evidence collection for high-severity DLP incidents
  4. Test automated workflows with simulated DLP violations

PowerShell - Deploy Sentinel Playbook Stub for DLP Response

# -------------------------------------------------------------------
# WHAT: Create a Sentinel playbook (Logic App) stub for automated
#       DLP incident response in Microsoft Sentinel.
# WHY:  Automates SOC notification, ticket creation, and containment
#       for DLP incidents - reducing MTTR from hours to minutes.
# PREREQ: Az.LogicApp module, Sentinel workspace, Logic App Contributor role.
# NOTE:  This creates the Logic App skeleton; configure the specific
#        connectors (Teams, ServiceNow, MDE) in the Azure portal.
# -------------------------------------------------------------------

# Import required module
Import-Module Az.LogicApp

# Configuration
$resourceGroup    = "rg-sentinel-prod"
$location         = "eastus"
$playbookName     = "DLP-HighSeverity-Response"
$sentinelWorkspace = "law-sentinel-prod"

# Logic App definition: trigger on Sentinel incident creation
# Single-quoted here-string so PowerShell leaves the $ tokens in the JSON intact
$definition = @'
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "Microsoft_Sentinel_incident": {
        "type": "ApiConnectionWebhook",
        "inputs": {
          "body": {
            "callback_url": "@{listCallbackUrl()}"
          },
          "host": {
            "connection": { "name": "@parameters('$connections')['azuresentinel']['connectionId']" }
          },
          "path": "/incident-creation"
        }
      }
    },
    "actions": {
      "Filter_DLP_Incidents": {
        "type": "If",
        "expression": {
          "and": [
            { "contains": ["@triggerBody()?['object']?['properties']?['title']", "DLP"] },
            { "equals": ["@triggerBody()?['object']?['properties']?['severity']", "High"] }
          ]
        },
        "actions": {
          "Post_Teams_Alert": {
            "type": "ApiConnection",
            "inputs": {
              "body": {
                "messageBody": "\ud83d\udea8 High-Severity DLP Incident: @{triggerBody()?['object']?['properties']?['title']}"
              }
            }
          },
          "Create_ServiceNow_Ticket": {
            "type": "ApiConnection",
            "inputs": {
              "body": {
                "short_description": "DLP Incident: @{triggerBody()?['object']?['properties']?['title']}",
                "urgency": "1",
                "impact": "1"
              }
            },
            "runAfter": { "Post_Teams_Alert": ["Succeeded"] }
          }
        }
      }
    }
  }
}
'@

# Deploy the Logic App
New-AzLogicApp `
    -ResourceGroupName $resourceGroup `
    -Location $location `
    -Name $playbookName `
    -Definition ($definition | ConvertFrom-Json).definition

Write-Host "[DEPLOYED] Sentinel playbook stub: $playbookName" -ForegroundColor Green
Write-Host "[NEXT] Configure API connections in Azure portal:" -ForegroundColor Yellow
Write-Host "  1. Microsoft Sentinel connector" -ForegroundColor Yellow
Write-Host "  2. Microsoft Teams connector" -ForegroundColor Yellow
Write-Host "  3. ServiceNow connector" -ForegroundColor Yellow
Write-Host "  4. Microsoft Defender for Endpoint connector" -ForegroundColor Yellow
💡 Pro Tip: After deploying the stub, add a conditional branch based on whether the DLP incident has correlated identity alerts. If yes, automatically add the Isolate device and Disable user actions. If no (standalone DLP alert), limit to notification and ticket creation. This prevents over-containment on accidental policy matches.

📚 Documentation Resources

  • Learn about DLP - Comprehensive overview of Microsoft Purview DLP capabilities
  • Investigate DLP alerts in Defender XDR - Official guide for DLP incident investigation in the unified portal
  • Custom detection rules - Create and manage custom detection rules with KQL
  • MDE response actions - Device isolation, investigation package, and other containment actions
  • Revoke sign-in sessions (Graph API) - Microsoft Graph API for revoking user sessions programmatically
  • Sentinel playbooks - Automate threat response with Logic App playbooks in Sentinel
  • Advanced hunting overview - KQL-based threat hunting across Defender XDR tables

Summary

What You Accomplished

  • Investigated DLP incidents in the unified Defender XDR portal
  • Correlated DLP alerts with identity and endpoint signals
  • Traced data exfiltration using the attack story timeline
  • Used advanced hunting KQL for DLP investigation
  • Created custom detection rules for DLP-related attacks
  • Built investigation playbooks and automated response workflows
