Advanced · ⏱ 120 min · 📄 16 Steps

AI Security Posture Management (DSPM for AI)

Discover AI workloads in Defender for Cloud, assess AI model deployment risks, configure security recommendations for Azure OpenAI and Azure AI services, and build an AI security governance dashboard.

📋 Overview

About This Lab

This lab focuses on AI Security Posture Management in Microsoft Defender for Cloud: a set of capabilities purpose-built to discover, assess, and harden AI workloads running in Azure. You will learn how Defender for Cloud automatically discovers Azure OpenAI, Azure AI Services, and Azure Machine Learning deployments, then evaluates them against security benchmarks covering model access controls, network isolation, data encryption, and API security. The lab walks through configuring threat protection for AI services, building attack path analysis for AI compromise scenarios, mapping AI workloads to regulatory compliance frameworks, and integrating AI security findings with Microsoft Sentinel for cross-signal correlation.

🎯 What You Will Learn

  1. Enable AI security posture management in Defender for Cloud
  2. Discover Azure OpenAI, Azure AI Services, and ML model deployments
  3. Assess AI workload security against Microsoft cloud security benchmarks
  4. Configure recommendations for AI model access controls and network isolation
  5. Monitor AI API usage patterns for anomalous behaviour
  6. Set up threat protection for Azure AI services
  7. Create custom security initiatives for AI workloads
  8. Configure governance rules for AI resource compliance
  9. Monitor AI model endpoints for data exfiltration risks
  10. Integrate AI security findings with Sentinel for correlation
  11. Build attack path analysis for AI workload compromise scenarios
  12. Configure regulatory compliance mapping for AI governance frameworks
  13. Set up continuous export of AI security findings
  14. Create workbooks for AI security posture visualization
  15. Implement runtime protection for AI inference endpoints
  16. Build an executive AI security posture dashboard

🏢 Enterprise Scenario

A financial services company with 15,000 employees has deployed multiple Azure OpenAI instances for fraud detection, customer service chatbots, and document processing. The security team needs to discover all AI deployments, assess their security posture, monitor for prompt injection attacks, and ensure AI models are not exposing sensitive customer financial data through API endpoints.

⚙️ Prerequisites

  • Azure subscription with Microsoft Defender for Cloud enabled
  • Defender CSPM plan active on target subscriptions
  • Azure OpenAI or Azure AI Services resource deployed
  • Security Admin role or equivalent on the Azure subscription
  • Log Analytics workspace connected to Microsoft Sentinel

Step 1. Enable AI Workload Protection

In the Azure portal, navigate to Defender for Cloud > Environment settings. Enable the AI threat protection plan for subscriptions containing Azure OpenAI and Azure AI Services resources. This enables discovery, security assessment, and runtime threat detection for AI workloads.
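The same plan can be enabled from the Azure CLI. This is a sketch: the plan name "AI" is an assumption about how the AI threat protection plan is exposed in your tenant, so confirm the exact name with `az security pricing list` first.

```shell
# Sketch: enable the Defender plan covering AI workloads on the current
# subscription. The plan name "AI" is an ASSUMPTION - list available plans
# first and use the name shown for AI threat protection in your tenant:
az security pricing list -o table

az security pricing create \
  --name "AI" \
  --tier "Standard"

# Verify the plan is now active (expect "Standard").
az security pricing show --name "AI" --query "pricingTier"
```

Once the plan shows `Standard`, discovery and assessment of AI resources begins automatically on the next scan cycle.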

Step 2. Discover AI Resources

Review the AI workload inventory showing all discovered Azure OpenAI accounts, Azure AI Search instances, Azure Machine Learning workspaces, and Cognitive Services deployments. Examine the security score for each AI resource and identify unprotected AI endpoints.
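The inventory view can also be reproduced from the command line with an Azure Resource Graph query; a sketch, assuming the `resource-graph` CLI extension is installed:

```shell
# Sketch: enumerate AI resources across all subscriptions you can read.
# Requires the Resource Graph extension:
#   az extension add --name resource-graph
az graph query -q "
resources
| where type in~ ('microsoft.cognitiveservices/accounts',
    'microsoft.machinelearningservices/workspaces',
    'microsoft.search/searchservices')
| project name, type, location, resourceGroup
" -o table
```

Comparing this list against the Defender for Cloud inventory is a quick way to spot AI resources provisioned outside governed channels (shadow AI).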

Step 3. Review AI Security Recommendations

Defender for Cloud generates specific security recommendations for AI services based on the Microsoft cloud security benchmark. Review and prioritise findings for your Azure OpenAI, Azure AI Services, and Azure Machine Learning deployments.

Portal Instructions

  1. Navigate to Defender for Cloud > Recommendations
  2. Filter by Resource type: Microsoft.CognitiveServices/accounts, Microsoft.MachineLearningServices/workspaces
  3. Review high-severity recommendations such as:
    • Azure AI Services resources should restrict network access
    • Azure AI Services resources should have key access disabled
    • Azure AI Services resources should use private link
    • Diagnostic logs in Azure AI Services should be enabled
  4. Click on each recommendation to view affected resources and remediation steps

Query AI Recommendations via KQL

// List all unhealthy AI-service security recommendations grouped by
// severity and name.
// WHY: AI services (Azure OpenAI, Cognitive Services, Azure ML) have
// unique security risks - exposed API endpoints can lead to data
// exfiltration, prompt injection, and unauthorized model access.
// This query surfaces which AI-specific controls are failing.
// Properties filter: matches resources under CognitiveServices,
//   MachineLearningServices, or OpenAI resource providers.
// OUTPUT: Table of recommendation names with severity and count.
// High-severity findings (e.g., "disable key access", "use private link")
// should be remediated first as they directly expose AI endpoints.
SecurityRecommendation
| where RecommendationState == "Unhealthy"
| where Properties has "CognitiveServices" or Properties has "MachineLearningServices" or Properties has "OpenAI"
| summarize Count=count() by RecommendationSeverity, RecommendationName
| order by RecommendationSeverity asc, Count desc

Step 4. Configure AI Model Access Controls

Restrict who can access AI model deployments and enforce least-privilege access controls on Azure OpenAI endpoints.

Portal Instructions

  1. Navigate to your Azure OpenAI resource > Access control (IAM)
  2. Review current role assignments: identify over-privileged users with Cognitive Services Contributor
  3. Replace broad roles with scoped roles: Cognitive Services OpenAI User for consumers, Cognitive Services OpenAI Contributor for model deployers
  4. Enable managed identity for applications calling Azure OpenAI instead of API keys
  5. Disable API key access: navigate to Resource management > Keys and Endpoint > disable key-based access

Enforce Managed Identity Access

# Disable local (key-based) authentication on Azure OpenAI.
# WHY: API keys are the #1 security risk for AI services - they cannot
# be scoped to specific users, cannot enforce conditional access, and
# provide no per-user audit trail. Disabling keys forces all callers
# to authenticate via Entra ID managed identities or service principals.
# disableLocalAuth: true = reject all API key-based requests.
az cognitiveservices account update \
  --name "my-openai-instance" \
  --resource-group "rg-ai-workloads" \
  --custom-domain "my-openai-instance" \
  --api-properties "{\"disableLocalAuth\": true}"

# Grant a managed identity the scoped "Cognitive Services OpenAI User" role.
# This role allows the application to call the model endpoint (chat, completions)
# but NOT deploy new models or modify the resource configuration.
# --assignee-object-id: the Object ID of the app's managed identity
#   (find via: az identity show --name <identity-name> --query principalId).
# --scope: restricts the role to this specific OpenAI instance only.
# WHY: Least-privilege access - apps get inference-only permissions,
# not the broad Contributor role that could modify or delete the resource.
az role assignment create \
  --assignee-object-id <managed-identity-object-id> \
  --role "Cognitive Services OpenAI User" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg-ai-workloads/providers/Microsoft.CognitiveServices/accounts/my-openai-instance"

# Verify that key-based access is now blocked.
# Expected output: an error message confirming that local authentication
# is disabled. If keys are returned, the disableLocalAuth setting did
# not apply - check the resource configuration.
az cognitiveservices account keys list \
  --name "my-openai-instance" \
  --resource-group "rg-ai-workloads" 2>&1 || echo "Key access disabled - using Entra ID auth only"
💡 Pro Tip: Disabling key-based authentication is the single most impactful security control for Azure OpenAI. API keys cannot be scoped, audited per-user, or revoked granularly. Managed identities with Entra ID RBAC provide full audit trails and conditional access enforcement.

Step 5. Enforce Network Isolation for AI Services

Configure private endpoints and network restrictions to ensure AI model inference traffic stays on the corporate network and is not exposed to the internet.

Portal Instructions

  1. Navigate to your Azure OpenAI resource > Networking
  2. Under Firewalls and virtual networks, select Disabled for public access
  3. Go to Private endpoint connections > click + Private endpoint
  4. Configure the endpoint in your hub VNet: name pe-openai-prod, subnet snet-ai-services
  5. Create a Private DNS Zone: privatelink.openai.azure.com
  6. Verify connectivity: from a VM in the VNet, resolve the Azure OpenAI hostname and confirm it returns a private IP

Configure via Azure CLI

# Disable public access to the Azure OpenAI endpoint.
# After this, the model can only be reached via private endpoints
# within your corporate VNet - no internet-based API calls allowed.
# WHY: Public endpoints expose AI models to brute-force attacks,
# prompt injection from anonymous sources, and data exfiltration risk.
az cognitiveservices account update \
  --name "my-openai-instance" \
  --resource-group "rg-ai-workloads" \
  --public-network-access Disabled

# Create a private endpoint to access Azure OpenAI from your VNet.
# --vnet-name / --subnet: the network where your applications run.
# --group-id "account": the sub-resource type for Cognitive Services.
# --private-connection-resource-id: full ARM ID of the OpenAI instance.
# After creation, traffic to the OpenAI hostname resolves to a private IP
# within your subnet instead of a public Microsoft IP.
az network private-endpoint create \
  --name pe-openai-prod \
  --resource-group rg-ai-workloads \
  --vnet-name vnet-hub \
  --subnet snet-ai-services \
  --private-connection-resource-id "/subscriptions/<sub-id>/resourceGroups/rg-ai-workloads/providers/Microsoft.CognitiveServices/accounts/my-openai-instance" \
  --group-id account \
  --connection-name openai-pe-connection

# Create a Private DNS Zone for Azure OpenAI hostname resolution.
# WHY: When applications call my-openai-instance.openai.azure.com,
# DNS must resolve to the private IP. This zone handles that mapping
# so existing code works without endpoint URL changes.
az network private-dns zone create \
  --resource-group rg-ai-workloads \
  --name "privatelink.openai.azure.com"

# Link the DNS zone to your VNet so VMs and apps in the network
# can resolve the private endpoint hostname automatically.
# --registration-enabled false: prevents auto-registering VM hostnames
# in this zone (not needed for service endpoints).
az network private-dns link vnet create \
  --resource-group rg-ai-workloads \
  --zone-name "privatelink.openai.azure.com" \
  --name link-openai-vnet \
  --virtual-network vnet-hub \
  --registration-enabled false
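The private-endpoint setup can be verified from a VM inside the VNet, as described in step 6 of the portal instructions; a sketch, using this lab's example hostname:

```shell
# From a VM in vnet-hub: resolve the Azure OpenAI hostname and confirm it
# returns a private IP (e.g. 10.x.x.x from snet-ai-services) rather than a
# public Microsoft IP.
nslookup my-openai-instance.openai.azure.com

# A private-range answer confirms the private endpoint and DNS zone link
# are working. A public IP means the privatelink DNS zone is not linked to
# this VNet, or the private endpoint's DNS record was not created.
```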

Step 6. Set Up AI Threat Protection Alerts

Defender for Cloud’s AI threat protection detects anomalous usage patterns including prompt injection attempts, credential exposure in prompts, and unusual API consumption spikes.

Key Alert Types

  1. Suspicious volume of API calls: detects brute-force or enumeration attempts against model endpoints
  2. Credential leak in AI prompt: identifies secrets, passwords, or tokens submitted as prompts
  3. Jailbreak attempt detected: catches prompt injection patterns designed to bypass model safety filters
  4. Anomalous data extraction: detects unusually large model outputs that may indicate data exfiltration
  5. Access from suspicious IP: flags calls from Tor exit nodes, anonymous proxies, or threat-intel-flagged IPs

Query Threat Alerts via KQL

// List all AI-specific threat alerts from the last 7 days.
// ProductName "Azure Defender for AI" filters to alerts generated by
// the AI threat protection plan (prompt injection, credential leaks,
// jailbreak attempts, anomalous API volume, suspicious IP access).
// Tactics and Techniques: map to the MITRE ATT&CK framework so SOC
// analysts can correlate with other attack signals.
// OUTPUT: Table of alert details sorted by severity then time.
// High/Critical alerts warrant immediate investigation.
SecurityAlert
| where TimeGenerated > ago(7d)
| where ProductName == "Azure Defender for AI"
| project TimeGenerated, AlertName, AlertSeverity, Description,
    ResourceId, Tactics, Techniques
| order by AlertSeverity asc, TimeGenerated desc

// Correlate AI threat alerts with Entra ID sign-in risk signals.
// WHY: An AI jailbreak attempt from an IP that also has a risky sign-in
// is a much stronger indicator of compromise than either signal alone.
// The join matches the caller IP from the AI alert with the sign-in IP.
// OUTPUT: Rows showing the alert, the associated user identity, and
// their risk state (medium/high). Escalate these as priority incidents.
SecurityAlert
| where ProductName == "Azure Defender for AI"
| extend CallerIP = tostring(ExtendedProperties["CallerIpAddress"])
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(7d)
    | project IPAddress, UserPrincipalName, RiskState
) on $left.CallerIP == $right.IPAddress
| project TimeGenerated, AlertName, UserPrincipalName, RiskState, CallerIP
⚠️ Important: Prompt injection is one of the OWASP Top 10 risks for LLM applications. Defender for Cloud detects injection patterns in real time, but you should also implement input validation and output filtering in your application layer.

Step 7. Create Custom Security Initiatives for AI

Build a custom security initiative that groups all AI-relevant policy definitions into a single compliance standard tailored to your organisation’s AI governance requirements.

Portal Instructions

  1. Navigate to Defender for Cloud > Environment settings > select your subscription
  2. Go to Security policies > click + Create custom initiative
  3. Name: AI-Security-Baseline-v1, Category: AI Governance
  4. Add policy definitions:
    • Azure AI Services should disable key access
    • Azure AI Services should use private link
    • Azure AI Services should enable diagnostic logs
    • Azure Machine Learning workspaces should use private link
    • Azure OpenAI should use customer-managed keys
  5. Set effect to Audit initially, then Deny once baselines are established
  6. Assign the initiative to subscriptions containing AI workloads

Create Initiative via Azure CLI

# List all built-in Azure Policy definitions related to AI services.
# Filters by display name containing "AI Services", "Cognitive Services",
# or "Machine Learning" to find applicable security controls.
# OUTPUT: Table of policy definition names and display names.
# Review this list to select which policies to include in your initiative.
az policy definition list \
  --query "[?contains(displayName,'AI Services') || contains(displayName,'Cognitive Services') || contains(displayName,'Machine Learning')].{name:name,displayName:displayName}" \
  -o table

# Create a custom policy set (initiative) that groups AI-relevant policies
# into a single compliance standard for your organisation.
# --definitions: JSON file listing the policy definition IDs to include
#   (e.g., disable key access, enforce private link, enable diagnostics).
# --metadata category "AI Governance": makes the initiative easy to find
#   in the portal under the AI Governance category.
# WHY: Custom initiatives let you track AI security posture as a single
# compliance percentage, making it easy to report to leadership.
az policy set-definition create \
  --name "ai-security-baseline-v1" \
  --display-name "AI Security Baseline v1" \
  --description "Custom initiative for AI workload security posture" \
  --definitions @ai-initiative-definitions.json \
  --metadata '{"category":"AI Governance"}'

# Assign the initiative to the subscription(s) containing AI workloads.
# Replace <sub-id> with your subscription GUID.
# Once assigned, Azure Policy evaluates all AI resources against the
# initiative’s rules and reports compliance in Defender for Cloud.
az policy assignment create \
  --name "ai-baseline-assignment" \
  --policy-set-definition "ai-security-baseline-v1" \
  --scope "/subscriptions/<sub-id>"

Step 8. Configure Governance Rules for AI Compliance

Define governance rules that automatically assign owners and deadlines when AI security recommendations go unhealthy.

Portal Instructions

  1. Navigate to Defender for Cloud > Governance rules
  2. Click + Add rule
  3. Rule name: ai-workload-governance
  4. Scope: subscriptions containing AI resources
  5. Condition: recommendations matching AI Security Baseline v1 initiative
  6. Owner: AI Platform Team DL
  7. Remediation timeframe: 7 days for High severity, 14 days for Medium
  8. Grace period: 3 days
  9. Notification: weekly email digest + overdue escalation to CISO
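The same rule can be created programmatically through the `Microsoft.Security/governanceRules` REST API. This is a sketch only: the API version, property names, and the owner address `ai-team@contoso.com` are assumptions, so verify the field schema against the current governance rules API reference before relying on it.

```shell
# Sketch: create a governance rule via az rest.
# ASSUMPTIONS: api-version and the property schema below should be checked
# against the current Microsoft.Security/governanceRules API reference;
# ai-team@contoso.com is an illustrative owner address. Condition sets that
# scope the rule to the AI recommendations are omitted here for brevity.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<sub-id>/providers/Microsoft.Security/governanceRules/ai-workload-governance?api-version=2022-01-01-preview" \
  --body '{
    "properties": {
      "displayName": "ai-workload-governance",
      "description": "Auto-assign owners and deadlines for AI security findings",
      "rulePriority": 100,
      "ruleType": "Integrated",
      "sourceResourceType": "Assessments",
      "remediationTimeframe": "7.00:00:00",
      "isGracePeriod": true,
      "ownerSource": { "type": "Manually", "value": "ai-team@contoso.com" },
      "governanceEmailNotification": {
        "disableManagerEmailNotification": false,
        "disableOwnerEmailNotification": false
      }
    }
  }'
```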
💡 Pro Tip: Tie governance rules to your change advisory board (CAB) process. When AI deployments fail security checks, the governance rule generates a remediation task that integrates with ServiceNow or Jira via Logic Apps.

Step 9. Monitor AI API Usage Patterns

Enable diagnostic logging on AI services and create dashboards to monitor API consumption, error rates, and anomalous usage patterns that may indicate compromised credentials or data exfiltration.

Enable Diagnostic Logging

# Enable diagnostic logging for Azure OpenAI to capture all API requests,
# responses, errors, and performance metrics.
# --resource: full ARM ID of the Azure OpenAI instance.
# --workspace: the Log Analytics workspace where logs are sent. Use the
#   same workspace connected to Sentinel for cross-signal correlation.
# --logs categoryGroup "allLogs": captures RequestResponse (API calls),
#   Audit (config changes), and Metrics (token counts, latency).
# WHY: Without diagnostic logging, you have no visibility into who is
# calling your AI models, what prompts they send, or how many tokens
# they consume. This is essential for detecting anomalous usage,
# credential abuse, and data exfiltration via large API responses.
az monitor diagnostic-settings create \
  --name "openai-diagnostics" \
  --resource "/subscriptions/<sub-id>/resourceGroups/rg-ai-workloads/providers/Microsoft.CognitiveServices/accounts/my-openai-instance" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/rg-security/providers/Microsoft.OperationalInsights/workspaces/law-security" \
  --logs '[{"categoryGroup":"allLogs","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'

KQL: Detect Anomalous API Usage

// Baseline: hourly API call volume per caller over the past 7 days.
// WHY: Establishes a normal usage pattern for each caller IP. Without
// a baseline, you cannot distinguish legitimate spikes (e.g., batch
// processing) from malicious activity (credential theft, enumeration).
// bin(TimeGenerated, 1h): groups calls into hourly windows.
// OUTPUT: Time series per caller showing call count and average tokens.
// Use this to set alerting thresholds based on actual usage patterns.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where Category == "RequestResponse"
| where TimeGenerated > ago(7d)
// NOTE: the token-count field inside properties_s depends on your diagnostic
// schema; "totalTokens" below is an assumed field name - adjust to match.
| summarize CallCount=count(), AvgTokens=avg(toint(parse_json(properties_s).totalTokens))
    by bin(TimeGenerated, 1h), CallerIPAddress
| order by TimeGenerated desc

// Anomaly detection: flag callers exceeding 3x their 30-day average.
// The baseline calculates each caller’s average daily call volume over
// the previous 30 days. Today’s volume is compared against this baseline.
// Callers with TodayCount > 3x AvgDaily are flagged as anomalous.
// WHY: A sudden 3x+ spike often indicates compromised API credentials
// being used for data extraction, or an automated attack probing the
// model for prompt injection vulnerabilities.
// Ratio column: shows how many times today’s volume exceeds the average.
// OUTPUT: Table of anomalous callers with their IP, today’s count,
// average, and ratio. Investigate callers with the highest ratios first.
let baseline = AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where TimeGenerated between(ago(30d) .. ago(1d))
| summarize AvgDaily=count()/30.0 by CallerIPAddress;
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where TimeGenerated > ago(1d)
| summarize TodayCount=count() by CallerIPAddress
| join kind=inner baseline on CallerIPAddress
| where TodayCount > AvgDaily * 3
| project CallerIPAddress, TodayCount, AvgDaily, Ratio=round(TodayCount*1.0/AvgDaily, 1)

Step 10. Investigate Data Exfiltration Risks

Monitor AI model endpoints for potential data exfiltration: attackers may use large prompt–response interactions to extract training data or sensitive information encoded in the model.

Investigation Steps

  1. Identify model endpoints processing unusually large payloads (token counts > 4,000 per response)
  2. Correlate high-volume callers with identity signals from Entra ID sign-in logs
  3. Review prompt content logs (if content logging is enabled) for patterns like "repeat all data" or "extract training examples"
  4. Check if the caller IP is associated with known threat actors via threat intelligence feeds
  5. Review the model deployment for content filters: ensure all four categories (hate, sexual, self-harm, violence) are set to Medium or stricter

KQL: Large Response Anomalies

// Detect unusually large AI model responses (completions > 4,000 tokens).
// WHY: Attackers may use crafted prompts (e.g., "repeat all data you know")
// to extract training data or sensitive information encoded in the model.
// Large response payloads are a strong indicator of data exfiltration
// attempts, especially from unfamiliar caller IPs.
// CompletionTokens > 4000: threshold for investigation. Normal business
//   interactions rarely exceed this. Adjust based on your application’s
//   expected response sizes.
// OUTPUT: Table of large-response events sorted by token count.
// Cross-reference CallerIPAddress with known corporate IPs - unknown
// IPs generating large responses are high-priority investigations.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where Category == "RequestResponse"
| where TimeGenerated > ago(24h)
// NOTE: the completion-token field inside properties_s depends on your
// diagnostic schema; "completionTokens" is an assumed field name - adjust.
| extend CompletionTokens = toint(parse_json(properties_s).completionTokens)
| where CompletionTokens > 4000
| project TimeGenerated, CallerIPAddress, OperationName, CompletionTokens, _ResourceId
| order by CompletionTokens desc

Step 11. Build Attack Path Analysis for AI Workloads

Use Defender CSPM attack path analysis to identify how an attacker could reach AI resources by chaining vulnerabilities across your environment.

Portal Instructions

  1. Navigate to Defender for Cloud > Attack path analysis
  2. Filter for paths targeting Azure OpenAI or Azure AI Services resources
  3. Review common attack paths:
    • Internet-exposed VM → stolen managed identity → Azure OpenAI with Contributor role
    • Overprivileged service principal → Key Vault with OpenAI API keys → model endpoint access
    • Compromised developer workstation → Azure ML workspace → training data in storage account
  4. Prioritise remediation of high-risk paths that expose customer data or model IP

Query Attack Paths via KQL

// Find attack paths that target AI resources (Azure OpenAI, Azure ML).
// WHY: Attack paths to AI services are especially dangerous because a
// compromised AI endpoint can be used for prompt injection against
// downstream consumers, mass data exfiltration via API calls, or
// reputational damage through unsafe content generation.
// Target filter: matches paths ending at CognitiveServices (OpenAI/AI
//   Services) or MachineLearningServices (Azure ML workspaces).
// OUTPUT: Table of attack paths with risk level, entry point, target
// resource, and suggested remediation steps. Prioritise Critical/High
// paths that start from internet-exposed resources.
SecurityAttackPath
| where Target has "CognitiveServices" or Target has "MachineLearningServices"
| project AttackPathDisplayName, RiskLevel, EntryPoint, Target, RemediationSteps
| order by RiskLevel desc
⚠️ Important: Attack paths to AI services are especially dangerous because a compromised Azure OpenAI endpoint can be used for prompt injection attacks against downstream consumers, data exfiltration via large-volume API calls, or reputational damage through unsafe content generation.

Step 12. Integrate AI Findings with Microsoft Sentinel

Stream AI security alerts and recommendations into Microsoft Sentinel for cross-signal correlation with identity, endpoint, and network data.

Configure Continuous Export

  1. Navigate to Defender for Cloud > Environment settings > Continuous export
  2. Select export type: Log Analytics workspace
  3. Enable export of: Security alerts, Recommendations, Secure score, Regulatory compliance
  4. Target workspace: your Sentinel-connected Log Analytics workspace
  5. Click Save
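Once the export is saved, a quick KQL check in the target workspace confirms data is flowing (allow some time for the first records to arrive):

```kql
// Verify exported alerts are landing in the Sentinel workspace.
// Expect "Azure Defender for AI" among the ProductName values once AI
// threat protection has generated at least one alert.
SecurityAlert
| where TimeGenerated > ago(24h)
| summarize Alerts=count() by ProductName
| order by Alerts desc
```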

Sentinel Analytics Rule for AI Threats

// Sentinel analytics rule: correlate AI jailbreak/injection alerts
// with risky Entra ID sign-ins within a 2-hour window.
// WHY: A jailbreak attempt alone could be noise, and a risky sign-in
// alone might be a false positive. But BOTH occurring from the same IP
// within 2 hours is a strong indicator of a compromised identity
// actively attacking your AI services.
// ai_alerts: captures jailbreak and injection alerts from Defender for AI,
//   extracting the caller IP from extended properties.
// risky_signins: captures medium/high risk sign-ins from Entra ID.
// The join matches on IP address, and the datetime_diff filter limits
// to events within ±2 hours of each other.
// OUTPUT: Correlated rows showing the alert, the user identity, their
// risk level, and the shared IP. These should generate Sentinel incidents
// for immediate SOC investigation.
let ai_alerts = SecurityAlert
| where ProductName == "Azure Defender for AI"
| where AlertName has "jailbreak" or AlertName has "injection"
| extend CallerIP = tostring(ExtendedProperties["CallerIpAddress"])
| project AlertTime=TimeGenerated, AlertName, CallerIP, ResourceId;
let risky_signins = SigninLogs
| where RiskLevelDuringSignIn in ("medium", "high")
| project SigninTime=TimeGenerated, UserPrincipalName, IPAddress, RiskLevelDuringSignIn;
ai_alerts
| join kind=inner risky_signins on $left.CallerIP == $right.IPAddress
| where abs(datetime_diff('hour', AlertTime, SigninTime)) < 2
| project AlertTime, AlertName, UserPrincipalName, CallerIP, RiskLevelDuringSignIn, ResourceId

Step 13. Map AI Workloads to Regulatory Compliance

Map your AI security posture to regulatory compliance frameworks including the EU AI Act, NIST AI RMF, and ISO 42001 to demonstrate governance readiness.

Portal Instructions

  1. Navigate to Defender for Cloud > Regulatory compliance
  2. Add standards relevant to AI governance: NIST SP 800-53, ISO 27001:2022, CIS Azure Benchmark
  3. Filter compliance controls to those applicable to AI services
  4. Review compliance percentage for AI-specific controls:
    • Access control (AC): least privilege for AI endpoints
    • System and communications protection (SC): encryption and network isolation
    • Audit and accountability (AU): logging of AI interactions
    • Risk assessment (RA): AI model risk evaluation
  5. Export the compliance assessment as a PDF for auditors and legal teams
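The underlying policy initiatives for these standards can also be assigned from the CLI. This sketch looks the initiative up by display name rather than hard-coding a GUID; the display-name filter is an assumption, so adjust it to match the revision your tenant shows.

```shell
# Find the built-in NIST SP 800-53 initiative (policy set definition).
# The display-name filter is an ASSUMPTION - adjust if your tenant lists a
# different revision (e.g. "NIST SP 800-53 Rev. 5").
az policy set-definition list \
  --query "[?contains(displayName,'NIST SP 800-53')].{name:name,displayName:displayName}" \
  -o table

# Assign the initiative at subscription scope so compliance evaluation
# covers the AI resources. Replace <initiative-name> with the "name" value
# returned by the lookup above.
az policy assignment create \
  --name "nist-800-53-ai" \
  --policy-set-definition "<initiative-name>" \
  --scope "/subscriptions/<sub-id>"
```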

Step 14. Configure Continuous Export of AI Findings

Set up continuous export to stream AI security data to Event Hubs for integration with third-party SIEM, SOAR, or ticketing systems.

Configure Event Hub Export

# Create an Event Hub namespace for streaming AI security data.
# WHY: Event Hubs enable real-time integration with third-party SIEM,
# SOAR, or ticketing systems (e.g., Splunk, ServiceNow, PagerDuty)
# that cannot natively connect to Log Analytics.
# --sku Standard: supports up to 1 MB/s throughput per partition.
az eventhubs namespace create \
  --name evhns-ai-security \
  --resource-group rg-security \
  --sku Standard \
  --location eastus

# Create the Event Hub within the namespace.
# --partition-count 4: allows 4 concurrent consumers to read in parallel.
# Use more partitions for higher throughput requirements.
az eventhubs eventhub create \
  --name evh-ai-alerts \
  --namespace-name evhns-ai-security \
  --resource-group rg-security \
  --partition-count 4

# Configure Defender for Cloud continuous export to stream AI alerts
# to the Event Hub in real time.
# --sources filter: only exports alerts from "Azure Defender for AI"
#   product, keeping the stream focused and reducing noise.
# --actions actionType "EventHub": sends matched alerts to the Event Hub.
# Replace <sub-id> with your subscription GUID.
# WHY: This enables near-real-time alerting in external tools without
# waiting for Log Analytics query schedules.
az security automation create \
  --name auto-ai-export \
  --resource-group rg-security \
  --scopes "[{\"description\":\"AI subscriptions\",\"scopePath\":\"/subscriptions/<sub-id>\"}]" \
  --sources "[{\"eventSource\":\"Alerts\",\"ruleSets\":[{\"rules\":[{\"propertyJPath\":\"ProductName\",\"propertyType\":\"String\",\"expectedValue\":\"Azure Defender for AI\",\"operator\":\"Contains\"}]}]}]" \
  --actions "[{\"eventHubResourceId\":\"/subscriptions/<sub-id>/resourceGroups/rg-security/providers/Microsoft.EventHub/namespaces/evhns-ai-security/eventhubs/evh-ai-alerts\",\"actionType\":\"EventHub\"}]"

Step 15. Build AI Security Posture Workbooks

Create Azure Monitor Workbooks that visualise the security posture of all AI workloads in a single dashboard.

Workbook KQL Tiles

// Tile 1: AI resource inventory grouped by type and Azure region.
// WHY: Gives a quick count of all AI assets under management -
// Azure OpenAI, ML workspaces, and AI Search services.
// If resources appear in unexpected regions, investigate whether they
// were provisioned through unauthorised channels (shadow AI).
// OUTPUT: Table of resource types, locations, and counts.
resources
| where type in~ ("microsoft.cognitiveservices/accounts",
    "microsoft.machinelearningservices/workspaces",
    "microsoft.search/searchservices")
| summarize Count=count() by type, location
| order by Count desc

// Tile 2: AI recommendation compliance status by recommendation name.
// WHY: Shows which security controls are passing vs. failing across
// all AI resources. Recommendations with high Unhealthy counts are
// your top remediation priorities.
// OUTPUT: Table with Healthy and Unhealthy counts per recommendation.
SecurityRecommendation
| where Properties has "CognitiveServices" or Properties has "MachineLearningServices"
| summarize Healthy=countif(RecommendationState=="Healthy"),
    Unhealthy=countif(RecommendationState=="Unhealthy")
    by RecommendationName

// Tile 3: 30-day trend of AI-specific threat alerts by severity.
// WHY: Reveals whether AI threats are increasing, decreasing, or stable.
// A rising trend in High-severity alerts demands immediate attention
// and may indicate an ongoing targeted campaign against your AI services.
// OUTPUT: Time series grouped by day and severity for line chart display.
SecurityAlert
| where ProductName == "Azure Defender for AI"
| summarize AlertCount=count() by bin(TimeGenerated, 1d), AlertSeverity
| order by TimeGenerated asc

// Tile 4: Top 10 callers by API request volume in the last 7 days.
// WHY: Identifies who is consuming the most AI capacity. Unexpected
// top callers (unknown IPs or service principals) may indicate
// credential theft or unauthorised model access.
// OUTPUT: Table of caller IPs ranked by call count.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where TimeGenerated > ago(7d)
| summarize Calls=count() by CallerIPAddress
| top 10 by Calls
💡 Pro Tip: Pin this workbook to a shared Azure Dashboard and grant read access to the AI Platform, Security, and Compliance teams. A centralised AI security posture view drives accountability across all stakeholders.

Step 16. Build the Executive AI Security Dashboard

Create an executive-level dashboard summarising AI security posture, risk trends, compliance status, and remediation progress for board and leadership reporting.

Dashboard Components

  1. AI Asset Inventory: total Azure OpenAI instances, ML workspaces, AI Search services, and their security tier
  2. AI Secure Score: percentage of AI resources meeting baseline security controls
  3. Threat Trend: 30-day chart of AI-specific alerts by severity with week-over-week comparison
  4. Top Risks: highest-severity open recommendations and unresolved attack paths targeting AI
  5. Compliance Summary: AI posture mapped to EU AI Act risk categories and NIST AI RMF functions
  6. Remediation Velocity: average time to resolve AI security findings vs. SLA targets
  7. API Usage Summary: total requests, unique callers, token consumption, and cost trends
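Component 6 (Remediation Velocity) is simply the average of resolve-minus-open times measured against an SLA target. A minimal Python sketch of that calculation, using hypothetical finding dates and an assumed 14-day SLA (neither comes from Defender for Cloud itself):

```python
from datetime import date

# Hypothetical exported findings: (opened, resolved) date pairs for
# closed AI security recommendations. Values are illustrative only.
findings = [
    (date(2024, 5, 1), date(2024, 5, 8)),
    (date(2024, 5, 3), date(2024, 5, 24)),
    (date(2024, 5, 10), date(2024, 5, 15)),
]

SLA_DAYS = 14  # assumed remediation SLA target

# Days each finding stayed open, then the two dashboard metrics.
days_to_resolve = [(resolved - opened).days for opened, resolved in findings]
avg_days = sum(days_to_resolve) / len(days_to_resolve)
within_sla_pct = 100.0 * sum(d <= SLA_DAYS for d in days_to_resolve) / len(days_to_resolve)

print(f"Average time to resolve: {avg_days:.1f} days")       # 11.0 days
print(f"Findings closed within SLA: {within_sla_pct:.0f}%")  # 67%
```

The same arithmetic can be done in KQL with `datetime_diff` once findings are continuously exported to a Log Analytics workspace.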

Executive Summary KQL

// Executive-level single-row summary of AI security posture.
// Combines recommendation health and threat alert data into one view.
// ai_recs: total AI recommendations, how many pass (Healthy) vs. fail.
// ai_alerts: total alerts in the last 30 days with High and Critical counts.
// The join on dummy=1 merges both single-row summaries into one row.
// HealthyPct: percentage of AI recommendations in a Healthy state;
//   this is your AI Secure Score. Target >80% for production readiness.
// WHY: Provides the CISO and board with a single snapshot of AI risk
// exposure without requiring them to navigate multiple dashboards.
// OUTPUT: Single row with TotalRecommendations, HealthyPct,
// AlertsLast30d, HighSevAlerts, and CriticalAlerts.
let ai_recs = SecurityRecommendation
| where Properties has "CognitiveServices" or Properties has "MachineLearningServices"
| summarize Total=count(),
    Healthy=countif(RecommendationState=="Healthy"),
    Unhealthy=countif(RecommendationState=="Unhealthy");
let ai_alerts = SecurityAlert
| where ProductName == "Azure Defender for AI"
| where TimeGenerated > ago(30d)
| summarize AlertsLast30d=count(),
    HighSev=countif(AlertSeverity=="High"),
    CritSev=countif(AlertSeverity=="Critical");
ai_recs | extend dummy=1
| join kind=inner (ai_alerts | extend dummy=1) on dummy
| project TotalRecommendations=Total, HealthyPct=round(Healthy*100.0/Total,1),
    AlertsLast30d, HighSevAlerts=HighSev, CriticalAlerts=CritSev
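If you continuously export these summaries, the HealthyPct roll-up can be reproduced outside Log Analytics for offline reporting. A minimal sketch with hypothetical counts (the 42/35 figures are illustrative, not from any real tenant):

```python
def ai_secure_score(healthy: int, total: int) -> float:
    """Percentage of AI recommendations in a Healthy state,
    rounded to one decimal place (mirrors the KQL round(..., 1))."""
    return round(healthy * 100.0 / total, 1)

# Hypothetical counts matching the query's ai_recs summarize step.
total_recs, healthy_recs = 42, 35
score = ai_secure_score(healthy_recs, total_recs)
print(f"HealthyPct: {score}%")  # 83.3%
assert score > 80, "below the >80% production-readiness target"
```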

Summary

What You Accomplished

  • Enabled AI workload protection and discovered Azure OpenAI, AI Services, and ML deployments
  • Reviewed and remediated AI-specific security recommendations
  • Configured least-privilege access controls and disabled key-based authentication
  • Enforced network isolation with private endpoints for AI services
  • Monitored AI threat alerts including prompt injection and data exfiltration attempts
  • Created custom security initiatives and governance rules for AI compliance
  • Integrated AI findings with Sentinel for cross-signal correlation
  • Built executive dashboards and workbooks for AI security posture reporting

Next Steps

  • Integrate with Purview DSPM for AI to extend data classification to AI interactions
  • Configure Defender for Cloud Apps AI governance to discover shadow AI usage
  • Implement runtime application-layer defences: input sanitisation, output filtering, and rate limiting
  • Schedule quarterly AI security posture reviews with the AI Platform and Compliance teams
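The runtime defences mentioned above (input sanitisation and rate limiting) can be approximated at the application layer in front of an inference endpoint. A toy sketch; the deny-list patterns and limits are illustrative assumptions, not production guidance (real deployments should use a vetted prompt-injection classifier):

```python
import re
import time
from collections import defaultdict, deque
from typing import Optional

# Illustrative deny-list of prompt-injection phrases (assumption, not
# an official pattern set).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
]

MAX_CALLS, WINDOW_SEC = 5, 60  # assumed per-caller sliding-window limit
_calls: dict = defaultdict(deque)

def allow_request(caller: str, prompt: str, now: Optional[float] = None) -> bool:
    """Return True only if the prompt passes the deny-list screen and
    the caller is under the sliding-window rate limit."""
    if any(p.search(prompt) for p in INJECTION_PATTERNS):
        return False
    now = time.monotonic() if now is None else now
    window = _calls[caller]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SEC:
        window.popleft()
    if len(window) >= MAX_CALLS:
        return False
    window.append(now)
    return True

print(allow_request("svc-app", "Summarise this incident report"))  # True
print(allow_request("svc-app", "Ignore previous instructions"))    # False
```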

📚 Documentation Resources

| Resource | Description |
| --- | --- |
| AI threat protection in Defender for Cloud | Enable and configure threat detection for Azure AI workloads |
| Azure OpenAI managed identity authentication | Replace API keys with Entra ID RBAC for Azure OpenAI |
| Configure virtual networks for Azure AI Services | Network isolation and private endpoint configuration |
| Attack path analysis reference | Identify and remediate critical attack paths to AI resources |
| Custom security initiatives | Create custom policy initiatives for AI governance |
| Governance rules in Defender for Cloud | Automate ownership and SLA assignment for security findings |