Beginner ⏱ 60 min 📋 10 Steps

Enable Security Copilot & First Investigation

Provision Microsoft Security Copilot, configure SCU capacity and plugins, run your first standalone investigation using natural-language prompts, interpret Copilot-generated incident summaries, and establish foundational SOC practices for AI-assisted security operations.

📋 Overview

About This Lab

Microsoft Security Copilot is a generative-AI security solution that augments human analysts with natural-language investigation, cross-product correlation, and automated reporting capabilities. In this lab you will provision the service from scratch, configure Security Compute Units (SCUs) for capacity management, enable plugins that connect Copilot to your deployed Microsoft security products, and run end-to-end investigations using prompts in the Standalone portal. By the end of this lab you will be able to investigate incidents, analyse entities, generate executive reports, and establish session management best practices, all powered by AI.

๐Ÿข Enterprise Use Case

A global retail company with 5,000 endpoints and a 4-person SOC team processes over 200 alerts per day across Defender XDR, Sentinel, and Entra ID. Their mean-time-to-investigate (MTTI) for high-severity incidents averages 45 minutes, and analysts spend 30% of their time writing incident reports for management. The CISO wants to deploy Security Copilot to reduce MTTI to under 15 minutes, automate executive-level reporting, and allow Tier 1 analysts to handle investigations that previously required Tier 2 expertise, effectively multiplying the team’s capacity without hiring additional staff.

🎯 What You Will Learn

  1. Provision Security Copilot capacity in the Azure portal and select the right SCU count
  2. Access the Standalone portal at securitycopilot.microsoft.com
  3. Enable and configure first-party and third-party plugins
  4. Craft effective natural-language prompts for security investigations
  5. Investigate incidents with multi-step, context-aware sessions
  6. Analyse suspicious entities by correlating data across Entra ID, Defender, and Sentinel
  7. Generate executive-level incident reports using Copilot
  8. Review Copilot responses for accuracy and verify source citations
  9. Save, pin, and share investigation sessions with teammates
  10. Monitor SCU consumption and optimise capacity

🔑 Why This Matters

SOCs are overwhelmed: the average enterprise generates over 11,000 security alerts per day, and the global cybersecurity talent shortage exceeds 3.4 million professionals. AI-augmented security operations address both challenges simultaneously. Security Copilot reduces investigation time by up to 65% according to Microsoft’s internal benchmarks, enables junior analysts to perform senior-level triage, and ensures consistent, reproducible investigations across the team. Mastering Copilot is no longer optional: it is a force multiplier that transforms how modern SOCs operate.

⚙️ Prerequisites

  • Azure subscription: Owner or Contributor role required to provision Security Copilot compute capacity
  • Security Administrator or Global Administrator role: needed to enable plugins and configure data access
  • At least one Microsoft security product deployed: Defender XDR, Microsoft Sentinel, or Entra ID Protection
  • Microsoft Entra ID P2 license: recommended for full identity investigation capabilities
  • Basic security operations knowledge: familiarity with incidents, alerts, and MITRE ATT&CK concepts
💡 Pro Tip: Start with 1 SCU for evaluation. You can scale up at any time without losing data. Review the full provisioning checklist: Get started with Security Copilot

Step 1 · Provision Security Copilot Capacity

Security Copilot runs on Security Compute Units (SCUs), which are billed per hour. Each SCU provides a fixed amount of compute capacity for prompts and sessions. You provision capacity through the Azure portal, then access the Copilot experience in the standalone portal or embedded within Defender XDR and Sentinel.

Portal Instructions

  1. Sign in to portal.azure.com
  2. Search for Microsoft Security Copilot in the top search bar
  3. Click Get started (or Create if prompted)
  4. Select your Azure subscription and Resource group (create a new one named rg-copilot-lab if needed)
  5. Choose a Capacity region: select the region closest to your SOC analysts for lowest latency
  6. Set Security Compute Units (SCUs) to 1 for evaluation
  7. Review the estimated cost and click Create
  8. Wait for the deployment to complete (typically 2–5 minutes)

Azure CLI

# PURPOSE: Create a dedicated resource group for Security Copilot lab resources
# WHY: Isolates Copilot resources for easy cost tracking and cleanup
# OUTPUT: JSON with resource group properties (id, location, provisioningState)
az group create \
  --name rg-copilot-lab \
  --location eastus \
  --tags Environment=Lab Project=CopilotLab

# PURPOSE: Provision Security Copilot with 1 Security Compute Unit (SCU)
# WHY: SCUs provide compute capacity for prompts and sessions - billed per hour
# COST: ~$4/hour per SCU - start with 1 for evaluation, scale up as needed
# REQUIRED ROLE: Owner or Contributor on the Azure subscription
# NOTE: There is no dedicated `az security copilot` command group; provision
#       through the generic resource command with the Microsoft.SecurityCopilot
#       provider, and verify property names against the current ARM schema
# OUTPUT: Capacity resource JSON including the provisioning state
az resource create \
  --resource-group rg-copilot-lab \
  --name copilot-capacity-lab \
  --location eastus \
  --resource-type "Microsoft.SecurityCopilot/capacities" \
  --properties '{"numberOfUnits": 1, "crossGeoCompute": "NotAllowed", "geo": "US"}'
💡 Pro Tip: Enable the Usage monitoring dashboard from day one. Track SCU consumption by user, plugin, and prompt type to right-size your capacity before committing to production-level SCUs.
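Before committing to an SCU count, it helps to sanity-check the monthly spend. A quick back-of-envelope estimate in shell, assuming the approximate ~$4/hour list price mentioned above (verify current pricing before budgeting):

```shell
# Estimate monthly Security Copilot cost for a given SCU count
SCU_COUNT=1
RATE_PER_HOUR=4        # USD, approximate list price - verify before budgeting
HOURS_PER_MONTH=730    # average hours in a month
MONTHLY_COST=$((SCU_COUNT * RATE_PER_HOUR * HOURS_PER_MONTH))
echo "Estimated cost: ~\$${MONTHLY_COST}/month for ${SCU_COUNT} SCU(s)"
```

At 1 SCU this works out to roughly $2,920 per month if the capacity runs continuously, which is why scaling down or deleting lab capacity (Step 10) matters.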

Step 2 · Access the Standalone Portal

The standalone experience is where you interact with Copilot through natural-language prompts in a session-based interface. Each session maintains context from previous prompts, enabling multi-step investigations without re-explaining the scenario.

Portal Instructions

  1. Open a new browser tab and navigate to securitycopilot.microsoft.com
  2. Sign in with the same account used to provision capacity
  3. Review the Home screen: note the session history panel on the left and the prompt bar at the bottom
  4. Click New session to start a fresh investigation workspace
  5. Explore the interface elements:
    • Prompt bar: where you type natural-language queries
    • Session list: left panel showing all saved sessions
    • Sources icon: shows which plugins Copilot used to generate a response
    • Pin / Share: session management controls
💡 Pro Tip: Security Copilot is also embedded directly inside Microsoft Defender XDR (security.microsoft.com) and Microsoft Sentinel. The embedded experience shows Copilot in context, for example an incident summary panel directly on the Incident page. This lab focuses on the standalone experience first; you will explore the embedded experience in Lab 04.

Step 3 · Configure Plugin Sources

Plugins are what connect Security Copilot to your data. Without the right plugins enabled, Copilot cannot query your security products. Each plugin gives Copilot access to a specific product’s data and capabilities.

Portal Instructions

  1. In the standalone portal, click Settings (gear icon) > Plugins
  2. Review the Microsoft plugins section and enable each one that corresponds to your deployed products:
    • Microsoft Defender XDR: incidents, alerts, device data
    • Microsoft Sentinel: KQL queries, workbooks, hunting
    • Microsoft Entra: identity risk, sign-in data, user profiles
    • Microsoft Intune: device compliance, configuration profiles
    • Microsoft Purview: DLP, sensitivity labels, compliance data
    • Microsoft Defender Threat Intelligence (MDTI): threat articles, IOC lookups
  3. Toggle each relevant plugin to On
  4. For each plugin, verify the Connection status shows Connected
  5. Optionally review the Custom plugins tab for API-based integrations
💡 Pro Tip: Enable the Microsoft Defender Threat Intelligence plugin even if you don’t have a separate TI subscription. It provides built-in threat intelligence from Microsoft’s global sensor network at no additional cost.
โš ๏ธ Important: If a plugin shows Not connected, verify that you have the required permissions in the corresponding product portal. For example, the Sentinel plugin requires at least Sentinel Reader role on the workspace. See Plugin requirements.

Step 4 · Run Your First Prompt

Now it’s time to interact with Copilot! Start with a broad prompt to explore what data is available, then progressively narrow your focus. Good prompts are specific, contextual, and tell Copilot what format you want the answer in.

Sample Prompts to Try

# Prompt 1: Get an overview of recent incidents
# PLUGINS INVOKED: Microsoft Defender XDR + Microsoft Sentinel (auto-selected)
# PURPOSE: Quick situational awareness of your current threat landscape
# OUTPUT: List of high/critical incidents with title, severity, entities, and status
# TIP: Copilot queries both XDR and Sentinel to aggregate cross-product incidents
Tell me about the latest critical and high-severity incidents
in my environment from the last 7 days. Include the incident
title, severity, affected entities, and current status.

What to Expect

  • Copilot queries your Defender XDR and Sentinel data to find recent high-severity incidents
  • Each incident is summarised with title, severity, affected entities (users, devices, IPs), current status, and recommended next steps
  • Click the source citations (numbered references) to verify the data in the original product portal
  • If no incidents are found, try: Show me the most recent 5 alerts from any severity level

Prompt Engineering Best Practices

  1. Be specific: “Show me failed sign-ins from the last 24 hours” beats “Show me sign-in data”
  2. Specify context: mention the time range, severity, or entities you care about
  3. Request a format: “Show this as a table” or “Give me a summary in 3 bullet points”
  4. Build on previous responses: each follow-up prompt inherits session context
  5. Use product-specific language: “Run a KQL query” or “Check the MITRE ATT&CK technique”
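These practices can be baked into reusable prompt templates so every analyst asks in the same shape. A minimal shell sketch (the parameter values are illustrative placeholders, not Copilot syntax):

```shell
# Compose a specific, scoped prompt from reusable parameters
SEVERITY="high"
TIME_RANGE="last 24 hours"
OUTPUT_FORMAT="a table"
PROMPT="Show me ${SEVERITY}-severity failed sign-ins from the ${TIME_RANGE}. Present the results as ${OUTPUT_FORMAT}."
echo "$PROMPT"
```

Parameterised templates like this are the idea behind promptbooks, which you will build in Lab 02.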

Step 5 · Investigate an Incident

Select one of the incidents from the previous response and deep-dive into it. Copilot maintains session context, so it knows which incident you’re referring to.

Sample Investigation Prompts

# Prompt 2: Deep-dive into an incident
# PLUGIN: Microsoft Defender XDR (incident timeline, alert correlation)
# PURPOSE: Builds a chronological event timeline for a specific incident
# WHY: Session context carries forward - Copilot knows which incident you mean
# OUTPUT: Timeline with alerts, entities, MITRE ATT&CK techniques, lateral movement
Give me a detailed timeline of the first incident you listed.
Include all related alerts, affected entities, MITRE ATT&CK
techniques, and evidence of lateral movement or data exfiltration.

# Prompt 3: Ask about the attack chain
# PLUGINS: Microsoft Defender XDR + Threat Intelligence (MDTI)
# PURPOSE: Maps the full attack kill chain from initial access to current state
# OUTPUT: Step-by-step kill chain with MITRE ATT&CK technique IDs (e.g., T1566)
What was the initial access vector for this incident?
Walk me through the kill chain step by step.

What to Look For

  • Chronological timeline: ordered list of events from initial access through current state
  • ATT&CK mapping: each event is mapped to a MITRE technique (e.g., T1566 Phishing)
  • Entity graph: users, devices, IPs, and processes involved
  • Scope assessment: how far the attacker progressed in the kill chain
  • Source citations: verify each claim by clicking the source link
โš ๏ธ Important: Always verify Copilot’s responses against the source data. Copilot is a powerful accelerator but it can occasionally hallucinate or misinterpret data. Treat AI-generated findings as leads that require human validation, not as conclusive evidence.

Step 6 · Analyse a Suspicious Entity

Entity analysis is one of Copilot’s most powerful features. It aggregates data across Entra ID, Defender for Endpoint, and Sentinel to build a comprehensive profile in seconds, a task that would take an analyst 15–20 minutes manually.

Sample Entity Analysis Prompts

# Prompt 4: Investigate a user entity
# PLUGINS: Entra ID (risk, sign-ins) + Defender XDR (alerts) + Intune (devices)
# PURPOSE: Cross-product user profiling - aggregates identity, endpoint, and app data
# OUTPUT: Risk level, MFA status, anomalous sign-ins, device list, associated alerts
What do we know about user john.doe@contoso.com?
Show me their risk level, recent sign-in anomalies,
devices they have used, and any associated alerts.

# Prompt 5: Investigate an IP address
# PLUGINS: MDTI (reputation, geolocation) + Defender XDR (incident correlation)
# PURPOSE: Determines if an IP is malicious and whether it appears in your tenant
# NOTE: 203.0.113.42 is from the RFC 5737 documentation range (TEST-NET-3);
#       substitute a real IP from your own alerts for meaningful results
# OUTPUT: Geolocation, TI reputation verdict, related incidents, historical activity
Analyse the IP address 203.0.113.42. Is this a
known malicious IP? Show geolocation, reputation,
and any connections to incidents in our environment.

# Prompt 6: Investigate a device
# PLUGINS: Defender for Endpoint (alerts, CVEs) + Intune (compliance status)
# PURPOSE: Full device security posture assessment in a single query
# OUTPUT: Active alerts, CVE exposure, compliance status, OS version, last seen time
Show me the security posture of device DESKTOP-LAB01.
Include recent alerts, software vulnerabilities, and
whether the device is compliant with our Intune policies.

Cross-Product Correlation

Copilot automatically correlates data across multiple products. When you ask about a user, it pulls data from:

  • Entra ID: risk level, MFA status, group memberships, recent sign-ins
  • Defender for Endpoint: device associations, threat detections, vulnerabilities
  • Defender for Cloud Apps: OAuth app consents, cloud activity
  • Microsoft Sentinel: log analytics data, watchlist matches
  • Threat Intelligence: related IOCs, threat actor associations

Step 7 · Generate an Incident Report

Automated report generation saves analysts 30–45 minutes per major incident. Copilot produces well-structured summaries suitable for leadership, compliance teams, or incident management systems.

Sample Report Prompts

# Prompt 7: Executive summary
# PURPOSE: Auto-generates a CISO-ready incident briefing from session context
# WHY: Saves analysts 30-45 min of manual report writing per incident
# OUTPUT: Non-technical summary with business impact, status, root cause, next steps
# TIP: Review and add human context (stakeholder concerns, budget) before sharing
Generate an executive summary of this incident suitable for
sharing with the CISO. Include the business impact, current
containment status, root cause, and recommended next steps.

# Prompt 8: Technical detail report
# PURPOSE: Generates a detailed technical IR report for the SOC/IR team
# OUTPUT: Full timeline, evidence artifacts, IOCs (IPs/hashes/URLs), MITRE mapping,
#         and prioritised remediation actions with responsible teams
# TIP: Copy this into your ITSM tool (ServiceNow, Jira) for formal incident tracking
Create a detailed technical report for this incident.
Include the full attack timeline, evidence artifacts,
IOCs discovered, MITRE techniques, and remediation
actions taken or recommended.

Report Review Checklist

  1. Verify all facts against the source data: check incident IDs, timestamps, and entity names
  2. Confirm the severity assessment matches your organisation’s classification
  3. Review the recommended actions for accuracy and feasibility
  4. Add context that only a human would know (e.g., business impact, stakeholder concerns)
  5. Copy the report to your incident management system (ServiceNow, Jira, etc.)

Step 8 · Save and Share Sessions

Sessions are your investigation record. They maintain full context and can be reviewed by new team members to understand the entire investigation thread.

Session Management Steps

  1. Click the Pin icon (📌) in the top-right of the session to save it for future reference
  2. Click the pencil icon next to the session title and rename it descriptively (e.g., INC-2026-0042-Phishing-Investigation)
  3. Click Share and select team members who should have access
  4. Shared recipients receive the session with full context including all prompts and responses
  5. Review old sessions periodically and archive completed investigations

Session Hygiene Best Practices

  • One session per incident: avoid mixing multiple investigations in a single session
  • Descriptive naming: use your incident ID or ticket number in the session name
  • Pin important sessions: pinned sessions appear at the top of your session list
  • Team sharing: share sessions with incoming shifts for seamless handoffs
  • Regular cleanup: unpin and archive sessions older than your retention policy
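The naming convention above is easy to script, for example when generating session names from your ticketing system. A small sketch (the ticket fields are illustrative):

```shell
# Build a descriptive session name from an incident ticket
INCIDENT_ID="INC-2026-0042"   # example ticket number from your ITSM tool
THREAT_TYPE="Phishing"        # example incident category
SESSION_NAME="${INCIDENT_ID}-${THREAT_TYPE}-Investigation"
echo "$SESSION_NAME"
```

Generating names this way keeps every analyst's sessions searchable by ticket number during shift handoffs.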

Step 9 · Monitor SCU Usage

Understanding SCU consumption is critical for cost management. Each prompt and plugin invocation consumes compute capacity. Monitoring usage helps you right-size your SCU allocation.

Portal Instructions

  1. In the standalone portal, click Settings > Owner settings
  2. Navigate to Usage monitoring
  3. Review the dashboard:
    • Total SCU consumption: usage-over-time chart
    • Usage by user: identify heavy users
    • Usage by plugin: which plugins consume the most capacity
    • Average usage per prompt: understand per-interaction cost
  4. Set up alerts if consumption exceeds 80% of provisioned capacity
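The 80% alert threshold from step 4 is a simple calculation you can also run outside the portal. A sketch, using made-up figures in place of the values you would read from the Usage monitoring dashboard:

```shell
# Flag SCU utilisation above an alert threshold
PROVISIONED_SCU_HOURS=24   # e.g., 1 SCU provisioned for a 24-hour window
CONSUMED_SCU_HOURS=20      # example figure - read yours from the dashboard
THRESHOLD=80               # alert threshold in percent
UTILIZATION=$((CONSUMED_SCU_HOURS * 100 / PROVISIONED_SCU_HOURS))
if [ "$UTILIZATION" -ge "$THRESHOLD" ]; then
  echo "WARN: SCU utilisation at ${UTILIZATION}% - consider adding capacity"
else
  echo "OK: SCU utilisation at ${UTILIZATION}%"
fi
```

In this example 20 of 24 SCU-hours consumed works out to 83%, which would trip the 80% alert.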

PowerShell: Check Usage via API

# PURPOSE: Authenticate to Microsoft Graph with security read permissions
# SCOPE: SecurityEvents.Read.All is illustrative - the capacity endpoint may
#        require different permissions; check the current Graph documentation
# PREREQ: Install-Module Microsoft.Graph -Scope CurrentUser
Connect-MgGraph -Scopes "SecurityEvents.Read.All"

# PURPOSE: Query the Copilot capacity API to check provisioned SCU count
# ENDPOINT: Beta API - subject to change; verify on learn.microsoft.com before use
# OUTPUT: JSON array of capacity resources with name, region, and SCU config
$capacity = Invoke-MgGraphRequest -Method GET `
  -Uri "https://graph.microsoft.com/beta/security/copilot/capacities"

# PURPOSE: Display capacity details in a human-readable format
# OUTPUT: Capacity name, deployment region, and current SCU count + billing state
$capacity.value | Select-Object name, location, properties | Format-List
💡 Pro Tip: Complex prompts that invoke multiple plugins (e.g., “Investigate this user across all products”) consume more SCUs than simple queries. Train your team to be specific and targeted with prompts to optimise consumption.

Step 10 · Clean Up & Next Steps

If you are running this in a lab environment, consider scaling down or removing the capacity to avoid ongoing charges.

🧹 Clean Up Resources

# PURPOSE: Stop hourly SCU billing for the lab capacity
# NOTE: Provisioned capacity has a 1-SCU minimum and cannot be scaled to 0,
#       so deleting the capacity resource is how you stop charges entirely;
#       you can provision new capacity again at any time
# NOTE: There is no dedicated `az security copilot` command group - use the
#       generic resource command with the Microsoft.SecurityCopilot provider
az resource delete \
  --resource-group rg-copilot-lab \
  --name copilot-capacity-lab \
  --resource-type "Microsoft.SecurityCopilot/capacities"

# PURPOSE: Permanently delete all lab resources to stop ALL charges
# FLAGS: --yes skips interactive confirmation prompt
#        --no-wait returns immediately (deletion runs asynchronously)
# WARNING: Deletes the resource group and ALL resources inside it - irreversible
az group delete --name rg-copilot-lab --yes --no-wait

🚀 Next Steps

💡 Pro Tip: Keep the capacity running for a few days during your evaluation period. The more sessions you create and share with your team, the better you will understand the value proposition and be able to justify production deployment to stakeholders.

📚 Documentation Resources

  • Get started with Security Copilot: Initial setup, provisioning, and first-run guide
  • Manage Security Copilot usage: Monitor and manage SCU consumption and capacity
  • Using prompts in Security Copilot: Best practices for writing effective prompts
  • Plugins in Security Copilot: Overview of built-in and custom plugins
  • Authentication in Security Copilot: Permissions, roles, and access control configuration
  • Navigating Security Copilot: Standalone and embedded experience overview
  • Prompting tips and techniques: Advanced prompt engineering for security scenarios
  • Security Copilot FAQ: Frequently asked questions about pricing, data, and privacy

Summary

What You Accomplished

  • Provisioned Microsoft Security Copilot capacity with Security Compute Units (SCUs) in the Azure portal
  • Accessed and navigated the standalone Security Copilot portal at securitycopilot.microsoft.com
  • Enabled and configured first-party plugins for Defender XDR, Sentinel, Entra ID, and Threat Intelligence
  • Crafted effective natural-language prompts to investigate incidents and analyse entities
  • Performed a multi-step incident investigation using session-based context
  • Analysed suspicious users, IP addresses, and devices with cross-product correlation
  • Generated executive and technical incident reports using Copilot
  • Saved, pinned, renamed, and shared investigation sessions for team collaboration
  • Monitored SCU usage and learned capacity optimisation strategies
  • Cleaned up lab resources and reviewed cost management best practices

Next Steps

  • Explore the embedded Copilot experience inside Defender XDR and Sentinel portals
  • Build custom promptbooks for repeatable investigation workflows
  • Create custom plugins to connect Copilot to internal tools and APIs
  • Continue to Lab 02 - Advanced Promptbooks and Custom Plugins
← All Labs | Next Lab →