Provision a Log Analytics workspace, deploy Microsoft Sentinel, configure the Microsoft Entra ID data connector, verify log ingestion with KQL, and create your first analytics rule to detect suspicious sign-in patterns.
In this hands-on lab you will build a fully operational Microsoft Sentinel deployment from scratch. Starting with a dedicated resource group and Log Analytics workspace, you will onboard Sentinel, connect your Microsoft Entra ID tenant to stream identity telemetry, explore sign-in and audit log schemas using KQL, and create a scheduled analytics rule that detects brute-force credential attacks. By the end of this lab you will have a working cloud-native SIEM that actively monitors identity-based threats.
A mid-size financial services company with 2,500 employees operates a hybrid identity infrastructure spanning on-premises Active Directory and Microsoft Entra ID. Their security team has identified gaps in visibility: credential-stuffing attacks against employee accounts go undetected for days, conditional access policy violations are only discovered during quarterly audits, and suspicious access patterns from unfamiliar geolocations are not flagged in real time. The CISO has mandated centralized security monitoring to detect credential attacks, policy violations, and anomalous access patterns, all funneled into a single pane of glass with automated alerting and incident management.
Identity is the #1 attack vector in modern cybersecurity. Over 80% of breaches involve compromised credentials, from password spraying and credential stuffing to phishing-based token theft. Traditional on-premises SIEM solutions struggle with cloud-scale identity data and lack native integration with Microsoft Entra ID. Microsoft Sentinel provides a cloud-native SIEM and SOAR platform purpose-built for this challenge: it ingests identity logs at cloud scale, applies machine learning and analytics rules to detect attacks in real time, and orchestrates automated response through playbooks. Mastering Sentinel's identity monitoring capabilities is essential for any security operations team defending a Microsoft 365 or hybrid environment.
A resource group is a logical container that holds all the Azure resources for this lab. Using a dedicated resource group makes it easy to track costs, manage permissions, and clean up when you are finished.
Create a resource group named `rg-sentinel-lab` in a region of your choice (for example, East US or West Europe).

# Set variables for reuse across all lab steps
RESOURCE_GROUP="rg-sentinel-lab" # Logical container for all lab resources
LOCATION="eastus" # Azure region - choose one close to you
# Create the resource group
# --name : Unique name within subscription; "rg-" prefix follows naming convention
# --location : Datacenter region where metadata is stored
# --tags : Key-value pairs for cost tracking and cleanup - filter by these later
# Expected output: JSON object with "provisioningState": "Succeeded"
az group create \
--name $RESOURCE_GROUP \
--location $LOCATION \
--tags Environment=Lab Project=SentinelLab

The same step with Azure PowerShell:

# Set variables for reuse across all lab steps
$ResourceGroup = "rg-sentinel-lab" # Logical container - all lab resources go here
$Location = "eastus" # Azure region - pick one near you for lower latency
# Create the resource group
# -Name : Must be unique within the subscription
# -Location : Determines where resource group metadata is stored
# -Tag : Hashtable of tags for filtering and cost management
# Expected output: ResourceGroupName, Location, ProvisioningState = Succeeded
New-AzResourceGroup `
-Name $ResourceGroup `
-Location $Location `
-Tag @{ Environment = "Lab"; Project = "SentinelLab" }

All lab resources are tagged with `Environment=Lab`. This makes it trivially easy to find and delete them later using `az group delete --name rg-sentinel-lab --yes --no-wait`.

Microsoft Sentinel sits on top of a Log Analytics workspace, which is the underlying data store. All ingested security data (sign-in logs, audit logs, alerts, incidents) is stored here as tables that you query using KQL (Kusto Query Language).
Create a Log Analytics workspace named `law-sentinel-lab` inside the `rg-sentinel-lab` resource group.

# Set variables - reuse these in subsequent steps
RESOURCE_GROUP="rg-sentinel-lab"
WORKSPACE_NAME="law-sentinel-lab" # "law-" prefix = Log Analytics Workspace naming convention
LOCATION="eastus" # Must match the resource group region
# Create the Log Analytics workspace - this is the data store Sentinel sits on
# --retention-time 90 : Keep logs for 90 days (first 90 days free with Sentinel)
# --sku PerGB2018 : Pay-as-you-go pricing tier; best for lab/dev workloads
# Expected output: JSON with "provisioningState": "Succeeded"
az monitor log-analytics workspace create \
--resource-group $RESOURCE_GROUP \
--workspace-name $WORKSPACE_NAME \
--location $LOCATION \
--retention-time 90 \
--sku PerGB2018
# Verify the workspace was created and confirm settings
# --query : JMESPath expression to extract specific fields into a clean table
# --output table : Formats output as a human-readable table
# Expected output: Table showing Name, Location, 90-day Retention, PerGB2018 SKU
az monitor log-analytics workspace show \
--resource-group $RESOURCE_GROUP \
--workspace-name $WORKSPACE_NAME \
--query "{Name:name, Location:location, Retention:retentionInDays, Sku:sku.name}" \
--output table

This lab uses the PerGB2018 pricing tier, which suits lab workloads. For high-volume production environments (500+ GB/day), consider Commitment Tiers to save up to 65% on ingestion costs.

Microsoft Sentinel is enabled as a solution on top of your Log Analytics workspace. Once deployed, it adds the Sentinel blades (Incidents, Workbooks, Analytics, Hunting, Data Connectors, and more) to your workspace.
Enable Microsoft Sentinel on the `law-sentinel-lab` workspace:

# Install the Sentinel CLI extension (if not already installed)
# --upgrade : Updates to latest version if already present
az extension add --name sentinel --upgrade
# Get the workspace resource ID - needed for Sentinel onboarding
# --query id : Extracts just the ARM resource ID string
# --output tsv : Returns plain text (no quotes) for use in variables
WORKSPACE_ID=$(az monitor log-analytics workspace show \
--resource-group $RESOURCE_GROUP \
--workspace-name $WORKSPACE_NAME \
--query id --output tsv)
# Enable Microsoft Sentinel on the workspace
# --name "default" : Standard onboarding state name
# --customer-managed-key false: Use Microsoft-managed encryption (standard for labs)
# This adds SIEM capabilities (Incidents, Analytics, Hunting, etc.) to your workspace
az sentinel onboarding-state create \
--resource-group $RESOURCE_GROUP \
--workspace-name $WORKSPACE_NAME \
--name "default" \
--customer-managed-key false
# Verify Sentinel is enabled
echo "Microsoft Sentinel has been enabled on workspace: $WORKSPACE_NAME"

The Microsoft Entra ID (formerly Azure Active Directory) data connector streams identity telemetry into your Sentinel workspace. This includes sign-in logs (interactive and non-interactive), audit logs, provisioning logs, and managed identity sign-in logs.
If the data connector does not configure automatically, you can set up diagnostic settings directly:
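Entra ID diagnostic settings live under the tenant-level `microsoft.aadiam` provider rather than a regular ARM resource, so they are created in the portal or via a REST PUT. As a hedged sketch, the request body such a PUT would carry can be built and validated in Python; the setting name `sentinel-identity-logs`, the placeholder workspace path, and the helper name below are illustrative assumptions:

```python
import json

# Hypothetical workspace resource ID - substitute your own subscription GUID.
WORKSPACE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-sentinel-lab"
    "/providers/Microsoft.OperationalInsights/workspaces/law-sentinel-lab"
)

def build_diagnostic_payload(workspace_id: str, categories: list[str]) -> dict:
    """Build the request body for a diagnostic setting that routes the
    given Entra ID log categories to a Log Analytics workspace."""
    return {
        "properties": {
            "workspaceId": workspace_id,
            "logs": [{"category": c, "enabled": True} for c in categories],
        }
    }

payload = build_diagnostic_payload(WORKSPACE_ID, ["SignInLogs", "AuditLogs"])
print(json.dumps(payload, indent=2))
```

You would PUT this body to the `microsoft.aadiam/diagnosticSettings/sentinel-identity-logs` management endpoint (for example with `az rest`); verify the exact URL and API version against the current Microsoft documentation before relying on them.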
In the Microsoft Entra ID portal, create a diagnostic setting named `sentinel-identity-logs` that sends sign-in and audit logs to the `law-sentinel-lab` workspace.

Before building detections, confirm that identity log data is flowing into your workspace. Open the Logs blade in Sentinel (or directly in your Log Analytics workspace) and run the following KQL queries.
// Retrieve the 10 most recent sign-in log entries
// PURPOSE: Verify that sign-in data is flowing into the workspace
// If this returns 0 results, the Entra ID connector is not yet active
SigninLogs
| take 10 // Grab any 10 rows (fastest way to verify data exists)
| project
TimeGenerated, // UTC timestamp of the authentication event
UserPrincipalName, // User's email-style identity (e.g., user@contoso.com)
ResultType, // Error code: 0 = success, 50126 = bad password, etc.
ResultDescription, // Human-readable text for the ResultType code
IPAddress, // Source IP of the sign-in request
AppDisplayName, // Target app (e.g., "Azure Portal", "Microsoft Teams")
Location, // Geo-location derived from IPAddress
ClientAppUsed, // Auth protocol: "Browser", "Mobile Apps", "IMAP", etc.
ConditionalAccessStatus // "success", "failure", or "notApplied"

// Retrieve the 10 most recent audit log entries
// PURPOSE: Verify that directory change/admin activity data is being ingested
// AuditLogs capture admin actions like user creation, role changes, app registrations
AuditLogs
| take 10 // Grab any 10 rows to confirm data flow
| project
TimeGenerated, // UTC timestamp of the directory operation
OperationName, // Action performed (e.g., "Add user", "Update policy")
Result, // "success" or "failure" - track admin action outcomes
InitiatedBy, // Who/what triggered it (user UPN or app/service principal)
TargetResources, // The object acted upon (user, group, app, role, etc.)
Category // Category grouping: "UserManagement", "Policy", etc.

// Check how many records have been ingested in the last 24 hours
// PURPOSE: Validate data volume and confirm both log types are actively flowing
// A healthy tenant typically generates hundreds to thousands of sign-ins/day
union SigninLogs, AuditLogs // Combine both tables into a single result set
| where TimeGenerated > ago(24h) // Filter to only the last 24 hours
| summarize
TotalRecords = count(), // Total events across both tables
SignIns = countif(Type == "SigninLogs"), // Count of authentication events only
Audits = countif(Type == "AuditLogs") // Count of directory change events only
| project TotalRecords, SignIns, Audits
// Expected output: Single row with three columns showing ingestion counts
// If SignIns = 0, check Entra ID diagnostic settings and P1/P2 license
// If Audits = 0, verify AuditLogs checkbox is enabled in the data connector

If the `SigninLogs` table does not exist, the diagnostic settings may not have been applied correctly. Re-check Step 4.

Understanding the schema is critical for writing effective detection rules. The SigninLogs table contains rich telemetry about every authentication event. Let's explore its structure and key fields.
// List all columns in the SigninLogs table
// PURPOSE: Discover every available field for building detection queries
// This is essential before writing KQL - know what data you have to work with
SigninLogs
| getschema // Returns metadata about the table structure (not data)
| project ColumnName, ColumnType, DataType // Show column name, KQL type, and storage type
| order by ColumnName asc // Alphabetical for easy reference
// Expected output: ~40+ columns including TimeGenerated, UserPrincipalName,
// ResultType, IPAddress, LocationDetails, ConditionalAccessStatus, MfaDetail, etc.

Key fields in the SigninLogs table:

| Field | Description |
|---|---|
| TimeGenerated | UTC timestamp of the sign-in event |
| UserPrincipalName | The user's UPN (email-style identifier) |
| ResultType | Numeric error code (0 = success) |
| ResultDescription | Human-readable description of the result |
| IPAddress | Source IP of the authentication request |
| Location | Geolocation (country/city) from the IP |
| AppDisplayName | Application the user signed into |
| ClientAppUsed | Client protocol (browser, mobile app, etc.) |
| ConditionalAccessStatus | Conditional Access evaluation result |
| RiskLevelDuringSignIn | Risk level at sign-in time (none, low, medium, high) |
| MfaDetail | MFA method and result details |
| Code | Meaning | Security Relevance |
|---|---|---|
| 0 | Success | Successful authentication |
| 50126 | Invalid username or password | Brute force / credential stuffing indicator |
| 50053 | Account locked | Lockout due to too many failed attempts |
| 53003 | Blocked by Conditional Access | CA policy enforcement; may indicate policy bypass attempts |
| 50076 | MFA required | User has not completed MFA challenge |
| 50074 | Strong auth required | MFA challenge was issued |
| 50140 | Keep me signed in interrupt | User was prompted for KMSI |
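For quick triage outside the portal, the codes in this table can be folded into a small lookup. A sketch in Python; the codes and meanings come from the table above, while the boolean "attack indicator" flags and the helper name are illustrative judgment calls to tune for your environment:

```python
# ResultType codes from the table above; the boolean marks codes this lab's
# detections treat as credential-attack indicators (a judgment call - tune it).
RESULT_TYPES = {
    "0":     ("Success", False),
    "50126": ("Invalid username or password", True),   # brute force / stuffing
    "50053": ("Account locked", True),                 # lockout from failures
    "53003": ("Blocked by Conditional Access", True),  # possible bypass attempt
    "50076": ("MFA required", False),
    "50074": ("Strong auth required", False),
    "50140": ("Keep me signed in interrupt", False),
}

def is_failure_of_interest(result_type: str) -> bool:
    """Return True when the code suggests credential-attack activity."""
    _meaning, suspicious = RESULT_TYPES.get(result_type, ("Unknown", False))
    return suspicious

print(is_failure_of_interest("50126"))  # True
```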
Now that you understand the schema, let's write KQL queries to detect brute-force sign-in attacks. We'll start with a basic detection and then add context for richer incident data.
This query identifies users with 10 or more failed sign-in attempts within a 1-hour window, a strong indicator of credential brute-forcing or password spraying.
// Brute Force Detection: 10+ failed sign-ins per user in 1 hour
// PURPOSE: Identify accounts under active credential attack
// MITRE ATT&CK: T1110 - Brute Force
// WHY: Attackers automate password guessing; 10+ failures in 1h is abnormal
SigninLogs
| where TimeGenerated > ago(1h) // Look at the last hour only
| where ResultType in ("50126", "50053", "50074") // Filter to authentication failures:
// 50126 = Invalid username or password (most common brute-force indicator)
// 50053 = Account is locked (lockout triggered by too many failures)
// 50074 = Strong authentication required (MFA challenge issued but not completed)
| summarize
FailedAttempts = count(), // Total failed attempts per user
DistinctIPs = dcount(IPAddress), // Unique source IPs - >1 suggests distributed attack
IPAddresses = make_set(IPAddress, 10), // Collect up to 10 unique IPs for investigation
Applications = make_set(AppDisplayName, 5),// Which apps were targeted (Portal, Teams, etc.)
FirstAttempt = min(TimeGenerated), // When the attack started
LastAttempt = max(TimeGenerated) // Most recent attempt
by UserPrincipalName // Group by targeted user account
| where FailedAttempts >= 10 // Threshold: 10+ failures = likely attack
| sort by FailedAttempts desc // Most-attacked accounts first
// Expected output: Users under attack with source IPs, apps, and timeline
// ACTION: Investigate high-count users; check if a success followed the failures

This enriched version adds geolocation data and checks whether a successful sign-in followed the failures, a critical indicator of account compromise.
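The failure-then-success correlation can be sanity-checked with a short Python sketch over synthetic events before writing it in KQL. All event data, names, and the threshold below are made up for illustration:

```python
from collections import defaultdict

THRESHOLD = 10  # same failure threshold as the KQL query

# Synthetic sign-in events as (user, result_type) pairs.
# "50126" = bad password, "50053" = account locked, "0" = success.
events = [("alice@contoso.com", "50126")] * 12 + [
    ("alice@contoso.com", "0"),    # a success AFTER many failures -> likely compromise
    ("bob@contoso.com", "50126"),  # one failure is everyday noise, not an attack
]

failures = defaultdict(int)
successes = defaultdict(int)
for user, result in events:
    if result in ("50126", "50053"):
        failures[user] += 1
    elif result == "0":
        successes[user] += 1

# Mirror the leftouter join: alert on over-threshold users, flagging any
# that also logged a success inside the same window.
alerts = {
    user: {"failed_attempts": count, "followed_by_success": successes[user] > 0}
    for user, count in failures.items()
    if count >= THRESHOLD
}
print(alerts)
# {'alice@contoso.com': {'failed_attempts': 12, 'followed_by_success': True}}
```

Like the KQL, this flags any success in the window rather than strictly ordering events; for a window as short as an hour that approximation is usually acceptable.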
// Enriched Brute Force Detection with Location and Success Check
// PURPOSE: Detect brute-force attacks AND check if the attacker succeeded
// WHY: Failures alone are noisy - failures FOLLOWED BY success = likely compromise
// MITRE ATT&CK: T1110 - Brute Force (with post-compromise correlation)
let threshold = 10; // Min failed attempts to trigger - tune based on your environment
let timeWindow = 1h; // Detection window - 1 hour balances speed vs. noise
//
// STEP 1: Find users with excessive failed sign-ins
let failedSignIns = SigninLogs
| where TimeGenerated > ago(timeWindow)
| where ResultType in ("50126", "50053") // 50126 = bad password, 50053 = account locked
| summarize
FailedAttempts = count(), // Total failures per user
DistinctIPs = dcount(IPAddress), // Multiple IPs = distributed/botnet attack
IPAddresses = make_set(IPAddress, 10), // Source IPs for threat intel lookups
Locations = make_set(Location, 10), // Geographic origins of the attack
Applications = make_set(AppDisplayName, 5), // Targeted applications
FirstAttempt = min(TimeGenerated), // Attack start time
LastAttempt = max(TimeGenerated) // Attack end time
by UserPrincipalName
| where FailedAttempts >= threshold; // Only users exceeding the threshold
//
// STEP 2: Check if ANY of those users also had a successful sign-in
let successfulSignIns = SigninLogs
| where TimeGenerated > ago(timeWindow)
| where ResultType == "0" // ResultType 0 = successful authentication
| summarize SuccessCount = count(), SuccessIPs = make_set(IPAddress, 5) by UserPrincipalName;
//
// STEP 3: Correlate - join failures with successes to find compromised accounts
failedSignIns
| join kind=leftouter (successfulSignIns) on UserPrincipalName // leftouter: keep all failures
| extend FollowedBySuccess = iff(isnotempty(SuccessCount), true, false) // Flag compromises
| project
UserPrincipalName,
FailedAttempts,
DistinctIPs,
IPAddresses,
Locations,
Applications,
FirstAttempt,
LastAttempt,
FollowedBySuccess, // TRUE = CRITICAL: attacker likely guessed the password
SuccessIPs // IP that succeeded - compare with FailedIPs for attribution
| sort by FollowedBySuccess desc, FailedAttempts desc
// Expected output: Users sorted by risk - FollowedBySuccess=true at top
// ACTION: Immediately investigate FollowedBySuccess=true rows - reset password & revoke sessions

To reduce noise further, add `dcount(IPAddress) > 3` as an additional filter: distributed attacks from multiple IPs are more suspicious than a single user retrying from one device.

The FollowedBySuccess field is critical in triage. Failed attempts followed by a success may indicate a compromised account: the attacker guessed the correct password. Prioritize these incidents for immediate investigation.

Now let's turn the KQL query into a scheduled analytics rule that automatically creates incidents when brute-force attacks are detected.
Name the rule "Brute Force Detection - Multiple Failed Sign-ins" and use a description such as: "Detects accounts with 10 or more failed sign-in attempts within a 1-hour window, indicating potential brute-force or password spray attacks. Enriched with source IP, geolocation, and success-after-failure correlation." Paste the following KQL query into the Rule query field:
// Analytics Rule Query: Brute Force Detection
// This query runs on a schedule (every 1h) and generates alerts automatically
// MITRE ATT&CK: T1110 - Brute Force
SigninLogs
| where ResultType in ("50126", "50053") // 50126 = wrong password, 50053 = locked out
| summarize
FailedAttempts = count(), // Total failures - used for threshold check
DistinctIPs = dcount(IPAddress), // Unique IPs - distributed attacks use many
IPAddresses = make_set(IPAddress, 10), // Collect IPs for entity mapping in incidents
Locations = make_set(Location, 10), // Geo data for investigation context
Applications = make_set(AppDisplayName, 5), // Which apps were targeted
FirstAttempt = min(TimeGenerated), // Attack window start
LastAttempt = max(TimeGenerated) // Attack window end
by UserPrincipalName // One row per targeted user
| where FailedAttempts >= 10 // Alert threshold - tune for your environment
// Sentinel Entity Mapping: Account -> UserPrincipalName, IP -> IPAddresses

Entity mapping enables Sentinel to correlate entities across incidents and power the investigation graph. Configure the following mappings:
- Account entity: Identifier FullName -> Column: UserPrincipalName
- IP entity: Identifier Address -> Column: IPAddresses

With the analytics rule in place, let's generate controlled test events and verify that Sentinel creates an incident.
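When the same rule is deployed as code (ARM/Bicep), the entity mappings above become an `entityMappings` array on the scheduled rule resource. A sketch of the equivalent JSON built in Python; the structure follows the Microsoft.SecurityInsights scheduled-rule schema as I understand it, so verify property names against the current ARM reference:

```python
import json

# Entity mappings from the portal steps above, expressed as the
# entityMappings block of a Microsoft.SecurityInsights scheduled rule.
entity_mappings = [
    {
        "entityType": "Account",
        "fieldMappings": [
            {"identifier": "FullName", "columnName": "UserPrincipalName"},
        ],
    },
    {
        "entityType": "IP",
        "fieldMappings": [
            {"identifier": "Address", "columnName": "IPAddresses"},
        ],
    },
]

print(json.dumps(entity_mappings, indent=2))
```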
Using a test account with a valid tenant UPN, deliberately enter an incorrect password at least 10 times; each failed attempt generates a SigninLogs entry with ResultType = 50126. After waiting 5-15 minutes for log ingestion, run this query to confirm your test events appear:
// Check for your test events - run this after generating failed sign-ins
// PURPOSE: Confirm test data appeared in the SigninLogs table
// Wait 5-15 min after test attempts for log ingestion latency
SigninLogs
| where TimeGenerated > ago(1h) // Last hour - matches the analytics rule window
| where ResultType == "50126" // 50126 = invalid password (our test scenario)
| summarize FailedAttempts = count() // Count failures per user+IP combination
by UserPrincipalName, IPAddress
| where FailedAttempts >= 10 // Same threshold as the analytics rule
| sort by FailedAttempts desc
// Expected output: Your test account + your IP address with 10+ failures
// If empty: wait longer or check that test failures used a valid tenant UPN

Congratulations! In this lab you have built a complete identity threat detection pipeline: a dedicated resource group and Log Analytics workspace, Microsoft Sentinel onboarded as your SIEM, the Microsoft Entra ID data connector streaming identity telemetry, KQL queries that validate ingestion and detect brute-force attacks, and a scheduled analytics rule that turns detections into incidents.
If this is a lab environment and you want to avoid ongoing charges, delete all resources:
# Delete the entire resource group and all resources within it
# WARNING: This permanently deletes the workspace, Sentinel, all data, and all rules
# --yes : Skip the confirmation prompt (use cautiously)
# --no-wait : Return immediately - deletion continues in the background
az group delete \
--name rg-sentinel-lab \
--yes \
--no-wait
# Verify the deletion is in progress
# Expected output: "Deleting" while in progress, error when fully deleted
az group show --name rg-sentinel-lab --query "properties.provisioningState" --output tsv

| Resource | Description |
|---|---|
| Microsoft Sentinel documentation | Comprehensive product docs covering all Sentinel capabilities |
| KQL quick reference | Cheat sheet for Kusto Query Language operators and functions |
| Entra ID sign-in log schema (SigninLogs table) | Full column reference for the SigninLogs table |
| Create custom analytics rules | Best practices for scheduled query rules in Sentinel |
| Microsoft Sentinel best practices | Operational best practices for production deployments |
| Azure Pricing Calculator | Estimate costs for Log Analytics and Sentinel |
| Connect Microsoft Entra ID to Sentinel | Data connector setup guide |
| Entra ID authentication error codes | Complete list of ResultType codes and their meanings |