Create a Model Context Protocol server that exposes Microsoft Sentinel KQL query capabilities as tools, configure authentication, define MCP tool schemas, test with a local MCP client, and validate query results.
This lab walks you through building a Model Context Protocol (MCP) server that integrates with Microsoft Sentinel, exposing KQL query capabilities as tools that AI models can discover and invoke. You will cover end-to-end project setup, Azure authentication configuration, tool schema design, KQL query implementation, error handling, testing with a local MCP client, and result validation, giving you a production-ready foundation for AI-driven security operations.
A SOC team wants to enable their AI assistant to directly query Sentinel data during investigations. Currently analysts manually copy KQL queries between tools, slowing response times and introducing errors.
Building an MCP server standardizes this integration, allowing any MCP-compatible AI client to run Sentinel queries, retrieve incidents, and analyze security data programmatically. This eliminates context-switching, accelerates incident triage, and ensures consistent, auditable query execution across the team.
DefaultAzureCredential for secure Sentinel access

The Model Context Protocol is becoming the standard for AI-tool integration. Security teams that build MCP servers gain the ability to connect any AI model to their security stack without custom integrations. This lab teaches the foundational skills needed to bridge the gap between AI assistants and enterprise security infrastructure, enabling faster investigations, automated data retrieval, and a scalable pattern that extends to any security service with an API.
Log Analytics Reader role assignment on the Sentinel workspace

The Model Context Protocol (MCP) defines a standard interface between AI models and external tools. An MCP server exposes capabilities (tools) that AI clients can discover, invoke, and receive results from, much like a specialized API designed for AI consumption.
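Concretely, discovery and invocation travel as JSON-RPC 2.0 messages between client and server. A simplified sketch of what a `tools/list` exchange looks like (field values are illustrative; a real session also performs an initialization handshake first):

```python
import json

# Client -> server: ask which tools exist (simplified).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server -> client: each tool carries a name, a description the model
# reads to decide when to call it, and a JSON Schema for its arguments.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "run_kql_query",
            "description": "Execute a KQL query against Sentinel.",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }]
    },
}

print(list_response["result"]["tools"][0]["name"])  # run_kql_query
```

The SDK used later in this lab generates these messages for you; the sketch only shows the shape of what crosses the wire.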
stdio for local, SSE for remote

Create a well-structured project with proper dependency management. This lab uses Python, but the same concepts apply to Node.js/TypeScript.
# Create the project
mkdir sentinel-mcp-server
cd sentinel-mcp-server
# Set up Python virtual environment
python -m venv .venv
# Activate (Windows)
.venv\Scripts\activate
# Activate (macOS/Linux)
# source .venv/bin/activate
# Install dependencies
pip install "mcp[cli]" azure-identity azure-monitor-query python-dotenv
# Create the project structure
mkdir -p src tests
touch src/__init__.py src/server.py src/tools.py src/auth.py
touch .env .env.example requirements.txt README.md

# Node.js/TypeScript alternative setup
mkdir sentinel-mcp-server && cd sentinel-mcp-server
npm init -y
# Install runtime dependencies:
# @modelcontextprotocol/sdk - MCP server framework (tool registration, transport)
# @azure/identity - Entra ID authentication (DefaultAzureCredential)
# @azure/monitor-query - Execute KQL queries against Log Analytics
# dotenv - Load .env variables into process.env
npm install @modelcontextprotocol/sdk @azure/identity @azure/monitor-query dotenv
# Install dev dependencies: TypeScript compiler and Node.js type definitions
npm install -D typescript @types/node
# Generate tsconfig.json - configure TypeScript compilation options
npx tsc --init

sentinel-mcp-server/
├── src/
│   ├── __init__.py
│   ├── server.py          # MCP server entry point
│   ├── tools.py           # Tool definitions and handlers
│   └── auth.py            # Azure authentication module
├── tests/
│   └── test_tools.py      # Unit tests for tools
├── .env                   # Environment variables (never commit!)
├── .env.example           # Template for .env
├── .gitignore
├── requirements.txt
└── README.md

Add .env and .venv/ to your .gitignore immediately. Never commit Azure credentials to version control.

Create an Entra ID app registration and configure secure authentication for your MCP server to access Sentinel data.
Name the app registration sentinel-mcp-server and create a client secret (name: mcp-server-secret, expiry: 6 months).

# Grant the Entra ID app read-only access to the Sentinel workspace
# "Log Analytics Reader" allows executing KQL queries but NOT modifying data
# This follows least-privilege: the MCP server only needs to read logs
# Replace placeholders with your actual values from the Azure portal
# Output: JSON object confirming the role assignment with principalId
az role assignment create \
--assignee "<APP_CLIENT_ID>" \
--role "Log Analytics Reader" \
  --scope "/subscriptions/<SUB_ID>/resourceGroups/<RG>/providers/Microsoft.OperationalInsights/workspaces/<WORKSPACE_NAME>"

# .env - Azure authentication and MCP server configuration
# SECURITY: Never commit this file to version control!
# Entra ID tenant where the app is registered (Azure Portal > Entra ID > Overview)
AZURE_TENANT_ID=your-tenant-id-here
# Application (client) ID from the app registration overview page
AZURE_CLIENT_ID=your-client-id-here
# Client secret value - copy immediately after creation (shown only once)
AZURE_CLIENT_SECRET=your-client-secret-here
# Log Analytics workspace ID (Sentinel > Settings > Workspace settings)
SENTINEL_WORKSPACE_ID=your-workspace-id-here
# Query guardrails - prevent AI clients from running expensive queries
MAX_QUERY_TIMESPAN=30 # Max days an AI client can look back
MAX_RESULTS=1000 # Max rows returned per query response

import os
from dotenv import load_dotenv
from azure.identity import ClientSecretCredential, DefaultAzureCredential
from azure.monitor.query import LogsQueryClient
# Load environment variables from .env file into os.environ
load_dotenv()
def get_credential():
"""Get Azure credential using a two-tier authentication strategy.
Returns: TokenCredential for authenticating Azure SDK calls.
- Production: ClientSecretCredential with explicit Entra ID app secrets
- Development: DefaultAzureCredential (tries az login, managed identity, etc.)
"""
if os.getenv('AZURE_CLIENT_SECRET'):
# Production path: authenticate using the Entra ID app registration
# Requires AZURE_TENANT_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET
return ClientSecretCredential(
tenant_id=os.environ['AZURE_TENANT_ID'],
client_id=os.environ['AZURE_CLIENT_ID'],
client_secret=os.environ['AZURE_CLIENT_SECRET']
)
# Development path: automatically tries multiple credential sources
# in order: environment vars, managed identity, az CLI, VS Code, etc.
return DefaultAzureCredential()
def get_logs_client():
"""Create a LogsQueryClient for executing KQL queries.
This client connects to Azure Monitor / Log Analytics
and is used by all MCP tools that query Sentinel data.
Returns: Authenticated LogsQueryClient instance.
"""
credential = get_credential()
return LogsQueryClient(credential)
# The Log Analytics workspace ID that contains Sentinel data
# All KQL queries from MCP tools will target this workspace
WORKSPACE_ID = os.environ.get('SENTINEL_WORKSPACE_ID', '')

Use DefaultAzureCredential during local development; it automatically picks up your az login session. Switch to ClientSecretCredential for production deployments. This pattern lets you develop without managing secrets locally.

Initialise the MCP server with the SDK, define server metadata, and configure the stdio transport for local testing.
import asyncio
import json
from datetime import timedelta
# MCP SDK imports:
# Server - core framework class for building MCP servers
from mcp.server import Server
# stdio_server - provides stdin/stdout transport for local process communication
from mcp.server.stdio import stdio_server
# types - MCP protocol types: Tool (schema), TextContent (response format)
import mcp.types as types
# Local auth module - provides authenticated Azure SDK clients
from auth import get_logs_client, WORKSPACE_ID
# Create the MCP server instance with a unique identifier
# This name is sent to MCP clients during the initialization handshake
# and helps clients distinguish between multiple connected servers
server = Server("sentinel-mcp-server")
# Create a singleton Azure Log Analytics client
# Reused across all tool invocations to avoid repeated authentication
logs_client = get_logs_client()
# MCP Tool Discovery: @server.list_tools() registers this function
# as the handler for tool listing requests from MCP clients.
# When a client connects, it calls this to discover available tools.
# Each Tool needs: name, description (AI uses this to decide when to call),
# and inputSchema (JSON Schema defining the accepted parameters).
@server.list_tools()
async def list_tools() -> list[types.Tool]:
"""Return the list of tools this server provides.
Called automatically by MCP clients during tool discovery."""
return [
# Tool 1: Execute arbitrary KQL queries against Sentinel
types.Tool(
name="run_kql_query",
description="Execute a KQL query against the Microsoft Sentinel "
"Log Analytics workspace and return results as JSON.",
inputSchema={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The KQL query to execute"
},
"days": {
"type": "integer",
"description": "Number of days to look back (default: 7, max: 30)",
"default": 7
}
},
"required": ["query"]
}
),
# Tool 2: Discover available tables in the workspace
# Helps AI clients know what data exists before building queries
types.Tool(
name="list_sentinel_tables",
description="List available log tables in the Sentinel workspace "
"with row counts to help construct valid KQL queries.",
inputSchema={
"type": "object",
"properties": {},
"required": []
}
),
# Tool 3: Purpose-built tool for retrieving security incidents
# More structured than raw KQL - returns pre-formatted incident data
types.Tool(
name="get_recent_incidents",
description="Retrieve recent security incidents from Sentinel "
"with severity, status, and assigned owner.",
inputSchema={
"type": "object",
"properties": {
"severity": {
"type": "string",
"description": "Filter by severity: High, Medium, Low, Informational",
"enum": ["High", "Medium", "Low", "Informational"]
},
"days": {
"type": "integer",
"description": "Number of days to look back (default: 7)",
"default": 7
}
},
"required": []
}
)
]
async def main():
"""Run the MCP server using stdio transport.
stdio = communication via standard input/output pipes.
The MCP client (e.g., VS Code, Claude) spawns this as a child process
and sends JSON-RPC messages over stdin, reading responses from stdout.
"""
async with stdio_server() as (read_stream, write_stream):
# server.run() starts the MCP protocol handshake and message loop
# It handles: initialization, tool discovery, tool invocation, shutdown
await server.run(
read_stream, write_stream,
server.create_initialization_options()
)
if __name__ == "__main__":
# Entry point: start the async event loop and run the MCP server
    asyncio.run(main())

Use the stdio transport for local development and testing. Once your tools are working correctly, add SSE transport for production deployment (covered in Lab 03). This two-phase approach lets you iterate quickly without dealing with network issues.

Implement the run_kql_query tool handler that accepts a KQL query string and time range, executes it against your Sentinel workspace, and returns structured JSON results.
from azure.monitor.query import LogsQueryStatus
from azure.core.exceptions import HttpResponseError
# MCP Tool Execution: @server.call_tool() registers this as the handler
# for all tool invocation requests. When an AI client calls a tool,
# this function dispatches to the appropriate handler based on tool name.
@server.call_tool()
async def call_tool(
name: str, arguments: dict
) -> list[types.TextContent]:
"""Handle tool invocation requests from MCP clients.
Dispatches to the correct handler based on the tool name.
Returns: list of TextContent with JSON-formatted results.
"""
# Route the tool call to the appropriate handler
if name == "run_kql_query":
return await handle_kql_query(arguments)
elif name == "list_sentinel_tables":
return await handle_list_tables(arguments)
elif name == "get_recent_incidents":
return await handle_get_incidents(arguments)
else:
return [types.TextContent(
type="text",
text=f"Error: Unknown tool '{name}'"
)]
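As the tool count grows, the if/elif chain in call_tool can be replaced with a handler registry. A sketch of that design, using a hypothetical handle_ping tool for illustration:

```python
import asyncio
import json

# Hypothetical example handler; real handlers would query Sentinel.
async def handle_ping(arguments: dict) -> str:
    return json.dumps({"status": "success", "echo": arguments})

# Registry mapping tool names to async handlers. Adding a tool becomes
# one dict entry instead of another elif branch.
TOOL_HANDLERS = {"ping": handle_ping}

async def dispatch(name: str, arguments: dict) -> str:
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        return json.dumps({"status": "error",
                           "message": f"Unknown tool '{name}'"})
    return await handler(arguments)

print(asyncio.run(dispatch("ping", {"q": 1})))
```

The dict lookup keeps the unknown-tool error path in one place and makes the set of supported tools explicit.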
async def handle_kql_query(arguments: dict) -> list[types.TextContent]:
"""Execute a KQL query against the Sentinel workspace.
Accepts a KQL query string and optional time range (days).
Returns: JSON with status, row_count, and query results array.
"""
query = arguments.get("query", "")
# Security guardrail: cap the time range to prevent expensive full-scan queries
days = min(int(arguments.get("days", 7)), 30) # Cap at 30 days
timespan = timedelta(days=days)
try:
# Execute the KQL query against the Log Analytics workspace
response = logs_client.query_workspace(
workspace_id=WORKSPACE_ID,
query=query,
timespan=timespan
)
if response.status == LogsQueryStatus.SUCCESS:
# Convert tabular results to a list of dicts for JSON serialization
# Each row becomes {column_name: value} for easy AI consumption
rows = []
for table in response.tables:
columns = [col.name for col in table.columns]
for row in table.rows:
rows.append(dict(zip(columns, row)))
# Structure the response for AI-friendly consumption
# Include metadata (query, timespan, count) alongside results
result = {
"status": "success",
"query": query,
"timespan_days": days,
"row_count": len(rows),
"results": rows[:1000] # Limit results to prevent huge payloads
}
return [types.TextContent(
type="text", text=json.dumps(result, default=str)
)]
else:
return [types.TextContent(
type="text",
text=json.dumps({
"status": "partial",
"message": "Query returned partial results",
"error": str(response.partial_error)
})
)]
except HttpResponseError as e:
return [types.TextContent(
type="text",
text=json.dumps({
"status": "error",
"error_code": e.error.code if e.error else "unknown",
"message": str(e.message),
"suggestion": "Check your KQL syntax or narrow the time range."
})
        )]

The min(days, 30) and rows[:1000] guards protect your workspace.

The table schema tool helps AI clients understand what data is available before constructing queries. Without it, the AI model would need to guess table and column names, leading to failed queries.
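One refinement to handle_kql_query worth considering before moving on: rows[:1000] silently drops data. Flagging the truncation explicitly tells the model to narrow its query rather than trust a clipped view. A sketch with a hypothetical truncate_results helper:

```python
def truncate_results(rows: list, limit: int = 1000) -> dict:
    """Cap a result set and say so explicitly.

    An explicit 'truncated' flag tells the AI client it did not see
    everything, so it can add filters or a smaller timespan and retry.
    """
    return {
        "row_count": len(rows),
        "truncated": len(rows) > limit,
        "results": rows[:limit],
    }

# 1500 fake rows against a 1000-row cap: truncated, 1000 returned.
sample = [{"id": i} for i in range(1500)]
payload = truncate_results(sample, limit=1000)
print(payload["truncated"], len(payload["results"]))  # True 1000
```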
async def handle_list_tables(arguments: dict) -> list[types.TextContent]:
"""List available Sentinel tables with row counts.
This discovery tool helps AI clients understand what data
is available before constructing KQL queries, improving
first-attempt query accuracy.
Returns: JSON with table names, 24h row counts, and descriptions.
"""
# KQL query to discover all tables and their recent activity
# search * scans across all tables; $table captures the source table name
query = """
search *
| summarize Count=count() by $table
| order by Count desc
| take 50
"""
try:
response = logs_client.query_workspace(
workspace_id=WORKSPACE_ID,
query=query,
timespan=timedelta(days=1)
)
tables = []
if response.status == LogsQueryStatus.SUCCESS:
for table in response.tables:
columns = [col.name for col in table.columns]
for row in table.rows:
record = dict(zip(columns, row))
tables.append({
"table_name": record.get("$table", ""),
"row_count_24h": record.get("Count", 0)
})
# Enrich tables with human-readable descriptions
# These help AI models choose the correct table without guessing
table_descriptions = {
"SecurityIncident": "Sentinel incidents with severity, status, and owner",
"SecurityAlert": "Alerts from all connected data sources",
"SigninLogs": "Azure AD sign-in activity",
"AuditLogs": "Azure AD audit trail",
"CommonSecurityLog": "CEF-formatted logs from firewalls and appliances",
"Syslog": "Linux syslog data",
"ThreatIntelligenceIndicator": "Threat intelligence IOCs",
"AzureActivity": "Azure control plane operations",
"DeviceEvents": "Defender for Endpoint device events",
"EmailEvents": "Defender for Office 365 email events"
}
for t in tables:
t["description"] = table_descriptions.get(t["table_name"], "")
return [types.TextContent(
type="text",
text=json.dumps({
"status": "success",
"table_count": len(tables),
"tables": tables
}, default=str)
)]
except Exception as e:
return [types.TextContent(
type="text",
text=json.dumps({"status": "error", "message": str(e)})
        )]

KQL injection is a real risk: an AI model might construct a query that is syntactically valid but excessively expensive. Implement guardrails to protect your Sentinel workspace.
import re
# Security: Regex patterns matching dangerous KQL management commands
# These are write/delete operations that could modify or destroy data
# An AI model should NEVER be allowed to execute these against Sentinel
BLOCKED_PATTERNS = [
r'\.drop\s', # Table drops - permanently deletes data
r'\.set\s', # Data modifications - writes new data
r'\.create\s', # Schema changes - creates tables/functions
r'\.alter\s', # Schema alterations - modifies table structure
r'\.delete\s', # Data deletion - removes specific records
]
# Guardrail limits to prevent expensive or oversized queries
MAX_TIMESPAN_DAYS = 30 # Prevent full-history scans
MAX_RESULTS = 1000 # Cap response payload size
MAX_QUERY_LENGTH = 5000 # Prevent excessively complex queries
def validate_kql_query(query: str, days: int) -> dict:
"""Validate a KQL query before execution.
Checks for: blocked patterns, length limits, timespan limits.
Returns: {"valid": True} or {"valid": False, "reason": "..."}.
Why: AI models can generate syntactically valid but dangerous queries.
"""
# Check query length
if len(query) > MAX_QUERY_LENGTH:
return {
"valid": False,
"reason": f"Query exceeds {MAX_QUERY_LENGTH} character limit. "
"Simplify the query or break it into multiple calls."
}
# Check for blocked patterns (KQL injection prevention)
for pattern in BLOCKED_PATTERNS:
if re.search(pattern, query, re.IGNORECASE):
return {
"valid": False,
"reason": f"Query contains blocked operation. "
"Only read operations are allowed."
}
# Enforce timespan limit
if days > MAX_TIMESPAN_DAYS:
return {
"valid": False,
"reason": f"Timespan exceeds {MAX_TIMESPAN_DAYS} days. "
"Narrow your search window."
}
    # Queries without 'take' or 'limit' are allowed as-is;
    # results are capped in the handler anyway, so no rewrite is needed here.
    return {"valid": True}

# In handle_kql_query, add validation before execution:
# This ensures every query is checked for dangerous patterns
# and resource limits before it reaches the Azure API
async def handle_kql_query(arguments: dict) -> list[types.TextContent]:
query = arguments.get("query", "")
days = int(arguments.get("days", 7))
# Validate before executing
validation = validate_kql_query(query, days)
if not validation["valid"]:
return [types.TextContent(
type="text",
text=json.dumps({
"status": "validation_error",
"message": validation["reason"],
"suggestion": "Modify your query and try again."
})
)]
# Cap the timespan
days = min(days, MAX_TIMESPAN_DAYS)
    # ... proceed with query execution

Without these guardrails, an AI client could submit a search * | where TimeGenerated > ago(365d) query that scans your entire data lake, consuming significant resources and potentially timing out.

Add a dedicated tool for retrieving Sentinel incidents. This is one of the most common operations AI agents will perform: getting a view of the current threat landscape.
async def handle_get_incidents(arguments: dict) -> list[types.TextContent]:
"""Retrieve recent Sentinel incidents.
Returns structured incident data including severity, status,
timestamps, owner, and alert count for AI-powered triage.
"""
# Cap timespan and extract optional severity filter
days = min(int(arguments.get("days", 7)), 30)
severity = arguments.get("severity", None)
# Build KQL query dynamically based on parameters
# SecurityIncident table stores Sentinel incident records
query = """
SecurityIncident
| where TimeGenerated > ago({days}d)
""".format(days=days)
# Optionally filter by severity level
if severity:
query += f'| where Severity == "{severity}"\n'
# Project only the fields an AI needs for triage decisions
# Using KQL 'project' reduces data transfer and focuses the response
query += """
| project
IncidentNumber,
Title,
Severity,
Status,
CreatedTime,
LastModifiedTime,
Owner = tostring(Owner.assignedTo),
AlertsCount = array_length(AlertIds),
Description
| order by CreatedTime desc
| take 25
"""
try:
response = logs_client.query_workspace(
workspace_id=WORKSPACE_ID,
query=query,
timespan=timedelta(days=days)
)
incidents = []
if response.status == LogsQueryStatus.SUCCESS:
for table in response.tables:
columns = [col.name for col in table.columns]
for row in table.rows:
incidents.append(dict(zip(columns, row)))
return [types.TextContent(
type="text",
text=json.dumps({
"status": "success",
"incident_count": len(incidents),
"timespan_days": days,
"severity_filter": severity or "all",
"incidents": incidents
}, default=str)
)]
except Exception as e:
return [types.TextContent(
type="text",
text=json.dumps({"status": "error", "message": str(e)})
        )]

Returning AlertsCount, Owner, and Status together means the AI can provide a comprehensive summary without needing a second tool call.

Good error messages are critical for AI interaction: the AI model needs to understand what went wrong to self-correct. Implement structured error responses with error codes, descriptions, and suggested remediations.
import time
from functools import wraps
def with_retry(max_retries=3, base_delay=1):
"""Decorator that retries on transient Azure errors.
Uses exponential backoff: delay doubles with each attempt.
Handles 429 (rate limited) and 5xx (server errors) automatically.
Client errors (4xx except 429) are NOT retried.
"""
def decorator(func):
@wraps(func)
async def wrapper(*args, **kwargs):
last_exception = None
for attempt in range(max_retries):
try:
return await func(*args, **kwargs)
except HttpResponseError as e:
last_exception = e
if e.status_code == 429:
# Rate limited by Azure - respect the Retry-After header
# to avoid being blocked entirely
retry_after = int(
e.response.headers.get('Retry-After', base_delay * (2 ** attempt))
)
                        await asyncio.sleep(retry_after)  # non-blocking; time.sleep would stall the event loop
elif e.status_code >= 500:
# Azure server error - retry with exponential backoff
                        await asyncio.sleep(base_delay * (2 ** attempt))
else:
# Client error (400, 403, 404) - don't retry, fix the request
raise
raise last_exception
return wrapper
return decorator
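To see the backoff schedule the decorator produces, the delay formula can be checked in isolation (a standalone restatement of base_delay * 2 ** attempt):

```python
def backoff_schedule(max_retries: int = 3, base_delay: int = 1) -> list[int]:
    """Return the sleep durations (seconds) the retry loop will use."""
    return [base_delay * (2 ** attempt) for attempt in range(max_retries)]

print(backoff_schedule())      # [1, 2, 4]
print(backoff_schedule(4, 2))  # [2, 4, 8, 16]
```

Doubling delays with each attempt gives transient Azure errors time to clear while keeping total wait time bounded.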
def format_error(error: Exception) -> dict:
"""Format an exception into an AI-friendly error response.
Structured errors help AI models understand what went wrong
and self-correct on the next attempt.
Returns: dict with status, error_code, message, and suggestion.
"""
if isinstance(error, HttpResponseError):
return {
"status": "error",
"error_code": error.error.code if error.error else str(error.status_code),
"message": str(error.message),
"suggestion": get_error_suggestion(error)
}
return {
"status": "error",
"error_code": "unknown",
"message": str(error),
"suggestion": "An unexpected error occurred. Check server logs."
}
def get_error_suggestion(error: HttpResponseError) -> str:
"""Map common HTTP status codes to actionable remediation hints.
The AI model uses these to fix issues without human intervention.
"""
# Map status codes to user-friendly suggestions
suggestions = {
401: "Authentication failed. Check your Azure credentials in .env.",
403: "Insufficient permissions. Verify the Log Analytics Reader role.",
404: "Workspace not found. Check SENTINEL_WORKSPACE_ID in .env.",
429: "Rate limited by Azure. The query will be retried automatically.",
}
    return suggestions.get(error.status_code, "Check the query syntax and try again.")

Include a suggestion field in every error response. AI models use this text to understand how to fix the issue and retry with a corrected request; without it, the model may fall into a retry loop with the same broken query.

Use the official MCP Inspector to test your server before connecting it to an AI client. The Inspector provides a visual interface to discover tools, invoke them, and inspect responses.
pip install "mcp[cli]"

# Launch the MCP Inspector - a visual UI for testing MCP servers
# The Inspector connects via stdio, discovers tools, and lets you
# invoke them interactively to verify schemas and responses
mcp dev src/server.py
# The inspector opens in your browser at http://localhost:5173
# Use the Tools tab to see all registered tools and test them

Find list_sentinel_tables, click it, and verify it returns the available tables. Then test run_kql_query with a simple query:

# Test queries to run in the MCP Inspector
# These validate that your server correctly executes KQL and returns results
# Query 1: Simple incident list - verifies SecurityIncident table access
SecurityIncident | take 5
# Query 2: Alert summary by severity - verifies aggregation works
SecurityAlert
| summarize Count=count() by AlertSeverity
| order by Count desc
# Query 3: Sign-in anomaly detection - verifies cross-table access
# Finds users with >5 failed sign-ins (potential brute force)
SigninLogs
| where ResultType != "0"
| summarize FailedAttempts=count() by UserPrincipalName
| where FailedAttempts > 5
| order by FailedAttempts desc

Test get_recent_incidents with and without a severity filter. Finally, submit a query containing .drop and verify it is blocked.

// MCP Client Configuration for VS Code
// Add to .vscode/mcp.json or VS Code settings.json
// This tells the VS Code MCP client how to launch your server
{
"mcpServers": {
// Server identifier - used in tool namespacing (sentinel.*)
"sentinel": {
// Command to launch the server process (stdio transport)
"command": "python",
// Path to the server entry point script
"args": ["src/server.py"],
// Working directory for the server process
"cwd": "/path/to/sentinel-mcp-server",
// Environment variables passed to the server process
// These override any .env file values
"env": {
"AZURE_TENANT_ID": "your-tenant-id",
"AZURE_CLIENT_ID": "your-client-id",
"AZURE_CLIENT_SECRET": "your-client-secret",
"SENTINEL_WORKSPACE_ID": "your-workspace-id"
}
}
}
}

Create comprehensive documentation so other team members can deploy and use your MCP server.
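For instance, a README skeleton to start from (section names and wording are illustrative, not prescribed by the lab):

```markdown
# sentinel-mcp-server

MCP server exposing Microsoft Sentinel KQL queries as AI-callable tools.

## Setup

1. Copy `.env.example` to `.env` and fill in the Azure values
2. Create and activate a virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`

## Run

`mcp dev src/server.py` launches the server under the MCP Inspector.

## Tools

- `run_kql_query` - execute read-only KQL (guarded by timespan/row limits)
- `list_sentinel_tables` - discover available tables and row counts
- `get_recent_incidents` - recent incidents with optional severity filter
```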
# .env.example - Template for team members to create their own .env
# Copy this file: cp .env.example .env
# Then fill in your values from the Azure portal
# Azure Authentication (from Entra ID > App registrations)
AZURE_TENANT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
AZURE_CLIENT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
AZURE_CLIENT_SECRET=your-client-secret-here
# Sentinel Configuration (from Log Analytics workspace > Properties)
SENTINEL_WORKSPACE_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
# Optional: Query Guardrails - limit what AI clients can do
# These protect your workspace from expensive or oversized queries
MAX_QUERY_TIMESPAN=30 # Maximum days for KQL queries
MAX_RESULTS=1000 # Maximum rows returned per query
MAX_QUERY_LENGTH=5000 # Maximum KQL query string length

# Freeze current package versions for reproducible deployments
pip freeze > requirements.txt
# Or manually create a minimal requirements.txt with version floors:
mcp[cli]>=1.0.0 # MCP server framework with CLI tools
azure-identity>=1.15.0 # Entra ID authentication (DefaultAzureCredential)
azure-monitor-query>=1.3.0 # KQL query execution against Log Analytics
python-dotenv>=1.0.0 # Load .env files into environment

Add a .gitignore covering .env and .venv/, and create tests/test_tools.py to validate tool schemas.

| Resource | Description |
|---|---|
| Introduction to Model Context Protocol | Official MCP specification and architecture |
| MCP Tools | Define and implement tools for AI model interaction |
| Log Analytics Query API | Execute KQL queries via REST API |
| Azure Monitor Logs API overview | Programmatic access to Log Analytics data |
| Extend Sentinel across workspaces | Multi-workspace architecture patterns |
| MCP Resources | Expose data and context to AI models via resources |
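The checklist above mentions tests/test_tools.py; here is a starting sketch for it, shown self-contained with the run_kql_query schema inlined (in the real test, import the schema from src.server or src.tools instead of copying it):

```python
# Inlined copy of the run_kql_query input schema for illustration only.
RUN_KQL_QUERY_SCHEMA = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "days": {"type": "integer", "default": 7},
    },
    "required": ["query"],
}

def test_schema_is_an_object():
    # MCP tool input schemas are JSON Schema objects.
    assert RUN_KQL_QUERY_SCHEMA["type"] == "object"

def test_required_fields_are_declared_properties():
    # Every required field must exist in properties, or clients
    # cannot tell the model what to supply.
    for field in RUN_KQL_QUERY_SCHEMA["required"]:
        assert field in RUN_KQL_QUERY_SCHEMA["properties"]

test_schema_is_an_object()
test_required_fields_are_declared_properties()
print("schema checks passed")
```

Under pytest these run automatically; the direct calls at the bottom let the sketch run as a plain script too.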