Intermediate ⏱ 120 min 📋 12 Steps

Build an AI Communications Agent for Outlook & Teams

Build a Microsoft AI agent that handles all communications via Outlook and Microsoft Teams. The agent auto-acknowledges non-critical messages for Lessi Coulibaly, drafts context-aware replies for critical messages with human-in-the-loop review before sending, and showcases the power of the Microsoft 365 Agents SDK.

📋 Overview

About This Lab

Microsoft AI agents can go far beyond chat assistants. In this lab you will build a fully autonomous communications agent that monitors Outlook email and Microsoft Teams messages in real time. The agent classifies every incoming message as critical or non-critical using Azure OpenAI, then takes the appropriate action. For non-critical messages addressed to Lessi Coulibaly (at lessic@lessit.net or contact@lessit.net), the agent automatically sends an acknowledgement reply confirming receipt and promising a follow-up. For critical messages, it generates a draft reply for human review before sending, ensuring that high-stakes communications always get a human eye. This lab uses the Microsoft 365 Agents SDK, Microsoft Graph API, and Azure OpenAI to deliver a production-grade solution.

๐Ÿข Enterprise Use Case

A cybersecurity consultant receives 80+ emails and 40+ Teams messages per day across client engagements, vendor communications, newsletter subscriptions, and meeting requests. Without an AI agent, response times average 4–6 hours for routine messages, and critical client requests occasionally get buried under the volume. Deploying this AI communications agent reduces routine response time to under 30 seconds (automated acknowledgement), ensures critical messages are surfaced immediately with a drafted response ready for review, and frees 2+ hours of daily productivity. The human-in-the-loop design for critical messages maintains trust and professionalism while the autonomous handling of non-critical messages demonstrates true AI-powered productivity.

🎯 What You Will Learn

  1. Register a Microsoft Entra ID application with Mail.ReadWrite, Mail.Send, Chat.ReadWrite.All, and ChannelMessage.Read.All permissions
  2. Set up the Microsoft 365 Agents SDK project with TypeScript
  3. Connect to Microsoft Graph API to read and send emails and Teams messages
  4. Classify messages as critical or non-critical using Azure OpenAI
  5. Build auto-acknowledgement logic for non-critical messages addressed to specific recipients
  6. Implement human-in-the-loop review for critical messages via Adaptive Cards in Teams
  7. Generate context-aware draft replies using Azure OpenAI with conversation history
  8. Configure Microsoft Graph change notifications (webhooks) for real-time message monitoring
  9. Deploy the agent to Azure App Service with managed identity
  10. Monitor agent activity with Application Insights and build an operational dashboard
  11. Test edge cases: out-of-office, thread replies, attachments, and distribution lists
  12. Establish governance policies and audit logging for autonomous agent actions

🔑 Why This Matters

The average knowledge worker spends 28% of their workday on email. Microsoft AI agents can automate the routine portions while keeping humans in control of high-stakes decisions. The Microsoft 365 Agents SDK provides a first-party framework for building agents that operate within the Microsoft security boundary, respect data residency, and integrate natively with Outlook and Teams. Unlike generic automation tools, these agents leverage Azure OpenAI for intelligent classification and response generation, making them context-aware, adaptable, and professional. This lab showcases the full spectrum: fully autonomous for the routine, human-supervised for the critical.

โš™๏ธ Prerequisites

  • Microsoft 365 E3/E5 or developer tenant with Exchange Online and Microsoft Teams enabled
  • Azure subscription with Contributor role to deploy Azure OpenAI and App Service resources
  • Azure OpenAI Service with a GPT-4.1 deployment (used for message classification and reply generation)
  • Node.js 20+ and TypeScript 5+ installed on your development machine
  • Microsoft Entra ID Global Administrator or Application Administrator role to register the app and grant admin consent
  • Visual Studio Code with the Teams Toolkit extension installed
💡 Pro Tip: Use a Microsoft 365 Developer Program tenant for this lab. It includes 25 E5 licenses for development and testing at no cost.

๐Ÿ—๏ธ Architecture Overview

The agent sits between Microsoft Graph (Outlook & Teams) and Azure OpenAI, powered by the Microsoft 365 Agents SDK:

Architecture Flow
Incoming Messages
Outlook Email (lessic@lessit.net, contact@lessit.net) & Teams Chat (Channel & DM)
Microsoft Graph Change Notifications
Real-time webhooks for mail & chat resources
AI Communications Agent
Microsoft 365 Agents SDK · Azure App Service
Azure OpenAI (GPT-4.1)
Classify message · Generate draft reply
Non-Critical
Auto-reply confirming receipt and promising a follow-up · Sent automatically via Graph API
Critical
Draft reply → Adaptive Card in Teams → Human reviews → Approve, edit, or dismiss

Step 1 · Register the Entra ID Application

The agent needs Microsoft Graph permissions to read and send emails and Teams messages on behalf of the organisation. Register an application in Entra ID and configure the required API permissions.

Portal Instructions

  1. Navigate to entra.microsoft.com > Applications > App registrations
  2. Click New registration
  3. Name: AI Communications Agent
  4. Supported account types: Accounts in this organizational directory only
  5. Redirect URI: leave blank for now (the agent uses client credentials flow)
  6. Click Register
  7. Note the Application (client) ID and Directory (tenant) ID
  8. Go to Certificates & secrets > New client secret > Description: agent-secret > Expiry: 6 months
  9. Copy the secret value immediately (it will not be shown again)

Configure API Permissions

  1. Go to API permissions > Add a permission > Microsoft Graph > Application permissions
  2. Add the following permissions:
    • Mail.ReadWrite - Read and write mail in all mailboxes
    • Mail.Send - Send mail as any user
    • Chat.ReadWrite.All - Read and write all chat messages
    • ChannelMessage.Read.All - Read all channel messages
    • User.Read.All - Read all users’ full profiles
  3. Click Grant admin consent for [your tenant]
  4. Verify all permissions show Granted status

Azure CLI

# PURPOSE: Register the Entra ID application for the AI Communications Agent
# OUTPUT: Application object with appId (client ID) and id (object ID)
az ad app create \
  --display-name "AI Communications Agent" \
  --sign-in-audience AzureADMyOrg

# PURPOSE: Add Microsoft Graph application permissions
# GRAPH RESOURCE ID: 00000003-0000-0000-c000-000000000000
# NOTE: Replace <APP_ID> with the appId from the previous command
az ad app permission add \
  --id <APP_ID> \
  --api 00000003-0000-0000-c000-000000000000 \
  --api-permissions \
    e2a3a72e-5f79-4c64-b005-9c3541826ad8=Role \
    b633e1c5-b582-4048-a93e-9f11b44c7e96=Role \
    6b7d71aa-70aa-4810-a8d9-5d9fb2830017=Role \
    7b2449af-6ccd-4f4d-9f78-e550c193f0d7=Role \
    df021288-bdef-4463-88db-98f22de89214=Role

# PURPOSE: Grant admin consent for all permissions
az ad app permission admin-consent --id <APP_ID>
โš ๏ธ Important: These are application permissions (not delegated), meaning the agent operates without a signed-in user. Scope the agent to specific mailboxes using an application access policy to follow the principle of least privilege.

Step 2 · Scaffold the Agent Project

Use the Microsoft 365 Agents SDK to scaffold a TypeScript project. The SDK provides the activity handler framework, authentication helpers, and built-in Teams integration patterns.

Project Setup

# PURPOSE: Create the project directory and initialise
mkdir ai-comms-agent && cd ai-comms-agent
npm init -y

# PURPOSE: Install the Microsoft 365 Agents SDK and dependencies
npm install @microsoft/agents-hosting @microsoft/agents-hosting-express
npm install @azure/identity @microsoft/microsoft-graph-client
npm install openai
npm install dotenv express

# PURPOSE: Install dev dependencies for TypeScript
npm install -D typescript @types/node @types/express ts-node nodemon

# PURPOSE: Initialise TypeScript configuration
npx tsc --init --target ES2022 --module NodeNext \
  --moduleResolution NodeNext --outDir dist --rootDir src \
  --strict true --esModuleInterop true

Environment Configuration

# .env file - NEVER commit to source control
# PURPOSE: Configure credentials and target mailboxes

AZURE_TENANT_ID=your-tenant-id
AZURE_CLIENT_ID=your-client-id
AZURE_CLIENT_SECRET=your-client-secret

AZURE_OPENAI_ENDPOINT=https://your-openai.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT=gpt-4.1
AZURE_OPENAI_API_KEY=your-openai-key

# Target mailboxes the agent monitors and responds from
MONITORED_EMAIL_1=lessic@lessit.net
MONITORED_EMAIL_2=contact@lessit.net
MONITORED_USER_NAME=Lessi Coulibaly

# Webhook endpoint for Graph change notifications
WEBHOOK_URL=https://your-agent.azurewebsites.net/api/notifications
PORT=3978
💡 Pro Tip: In production, store secrets in Azure Key Vault and use managed identity instead of client secrets. The @azure/identity package supports DefaultAzureCredential which automatically uses managed identity when deployed to Azure.
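Before wiring up any clients, it helps to fail fast when a required variable is missing, since a blank AZURE_CLIENT_SECRET otherwise surfaces later as a confusing authentication error. A minimal sketch (the `requireEnv` helper is illustrative, not part of the SDK):

```typescript
// Illustrative helper: validate required environment variables at startup
// WHY: Failing fast with a clear message beats a cryptic auth error later
export function requireEnv(
  names: string[],
  env: Record<string, string | undefined>
): Record<string, string> {
  const missing = names.filter((n) => !env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  // All present - narrow the values to plain strings
  return Object.fromEntries(names.map((n) => [n, env[n] as string]));
}

// Example usage at startup:
// const config = requireEnv(
//   ["AZURE_TENANT_ID", "AZURE_CLIENT_ID", "AZURE_CLIENT_SECRET"],
//   process.env
// );
```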

Step 3 · Build the Microsoft Graph Client

Create a reusable Graph client that authenticates using client credentials and provides methods for reading and sending emails and Teams messages.

src/graphClient.ts

// PURPOSE: Reusable Microsoft Graph client for Outlook and Teams operations
// WHY: Centralises authentication and provides typed helpers for mail and chat
// PATTERN: Singleton client - reused across all agent modules to avoid repeated auth

import { ClientSecretCredential } from "@azure/identity";
import { Client } from "@microsoft/microsoft-graph-client";
import {
  TokenCredentialAuthenticationProvider
} from "@microsoft/microsoft-graph-client/authProviders/azureTokenCredentials";

// Authenticate using client credentials (app-only, no signed-in user)
// This credential is scoped by the application access policy configured in Step 1
const credential = new ClientSecretCredential(
  process.env.AZURE_TENANT_ID!,
  process.env.AZURE_CLIENT_ID!,
  process.env.AZURE_CLIENT_SECRET!
);

// TokenCredentialAuthenticationProvider handles token caching and auto-refresh
// The .default scope requests all permissions granted via admin consent
const authProvider = new TokenCredentialAuthenticationProvider(credential, {
  scopes: ["https://graph.microsoft.com/.default"],
});

// Singleton Graph client - import this from any module that needs Graph access
export const graphClient = Client.initWithMiddleware({ authProvider });

// Fetch unread inbox messages for a monitored mailbox
// Returns: array of message objects with id, subject, from, body, importance
// NOTE: Limited to 20 messages per call to avoid excessive API consumption
export async function getUnreadEmails(userEmail: string) {
  return graphClient
    .api(`/users/${userEmail}/mailFolders/inbox/messages`)
    .filter("isRead eq false")
    .top(20)
    .orderby("receivedDateTime desc")
    .select("id,subject,from,toRecipients,body,receivedDateTime,importance")
    .get();
}

// Reply to a specific email on behalf of the monitored mailbox
// WHY: Used by both auto-acknowledgement (non-critical) and human-approved replies (critical)
export async function sendReply(
  userEmail: string,
  messageId: string,
  replyBody: string
) {
  return graphClient
    .api(`/users/${userEmail}/messages/${messageId}/reply`)
    .post({
      message: {
        body: {
          contentType: "Text",
          content: replyBody,
        },
      },
    });
}

// Send a new email (not a reply) from a monitored mailbox
// WHY: Used when the agent needs to initiate a new conversation thread
export async function sendEmail(
  fromEmail: string,
  toEmail: string,
  subject: string,
  body: string
) {
  return graphClient.api(`/users/${fromEmail}/sendMail`).post({
    message: {
      subject,
      body: { contentType: "Text", content: body },
      toRecipients: [{ emailAddress: { address: toEmail } }],
    },
  });
}

// Send a message to a Teams chat thread
// WHY: Used for auto-acknowledgement in Teams DMs and channel messages
export async function sendTeamsReply(
  chatId: string,
  replyBody: string
) {
  return graphClient.api(`/chats/${chatId}/messages`).post({
    body: { contentType: "text", content: replyBody },
  });
}
💡 Pro Tip: The Graph client uses TokenCredentialAuthenticationProvider from the Microsoft Graph client library, which wraps the Azure Identity credential and handles token caching and refresh automatically. Never implement token management manually.
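Graph returns recipients as nested objects (`toRecipients[].emailAddress.address`), while the routing logic in later steps expects a flat, lowercased list of addresses. A small illustrative helper for that normalisation (not part of the Graph SDK):

```typescript
// Subset of the Graph recipient shape returned by /messages
interface GraphRecipient {
  emailAddress: { address: string; name?: string };
}

// Flatten Graph recipients into lowercased addresses for routing checks
export function extractAddresses(
  recipients: GraphRecipient[] | undefined
): string[] {
  return (recipients ?? []).map((r) => r.emailAddress.address.toLowerCase());
}

// Example: extractAddresses(message.toRecipients) before calling the
// recipient check in the auto-reply module
```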

Step 4 · Implement Message Classification with Azure OpenAI

The heart of the agent is its ability to classify every incoming message as critical or non-critical. Critical messages require human review before sending a response. Non-critical messages get an automatic acknowledgement.

src/classifier.ts

// PURPOSE: Classify incoming messages as critical or non-critical using Azure OpenAI
// WHY: Determines whether the agent auto-replies or routes to human review
// MODEL: GPT-4.1 with JSON mode for structured, parseable classification output
// FAIL-SAFE: Defaults to "critical" if classification fails - never auto-replies on error

import { AzureOpenAI } from "openai";

// Singleton Azure OpenAI client - reused across all classification calls
const openaiClient = new AzureOpenAI({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  apiVersion: "2025-04-01-preview",
});

export interface ClassificationResult {
  criticality: "critical" | "non-critical";
  confidence: number;   // 0.0 to 1.0 - how confident the model is
  reason: string;       // Brief explanation of the classification decision
  suggestedAction: string; // Recommended next step for the agent
}

// System prompt defines the classification rules and output format
// WHY: Separating critical vs. non-critical criteria gives the model clear boundaries
// TIP: Adjust these lists based on your organisation's communication patterns
const SYSTEM_PROMPT = `You are a message classification assistant for Lessi Coulibaly,
a cybersecurity consultant. Classify each incoming email or Teams message as either
"critical" or "non-critical".

CRITICAL messages include:
- Client escalations or urgent security incidents
- Executive communications (C-suite, board members)
- Contract or legal matters requiring a response
- Time-sensitive requests with deadlines within 24 hours
- Messages flagged as high importance by the sender
- Compliance or audit requests
- Incident response or breach notifications

NON-CRITICAL messages include:
- Newsletters, marketing emails, and notifications
- Meeting invitations and calendar updates
- General inquiries and informational messages
- Internal announcements and FYI communications
- Automated system notifications
- Social messages and casual check-ins
- Vendor outreach and sales pitches

Respond ONLY with valid JSON in this format:
{
  "criticality": "critical" or "non-critical",
  "confidence": 0.0 to 1.0,
  "reason": "brief explanation",
  "suggestedAction": "brief recommended action"
}`;

// Classify a single message - called by the agent for every incoming email/Teams message
// INPUT: subject, body (truncated to 2000 chars), sender address, importance flag
// OUTPUT: ClassificationResult with criticality, confidence, reason, and suggested action
// NOTE: temperature=0.1 for near-deterministic classification; JSON mode for reliable parsing
export async function classifyMessage(
  subject: string,
  body: string,
  sender: string,
  importance: string
): Promise<ClassificationResult> {
  // Build a structured prompt with all message metadata for accurate classification
  const userPrompt = [
    `From: ${sender}`,
    `Subject: ${subject}`,
    `Importance: ${importance}`,
    `Body: ${body.substring(0, 2000)}`,
  ].join("\n");

  // Call Azure OpenAI with JSON mode to get structured classification
  const response = await openaiClient.chat.completions.create({
    model: process.env.AZURE_OPENAI_DEPLOYMENT!,
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: userPrompt },
    ],
    temperature: 0.1,
    max_tokens: 200,
    response_format: { type: "json_object" },
  });

  // FAIL-SAFE: If the model returns no content or invalid JSON, default to critical
  // This ensures no important message is auto-replied to without human review
  const content = response.choices[0]?.message?.content;
  try {
    if (!content) throw new Error("empty classification response");
    return JSON.parse(content) as ClassificationResult;
  } catch {
    return {
      criticality: "critical",
      confidence: 0,
      reason: "Classification failed - defaulting to critical for safety",
      suggestedAction: "Manual review required",
    };
  }
}
โš ๏ธ Important: The classifier defaults to critical when classification fails. This fail-safe ensures that no important message is accidentally auto-replied to without human review. Always design AI systems to fail conservatively.

Step 5 · Build the Auto-Acknowledgement Handler

For non-critical messages addressed to Lessi Coulibaly at lessic@lessit.net or contact@lessit.net, the agent automatically sends a professional acknowledgement confirming receipt and promising a follow-up.

src/autoReply.ts

// PURPOSE: Auto-acknowledge non-critical messages addressed to monitored mailboxes
// WHY: Ensures senders get immediate confirmation while Lessi reviews at their own pace
// SCOPE: Only replies to messages sent to lessic@lessit.net or contact@lessit.net

import { sendReply, sendTeamsReply } from "./graphClient";

// Build the list of monitored email addresses from environment configuration
// These are the addresses the agent watches and auto-replies from
const MONITORED_ADDRESSES = [
  process.env.MONITORED_EMAIL_1?.toLowerCase(),
  process.env.MONITORED_EMAIL_2?.toLowerCase(),
].filter(Boolean) as string[];

// Email acknowledgement template - professional, sets clear expectations
// WHY: Confirms receipt and promises a human follow-up without committing to a timeline
const ACK_TEMPLATE_EMAIL = `Hi,

Thank you for your message. This confirms that it has been received.

Lessi Coulibaly will review it and follow up with you shortly.

Best regards,
Lessi Coulibaly
LessIT | Cybersecurity Consultant
https://lessit.net`;

// Teams acknowledgement template - shorter format suited for chat context
const ACK_TEMPLATE_TEAMS = `Thanks for your message! This is to confirm it was well received. ` +
  `Lessi Coulibaly will follow up shortly.`;

// Check if any recipient address matches a monitored mailbox
// WHY: The agent should only auto-reply to messages actually sent to Lessi's addresses
export function isMonitoredRecipient(toAddresses: string[]): boolean {
  return toAddresses.some((addr) =>
    MONITORED_ADDRESSES.includes(addr.toLowerCase())
  );
}

// Send the appropriate acknowledgement based on channel (email or Teams)
// WHY: Each channel has a different format and API call pattern
export async function sendAutoAcknowledgement(
  channel: "email" | "teams",
  options: {
    userEmail?: string;
    messageId?: string;
    chatId?: string;
  }
) {
  if (channel === "email" && options.userEmail && options.messageId) {
    await sendReply(
      options.userEmail,
      options.messageId,
      ACK_TEMPLATE_EMAIL
    );
    console.log(
      `[AUTO-ACK] Email acknowledgement sent from ${options.userEmail}`
    );
  } else if (channel === "teams" && options.chatId) {
    await sendTeamsReply(options.chatId, ACK_TEMPLATE_TEAMS);
    console.log("[AUTO-ACK] Teams acknowledgement sent");
  }
}
💡 Pro Tip: The acknowledgement template is professional and sets clear expectations: the sender knows their message was received and a human follow-up is coming. Avoid overly casual language or making specific time commitments that may not be met.
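The recipient guard can be exercised in isolation. A standalone version with the addresses inlined (the real module reads them from the environment, as shown above):

```typescript
// Addresses the agent watches - inlined here for a self-contained demo
const MONITORED_ADDRESSES = ["lessic@lessit.net", "contact@lessit.net"];

// True when any recipient address matches a monitored mailbox.
// The check is case-insensitive on the recipient side, since senders
// often type addresses with mixed casing.
function isMonitoredRecipient(toAddresses: string[]): boolean {
  return toAddresses.some((addr) =>
    MONITORED_ADDRESSES.includes(addr.toLowerCase())
  );
}

// isMonitoredRecipient(["Contact@Lessit.net"]) -> true
// isMonitoredRecipient(["someone@else.com"])   -> false
```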

Step 6 · Implement Draft Reply Generation for Critical Messages

For critical messages, the agent generates a context-aware draft reply using Azure OpenAI and presents it to Lessi for review before sending. The draft is surfaced via an Adaptive Card in Teams for easy approve/edit/reject actions.

src/draftGenerator.ts

// PURPOSE: Generate context-aware draft replies for critical messages
// WHY: Gives Lessi a head start - AI drafts a professional reply for human review
// MODEL: GPT-4.1 with temperature=0.4 for creative yet professional tone
// SAFETY: Draft is NEVER sent automatically - always reviewed via Adaptive Card

import { AzureOpenAI } from "openai";

// Reuse a singleton OpenAI client for all draft generation calls
const openaiClient = new AzureOpenAI({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  apiVersion: "2025-04-01-preview",
});

// System prompt establishes Lessi's professional persona and reply guidelines
// WHY: Ensures consistent tone and prevents the model from making commitments
const DRAFT_SYSTEM_PROMPT = `You are a professional email assistant for Lessi Coulibaly,
a cybersecurity consultant at LessIT. Generate a draft reply to the following message.

Guidelines:
- Be professional, concise, and helpful
- Match the formality level of the incoming message
- Acknowledge the sender's concern or request specifically
- If the message involves a security incident, include standard IR language
- If the message involves a meeting or deadline, confirm availability
  or request clarification
- Sign off as "Lessi Coulibaly, LessIT | Cybersecurity Consultant"
- Do NOT make commitments you cannot verify (pricing, timelines, etc.)
  - ask for clarification instead`;

// Generate a draft reply for a critical message
// INPUT: Original message metadata + classification context from Step 4
// OUTPUT: Professional draft reply string (or fallback message on failure)
// NOTE: Body truncated to 3000 chars to stay within token limits
export async function generateDraftReply(
  subject: string,
  body: string,
  sender: string,
  classification: { reason: string; suggestedAction: string }
): Promise<string> {
  // Include classification context so the model understands why this is critical
  const userPrompt = [
    `Original message from: ${sender}`,
    `Subject: ${subject}`,
    `Classification reason: ${classification.reason}`,
    `Suggested action: ${classification.suggestedAction}`,
    `---`,
    `Message body:`,
    body.substring(0, 3000),
    `---`,
    `Generate a professional draft reply.`,
  ].join("\n");

  // temperature=0.4 balances creativity with professionalism
  // Higher than classifier (0.1) because replies need natural variation
  const response = await openaiClient.chat.completions.create({
    model: process.env.AZURE_OPENAI_DEPLOYMENT!,
    messages: [
      { role: "system", content: DRAFT_SYSTEM_PROMPT },
      { role: "user", content: userPrompt },
    ],
    temperature: 0.4,
    max_tokens: 800,
  });

  // Return the draft or a fallback message if generation fails
  return (
    response.choices[0]?.message?.content ??
    "Unable to generate draft. Please compose a reply manually."
  );
}

Step 7 · Build the Human-in-the-Loop Review via Adaptive Cards

When a critical message is received, the agent sends an Adaptive Card to Lessi’s Teams chat. The card shows the original message summary, the AI-generated draft reply, and three action buttons: Approve & Send, Edit & Send, and Dismiss.

src/adaptiveCard.ts

// PURPOSE: Build an Adaptive Card for human-in-the-loop review of critical messages
// WHY: Adaptive Cards render natively in Teams with interactive buttons
// ACTIONS: Approve & Send (sends draft as-is), Edit & Send (sends modified text), Dismiss
// SPEC: Adaptive Card v1.5 - supported in Teams desktop, web, and mobile

export function buildReviewCard(
  messageId: string,
  sender: string,
  subject: string,
  criticality: string,
  confidence: number,
  reason: string,
  draftReply: string,
  channel: "email" | "teams"
) {
  return {
    type: "AdaptiveCard",
    version: "1.5",
    body: [
      {
        type: "TextBlock",
        text: "🚨 Critical Message - Review Required",
        weight: "Bolder",
        size: "Large",
        color: "Attention",
      },
      {
        type: "FactSet",
        facts: [
          { title: "From", value: sender },
          { title: "Subject", value: subject },
          { title: "Channel", value: channel === "email" ? "📧 Outlook" : "💬 Teams" },
          { title: "Criticality", value: `${criticality} (${Math.round(confidence * 100)}%)` },
          { title: "Reason", value: reason },
        ],
      },
      {
        type: "TextBlock",
        text: "AI-Generated Draft Reply:",
        weight: "Bolder",
        spacing: "Medium",
      },
      {
        type: "TextBlock",
        text: draftReply,
        wrap: true,
        spacing: "Small",
      },
      {
        type: "Input.Text",
        id: "editedReply",
        label: "Edit the reply (optional):",
        isMultiline: true,
        placeholder: "Modify the draft above or leave blank to send as-is...",
        spacing: "Medium",
      },
    ],
    actions: [
      {
        type: "Action.Submit",
        title: "✅ Approve & Send",
        style: "positive",
        data: {
          action: "approve",
          messageId,
          channel,
          draftReply,
        },
      },
      {
        type: "Action.Submit",
        title: "โœ๏ธ Edit & Send",
        data: {
          action: "edit",
          messageId,
          channel,
        },
      },
      {
        type: "Action.Submit",
        title: "โŒ Dismiss",
        style: "destructive",
        data: {
          action: "dismiss",
          messageId,
        },
      },
    ],
  };
}
💡 Pro Tip: The Adaptive Card includes an editable text input so Lessi can modify the AI-generated draft before sending. This combination of AI assistance and human control is the gold standard for enterprise AI agents.
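When a button is pressed, the `Action.Submit` payload arrives at the bot as the card's `data` object merged with any input values (here `editedReply`). The dispatch logic is not shown in this lab chunk; a hypothetical sketch of how the submit payload might be resolved into a final reply (field names match the card above):

```typescript
// Shape of the submit payload produced by the review card
interface ReviewSubmit {
  action: "approve" | "edit" | "dismiss";
  messageId: string;
  channel?: "email" | "teams";
  draftReply?: string;   // carried in the Approve button's data
  editedReply?: string;  // populated from the Input.Text field
}

// Returns the text to send, or null when no reply should go out
export function resolveFinalReply(submit: ReviewSubmit): string | null {
  switch (submit.action) {
    case "approve":
      // Approve & Send: use the AI draft as-is
      return submit.draftReply ?? null;
    case "edit":
      // Edit & Send: prefer the edited text; fall back to the draft if blank
      return submit.editedReply?.trim() || submit.draftReply || null;
    case "dismiss":
      // Dismiss: no reply is sent
      return null;
  }
}
```

The `edit` branch falls back to the original draft so an accidental empty submit never sends a blank email.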

Step 8 · Wire Up the Agent Core Logic

Bring all the pieces together in the main agent handler. This module processes incoming messages, classifies them, and routes them to the appropriate handler (auto-acknowledge or human review).

src/agent.ts

// PURPOSE: Core agent logic - routes incoming messages to the correct handler
// FLOW: Classify → (non-critical: auto-ack) or (critical: draft → Adaptive Card → review)
// WHY: Single entry point for all message processing, regardless of channel

import { classifyMessage } from "./classifier";
import { isMonitoredRecipient, sendAutoAcknowledgement } from "./autoReply";
import { generateDraftReply } from "./draftGenerator";
import { buildReviewCard } from "./adaptiveCard";
import { graphClient } from "./graphClient";

// Unified message interface - normalises email and Teams message data
interface IncomingMessage {
  id: string;
  subject: string;
  body: string;
  sender: string;
  toAddresses: string[];
  importance: string;
  channel: "email" | "teams";
  chatId?: string;
  userEmail?: string;
}

// Main entry point - processes every incoming message through the classification pipeline
export async function handleIncomingMessage(message: IncomingMessage) {
  console.log(
    `[AGENT] Processing ${message.channel} from ${message.sender}: ${message.subject}`
  );

  // Step 1: Classify the message
  const classification = await classifyMessage(
    message.subject,
    message.body,
    message.sender,
    message.importance
  );

  console.log(
    `[AGENT] Classification: ${classification.criticality} ` +
    `(confidence: ${classification.confidence}) - ${classification.reason}`
  );

  // Step 2: Route based on classification
  if (classification.criticality === "non-critical") {
    // Auto-acknowledge if addressed to monitored mailboxes
    if (isMonitoredRecipient(message.toAddresses)) {
      await sendAutoAcknowledgement(message.channel, {
        userEmail: message.userEmail,
        messageId: message.id,
        chatId: message.chatId,
      });

      // Mark email as read
      if (message.channel === "email" && message.userEmail) {
        await graphClient
          .api(`/users/${message.userEmail}/messages/${message.id}`)
          .update({ isRead: true });
      }
    }
  } else {
    // Critical: generate draft and send for human review
    const draftReply = await generateDraftReply(
      message.subject,
      message.body,
      message.sender,
      classification
    );

    const card = buildReviewCard(
      message.id,
      message.sender,
      message.subject,
      classification.criticality,
      classification.confidence,
      classification.reason,
      draftReply,
      message.channel
    );

    // Send the Adaptive Card to Lessi's personal Teams chat with the agent
    await sendAdaptiveCardToReviewer(card);

    console.log("[AGENT] Critical message - review card sent to Teams");
  }
}

// Send an Adaptive Card to Lessi's Teams chat for human review
// WHY: Proactive messaging ensures critical messages are surfaced immediately
// REQUIRES: REVIEWER_CHAT_ID - the bot's 1:1 chat ID with Lessi (set during initial interaction)
async function sendAdaptiveCardToReviewer(card: object) {
  // Send proactive message to Lessi's Teams conversation with the agent bot
  // This uses the bot's conversation reference stored during initial interaction
  const reviewerChatId = process.env.REVIEWER_CHAT_ID;
  if (!reviewerChatId) {
    console.error("[AGENT] REVIEWER_CHAT_ID not configured");
    return;
  }
  await graphClient.api(`/chats/${reviewerChatId}/messages`).post({
    // The body must reference the attachment by ID for the card to render
    body: {
      contentType: "html",
      content: `<attachment id="review-card"></attachment>`,
    },
    attachments: [
      {
        id: "review-card",
        contentType: "application/vnd.microsoft.card.adaptive",
        content: JSON.stringify(card),
      },
    ],
  });
}
โš ๏ธ Important: The agent marks non-critical emails as read after acknowledging them, preventing re-processing. Implement idempotency by tracking processed message IDs in a lightweight store (e.g., Azure Table Storage) to handle webhook redeliveries.

Step 9 · Configure Graph Change Notifications (Webhooks)

Instead of polling, use Microsoft Graph change notifications to receive real-time alerts when new emails arrive or Teams messages are posted. The agent subscribes to mail and chat resources.

src/webhooks.ts

// PURPOSE: Subscribe to Microsoft Graph change notifications for real-time message monitoring
// WHY: Webhooks are far more efficient than polling - the agent is notified instantly
// EXPIRY: Mail subscriptions last 3 days; Chat subscriptions last 1 hour (Graph API limits)
// NOTE: Subscriptions must be renewed before expiry to avoid missed notifications

import { graphClient } from "./graphClient";

// Subscribe to new emails for each monitored mailbox
export async function createMailSubscription(userEmail: string) {
  const subscription = await graphClient.api("/subscriptions").post({
    changeType: "created",
    notificationUrl: process.env.WEBHOOK_URL,
    resource: `/users/${userEmail}/mailFolders/inbox/messages`,
    expirationDateTime: new Date(
      Date.now() + 3 * 24 * 60 * 60 * 1000  // 3 days max for mail
    ).toISOString(),
    clientState: "ai-comms-agent-secret",
  });

  console.log(
    `[WEBHOOK] Mail subscription created for ${userEmail}: ${subscription.id}`
  );
  return subscription;
}

// Subscribe to Teams chat messages
export async function createChatSubscription() {
  const subscription = await graphClient.api("/subscriptions").post({
    changeType: "created",
    notificationUrl: process.env.WEBHOOK_URL,
    resource: "/chats/getAllMessages",
    expirationDateTime: new Date(
      Date.now() + 1 * 60 * 60 * 1000  // 1 hour max for chat, then renew
    ).toISOString(),
    clientState: "ai-comms-agent-secret",
    includeResourceData: true,
    encryptionCertificate: process.env.ENCRYPTION_CERT_BASE64,
    encryptionCertificateId: process.env.ENCRYPTION_CERT_ID,
  });

  console.log(`[WEBHOOK] Chat subscription created: ${subscription.id}`);
  return subscription;
}

// Renew subscriptions before they expire
export async function renewSubscription(subscriptionId: string, hours: number) {
  await graphClient.api(`/subscriptions/${subscriptionId}`).update({
    expirationDateTime: new Date(
      Date.now() + hours * 60 * 60 * 1000
    ).toISOString(),
  });
  console.log(`[WEBHOOK] Subscription renewed: ${subscriptionId}`);
}
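To satisfy the renewal note above, a small scheduler can invoke renewal on an interval shorter than each subscription's lifetime. A minimal sketch, assuming the `renew` callback wraps the `renewSubscription` function from this file (the 5-minute safety margin is an illustrative default):

```typescript
// SKETCH: Schedule periodic renewal a safety margin BEFORE each expiry window closes.
// ASSUMPTION: renew() wraps renewSubscription(subscriptionId, hours) shown above.
type RenewFn = (subscriptionId: string) => Promise<void>;

export function scheduleRenewal(
  subscriptionId: string,
  lifetimeMs: number,               // e.g. 3 days for mail, 1 hour for chat
  renew: RenewFn,
  marginMs: number = 5 * 60 * 1000  // renew 5 minutes before expiry
): NodeJS.Timeout {
  // Never schedule a non-positive interval, even with an aggressive margin
  const intervalMs = Math.max(lifetimeMs - marginMs, 1000);
  return setInterval(() => {
    renew(subscriptionId).catch((err) =>
      console.error(`[WEBHOOK] Renewal failed for ${subscriptionId}:`, err)
    );
  }, intervalMs);
}
```

Register one timer per subscription at startup and clear it if the subscription is deleted.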

Express Webhook Endpoint

// PURPOSE: Express endpoint that receives Graph change notifications
// FLOW: Validate โ†’ respond 202 immediately โ†’ fetch full message โ†’ route to agent
// WHY: Graph expects a response within 3 seconds; async processing prevents timeouts
// SECURITY: clientState is verified on every notification to prevent spoofed webhooks

// In src/server.ts - webhook handler
import express from "express";
import { handleIncomingMessage } from "./agent";
import { graphClient } from "./graphClient";

const app = express();
app.use(express.json());

app.post("/api/notifications", async (req, res) => {
  // Handle Graph's subscription validation handshake (echo the token as text/plain)
  if (req.query.validationToken) {
    return res.status(200).type("text/plain").send(req.query.validationToken);
  }

  // Nothing to process - acknowledge empty payloads (clientState is checked per notification below)
  const notifications = req.body?.value;
  if (!notifications) return res.sendStatus(202);

  // Respond immediately - process asynchronously
  res.sendStatus(202);

  for (const notification of notifications) {
    if (notification.clientState !== "ai-comms-agent-secret") continue;

    try {
      const resource = notification.resource;

      if (/\/messages/i.test(resource)) {
        // Fetch the full message details (resource casing varies, e.g. "Users/{id}/Messages/{id}")
        const message = await graphClient.api(resource).get();
        const userEmail = resource.match(/users\/([^/]+)/i)?.[1];

        await handleIncomingMessage({
          id: message.id,
          subject: message.subject ?? "(No subject)",
          body: message.body?.content ?? "",
          sender: message.from?.emailAddress?.address ?? "unknown",
          toAddresses: (message.toRecipients ?? []).map(
            (r: any) => r.emailAddress?.address
          ),
          importance: message.importance ?? "normal",
          // Chat resources look like "chats('{chatId}')/messages('{messageId}')"
          channel: /chats[(/]/i.test(resource) ? "teams" : "email",
          chatId: resource.match(/chats\(?'?([^'/)]+)/i)?.[1],
          userEmail,
        });
      }
    } catch (err) {
      console.error("[WEBHOOK] Error processing notification:", err);
    }
  }
});
💡 Pro Tip: Always return 202 Accepted immediately and process notifications asynchronously. Microsoft Graph expects a response within 3 seconds; if your handler takes longer, the notification is retried, potentially causing duplicate processing.

Step 10 · Handle the Human Review Actions

When Lessi interacts with the Adaptive Card in Teams (approve, edit, or dismiss), the agent must handle the action and send the final reply if approved.

src/reviewHandler.ts

// PURPOSE: Handle human review actions from the Adaptive Card in Teams
// ACTIONS: approve (send draft as-is), edit (send modified text), dismiss (no action)
// WHY: Completes the human-in-the-loop cycle - the agent never sends critical replies autonomously

import { sendReply, sendTeamsReply, graphClient } from "./graphClient";

// Typed interface for the action payload submitted by the Adaptive Card
interface ReviewAction {
  action: "approve" | "edit" | "dismiss";
  messageId: string;
  channel: "email" | "teams";
  draftReply?: string;
  editedReply?: string;
}

// Process the reviewer's decision and send the reply (or dismiss)
// INPUT: Action payload from Adaptive Card + the monitored mailbox address
// OUTPUT: Status object indicating what action was taken
export async function handleReviewAction(data: ReviewAction, userEmail: string) {
  switch (data.action) {
    case "approve": {
      // Send the AI-generated draft as-is
      const reply = data.draftReply ?? "Thank you for your message.";
      if (data.channel === "email") {
        await sendReply(userEmail, data.messageId, reply);
      } else {
        const chatId = data.messageId; // In Teams context
        await sendTeamsReply(chatId, reply);
      }
      console.log("[REVIEW] Approved and sent");
      return { status: "sent", message: "Reply sent successfully." };
    }

    case "edit": {
      // Send the human-edited version
      const editedReply = data.editedReply;
      if (!editedReply?.trim()) {
        return { status: "error", message: "Edited reply is empty." };
      }
      if (data.channel === "email") {
        await sendReply(userEmail, data.messageId, editedReply);
      } else {
        await sendTeamsReply(data.messageId, editedReply);
      }
      console.log("[REVIEW] Edited and sent");
      return { status: "sent", message: "Edited reply sent." };
    }

    case "dismiss": {
      console.log("[REVIEW] Dismissed - no action taken");
      return { status: "dismissed", message: "Message dismissed." };
    }

    default:
      return { status: "error", message: "Unknown action." };
  }
}
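Because the Adaptive Card's Action.Submit payload arrives as untyped JSON, it is worth validating it before calling handleReviewAction. A minimal sketch (`parseReviewAction` is a hypothetical helper; the field names mirror the `ReviewAction` interface above, which is repeated here so the snippet stands alone):

```typescript
// SKETCH: Validate an untyped Action.Submit payload before passing it to handleReviewAction.
// ASSUMPTION: the card submits { action, messageId, channel, draftReply?, editedReply? }.
interface ReviewAction {
  action: "approve" | "edit" | "dismiss";
  messageId: string;
  channel: "email" | "teams";
  draftReply?: string;
  editedReply?: string;
}

// Returns a typed action, or null if the payload is malformed or spoofed
export function parseReviewAction(data: unknown): ReviewAction | null {
  if (typeof data !== "object" || data === null) return null;
  const d = data as Record<string, unknown>;
  const validAction =
    d.action === "approve" || d.action === "edit" || d.action === "dismiss";
  const validChannel = d.channel === "email" || d.channel === "teams";
  if (!validAction || !validChannel || typeof d.messageId !== "string") return null;
  return d as unknown as ReviewAction;
}
```

Returning `null` for anything unexpected keeps malformed card submissions from triggering a send.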

Step 11 · Deploy to Azure App Service

Deploy the agent to Azure App Service with managed identity for secure, always-on operation. The agent must be publicly accessible for Graph webhook notifications.

Azure CLI Deployment

# PURPOSE: Create a resource group for the agent
az group create \
  --name rg-ai-comms-agent \
  --location eastus \
  --tags Project=AICommsAgent Environment=Production

# PURPOSE: Create an App Service plan (B1 tier for always-on)
az appservice plan create \
  --name plan-ai-comms-agent \
  --resource-group rg-ai-comms-agent \
  --sku B1 \
  --is-linux

# PURPOSE: Create the Web App with Node.js 20 runtime
az webapp create \
  --name ai-comms-agent-lessit \
  --resource-group rg-ai-comms-agent \
  --plan plan-ai-comms-agent \
  --runtime "NODE:20-lts"

# PURPOSE: Enable managed identity for Key Vault and Graph access
az webapp identity assign \
  --name ai-comms-agent-lessit \
  --resource-group rg-ai-comms-agent

# PURPOSE: Configure app settings (secrets should come from Key Vault)
az webapp config appsettings set \
  --name ai-comms-agent-lessit \
  --resource-group rg-ai-comms-agent \
  --settings \
    AZURE_TENANT_ID="your-tenant-id" \
    AZURE_CLIENT_ID="your-client-id" \
    MONITORED_EMAIL_1="lessic@lessit.net" \
    MONITORED_EMAIL_2="contact@lessit.net" \
    MONITORED_USER_NAME="Lessi Coulibaly" \
    WEBHOOK_URL="https://ai-comms-agent-lessit.azurewebsites.net/api/notifications"

# PURPOSE: Deploy the compiled code
npm run build
az webapp deploy \
  --name ai-comms-agent-lessit \
  --resource-group rg-ai-comms-agent \
  --src-path dist/ \
  --type zip
โš ๏ธ Important: Enable Always On in the App Service configuration to prevent the agent from going idle. An idle agent will miss webhook notifications and delay message processing.

Step 12 · Review, Monitor, and Validate

Set up Application Insights for monitoring, validate the end-to-end flow, and establish governance rules for autonomous agent actions.

Monitoring with Application Insights

# PURPOSE: Create Application Insights for the agent
az monitor app-insights component create \
  --app ai-comms-agent-insights \
  --location eastus \
  --resource-group rg-ai-comms-agent \
  --application-type web

# PURPOSE: Connect App Insights to the Web App
az webapp config appsettings set \
  --name ai-comms-agent-lessit \
  --resource-group rg-ai-comms-agent \
  --settings \
    APPLICATIONINSIGHTS_CONNECTION_STRING="your-connection-string"

Validation Test Matrix

| Test Scenario | Expected Behaviour | Status |
| --- | --- | --- |
| Newsletter email to lessic@lessit.net | Auto-acknowledge: “Message received, follow-up to come” | ☐ |
| Meeting invite to contact@lessit.net | Auto-acknowledge with standard template | ☐ |
| Client escalation email (high importance) | Draft reply generated, Adaptive Card sent to Teams for review | ☐ |
| Security incident notification | Classified as critical, draft includes IR language, human reviews | ☐ |
| Teams DM with a question | Auto-acknowledge if non-critical; draft + review if critical | ☐ |
| Approve draft reply from Adaptive Card | Reply sent via Graph API, confirmation logged | ☐ |
| Edit and send from Adaptive Card | Edited text sent, original draft not sent | ☐ |
| Dismiss from Adaptive Card | No reply sent, action logged | ☐ |
| Email to unmonitored address | No action taken (agent only monitors configured addresses) | ☐ |
| Classification failure (OpenAI unavailable) | Defaults to critical and sends for human review | ☐ |

Governance Checklist

| Control | Status |
| --- | --- |
| Application access policy limits agent to monitored mailboxes only | ☐ |
| All auto-sent messages are logged to Application Insights | ☐ |
| Client secret stored in Azure Key Vault (not app settings) | ☐ |
| Managed identity used for production authentication | ☐ |
| Webhook client state validated on every notification | ☐ |
| Auto-reply includes clear identification as automated message | ☐ |
| Subscription renewal runs on schedule (before expiry) | ☐ |
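The Key Vault control in the checklist can be implemented with a Key Vault reference, so the secret never appears in plain app settings. A sketch, in which the vault name (kv-ai-comms-agent), secret name (GraphClientSecret), and placeholder object ID are all illustrative:

```shell
# SKETCH: Store the client secret in Key Vault (vault/secret names are illustrative)
az keyvault secret set \
  --vault-name kv-ai-comms-agent \
  --name GraphClientSecret \
  --value "your-client-secret"

# Grant the Web App's managed identity read access to secrets
az keyvault set-policy \
  --name kv-ai-comms-agent \
  --object-id "<web-app-managed-identity-object-id>" \
  --secret-permissions get

# Reference the secret from app settings - the value never lands in configuration
az webapp config appsettings set \
  --name ai-comms-agent-lessit \
  --resource-group rg-ai-comms-agent \
  --settings AZURE_CLIENT_SECRET="@Microsoft.KeyVault(VaultName=kv-ai-comms-agent;SecretName=GraphClientSecret)"
```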

What You Accomplished

In this lab you:

  • Registered an Entra ID application with Microsoft Graph application permissions for mail and Teams.
  • Scaffolded a TypeScript project using the Microsoft 365 Agents SDK and Azure OpenAI.
  • Built a reusable Microsoft Graph client for reading and sending emails and Teams messages.
  • Implemented AI-powered message classification that distinguishes critical from non-critical messages.
  • Created an auto-acknowledgement system for non-critical messages addressed to lessic@lessit.net and contact@lessit.net.
  • Built a context-aware draft reply generator for critical messages using Azure OpenAI.
  • Designed a human-in-the-loop review flow using Adaptive Cards in Microsoft Teams.
  • Configured Microsoft Graph change notifications for real-time message monitoring.
  • Deployed the agent to Azure App Service with managed identity and monitoring.
  • Validated the complete flow across 10 test scenarios.

Next Steps

  • Add a feedback loop: when Lessi edits a draft, feed the edit back into the model for improved future drafts.
  • Expand to multiple users: create a configuration file mapping users to their monitored addresses and classification preferences.
  • Add Outlook categories: have the agent categorise processed emails (Auto-Acknowledged, Pending Review, Replied) for easy visual tracking.
  • Integrate with Copilot Studio: turn the agent into a declarative Copilot agent for broader M365 integration.
  • Build a weekly summary report of agent activity (messages processed, auto-replies sent, critical messages surfaced).
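The Outlook categories idea above maps to a single Graph PATCH on a message's `categories` property. A minimal sketch, assuming the authenticated Graph client built earlier in the lab (the helper name and category labels are illustrative; uncreated category names appear in Outlook without a colour):

```typescript
// SKETCH: Tag a processed email with an Outlook category via Microsoft Graph.
// ASSUMPTION: graphClient is the authenticated client from src/graphClient.ts;
// category names must be created in Outlook to get colours.
type AgentCategory = "Auto-Acknowledged" | "Pending Review" | "Replied";

export async function categoriseMessage(
  graphClient: { api: (path: string) => { update: (body: object) => Promise<unknown> } },
  userEmail: string,
  messageId: string,
  category: AgentCategory
): Promise<void> {
  // PATCH /users/{id}/messages/{id} with the categories property
  await graphClient
    .api(`/users/${userEmail}/messages/${messageId}`)
    .update({ categories: [category] });
}
```

Calling this after each agent action gives an at-a-glance audit trail directly in the inbox.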

📚 Documentation Resources

| Resource | Description |
| --- | --- |
| Microsoft 365 Agents SDK | Official documentation for building AI agents that integrate with Microsoft 365 services |
| Microsoft Graph Mail API | API reference for reading, sending, and managing emails via Microsoft Graph |
| Microsoft Graph Teams API | API reference for Teams chat messages, channels, and team management |
| Microsoft Graph Change Notifications | How to set up webhooks for real-time notifications when resources change in Microsoft 365 |
| Azure OpenAI Service | Documentation for deploying and using large language models via Azure OpenAI |
| Adaptive Cards Designer | Visual design tool for building Adaptive Cards used in Teams, Outlook, and other Microsoft surfaces |
| Application Access Policy for Graph | How to scope application mail permissions to specific mailboxes using access policies |