Autonomous Workflows
Autonomous workflows run investigations automatically. These are agents that operate 24/7, continuously hunting threats, triaging alerts, and analyzing security data. They're ideal for scaling security operations and ensuring consistent coverage even outside business hours.
When to Use Autonomous Workflows
Autonomous workflows solve a fundamental problem: your security team can't be everywhere at once. Alerts fire outside business hours, threat intelligence arrives without warning, and threat hunting takes time you don't have.
Use autonomous workflows when:
Alert Fatigue — Your team is drowning in alerts. You receive 100+ alerts per day, and manually triaging each one wastes analyst time. Autonomous triage can instantly classify 80% as benign, leaving only real threats for human review.
Continuous Monitoring Need — You need 24/7 security coverage but can't staff analysts around the clock. Autonomous workflows hunt threats, analyze logs, and respond while your team sleeps.
Threat Intelligence Velocity — Threat reports arrive faster than you can investigate them. When a zero-day drops or a major breach is reported, you need instant visibility into whether you're affected—not days later when analysts get to it.
Routine Operations — You have repetitive security tasks that follow predictable patterns: weekly coverage analysis, daily risk assessments, incident ticket creation, team notifications. These are perfect for automation.
Detection Library Maintenance — Your detection rules need regular analysis to identify coverage gaps and validate hit rates. Monthly or quarterly coverage reviews are tedious for humans but trivial for agents.
Incident Response Workflow — When alerts fire, you follow a predictable process: gather context, enrich with threat intel, create tickets, notify teams. This entire workflow can be automated.
Scaling Your Operations — You want to grow security operations (more rules, more monitoring, more threat hunting) while keeping your team on higher-value work. Agents handle the routine operations, freeing analysts for complex investigations and strategy.
Autonomous workflows can also jumpstart work in complex cases:
Complex judgment required — Even when a finding needs human judgment, let an autonomous workflow gather all the context first: related activity, threat intelligence, user history, impact assessment. Your analyst gets a complete briefing instead of starting from zero.
Novel investigations — When investigating something new, start with an autonomous workflow to do initial reconnaissance and gather facts. Then hand off to interactive investigations for the exploratory analysis and judgment calls.
Low-volume, high-stakes alerts — Even with 2-3 critical alerts daily, an autonomous workflow can do the initial heavy lifting (enrichment, ticket creation, team notification) while you focus on the decision-making part.
Setup: Claude Agent SDK
There are various frameworks for building autonomous agents, but the Claude Agent SDK is recommended for Scanner workflows. It has shown excellent results for our users building detection and response agents, with straightforward integration with Scanner MCP and reliable agent behavior at scale.
1. Create requirements.txt with these dependencies for our examples:
claude-agent-sdk
python-dotenv
rich
These are:
claude-agent-sdk — The SDK for building autonomous agents that use Claude as the reasoning engine
python-dotenv — Loads environment variables from your .env file (API keys, credentials, MCP URLs)
rich — Formats output with colors, tables, and styled text for readable terminal output
2. Install dependencies:
pip install -r requirements.txt
3. Create .env file with your credentials:
ANTHROPIC_API_KEY=your-anthropic-api-key
SCANNER_MCP_URL=https://mcp.your-env-here.scanner.dev/v1/mcp
SCANNER_MCP_API_KEY=your-scanner-api-key
Building Your First Autonomous Agent
Here's a minimal agent that investigates the highest severity alert from the last hour:
#!/usr/bin/env python3
import asyncio
import os
from datetime import datetime, timedelta
from dotenv import load_dotenv
from rich import print as rprint
from claude_agent_sdk import query, ClaudeAgentOptions
async def alert_investigation_agent():
load_dotenv()
# Configure agent with Scanner MCP
options = ClaudeAgentOptions(
model="claude-sonnet-4-5-20250929",
allowed_tools=[
"mcp__scanner__get_scanner_context",
"mcp__scanner__execute_query",
"mcp__scanner__fetch_cached_results",
],
mcp_servers={
"scanner": {
"type": "http",
"url": os.environ.get("SCANNER_MCP_URL"),
"headers": {
"Authorization": f"Bearer {os.environ.get('SCANNER_MCP_API_KEY')}"
}
}
}
)
# Define investigation objective
prompt = """
Query Scanner to find the highest severity detection alert from
the last 1 hour. Then perform the following investigation:
1. Explain the alert: What is the detection rule looking for?
What threat does it represent?
2. Search for related activity: Find other events from the same
user, source IP, or account within the past 24 hours that
might indicate a broader attack or compromise.
3. Assess impact: Which users, systems, or data are affected?
What is the scope of this incident?
4. Cite evidence: Reference specific log events and timestamps
that support your findings.
5. Classify the alert as either:
- True Positive: Actual malicious activity with evidence
- False Positive: Benign activity triggering the rule
Include your confidence level (high/medium/low) and
reasoning for the classification.
"""
rprint(f"[{datetime.now().isoformat()}] Starting alert investigation...")
async for message in query(prompt=prompt, options=options):
rprint(message)
rprint(f"[{datetime.now().isoformat()}] Alert investigation complete.")
if __name__ == "__main__":
asyncio.run(alert_investigation_agent())
Run the agent:
python agent.py
Schedule execution (e.g., every hour):
# Add to crontab (runs every hour)
0 * * * * cd /path/to/agent && python agent.py >> agent.log 2>&1
Autonomous Workflow Examples
Example 1: Basic Automated Alert Triage
This is the foundational autonomous workflow: automatically classify incoming alerts to reduce noise and prioritize analyst time.
Most detection alerts are false positives—legitimate activity that triggered a rule (e.g., a developer testing permissions, an automated backup job, a legitimate data export). Manually reviewing every alert wastes analyst time. This agent triages alerts instantly by:
Gathering context — Queries Scanner to find related activity from the same user, IP, or time period
Building a baseline — Compares this activity against the user's historical behavior to spot anomalies
Spotting patterns — Looks for indicators that distinguish legitimate activity from actual attacks
Classifying the alert — Makes a decision: benign (close it), suspicious (flag for review), or malicious (escalate immediately)
This agent processes incoming alerts and classifies them automatically:
async def alert_triage_agent(alert_id: str, alert_summary: str):
"""
Automatically triage an incoming alert:
- Quick benign assessment (no further action needed)
- Flag for human review (suspicious but needs context)
- Escalate (high confidence malicious activity)
"""
prompt = f"""
I'm receiving an alert and need you to perform automated triage:
**Alert ID**: {alert_id}
**Summary**: {alert_summary}
Please:
1. Query Scanner to gather related context (logs from same user, time, source IP)
2. Compare against that user's historical behavior
3. Check for patterns indicating false positive vs. true threat
4. Classify as one of:
- ✅ BENIGN: Legitimate activity, close alert
- ⚠️ SUSPICIOUS: Warrants investigation, flag for human review
- 🔴 MALICIOUS: High confidence threat, escalate immediately
Include:
- Your confidence level (percentage)
- Key evidence supporting classification
- Specific recommendations (what a human should do next)
- Any related alerts or activity pattern that should be investigated together
"""
async for message in query(prompt=prompt, options=options):
rprint(message)
Deployment: Set this agent to trigger whenever a new alert arrives (via webhook, SNS, or your alert platform's API).
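For example, a minimal webhook receiver might look like the sketch below. It assumes FastAPI is installed, that it lives alongside alert_triage_agent, and that your alert platform POSTs JSON containing alert_id and summary fields; adapt the route and field names to your platform.
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/alerts")
async def handle_alert(request: Request):
    # Field names here are assumptions; adjust them to your alert platform's webhook schema
    payload = await request.json()
    await alert_triage_agent(payload["alert_id"], payload["summary"])
    return {"status": "triage started"}
Run it with any ASGI server (for example, uvicorn) and point your alert platform's webhook at the /alerts endpoint.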
Example 2: Autonomous Response Agent
This is a complete end-to-end autonomous incident response workflow. When a security alert fires, this agent:
Investigates the alert — Queries Scanner to understand what triggered it, who's involved, and what related activity exists
Enriches with threat intelligence — Searches VirusTotal for any IPs, domains, or file hashes to determine if they're known malicious
Takes action — Creates a ticket in your incident management system (Linear) with all investigation details and recommended response
Notifies the team — Posts a summary to Slack so your security team sees the alert, findings, and ticket immediately
Persists for audit — Sends the full conversation to a hypothetical audit service (e.g., persisting the conversation to S3) for later review, compliance, and learning
This example demonstrates orchestrating multiple MCP servers together to automate what would normally be a manual investigation workflow. Instead of:
An analyst seeing an alert
Manually investigating in multiple tools
Creating a ticket
Notifying the team
Documenting the investigation
...the agent does all of this in seconds, allowing your team to focus on complex incidents that require judgment. The complete investigation conversation is automatically saved for audit, regulatory compliance, and post-incident review.
The key innovation here is that the agent doesn't just report findings—it takes action: it creates tickets, notifies teams, and maintains an audit trail automatically. This bridges the gap between detection and response.
async def autonomous_response_agent(alert_id: str):
"""
Automatically respond to a security alert:
1. Investigate the alert with Scanner
2. Gather threat intelligence from VirusTotal
3. Create incident ticket in Linear
4. Post summary to Slack
"""
options = ClaudeAgentOptions(
model="claude-sonnet-4-5-20250929",
allowed_tools=[
"mcp__scanner__get_scanner_context",
"mcp__scanner__execute_query",
"mcp__scanner__fetch_cached_results",
"mcp__virustotal__search",
"mcp__virustotal__get_ip_report",
"mcp__virustotal__get_domain_report",
"mcp__linear__create_issue",
"mcp__linear__update_issue",
"mcp__slack__send_message",
],
mcp_servers={
"scanner": {
"type": "http",
"url": os.environ.get("SCANNER_MCP_URL"),
"headers": {
"Authorization": f"Bearer {os.environ.get('SCANNER_MCP_API_KEY')}"
}
},
"virustotal": {
"command": "npx",
"args": ["@burtthecoder/mcp-virustotal"],
"env": {
"VIRUSTOTAL_API_KEY": os.environ.get("VIRUSTOTAL_API_KEY"),
}
},
"linear": {
"command": "npx",
"args": ["-y", "mcp-remote", "https://mcp.linear.app/sse"],
},
"slack": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"SLACK_BOT_TOKEN",
"-e",
"SLACK_TEAM_ID",
"-e",
"SLACK_CHANNEL_IDS",
"mcp/slack",
],
"env": {
"SLACK_BOT_TOKEN": os.environ.get("SLACK_BOT_TOKEN"),
"SLACK_TEAM_ID": os.environ.get("SLACK_TEAM_ID"),
"SLACK_CHANNEL_IDS": os.environ.get("SLACK_CHANNEL_IDS"),
}
}
}
)
prompt = f"""
I'm receiving a security alert (ID: {alert_id}) and need you to execute a full response workflow.
Please:
1. **Investigate in Scanner**: Query Scanner for details about this alert. Get:
- What the detection rule is
- Who triggered it (user, IP, account)
- Related activity from that user/IP in the last 24 hours
- Severity and confidence assessment
2. **Gather Threat Intelligence**: For any IPs, domains, or file hashes involved:
- Search VirusTotal for IOCs
- Get malware/reputation information
- Check if these are known malicious
3. **Create Response Ticket**: Use Linear to create an incident ticket with:
- Title: Concise description (e.g., "Potential S3 data exfiltration - user john.smith")
- Description: Full investigation summary including:
* Alert details
* Related activity context
* VirusTotal findings
* Recommended response actions
* Evidence and specific log references
- Priority: Set based on severity and confidence (High/Medium/Low)
- Assignee: Assign to security team
4. **Notify Team**: Post to Slack with:
- Alert summary (one sentence)
- Severity level and confidence
- Key findings (user, impact, threat indicators)
- Link to Linear ticket
- Recommended immediate actions
5. **Classification**: Based on your analysis, classify as:
- Confirmed Threat: Malicious activity, immediate response needed
- Suspicious: Warrants investigation, monitoring required
- Likely False Positive: Benign activity
Include confidence level (high/medium/low)
"""
response = ""
async for message in query(prompt=prompt, options=options):
response += str(message)
rprint(message)
# Send investigation to your audit service (e.g., persists to S3 for later review)
await send_to_audit({
"timestamp": datetime.now().isoformat(),
"alert_id": alert_id,
"investigation": response
})
Deployment: Trigger via webhook when high-severity alerts are generated. The agent investigates the alert, gathers threat intelligence, creates tickets in Linear, and posts to Slack—all actions happen during the agent run. The investigation summary is then sent to your audit service for persistent storage and later review.
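The send_to_audit helper is intentionally left abstract. One possible sketch, assuming you persist investigations to S3 with boto3 and a bucket named in an AUDIT_BUCKET environment variable (both assumptions, not part of the example above):
import json
import os
import boto3

s3 = boto3.client("s3")

async def send_to_audit(record: dict):
    # Store each investigation under investigations/<alert_id>/<timestamp>.json
    key = f"investigations/{record['alert_id']}/{record['timestamp']}.json"
    s3.put_object(
        Bucket=os.environ["AUDIT_BUCKET"],
        Key=key,
        Body=json.dumps(record).encode("utf-8"),
        ContentType="application/json",
    )
Swap this out for whatever audit or compliance store your team uses.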
Example 3: Scheduled Detection Coverage Analysis
This agent takes a strategic view of your security posture by analyzing your entire detection rule library and identifying what you're not detecting.
The problem this solves: You might have great rules for privilege escalation, but zero coverage for lateral movement. Or you detect exfiltration attempts, but miss reconnaissance. You won't know these gaps until you analyze your coverage systematically.
What this agent does:
Reads your detection rules from GitHub — Pulls all Scanner detection rules you've written and stored in version control
Maps to threat frameworks — Extracts MITRE ATT&CK tags from each rule to understand what tactics and techniques you detect
Creates a coverage matrix — Shows which techniques are well-covered (multiple rules), poorly-covered (1 rule), or missing entirely (0 rules)
Queries hit rates — For each detection rule, searches Scanner logs to see if it's actually catching real activity or sitting idle
Identifies high-risk gaps — Focuses on critical attack paths you're not detecting: privilege escalation, persistence, data exfiltration, lateral movement
Recommends new rules — Suggests 3-5 detection rules you should build to fill the highest-risk gaps, prioritized by likelihood and impact
This example shows how to orchestrate Scanner MCP (for querying rules and logs) with GitHub MCP (for reading your detection rule repository). It's typically run weekly or monthly to stay aligned with evolving threats and ensure your detection investment matches your actual risk.
Setup: Configure the agent with Scanner MCP and GitHub MCP to access your detection rules repository:
async def coverage_analysis_agent():
"""
Analyze detection coverage across your security rules in GitHub.
Identify gaps and recommend new detections.
"""
options = ClaudeAgentOptions(
model="claude-sonnet-4-5-20250929",
allowed_tools=[
"mcp__scanner__get_scanner_context",
"mcp__scanner__execute_query",
"mcp__scanner__fetch_cached_results",
"mcp__github__get_repository_tree",
"mcp__github__get_file_contents",
],
mcp_servers={
"scanner": {
"type": "http",
"url": os.environ.get("SCANNER_MCP_URL"),
"headers": {
"Authorization": f"Bearer {os.environ.get('SCANNER_MCP_API_KEY')}"
}
},
"github": {
"type": "http",
"url": "https://api.githubcopilot.com/mcp/",
"headers": {
"Authorization": f"Bearer {os.environ.get('GITHUB_TOKEN')}"
}
}
}
)
prompt = """
Analyze our current detection rule coverage. Please:
1. **Review Existing Rules**: Read all detection rules from our GitHub
repository (github.com/your-org/security-detections). Extract the
MITRE ATT&CK tags from each rule.
2. **Query Detection Coverage**: For each rule, query Scanner to see
what it's catching (hit rates in the last 30 days).
3. **Map to MITRE ATT&CK**: Create a coverage matrix showing which
tactics and techniques we detect. Which are covered well? Which
are gaps?
4. **Identify High-Risk Gaps**: Look for critical attack paths we don't
detect:
- Privilege escalation techniques
- Persistence mechanisms
- Lateral movement patterns
- Data exfiltration methods
5. **Search Historical Data**: For each gap, query our logs to see if that
attack technique has occurred (and we just didn't detect it).
6. **Recommend New Detections**: Propose 3-5 detection rules we should
build, prioritized by likelihood and impact.
For each recommendation:
- Explain the threat
- Describe the detection logic
- Show example logs demonstrating the indicator
- Estimate false positive rate
"""
async for message in query(prompt=prompt, options=options):
rprint(message)
Deployment: Run weekly or monthly to stay aligned with evolving threats. Update your GitHub repository URL in the prompt as needed.
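As with the first agent, cron works well for scheduling. A sample crontab entry, assuming the script above is saved as coverage_agent.py:
# Runs the coverage analysis every Monday at 06:00
0 6 * * 1 cd /path/to/agent && python coverage_agent.py >> coverage.log 2>&1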
Example 4: IOC-Driven Investigation Agent
This agent automates threat intelligence response: when you receive a report about a threat campaign, malware family, or attack group, it instantly searches your entire environment to determine if you've been targeted or compromised.
The problem this solves: Threat intelligence reports arrive constantly—from security vendors, industry reports, and vendor disclosures. But manually investigating each one is time-consuming:
Extract indicators (IPs, domains, file hashes)
Search your logs for each indicator
Correlate findings across multiple data sources
Assess impact and timeline
Determine if you were actually affected
This agent automates the entire investigation in minutes.
What this agent does:
Extracts indicators of compromise (IOCs) — Parses the threat report to identify all relevant indicators: malicious IPs, command-and-control domains, file hashes, user agents, TLS certificates, etc.
Searches comprehensively — For each IOC, queries Scanner across your entire log history (not just recent data) to find any matches
Assesses impact — If matches are found, determines:
Which users were affected
Which systems or data were accessed
What the scope of the incident is
How long the activity was ongoing
Correlates with other activity — Looks for related suspicious activity from the same timeframe, IP range, or user that might indicate a broader compromise
Generates a report — Provides:
Executive summary of findings
Timeline of all events
List of affected assets and users
Recommended response actions
Whether this matches known threat actor patterns
This is particularly valuable for:
Supply chain incidents — When a vendor you use is compromised, check if the attack reached you
Industry-specific threats — When a report targets your industry, immediately assess your exposure
APT campaigns — When government agencies or researchers publish threat reports, quickly determine if you're in scope
Zero-day disclosures — When new CVEs are released, search for exploitation attempts in your environment
When you receive a threat intelligence report, automatically search your environment:
async def ioc_investigation_agent(threat_report_url: str):
"""
Given a threat report URL, automatically:
1. Extract indicators of compromise (IPs, domains, file hashes)
2. Search entire log history for those indicators
3. Generate impact assessment
"""
prompt = f"""
I'm providing a threat intelligence report and need you to investigate
whether we've been targeted or compromised:
**Report URL**: {threat_report_url}
Please:
1. **Extract IOCs**: Identify all indicators (IPs, domains, file hashes,
user agents, etc.)
2. **Search Comprehensively**: For each indicator, query Scanner across
ALL our logs and timeframes to find any matches.
3. **Assess Impact**: If found:
- Which users were affected?
- Which systems were targeted?
- What data was accessed?
- How long was the activity ongoing?
4. **Correlate with Other Activity**: Look for related suspicious activity
from the same timeframe, IP range, or user.
5. **Generate Report**: Provide:
- Executive summary of findings
- Timeline of events
- Affected assets
- Recommended response actions
- Whether this aligns with known threat actor patterns
If no IOCs are found, confirm that we show no evidence of this threat in
our environment.
"""
async for message in query(prompt=prompt, options=options):
rprint(message)
Deployment: Trigger manually or on a schedule whenever threat reports arrive.
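For manual triggering, a small command-line wrapper can pass the report URL into the agent. A minimal sketch, assuming the function above lives in the same script and asyncio is already imported:
import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run an IOC-driven investigation")
    parser.add_argument("report_url", help="URL of the threat intelligence report")
    args = parser.parse_args()
    asyncio.run(ioc_investigation_agent(args.report_url))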
Where to Go From Here
Autonomous workflows are one piece of a comprehensive security strategy. Use them alongside:
Interactive Investigations — For incidents that need human judgment. When an autonomous workflow flags something uncertain or complex, hand it off to an analyst who can guide the investigation in real-time.
Detection Engineering — For building the detections that autonomous workflows use. As autonomous workflows uncover new attack patterns, capture them as permanent detection rules so they catch future occurrences automatically.
The most effective teams use all three: interactive investigations for critical incidents requiring judgment, detection engineering to prevent the same attacks from recurring, and autonomous workflows to monitor continuously, triage alerts, and respond 24/7.