Interactive Investigations
Interactive investigations put you in the driver's seat. You ask questions, your AI queries Scanner and explores your security data, and you refine the investigation direction based on findings. This is how humans and AI work best together—combining human intuition and expertise with AI's ability to execute queries and correlate data at scale.
When to Use Interactive Investigations
Interactive investigations are best for:
Critical incidents — High-stakes situations where your expertise and judgment matter
Emerging leads — When you discover new information that changes your investigation direction
Ambiguous situations — Cases requiring human judgment (Is this legitimate admin work or a compromise?)
Unknown threats — Exploring unfamiliar territory where you don't know what to look for
Deep dives — Following a suspicious lead wherever it goes
For routine alert triage, continuous threat hunting, 24/7 monitoring, and automated response, see Autonomous Workflows. Automated response includes creating Jira tickets, posting to Slack, and opening GitHub PRs automatically. The most effective teams use both: interactive for complex investigations that need your expertise, autonomous for routine operations.
Why Interactive Investigations With AI Are Powerful
Interactive investigations with AI are powerful because they combine human judgment with AI's unique capabilities:
AI Handles Messy, Heterogeneous Data Naturally
Real security logs are inconsistent. The same IP address appears as:
sourceIPAddress: "203.0.113.45" in CloudTrail
request from 203.0.113.45 in application logs
src_ip=203.0.113.45 in proxy logs
Humans waste time normalizing and translating between formats. AI reads across these variations naturally, spotting the same indicator regardless of format or context.
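To make the variance concrete, here's a minimal Python sketch of the scaffolding a human-written normalizer needs just to pull one indicator out of three formats (the sample lines and regex are illustrative, not Scanner internals):

```python
import re

# Hypothetical sample lines in the three formats above.
log_lines = [
    '{"sourceIPAddress": "203.0.113.45", "eventName": "GetObject"}',  # CloudTrail
    'request from 203.0.113.45 handled in 12ms',                      # application log
    'src_ip=203.0.113.45 dst_port=443 action=allow',                  # proxy log
]

# A hand-rolled normalizer needs format-aware extraction; an AI reads
# across all three variations without this scaffolding.
IP_RE = re.compile(r'\b(\d{1,3}(?:\.\d{1,3}){3})\b')

def extract_ips(line: str) -> set[str]:
    """Pull every IPv4-looking token out of a raw log line."""
    return set(IP_RE.findall(line))

# All three formats yield the same indicator.
indicators = [extract_ips(line) for line in log_lines]
assert all(ips == {"203.0.113.45"} for ips in indicators)
```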
AI Correlates Across Unrelated Systems
Attacks leave traces across many systems with different schemas and purposes. An experienced analyst manually correlating activity might think:
Find failed auth attempts (check Auth0)
Look for role assumptions from that user (check CloudTrail)
See what data they accessed (check S3 logs)
Find unusual network activity (check VPC Flow logs)
Mentally connect these disparate events into a coherent attack story
Your AI does this automatically. It sees that failed auth → role assumption → unusual data access → external network connection form a coherent story, even when logs come from completely different systems.
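A rough sketch of what that correlation looks like mechanically, assuming hypothetical per-source field names (real schemas vary, which is exactly the friction the AI absorbs):

```python
from datetime import datetime

# Hypothetical pre-extracted events; each source names its user and
# timestamp fields differently, as real logs do.
events = [
    {"source": "auth0", "time": "2025-11-18T14:01:00", "user": "jdoe", "action": "failed_login"},
    {"source": "cloudtrail", "time": "2025-11-18T14:05:30", "userIdentity": "jdoe", "eventName": "AssumeRole"},
    {"source": "s3", "time": "2025-11-18T14:07:10", "requester": "jdoe", "operation": "GetObject"},
    {"source": "vpcflow", "time": "2025-11-18T14:09:45", "principal": "jdoe", "dst": "198.51.100.7"},
]

# Per-source mapping of "which field holds the principal".
USER_FIELDS = {"auth0": "user", "cloudtrail": "userIdentity", "s3": "requester", "vpcflow": "principal"}

def timeline_for(user: str, events: list[dict]) -> list[dict]:
    """Chronological cross-source activity for one principal."""
    matched = [e for e in events if e.get(USER_FIELDS[e["source"]]) == user]
    return sorted(matched, key=lambda e: datetime.fromisoformat(e["time"]))

story = timeline_for("jdoe", events)
# failed auth -> role assumption -> data access -> external connection
assert [e["source"] for e in story] == ["auth0", "cloudtrail", "s3", "vpcflow"]
```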
AI Writes Queries on Demand
You don't need to know query syntax. Describe what you're looking for:
Show me S3 access from users who normally don't access the sensitive-customer-data bucket
Your AI instantly:
Understands what you want (baseline access + anomalies)
Writes the Scanner query
Executes it
Interprets results
Suggests next steps
This removes the cognitive overhead of learning query syntax, allowing you to focus on investigation logic.
AI Maintains Investigation Rigor
You can ask your AI to be systematic:
Generate competing hypotheses
Design tests for each
Update confidence as evidence emerges
Avoid confirmation bias
Humans naturally anchor on first impressions. Your AI can be instructed to systematically test alternatives and converge on the most likely explanation.
AI Scales Your Investigation Depth
A human analyst can deeply investigate 5-10 alerts per day. With your AI, you can investigate 50+. Your AI doesn't get tired, doesn't miss details, and maintains the same investigation quality throughout. You remain the expert—your AI is the force multiplier that lets you go deeper and faster.
Choosing Your Tool
Three popular tools offer strong Scanner MCP support for interactive investigations. Choose the one that matches your workflow:
Claude Desktop
When to use: You're exploring interactively and want a conversational interface
Best for: Alert triage, quick investigations, asking follow-up questions without context switching
Example: "Can you explain what this alert means and should I escalate?" (natural conversation, back-and-forth refinement)
Claude Code
When to use: You want to integrate investigations into your terminal/CLI workflow
Best for: Scripted workflows, automation integration, batch processing multiple alerts
Example: Running investigations as part of incident response automation, or reviewing multiple alerts in sequence
Cursor (IDE integration)
When to use: You're working with detection rules or query development
Best for: Writing and testing Scanner queries, creating detection rules (see Detection Engineering)
Example: Developing new detection rules and validating them against your data
See Getting Started for setup instructions for each tool.
Investigation Methodology: The Hypothesis-Driven Approach
The most effective interactive investigations follow a structured methodology:
Generate Initial Hypotheses - Based on alert details or initial evidence, propose 2-4 most likely explanations
Prioritize by Probability - Rank hypotheses by likelihood based on threat intelligence, historical patterns, and entity context
Execute Targeted Queries - Design queries to test each hypothesis efficiently
Refine Based on Evidence - Update hypothesis rankings as evidence accumulates
Converge on Truth - Systematically eliminate hypotheses until confident in conclusion
Key principle: Let evidence guide your investigation, not assumptions. Be willing to follow surprising findings wherever they lead.
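The refinement loop in steps 3-4 can be sketched as a simple belief update. The hypotheses, priors, and likelihood weights below are illustrative, not a prescribed scoring system:

```python
# Hypothetical hypotheses with rough prior probabilities.
hypotheses = {
    "compromised_credentials": 0.5,
    "emergency_oncall_work": 0.3,
    "automation_under_user": 0.2,
}

def update(beliefs: dict[str, float], likelihoods: dict[str, float]) -> dict[str, float]:
    """Bayesian-style update: scale each prior by how well the
    evidence fits that hypothesis, then renormalize."""
    raw = {h: p * likelihoods[h] for h, p in beliefs.items()}
    total = sum(raw.values())
    return {h: v / total for h, v in raw.items()}

# Evidence: login came from a never-before-seen country.
# Fits compromise well; fits on-call work or automation poorly.
beliefs = update(hypotheses, {
    "compromised_credentials": 0.9,
    "emergency_oncall_work": 0.1,
    "automation_under_user": 0.05,
})

leader = max(beliefs, key=beliefs.get)
assert leader == "compromised_credentials"
```

As more query results arrive, repeated updates either entrench the leading hypothesis or surface a surprise worth following.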
Using the Methodology in Practice
The methodology works best when you explicitly ask your AI to follow it. There are two main approaches depending on investigation complexity:
Approach 1: Simple Cases (Natural Conversation)
For straightforward investigations, start with a natural question and let your AI guide you through the methodology iteratively via follow-up prompts.
Initial prompt:
I'm investigating an alert about unusual S3 access from user john.smith
on 2025-11-18 at 14:30 UTC. The alert shows 523 failed GetObject attempts
followed by 89 successful ones in rapid succession. Can you investigate
what happened and whether this is malicious activity?
Your AI will naturally explore hypotheses as you ask follow-up questions:
What explains the pattern of failures followed by successes?
Can you check what else this user was doing at that time?
Are there any signs of data exfiltration?
This conversational approach works well for simple alerts where your intuition guides the investigation.
Approach 2: Complex Cases (Explicit Methodology)
For serious incidents or when you need systematic investigation, explicitly instruct your AI to follow the hypothesis-driven methodology:
Initial prompt:
I need you to investigate a potential security incident involving user jane.smith.
Use the hypothesis-driven investigation methodology:
1. **Generate Initial Hypotheses** (2-4): Based on the context below, propose the most likely explanations for the activity, ranked by probability.
2. **Rank by Probability**: For each hypothesis, explain why it's likely or unlikely given what we know.
3. **Design Targeted Queries**: For each hypothesis, determine what evidence would prove or disprove it. Then query Scanner to test each hypothesis.
4. **Refine Based on Evidence**: As you receive query results, update your confidence levels. New evidence may shift probabilities or generate new hypotheses.
5. **Converge on Conclusion**: Systematically eliminate hypotheses until you reach high confidence in the true explanation.
**Context:**
- User: jane.smith (senior engineer, works normal 9-5 PT hours)
- Alert: 47 API calls to assume AWS roles at 2:15 AM PT
- Additional detail: These role assumptions were to critical infrastructure roles (admin-equivalent)
- Timeline: This is completely outside her normal behavior
Please start by generating 3-4 hypotheses and explaining which is most likely.
Your AI will then:
Generate explicit hypotheses like:
Hypothesis A (HIGH): Compromised credentials, attacker trying privilege escalation
Hypothesis B (MEDIUM): Emergency on-call work triggered by incident
Hypothesis C (LOW): Automation script running under her credentials
Design queries to test each:
For A: Check authentication logs for unusual login location/time, check if role assumptions succeeded, look for data access afterward
For B: Check incident tickets, communication logs, or runbooks from that timeframe
For C: Review what the role assumptions actually did, check if it matches known automation patterns
Execute queries and refine hypotheses based on results
Conclude with high-confidence classification and specific recommendations
When to use explicit methodology:
Serious incidents (potential compromise, data breach)
Unusual or complex attack patterns
When you need to defend conclusions to executives
When investigation will take many rounds of queries
When you want to ensure no critical angles are missed
Practical Workflow Examples
Example 1: Alert Triage with Iterative Refinement
Real triage often involves back-and-forth to refine the assessment:
Your initial prompt:
I just received an alert that admin_user created 12 IAM access keys in
the last 30 minutes. Is this normal? Should I escalate this?
Your AI's initial assessment: "This looks suspicious. 12 access keys in 30 minutes is unusual. The user hasn't created bulk access keys in the past 90 days. I'd recommend escalating this."
You refine based on context:
Wait—we're in the middle of migrating to a new IAM system. Can you check
if there's a related ticket or if this timing aligns with our migration window?
Also, which services are these keys for?
Your AI investigates further and provides an updated assessment: "Found ticket #4521 'IAM System Migration' created yesterday. The key names suggest they're for service accounts, not user accounts. This aligns with standard migration patterns. However, I still see something concerning: 3 of the 12 keys were created after the documented migration window ended. Can I investigate those 3 separately?"
You drill deeper:
Yes, please. Show me what those 3 keys were used for and if they accessed
anything unusual.
Final assessment:
9 keys: ✅ Legitimate (migration-related, accessed expected services)
3 keys: ⚠️ Suspicious (created outside window, accessed production databases admin_user normally doesn't touch)
Result: Close most of the alert, but flag the 3 anomalous keys for deeper investigation.
Example 2: Data Exfiltration Investigation
This mirrors the real-world example from our blog post on human-AI collaboration in the SOC.
Initial observation:
I'm seeing unusual S3 access patterns from an engineering team member.
Can you help me investigate whether this is legitimate work or potential
data exfiltration?
Phase 1: Your AI establishes baseline
Your AI queries:
Normal S3 access patterns for this user (historical data)
Comparison to peer engineering team members
Time-of-day patterns and typical session duration
Phase 2: You guide deeper investigation
This is interesting. Their access pattern is very different from peers.
Can you dig into the specific buckets they're accessing and what they
downloaded?
Your AI refines queries to identify which S3 buckets are being accessed, volume of data transferred per bucket, timing of access, and whether they're accessing buckets outside their normal scope.
Phase 3: You follow suspicious findings
They accessed the "sensitive-customer-data" bucket 47 times, which is
outside their normal scope. Can you explore what else this user was
doing around that time?
Your AI correlates activity across authentication, AWS services, external IPs, and privilege escalation attempts.
Phase 4: Hypothesis testing
Based on evidence, your AI proposes conclusions:
Hypothesis A (High probability): User gained access through misconfigured bucket policy, accessed it systematically, no evidence of exfiltration
Hypothesis B (Medium probability): User intentionally exfiltrated sensitive data to external infrastructure
Hypothesis C (Low probability): Compromised account used by attacker
Your AI tests by searching for external C2 activity, log tampering, or threat intel on data sales.
Outcome: Clear classification (benign/suspicious/malicious) with evidence chain and recommended actions.
Example 3: Privilege Escalation Analysis
Your prompt:
I want to hunt for privilege escalation attempts in our AWS environment
over the last 7 days. Can you help me find suspicious IAM policy
modifications or access key creation?
Your AI's approach:
Step 1: Identify privilege escalation patterns
@index=cloudtrail
eventSource: "iam.amazonaws.com"
eventName: (PutRolePolicy AttachRolePolicy CreateAccessKey)
| groupbycount userIdentity.userName
Step 2: You notice suspicious user activity
This user has 23 policy modification attempts. Is that normal or
concerning?
Step 3: Your AI investigates the user's context
Historical IAM activity for this user
Role and responsibilities
Whether they typically manage policies
Whether the modifications were successful
Step 4: You ask your AI to correlate with other suspicious activity
I noticed several of their attempts failed. Can you check if there are
other signs of compromise—like failed authentication, unusual login
locations, or access to sensitive data they normally don't touch?
Step 5: Your AI builds the complete picture
Correlates IAM attempts with authentication logs
Checks for lateral movement
Identifies what the attacker was trying to achieve
Assesses impact if successful
Example 4: Post-Incident Investigation & Postmortem
After an incident is contained, you need to understand what happened, how long it went undetected, and what to fix.
Your prompt:
We detected and contained a credential compromise yesterday (2025-11-17 at 14:30 UTC).
The compromised account is [email protected]. Can you help me build a comprehensive
postmortem? I need:
1. Timeline: When did the compromise start? How far back can we trace the attacker's activity?
2. Scope: Which systems/data were accessed?
3. Root cause: How did they get the credentials?
4. Detection gap: Why didn't our existing alerts catch this?
5. Impact: What's the blast radius?
Your AI's investigation process:
Phase 1: Establish entry point
Query authentication logs for unusual login patterns before detection
Check for failed login attempts (brute force indicators)
Look for impossible travel (logins from different locations in short time)
Search for VPN or proxy usage that doesn't match baseline
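The impossible-travel check in this phase boils down to a speed calculation. Here's a minimal sketch using a haversine distance and an assumed ~900 km/h ceiling (roughly commercial flight speed); coordinates and threshold are illustrative:

```python
from math import radians, sin, cos, asin, sqrt
from datetime import datetime

def km_between(lat1, lon1, lat2, lon2) -> float:
    """Great-circle (haversine) distance in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def impossible_travel(login_a: dict, login_b: dict, max_kmh: float = 900) -> bool:
    """Flag two logins whose implied travel speed exceeds max_kmh."""
    hours = abs((login_b["time"] - login_a["time"]).total_seconds()) / 3600
    dist = km_between(login_a["lat"], login_a["lon"], login_b["lat"], login_b["lon"])
    return hours > 0 and dist / hours > max_kmh

sf = {"time": datetime(2025, 11, 17, 12, 0), "lat": 37.77, "lon": -122.42}
# Same account in Frankfurt 30 minutes later: ~9,100 km implies ~18,000 km/h.
fra = {"time": datetime(2025, 11, 17, 12, 30), "lat": 50.11, "lon": 8.68}
assert impossible_travel(sf, fra)
```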
Phase 2: Trace activity timeline
Find the first unusual action by this account
Track all activity chronologically
Identify when attacker likely had control vs. legitimate user activity
Note any "living off the land" techniques (using legitimate tools maliciously)
Phase 3: Assess scope
Which systems were accessed?
What data was touched? (databases, S3 buckets, file shares)
Were admin functions performed?
Were other accounts compromised?
Phase 4: Identify detection gaps
We have these existing detections: [list your rules]
Why didn't they catch this activity? Is it:
- A gap in data sources (missing logs)?
- Threshold was too high (10 failed logins instead of 3)?
- Time window too short (1 hour instead of 24 hours)?
- False positive tuning too aggressive?
Phase 5: Generate findings
Your AI provides:
Timeline with specific timestamps and events
Attack chain with MITRE ATT&CK mapping
Affected assets with business impact assessment
Detection recommendations for similar future attacks
Root cause analysis of how credentials were compromised
Remediation checklist (reset credentials, audit logs, deploy detections, etc.)
Result: A comprehensive postmortem document that's presentable to leadership and actionable for your security team, generated in hours instead of days of manual log analysis.
Query Patterns for Interactive Investigation
These patterns help guide your AI toward effective investigations. Rather than learning query syntax, describe what you're looking for and your AI translates:
Finding High-Volume Anomalies
Natural language:
Search for users with unusual S3 access volume in the last 24 hours,
especially if they're accessing data outside their normal scope.
Your AI translates to:
@index=cloudtrail
eventSource: "s3.amazonaws.com"
eventName: "GetObject"
| stats
sum(bytesTransferred) as total_bytes
by userIdentity.userName
| eval gb = total_bytes / (1024 * 1024 * 1024)
| where gb > 5
Identifying Trial-and-Error Patterns
Pattern: Failures followed by success indicate both reconnaissance and successful exploitation.
Natural language:
Find cases where a user failed to perform an action multiple times, then
succeeded. This is a classic attack pattern.
What your AI looks for:
AccessDenied errors followed by successful operations on the same resource
Failed attempts on multiple resources before success
Rapid-fire attempts suggesting automated probing rather than user mistake
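A minimal sketch of that failure-then-success detection, over hypothetical simplified CloudTrail-style events (field names and the 3-failure threshold are illustrative):

```python
from collections import defaultdict

# Hypothetical simplified events, already sorted by time.
events = [
    {"user": "jdoe", "resource": "arn:aws:s3:::payroll", "errorCode": "AccessDenied"},
    {"user": "jdoe", "resource": "arn:aws:s3:::payroll", "errorCode": "AccessDenied"},
    {"user": "jdoe", "resource": "arn:aws:s3:::payroll", "errorCode": "AccessDenied"},
    {"user": "jdoe", "resource": "arn:aws:s3:::payroll", "errorCode": None},  # success
    {"user": "ops-bot", "resource": "arn:aws:s3:::builds", "errorCode": None},
]

def trial_and_error(events, min_failures=3):
    """Yield (user, resource) pairs that failed >= min_failures times
    before a successful operation on the same resource."""
    failures = defaultdict(int)
    for e in events:
        key = (e["user"], e["resource"])
        if e["errorCode"] == "AccessDenied":
            failures[key] += 1
        elif failures[key] >= min_failures:
            yield key

hits = list(trial_and_error(events))
assert hits == [("jdoe", "arn:aws:s3:::payroll")]  # ops-bot never failed, so it isn't flagged
```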
Cross-Source Correlation
Pattern: Anomalies become clear when combining multiple data sources.
Natural language:
Find activity that looks suspicious when you combine information from
multiple sources—e.g., a user who normally uses Auth0 from a corporate
IP suddenly authenticating from a different provider and location.Your AI leverages ECS normalized fields to correlate across all your sources. The key is thinking about what normal looks like for each entity, then asking your AI to find deviations.
General Query-Building Principles
When asking your AI to query:
Specify baselines — What's normal for this user/system? ("normally logs in from US, not China")
Define anomalies — What would make it suspicious? ("accessing buckets outside their team's scope")
Set time windows — How far back? ("in the last 24 hours, but was anything similar in the past month?")
Request evidence — Always ask for specific log entries, not just summaries
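The baseline-plus-deviation idea behind these principles can be sketched in a few lines; the history, the 3-sigma threshold, and the GB figures below are illustrative:

```python
from statistics import mean, stdev

# Hypothetical daily S3 download volumes (GB) for one user over two weeks.
history = [1.2, 0.8, 1.0, 1.4, 0.9, 1.1, 1.3, 0.7, 1.0, 1.2, 0.9, 1.1, 1.0, 1.3]
today = 9.5

def is_anomalous(baseline: list[float], observed: float, sigmas: float = 3.0) -> bool:
    """Flag a value more than `sigmas` standard deviations above the historical mean."""
    return observed > mean(baseline) + sigmas * stdev(baseline)

assert is_anomalous(history, today)        # 9.5 GB is far above the ~1 GB norm
assert not is_anomalous(history, 1.5)      # within normal day-to-day variation
```

The same pattern applies to any of the baselines above: define "normal" from history, then ask your AI to surface deviations.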
Best Practices for Interactive Investigations
1. Establish Entity Context First
Before diving into an investigation, always ask your AI:
What's the history of this user? Have they been flagged for suspicious
activity before? What's their role and what should their normal activity
look like?
This dramatically improves investigation efficiency. Understanding whether someone is a database administrator (heavy data access = normal) versus a developer (light data access) changes your interpretation entirely.
2. Test Multiple Hypotheses
Don't anchor on your first theory. Explicitly ask:
I think this is benign, but what would malicious activity look like in
this scenario? Can you check for those indicators too?
This prevents confirmation bias and catches false negatives.
3. Follow Surprising Findings
When your AI finds something unexpected, investigate:
That's interesting—they accessed a service they've never used before.
Can you explore that further? What did they do with that service access?
Unexpected findings often indicate compromises or insider threats that would otherwise go unnoticed.
4. Demand Evidence Chains
Ask your AI to cite specific events:
I want to see the actual log entries that support your conclusion. Can
you include timestamps and event details?
This enables manual verification and builds institutional knowledge about what real suspicious activity looks like.
5. Use Progressive Refinement
Start broad, then narrow:
Show me all failed authentication attempts in the last 24 hours.
→ Now show me which users had the most failures.
→ Now show me details for this specific user's failures.
→ Now correlate with other suspicious activity from that user.
Progressive refinement helps you understand patterns without overloading yourself with raw data.
6. Document Your Reasoning
As the investigation progresses, keep notes:
What hypotheses did you test?
What evidence did you find?
Why did you eliminate or accept each hypothesis?
What could have been missed?
This documentation becomes your institutional knowledge and improves future investigations.
7. When Your AI Gets It Wrong
Your AI sometimes misses context or makes incorrect assumptions. When this happens:
Scenario: Your AI proposes a bad hypothesis
Your AI says: "This looks like a brute force attack—100 failed logins from the same IP"
You know: "That IP is our office. We have a misconfigured application retrying constantly"
Your response: "That's from our office VPN. Here's the context: [explain]. Can you exclude that IP
and look for real authentication failures from different sources?"
Your AI will adjust and refine its analysis.
Scenario: Your AI makes up field names or query logic
Your AI suggests: "Query for field 'user_anomaly_score' to find suspicious users"
You check: "That field doesn't exist in our CloudTrail logs"
Your response: "That field doesn't exist in our data. What fields ARE available for detecting anomalies?"
Your AI will correct course and use only fields that exist.
Scenario: Your AI misses context about your environment
Your AI says: "This is suspicious—15 access keys created by the automation account"
You know: "That account creates keys daily as part of our provisioning system"
Your response: "That's normal for our provisioning automation. Can you exclude this account
and refocus on human users creating unusual numbers of keys?"
Your AI adjusts and avoids flagging known legitimate activity.
Key principle: You're the expert. If your AI's analysis doesn't match your knowledge of the system, correct it. Your AI learns quickly from feedback and will refine its investigation path accordingly.
Where to Go From Here
Interactive investigations are one piece of a comprehensive security strategy. Use them alongside:
Detection Engineering — Turn investigation findings into permanent improvements. When you discover a new attack pattern, build a detection for it.
Autonomous Workflows — Scale investigations across your alert stream. Automated triage, response, and remediation for cases that don't need your expertise.
The most effective teams use all three: interactive for critical incidents and cases requiring judgment, detection engineering to prevent the same attacks from recurring, and autonomous workflows to handle routine operations at scale.