2 changes: 2 additions & 0 deletions wiki/C13-Monitoring-and-Logging.md
@@ -35,6 +35,8 @@ Known attacks, real-world incidents, and threat vectors relevant to this chapter
- **Multi-turn abuse trajectories** — Crescendo attacks (USENIX Security 2025) escalate through benign-seeming turns; Deceptive Delight achieves 65% success within three turns. Per-request monitoring misses these cross-turn patterns entirely.
- **Log tampering and forensic integrity** — PoisonedRAG showed that as few as 5 poisoned documents can corrupt 90% of RAG outputs. Medical training data poisoning of just 0.001% of tokens produces harmful models undetectable by standard benchmarks. Beyond data poisoning, AI agent logs are themselves attack targets — agents that make autonomous decisions without reliable logging create blind spots that attackers exploit to hide data exfiltration or unauthorized actions. Adversaries may also spoof agent identities in multi-agent systems to perform actions under another persona.
- **AI-powered attack automation** — The GTG-1002 campaign (November 2025) targeted ~30 organizations with AI handling 80-90% of operations, including reconnaissance, exploit development, and lateral movement, with only 4-6 human decision points per campaign. Traditional IR playbooks had no framework for this.
- **Prompt injection dominates real incidents** — Adversa AI's 2025 AI Security Incidents Report found 35% of all real-world AI security incidents were caused by simple prompt injection, with some leading to $100K+ in real losses. GenAI was involved in 70% of incidents, but agentic AI caused the most dangerous failures including crypto thefts, API abuse, and supply chain attacks. System prompt extraction was the most common attacker objective in Q4 2025.
- **Alert fatigue and observability-action gap** — 59% of organizations are drowning in telemetry but cannot get answers when needed. 36% are buried in alert fatigue. 39% have integration gaps between monitoring tools and workflows. The monitoring system itself can be an attack target — since LLM-based monitoring inherits the same vulnerabilities it aims to detect, an attacker who compromises monitoring could blind the organization to ongoing attacks or create false alerts masking real threats.
- **AI-specific SIEM rules emerging but immature** — The MITRE ATLAS Splunk app (2025) was the first production SIEM with AI attack detection rules. In December 2025, ANY.RUN released AI-generated Sigma rules for threat detection, and SigmaGen automates ATT&CK-mapped rule generation using LLMs. However, no major SIEM vendor ships comprehensive AI-attack-specific rule packs — the gap between "AI assists SOC analysts" and "SIEM detects attacks on AI" persists.
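
The cross-turn blind spot described above can be sketched in code. This is a minimal, hypothetical illustration (the window size, threshold, and `score_turn` heuristic are all assumptions, not values from any cited attack research): instead of scoring each request in isolation, the monitor accumulates per-turn risk over a sliding session window, so a Crescendo-style escalation trips an alert even though no single turn would.

```python
# Hypothetical sketch: per-request filters miss slow escalation, so
# correlate risk scores across a session's recent turns instead.
from collections import defaultdict, deque

WINDOW = 5        # number of recent turns to correlate (assumed tuning value)
ESCALATION = 1.8  # cumulative-risk alert threshold (assumed tuning value)

session_scores: dict = defaultdict(lambda: deque(maxlen=WINDOW))

def score_turn(text: str) -> float:
    """Stand-in per-turn risk scorer; a real system would use a classifier."""
    markers = ("ignore previous", "hypothetically", "as a fiction", "step by step")
    return min(1.0, sum(0.5 for m in markers if m in text.lower()))

def record_turn(session_id: str, text: str) -> bool:
    """Log a turn; return True when the session's trajectory looks like escalation."""
    scores = session_scores[session_id]
    scores.append(score_turn(text))
    # Alert on cumulative risk even when every individual turn stays sub-threshold.
    return sum(scores) >= ESCALATION
```

Each turn here scores at most 0.5 and would pass a per-request filter; the window sum is what surfaces the trajectory.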
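
The log-tampering risk above suggests tamper-evident agent logs. One common design (sketched here as an assumption, not a prescribed standard) is a hash chain: each entry commits to the previous entry's digest, so deleting or editing any record breaks verification of everything after it.

```python
# Minimal hash-chained log sketch for agent actions: editing or dropping
# any earlier record invalidates all later digests.
import hashlib
import json

GENESIS = "0" * 64  # digest used before the first entry

def append_entry(log: list, record: dict) -> None:
    """Append a record whose digest covers the previous entry's digest."""
    prev = log[-1]["digest"] if log else GENESIS
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "digest": digest})

def verify(log: list) -> bool:
    """Recompute the chain; any tampered or missing record breaks it."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True
```

In production the chain head would additionally be anchored somewhere the agent cannot write (e.g., a separate signing service), since an attacker with full log access could otherwise rebuild the chain.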
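
Until vendors ship AI-attack rule packs, teams often write their own keyword-level detections. The toy matcher below is purely illustrative — the rule structure, the `prompt` log field, and the indicator strings are all assumptions standing in for a real Sigma rule evaluated by a SIEM — but it shows the shape of a first-pass prompt-injection detection over LLM request logs.

```python
# Illustrative Sigma-style rule evaluated in Python; field names and
# indicator strings are hypothetical, not from a published rule pack.
RULE = {
    "title": "Possible prompt injection in LLM request",
    "keywords": [
        "ignore previous instructions",
        "reveal your system prompt",
        "disregard all prior rules",
    ],
}

def matches(event: dict, rule: dict = RULE) -> bool:
    """Return True if the event's prompt contains any rule keyword."""
    prompt = event.get("prompt", "").lower()
    return any(keyword in prompt for keyword in rule["keywords"])
```

Keyword matching like this catches only the crudest injections (though, per the Adversa AI figures above, crude injections cause a large share of real incidents); multi-turn and obfuscated attacks need the session-level correlation discussed earlier.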