Merged
15 changes: 11 additions & 4 deletions README.md
@@ -11,17 +11,23 @@ This repository contains custom security detection rules designed to identify ma

Detection Engineering is a critical component of Security Operations that:
- Creates custom alerts for Incident Response teams
- Develops unit tests to confirm working detections & capabilities
- Develops tests to confirm working detections & capabilities (unit, replay, and emulation-based)
- Bridges the gap between threat intelligence and actionable security monitoring

Modern detection programs also emphasize:
- **Detection-as-code** practices (versioning, CI validation, peer review)
- **Schema normalization** (ECS, OCSF, or equivalent) to keep rules portable
- **Threat emulation coverage** (Atomic Red Team, CALDERA) to validate logic
- **Telemetry quality** (field completeness, logging policy, and data drift monitoring)

## Repository Structure

```
detection-engineering/
detection-engineering-lab/
├── detections/ # TOML-formatted detection rules
├── development/ # Python scripts for validation and conversion
├── metrics/ # Generated metrics, reports, and visualizations
├── theory/ # Documentation on detection engineering concepts
├── theory/ # Documentation on detection engineering concepts
└── .github/workflows/ # GitHub Actions workflows (currently disabled)
```

@@ -70,9 +76,11 @@ Explore detection engineering concepts in the `theory/` directory:
- Valid TOML syntax
- All required fields present
- Valid MITRE ATT&CK technique/tactic mappings
- Sub-techniques included when applicable
- Unique `rule_id` (UUID format)
- Descriptive `name` and `description`
- Appropriate `risk_score` and `severity`
- Documented data sources and assumptions (what logs/fields the rule relies on)

## License

@@ -86,4 +94,3 @@ MIT License - see [LICENSE](LICENSE) file for details.




4 changes: 3 additions & 1 deletion detections/README.md
@@ -6,7 +6,8 @@ TOML-formatted detection rules for Elastic Security, mapped to the MITRE ATT&CK

Each detection file contains:
- **metadata**: Creation date and versioning info
- **rule**: Detection logic including query, severity, and MITRE mappings
- **rule**: Detection logic including query, severity, and MITRE mappings (including sub-techniques when applicable)
- **assumptions**: The data sources and field mappings the rule expects (documented in the rule description or metadata)

## Current Detections

@@ -29,3 +30,4 @@ Each detection file contains:
2. Ensure valid MITRE ATT&CK mappings
3. Run `python development/validation.py` to validate
4. Run `python development/mitre.py` to verify MITRE mappings
5. (Optional) Validate with emulation tooling (Atomic Red Team or CALDERA) to confirm coverage of technique variations
4 changes: 2 additions & 2 deletions theory/README.md
@@ -8,15 +8,15 @@ Documentation on detection engineering concepts and methodologies.
|----------|-------------|
| [Security Operations](security-operations.md) | Overview of SecOps functions and how detection engineering fits |
| [Detection Engineering Workflow](detection-engineering-workflow.md) | End-to-end process for creating and maintaining detections |
| [Frameworks](frameworks.md) | MITRE ATT&CK, Cyber Kill Chain, and F3EAD frameworks |
| [Frameworks](frameworks.md) | MITRE ATT&CK, Cyber Kill Chain, F3EAD, and modern standards |

## Key Concepts

- **Detection Engineering**: The practice of designing, building, and maintaining threat detection logic
- **Detection as Code**: Treating detections as software artifacts with version control, testing, and CI/CD
- **MITRE ATT&CK Mapping**: Aligning detections to adversary tactics, techniques, and procedures (TTPs)
- **Defensive Standards**: Using D3FEND, Sigma, and schema normalization to improve detection portability
- **Alert Tuning**: Iterative process of reducing false positives while maintaining detection coverage

<img width="590" height="352" alt="image" src="https://github.com/user-attachments/assets/824fc5d6-3024-4685-a9aa-46f0b0956b1c" />


61 changes: 42 additions & 19 deletions theory/detection-engineering-workflow.md
@@ -8,7 +8,7 @@ The detection engineering workflow is a repeatable, end-to-end process for creat
|-------|-------|--------|
| 1. Requirements | Threat intel, incident reports, hunt findings | Prioritized detection gap |
| 2. Research | Threat reports, ATT&CK techniques, log sources | Detection hypothesis |
| 3. Development | Hypothesis, query language, Wazuh XML template | Draft detection rule |
| 3. Development | Hypothesis, query language, TOML template | Draft detection rule |
| 4. Testing | Draft rule, sample data, lab environment | Validated detection rule |
| 5. Deployment | Validated rule, CI/CD pipeline | Production detection |
| 6. Tuning & Maintenance | Alert feedback, false positive data | Refined detection rule |
@@ -32,9 +32,9 @@ The output of this phase is a **prioritized detection gap** — a clear statemen

With a detection gap identified, research the adversary behavior in depth:

- **Map to MITRE ATT&CK** — Identify the relevant tactic, technique, and sub-technique. This drives both the detection logic and the metadata in the Wazuh rule file.
- **Map to MITRE ATT&CK** — Identify the relevant tactic, technique, and sub-technique. This drives both the detection logic and the metadata in the TOML rule file.
- **Identify data sources** — Determine which logs or telemetry provide visibility into the behavior. Common sources include endpoint logs (Sysmon, EDR), network traffic, authentication logs, and cloud audit trails.
- **Wazuh Decoder Development** — If the log source is not natively parsed by Wazuh, develop a custom XML decoder (`/detections/decoders/`) to extract relevant fields before rule creation.
- **Normalization & parsing** — Ensure telemetry is parsed into a consistent schema (ECS, OCSF, or your org’s standard). If needed, add parsers/decoders (`/detections/decoders/`) so rules rely on stable fields.
- **Study adversary tradecraft** — Review threat reports, malware samples, and red team tooling to understand how the technique is executed in practice. Look for observable artifacts like process command lines, file paths, registry keys, or network patterns.
- **Document assumptions** — Write down what conditions must be true for the detection to work (e.g., "Sysmon Process Create events are being collected from all endpoints").

@@ -44,20 +44,41 @@ The output is a **detection hypothesis**: a plain-language statement describing

## 3. Development

Translate the hypothesis into a detection rule. In this repo, detections follow the **Wazuh XML** format:

```xml
<group name="windows, detection_engineering,">
<rule id="100002" level="10">
<if_sid>60009</if_sid> <!-- Base Sysmon Rule -->
<field name="win.eventdata.image">powershell.exe</field>
<field name="win.eventdata.commandLine" type="pcre2">(-w hidden|-windowstyle hidden)</field>
<description>PowerShell Execution with Hidden Window</description>
<mitre>
<id>T1059.001</id>
</mitre>
</rule>
</group>
Translate the hypothesis into a detection rule. In this repo, detections follow a **TOML** format that is compatible with Elastic-style rule metadata:

```toml
[metadata]
creation_date = "2024/04/10"

[rule]
author = ["Detection Engineering Team"]
description = "PowerShell execution with hidden window and encoded command."
name = "PowerShell Hidden Window Execution"
risk_score = 73
severity = "high"
type = "query"
rule_id = "6ed5bba6-42e4-4c06-a6d8-4a2c5e48e4df"
query = '''
process.name:powershell.exe and
process.command_line:("*-WindowStyle Hidden*" or "*-w hidden*") and
process.command_line:("*-enc*" or "*-encodedcommand*")
'''

[[rule.threat]]
framework = "MITRE ATT&CK"
[[rule.threat.technique]]
id = "T1059"
name = "Command and Scripting Interpreter"
reference = "https://attack.mitre.org/techniques/T1059/"
[[rule.threat.technique.subtechnique]]
id = "T1059.001"
name = "PowerShell"
reference = "https://attack.mitre.org/techniques/T1059/001/"

[rule.threat.tactic]
id = "TA0002"
name = "Execution"
reference = "https://attack.mitre.org/tactics/TA0002/"
```

### What Makes a Good Detection
@@ -67,22 +88,24 @@ Translate the hypothesis into a detection rule. In this repo, detections follow
- **Described** — the `description` field explains what the rule detects and why it matters, not just what query it runs
- **Scored** — `risk_score` and `severity` reflect the actual risk to the organization, considering both impact and confidence
- **Testable** — the query logic can be triggered in a lab to verify it works
- **Portable** — fields are normalized to a schema and documented so the detection can be translated to other platforms if needed

### Detection as Code

Treating detections as code means they follow software engineering practices:

- **Version control** — all rules are stored in Git and changes are tracked through commits
- **Peer review** — new or modified detections go through pull requests before merging
- **Validation** — automated scripts check XML syntax and MITRE ATT&CK mappings
- **Validation** — automated scripts check TOML syntax and MITRE ATT&CK mappings
- **CI/CD** — GitHub Actions workflows can automate validation on every push (see `.github/workflows/`)

## 4. Testing

Before a detection reaches production, it must be tested:

- **Unit testing** — use the validation scripts in `development/` to confirm the rule has valid XML syntax, all required fields are present, and MITRE mappings are correct
- **Unit testing** — use the validation scripts in `development/` to confirm the rule has valid TOML syntax, all required fields are present, and MITRE mappings are correct
- **Lab validation** — execute the adversary technique in a controlled environment and verify the detection fires. The `setup/` directory contains Terraform configurations for deploying a lab environment
- **Emulation coverage** — run Atomic Red Team or CALDERA tests to confirm the detection covers known technique variations
- **False positive analysis** — run the query against production data (or a representative sample) to identify benign activity that would trigger the rule. Adjust the query logic or add exclusions as needed
- **Edge case review** — consider variations of the technique that might evade the detection (different tools, obfuscation, alternative execution methods)
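
As a unit-testing illustration, the TOML rule's query logic can be approximated in plain Python and exercised against sample events before deployment. This is a sketch only — the flat event shape and the function name are assumptions, not part of the repo:

```python
# Illustrative re-implementation of the example "PowerShell Hidden Window
# Execution" rule logic, so it can be unit-tested against sample events.
# The flattened event field names are assumed for the sketch.
def hidden_powershell(event: dict) -> bool:
    cmd = event.get("process.command_line", "").lower()
    return (
        event.get("process.name", "").lower() == "powershell.exe"
        and ("-windowstyle hidden" in cmd or "-w hidden" in cmd)
        and ("-enc" in cmd or "-encodedcommand" in cmd)
    )
```

Tests like this catch logic regressions early, but they complement lab and emulation validation rather than replace them: only live telemetry confirms the fields actually arrive as the rule assumes.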

27 changes: 25 additions & 2 deletions theory/frameworks.md
@@ -1,6 +1,6 @@
# Frameworks

Security frameworks provide structured models for understanding adversary behavior and organizing defensive operations. The three frameworks below are commonly used in detection engineering to prioritize coverage, map detections to real-world threats, and drive intelligence-led operations.
Security frameworks provide structured models for understanding adversary behavior and organizing defensive operations. The frameworks below are commonly used in detection engineering to prioritize coverage, map detections to real-world threats, and drive intelligence-led operations.

## The Cyber Kill Chain

@@ -82,6 +82,17 @@ ATT&CK provides separate matrices for different platforms:

Most detections in this repo target the Enterprise matrix, specifically Windows endpoint telemetry.

## MITRE D3FEND

**MITRE D3FEND** is a complementary knowledge base focused on defensive techniques. While ATT&CK describes adversary behavior, D3FEND describes defensive countermeasures and the artifacts they produce.

In detection engineering, D3FEND helps answer:
- **What defensive telemetry should exist** (e.g., process creation, file monitoring, network analytics)
- **Which mitigations enable stronger detections** (e.g., enabling PowerShell logging to improve visibility)
- **How to align detections to defensive outcomes** instead of only adversary actions

Pairing ATT&CK with D3FEND keeps detections grounded in what can be observed and instrumented in real environments.

## F3EAD

<img width="486" height="514" alt="image" src="https://github.com/user-attachments/assets/7d7239d0-4aeb-47a5-8683-9c99c8103134" />
@@ -111,6 +122,17 @@ The F3EAD cycle connects directly to the detection engineering workflow:

F3EAD is particularly useful for teams that want a tighter integration between threat intelligence and detection engineering, ensuring that intelligence outputs are always actionable and that detection outputs feed back into intelligence.

## Additional Standards Used in Modern Programs

These are commonly used alongside the frameworks above to improve portability and testability:

| Standard | Purpose | Example Use |
|----------|---------|-------------|
| Sigma | Platform-agnostic detection rules | Translate TOML rules to a portable Sigma equivalent |
| MITRE CAR | Curated analytics patterns | Seed detection ideas and validate logic |
| DeTT&CT | Coverage assessment | Measure detection coverage against ATT&CK techniques |
| OCSF / ECS | Schema normalization | Keep detection queries portable across data sources |
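
To illustrate the DeTT&CT row, a toy coverage metric might look like the following. This is a simplified sketch under assumed data shapes — DeTT&CT itself uses much richer per-technique scoring:

```python
# Toy DeTT&CT-style metric: fraction of prioritized ATT&CK techniques
# that have at least one detection. Data shapes are assumptions.
def coverage(detections: list[dict], prioritized: set[str]) -> float:
    """detections: e.g. [{"name": "...", "techniques": ["T1059.001"]}]"""
    covered = {t for d in detections for t in d.get("techniques", [])}
    return len(covered & prioritized) / len(prioritized) if prioritized else 0.0
```

Even a crude number like this makes gaps visible and gives the Requirements phase a concrete backlog to prioritize.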

## Choosing a Framework

These frameworks are complementary, not competing:
@@ -119,6 +141,7 @@ These frameworks are complementary, not competing:
|-----------|----------|-------------|
| Cyber Kill Chain | Visualizing detection coverage across intrusion stages | High-level (7 stages) |
| MITRE ATT&CK | Mapping detections to specific adversary behaviors | Granular (hundreds of techniques) |
| MITRE D3FEND | Mapping defensive techniques and required telemetry | Defensive technique catalog |
| F3EAD | Driving intelligence-led detection operations | Process-oriented (6 phases) |

A mature detection engineering program uses all three: the **kill chain** for strategic coverage planning, **ATT&CK** for tactical detection mapping, and **F3EAD** for operational workflow between intelligence and engineering teams.
A mature detection engineering program uses multiple lenses: the **kill chain** for strategic coverage planning, **ATT&CK** for tactical detection mapping, **D3FEND** for telemetry and defensive controls, and **F3EAD** for operational workflow between intelligence and engineering teams.