diff --git a/amplifier-bundle/tools/amplihack/hooks/dev_intent_router.py b/.claude/tools/amplihack/hooks/dev_intent_router.py
similarity index 100%
rename from amplifier-bundle/tools/amplihack/hooks/dev_intent_router.py
rename to .claude/tools/amplihack/hooks/dev_intent_router.py
diff --git a/amplifier-bundle/tools/amplihack/hooks/templates/routing_prompt.txt b/.claude/tools/amplihack/hooks/templates/routing_prompt.txt
similarity index 100%
rename from amplifier-bundle/tools/amplihack/hooks/templates/routing_prompt.txt
rename to .claude/tools/amplihack/hooks/templates/routing_prompt.txt
diff --git a/amplifier-bundle/tools/amplihack/hooks/tests/test_dev_intent_router.py b/.claude/tools/amplihack/hooks/tests/test_dev_intent_router.py
similarity index 100%
rename from amplifier-bundle/tools/amplihack/hooks/tests/test_dev_intent_router.py
rename to .claude/tools/amplihack/hooks/tests/test_dev_intent_router.py
diff --git a/amplifier-bundle/tools/amplihack/hooks/tests/test_pre_tool_use_cwd_protection.py b/.claude/tools/amplihack/hooks/tests/test_pre_tool_use_cwd_protection.py
similarity index 100%
rename from amplifier-bundle/tools/amplihack/hooks/tests/test_pre_tool_use_cwd_protection.py
rename to .claude/tools/amplihack/hooks/tests/test_pre_tool_use_cwd_protection.py
diff --git a/.github/workflows/drift-detection.yml b/.github/workflows/drift-detection.yml
index c2a6965b1..8c4f8b506 100644
--- a/.github/workflows/drift-detection.yml
+++ b/.github/workflows/drift-detection.yml
@@ -11,7 +11,7 @@ concurrency:
jobs:
check-drift:
- name: Check skill/agent drift
+ name: Check skill/agent/hooks drift
runs-on: ubuntu-latest
timeout-minutes: 5
diff --git a/Makefile b/Makefile
index 07792714f..ba8259263 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
# Makefile for Scenarios Directory Pattern Tools
# Provides easy access to production-ready scenario tools
-.PHONY: help analyze-codebase scenarios-help list-scenarios docs-serve docs-build docs-deploy
+.PHONY: help analyze-codebase scenarios-help list-scenarios docs-serve docs-build docs-deploy check-drift verify-hooks-symlink
# Default target - show help
help:
@@ -158,3 +158,26 @@ docs-deploy:
@echo "🚀 Deploying documentation to GitHub Pages..."
@mkdocs gh-deploy --force
@echo "✅ Documentation deployed successfully"
+
+# Drift Detection
+# ===============
+
+# Run drift detection for skills, agents, and hooks
+check-drift:
+ @python scripts/check_drift.py
+
+# Verify hooks symlink is intact
+verify-hooks-symlink:
+ @if [ -L "amplifier-bundle/tools/amplihack/hooks" ]; then \
+ target=$$(readlink amplifier-bundle/tools/amplihack/hooks); \
+ if [ "$$target" = "../../../.claude/tools/amplihack/hooks" ]; then \
+ echo "OK: hooks symlink is correct"; \
+ else \
+ echo "ERROR: hooks symlink points to $$target (expected ../../../.claude/tools/amplihack/hooks)"; \
+ exit 1; \
+ fi; \
+ else \
+ echo "ERROR: amplifier-bundle/tools/amplihack/hooks is not a symlink"; \
+ echo "Fix: rm -rf amplifier-bundle/tools/amplihack/hooks && ln -s ../../../.claude/tools/amplihack/hooks amplifier-bundle/tools/amplihack/hooks"; \
+ exit 1; \
+ fi
diff --git a/amplifier-bundle/tools/amplihack/hooks b/amplifier-bundle/tools/amplihack/hooks
new file mode 120000
index 000000000..4ab95e9d9
--- /dev/null
+++ b/amplifier-bundle/tools/amplihack/hooks
@@ -0,0 +1 @@
+../../../.claude/tools/amplihack/hooks
\ No newline at end of file
diff --git a/amplifier-bundle/tools/amplihack/hooks/README.md b/amplifier-bundle/tools/amplihack/hooks/README.md
deleted file mode 100644
index 738a5612d..000000000
--- a/amplifier-bundle/tools/amplihack/hooks/README.md
+++ /dev/null
@@ -1,254 +0,0 @@
-# Claude Code Hook System
-
-This directory contains the hook system for Claude Code, which allows for customization and monitoring of the Claude Code runtime environment.
-
-## Overview
-
-The hook system uses a **unified HookProcessor** base class that provides common functionality for all hooks, reducing code duplication and improving maintainability.
-
-## Hook Files
-
-### Core Infrastructure
-
-- **`hook_processor.py`** - Base class providing common functionality for all hooks
- - JSON input/output handling
- - Logging to `~/.amplihack/.claude/runtime/logs/`
- - Metrics collection
- - Error handling and graceful fallback
- - Session data management
-
-### Active Hooks (Configured in .claude/settings.json)
-
-- **`session_start.py`** - Runs when a Claude Code session starts
- - Adds project context to the conversation
- - Reads and applies user preferences from USER_PREFERENCES.md
- - Logs session start metrics
-
-- **`stop.py`** - Runs when a session ends
- - Checks for lock flag (`~/.amplihack/.claude/tools/amplihack/.lock_active`)
- - Blocks stop if continuous work mode is enabled (lock active)
- - Logs stop attempts and lock status
-
-- **`post_tool_use.py`** - Runs after each tool use
- - Tracks tool usage metrics
- - Validates tool execution results
- - Categorizes tool types for analytics
-
-- **`pre_compact.py`** - Runs before context compaction
- - Manages context and prepares for compaction
- - Logs pre-compact events
-
-- **`pre_tool_use.py`** - Runs before each tool use (Bash only)
- - **CWD deletion protection**: blocks `rm -rf` / `rmdir` on the current working directory or any parent
- - **CWD rename/move protection**: blocks `mv` commands that would rename the CWD or any parent (prevents session crash from invalid CWD)
- - **Main branch protection**: blocks `git commit` directly to `main` or `master`
- - **No-verify bypass protection**: blocks `git commit --no-verify` and `git push --no-verify`
-
-## Architecture
-
-```
-┌─────────────────┐
-│ Claude Code │
-└────────┬────────┘
- │ JSON input
- ▼
-┌─────────────────┐
-│ Hook Script │
-├─────────────────┤
-│ HookProcessor │ ◄── Base class
-│ - read_input │
-│ - process │ ◄── Implemented by subclass
-│ - write_output│
-│ - logging │
-│ - metrics │
-└────────┬────────┘
- │ JSON output
- ▼
-┌─────────────────┐
-│ Claude Code │
-└─────────────────┘
-```
-
-## Creating a New Hook
-
-To create a new hook, extend the `HookProcessor` class:
-
-```python
-#!/usr/bin/env python3
-"""Your hook description."""
-
-from typing import Any, Dict
-import sys
-from pathlib import Path
-sys.path.insert(0, str(Path(__file__).parent))
-from hook_processor import HookProcessor
-
-
-class YourHook(HookProcessor):
- """Your hook processor."""
-
- def __init__(self):
- super().__init__("your_hook_name")
-
- def process(self, input_data: Dict[str, Any]) -> Dict[str, Any]:
- """Process the hook input.
-
- Args:
- input_data: Input from Claude Code
-
- Returns:
- Output to return to Claude Code
- """
- # Your processing logic here
- self.log("Processing something")
- self.save_metric("metric_name", value)
-
- return {"result": "success"}
-
-
-def main():
- """Entry point."""
- hook = YourHook()
- hook.run()
-
-
-if __name__ == "__main__":
- main()
-```
-
-## Data Storage
-
-The hook system creates and manages several directories:
-
-```
-.claude/runtime/
-├── logs/ # Log files for each hook
-│ ├── session_start.log
-│ ├── stop.log
-│ └── post_tool_use.log
-├── metrics/ # Metrics in JSONL format
-│ ├── session_start_metrics.jsonl
-│ ├── stop_metrics.jsonl
-│ └── post_tool_use_metrics.jsonl
-└── analysis/ # Session analysis files
- └── session_YYYYMMDD_HHMMSS.json
-```
-
-## Testing
-
-Run tests to verify the hook system:
-
-```bash
-# Unit tests for HookProcessor
-python -m pytest test_hook_processor.py -v
-
-# Integration tests for all hooks
-python test_integration.py
-
-# Test Azure continuation hook
-python test_stop_azure_continuation.py
-
-# Test individual hooks manually
-echo '{"prompt": "test"}' | python session_start.py
-```
-
-## Metrics Collected
-
-### session_start
-
-- `prompt_length` - Length of the initial prompt
-
-### stop
-
-- `lock_blocks` - Count of stop attempts blocked by lock flag
-
-### post_tool_use
-
-- `tool_usage` - Name of tool used (with optional duration)
-- `bash_commands` - Count of Bash executions
-- `file_operations` - Count of file operations (Read/Write/Edit)
-- `search_operations` - Count of search operations (Grep/Glob)
-
-## Error Handling
-
-All hooks implement graceful error handling:
-
-1. **Invalid JSON input** - Returns error message in output
-2. **Processing exceptions** - Logs error, returns empty dict
-3. **File system errors** - Logs warning, continues operation
-4. **Missing fields** - Uses defaults, continues processing
-
-This ensures that hook failures never break the Claude Code chain.
-
-## Hook Configuration
-
-Hooks are configured in `~/.amplihack/.claude/settings.json`:
-
-```json
-{
- "hooks": {
- "SessionStart": [
- {
- "hooks": [
- {
- "type": "command",
- "command": "$CLAUDE_PROJECT_DIR/.claude/tools/amplihack/hooks/session_start.py"
- }
- ]
- }
- ],
- "Stop": [
- {
- "hooks": [
- {
- "type": "command",
- "command": "$CLAUDE_PROJECT_DIR/.claude/tools/amplihack/hooks/stop.py"
- }
- ]
- }
- ],
- "PostToolUse": [
- {
- "matcher": "*",
- "hooks": [
- {
- "type": "command",
- "command": "$CLAUDE_PROJECT_DIR/.claude/tools/amplihack/hooks/post_tool_use.py"
- }
- ]
- }
- ],
- "PreCompact": [
- {
- "hooks": [
- {
- "type": "command",
- "command": "$CLAUDE_PROJECT_DIR/.claude/tools/amplihack/hooks/pre_compact.py"
- }
- ]
- }
- ]
- }
-}
-```
-
-## Benefits of Unified Processor
-
-1. **Reduced Code Duplication** - Common functionality in one place
-2. **Consistent Error Handling** - All hooks handle errors the same way
-3. **Unified Logging** - Standardized logging across all hooks
-4. **Easier Testing** - Base functionality tested once
-5. **Simplified Maintenance** - Fix bugs in one place
-6. **Better Metrics** - Consistent metric collection
-7. **Easier Extension** - Simple to add new hooks
-
-## Continuous Work Mode (Lock System)
-
-The stop hook supports continuous work mode via a lock flag:
-
-- **Lock file**: `~/.amplihack/.claude/tools/amplihack/.lock_active`
-- **Enable**: Use `/amplihack:lock` slash command
-- **Disable**: Use `/amplihack:unlock` slash command
-- **Behavior**: When locked, Claude continues working through all TODOs without stopping
-
-This enables autonomous operation for complex multi-step tasks.
diff --git a/amplifier-bundle/tools/amplihack/hooks/USER_PROMPT_SUBMIT_README.md b/amplifier-bundle/tools/amplihack/hooks/USER_PROMPT_SUBMIT_README.md
deleted file mode 100644
index dcbe766a7..000000000
--- a/amplifier-bundle/tools/amplihack/hooks/USER_PROMPT_SUBMIT_README.md
+++ /dev/null
@@ -1,259 +0,0 @@
-# UserPromptSubmit Hook
-
-## Overview
-
-The UserPromptSubmit hook injects user preferences into context on **every user message** to ensure consistent preference application across all conversation turns in REPL mode.
-
-## Purpose
-
-In Claude Code's REPL mode, user preferences set at session start can be "forgotten" as the conversation progresses and context is pruned. This hook ensures preferences persist by re-injecting them on every user prompt.
-
-## Implementation Details
-
-### File Location
-
-```
-.claude/tools/amplihack/hooks/user_prompt_submit.py
-```
-
-### Hook Type
-
-`UserPromptSubmit` - Triggered before processing each user message
-
-### Input Format
-
-```json
-{
- "session_id": "string",
- "transcript_path": "path",
- "cwd": "path",
- "hook_event_name": "UserPromptSubmit",
- "prompt": "user's prompt text"
-}
-```
-
-### Output Format
-
-```json
-{
- "additionalContext": "preference enforcement text"
-}
-```
-
-### Preference Context Example
-
-```
-🎯 ACTIVE USER PREFERENCES (MANDATORY):
-• Communication Style: pirate (Always talk like a pirate) - Use this style in your response
-• Verbosity: balanced - Match this detail level
-• Collaboration Style: interactive - Follow this approach
-• Update Frequency: regular - Provide updates at this frequency
-• Priority Type: balanced - Consider this priority in decisions
-• Yes (see USER_PREFERENCES.md)
-
-These preferences MUST be applied to this response.
-```
-
-## Features
-
-### 1. Preference File Resolution
-
-The hook uses a multi-strategy approach to find USER_PREFERENCES.md:
-
-1. **FrameworkPathResolver** (UVX and installed package support)
-2. **Project root** (~/.amplihack/.claude/context/USER_PREFERENCES.md)
-3. **Package location** (src/amplihack/.claude/context/USER_PREFERENCES.md)
-
-### 2. Preference Extraction
-
-Extracts key preferences using regex patterns:
-
-- Communication Style
-- Verbosity
-- Collaboration Style
-- Update Frequency
-- Priority Type
-- Preferred Languages
-- Coding Standards
-- Workflow Preferences
-- Learned Patterns (detected if present)
-
-### 3. Performance Optimization
-
-**Caching Strategy**: Preferences are cached in memory with file modification time tracking. Cache is invalidated only when the file changes.
-
-**Performance Metrics**:
-
-- Average execution time: ~116ms (including Python startup)
-- Cached reads: < 1ms
-- Target: < 200ms (achieved)
-
-### 4. Error Handling
-
-**Graceful Degradation**:
-
-- Missing preferences file: Returns empty context, exits 0
-- File read error: Logs warning, returns empty context, exits 0
-- Parse error: Best-effort parsing, returns available preferences
-- **Never blocks Claude** - always exits with code 0
-
-### 5. Logging and Metrics
-
-**Log File**: `~/.amplihack/.claude/runtime/logs/user_prompt_submit.log`
-
-**Metrics File**: `~/.amplihack/.claude/runtime/metrics/user_prompt_submit_metrics.jsonl`
-
-**Tracked Metrics**:
-
-- `preferences_injected`: Number of preferences injected
-- `context_length`: Character count of generated context
-
-## Testing
-
-### Run Test Suite
-
-```bash
-python3 .claude/tools/amplihack/hooks/test_user_prompt_submit.py
-```
-
-### Test Coverage
-
-- ✓ Basic functionality
-- ✓ Preference extraction
-- ✓ Context building
-- ✓ Empty preferences handling
-- ✓ Caching behavior
-- ✓ JSON output format
-- ✓ Performance benchmarks
-- ✓ Error handling
-
-### Manual Testing
-
-```bash
-# Test with sample input
-echo '{"session_id": "test", "transcript_path": "/tmp/test", "cwd": "'$(pwd)'", "hook_event_name": "UserPromptSubmit", "prompt": "test"}' | python3 .claude/tools/amplihack/hooks/user_prompt_submit.py
-
-# Test performance
-time echo '{"session_id": "test", "transcript_path": "/tmp/test", "cwd": "'$(pwd)'", "hook_event_name": "UserPromptSubmit", "prompt": "test"}' | python3 .claude/tools/amplihack/hooks/user_prompt_submit.py > /dev/null
-```
-
-## Architecture
-
-### Class Hierarchy
-
-```
-HookProcessor (base class)
- └── UserPromptSubmitHook
- ├── find_user_preferences() -> Optional[Path]
- ├── extract_preferences(content: str) -> Dict[str, str]
- ├── build_preference_context(preferences: Dict) -> str
- ├── get_cached_preferences(pref_file: Path) -> Dict[str, str]
- └── process(input_data: Dict) -> Dict
-```
-
-### Key Design Decisions
-
-1. **Inheritance from HookProcessor**: Provides common functionality (logging, metrics, I/O)
-2. **Caching with modification time**: Balances performance with freshness
-3. **Graceful degradation**: Never fails - returns empty context if anything goes wrong
-4. **Priority-ordered display**: Most impactful preferences shown first
-5. **Concise enforcement**: Brief but clear instructions for Claude
-
-## Integration with Session Start Hook
-
-**Complementary Design**:
-
-- **session_start.py**: Comprehensive context at session initialization
-- **user_prompt_submit.py**: Lightweight preference reminders on every message
-
-**Context Differences**:
-
-- Session start: Full context with project info, workflow, discoveries
-- User prompt submit: Only preference enforcement (concise)
-
-## Troubleshooting
-
-### Issue: Preferences not being injected
-
-**Solution**: Check log file to see if preferences file was found:
-
-```bash
-tail -f .claude/runtime/logs/user_prompt_submit.log
-```
-
-### Issue: Hook is too slow
-
-**Solution**: Check if caching is working:
-
-```bash
-# Look for cache hits in logs
-grep "Injected.*preferences" .claude/runtime/logs/user_prompt_submit.log
-```
-
-### Issue: Wrong preferences being used
-
-**Solution**: Verify which preferences file is being used:
-
-```python
-from amplihack.utils.paths import FrameworkPathResolver
-print(FrameworkPathResolver.resolve_preferences_file())
-```
-
-### Issue: Hook not being called
-
-**Solution**: Verify hook is registered with Claude Code and executable:
-
-```bash
-ls -l .claude/tools/amplihack/hooks/user_prompt_submit.py
-# Should show executable bit: -rwxr-xr-x
-```
-
-## Performance Analysis
-
-### Baseline Metrics (5 runs)
-
-- Average: 116.2ms
-- Min: 76.7ms
-- Max: 153.1ms
-
-### Performance Breakdown
-
-- Python startup: ~50-70ms
-- File I/O (first run): ~30-40ms
-- Parsing and processing: ~10-20ms
-- Cached runs: < 1ms (negligible)
-
-### Optimization Notes
-
-- Python startup overhead is unavoidable in subprocess execution
-- Caching provides near-instant repeated access
-- Performance is acceptable for REPL usage (< 200ms target)
-
-## Future Enhancements
-
-### Potential Improvements
-
-1. **Selective injection**: Only inject preferences relevant to the prompt
-2. **Context compression**: Further reduce injected text for efficiency
-3. **Preference priorities**: Weight preferences based on prompt context
-4. **User-specific caching**: Per-user cache for multi-user environments
-
-### Not Recommended
-
-1. **Pre-compiled Python**: Marginal gains, added complexity
-2. **Background daemon**: Overkill for simple preference injection
-3. **Binary rewrite**: Python is fast enough for this use case
-
-## Related Files
-
-- **Base class**: `~/.amplihack/.claude/tools/amplihack/hooks/hook_processor.py`
-- **Session start**: `~/.amplihack/.claude/tools/amplihack/hooks/session_start.py`
-- **Path resolution**: `src/amplihack/utils/paths.py`
-- **Preferences file**: `~/.amplihack/.claude/context/USER_PREFERENCES.md`
-
-## References
-
-- Claude Code Hook System: [Official Documentation]
-- Amplihack Philosophy: `~/.amplihack/.claude/context/PHILOSOPHY.md`
-- User Preferences Guide: `~/.amplihack/.claude/context/USER_PREFERENCES.md`
-- Priority Hierarchy: `~/.amplihack/.claude/context/USER_REQUIREMENT_PRIORITY.md`
diff --git a/amplifier-bundle/tools/amplihack/hooks/agent_memory_hook.py b/amplifier-bundle/tools/amplihack/hooks/agent_memory_hook.py
deleted file mode 100755
index d1018bc4d..000000000
--- a/amplifier-bundle/tools/amplihack/hooks/agent_memory_hook.py
+++ /dev/null
@@ -1,466 +0,0 @@
-#!/usr/bin/env python3
-"""Shared logic for integrating memory system with agent execution.
-
-This module provides utilities for:
-1. Detecting agent references in prompts (@.claude/agents/*.md)
-2. Injecting relevant memory context before agent execution
-3. Extracting and storing learnings after agent execution
-
-Integration Points:
-- user_prompt_submit: Inject memory context when agent detected
-- stop: Extract learnings from conversation after agent execution
-
-Uses MemoryCoordinator for storage (SQLite or Neo4j backend).
-"""
-
-import logging
-import re
-import sys
-from pathlib import Path
-from typing import Any
-
-# Setup path for imports
-sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "src"))
-
-logger = logging.getLogger(__name__)
-
-
-# Agent reference patterns
-AGENT_REFERENCE_PATTERNS = [
- r"@\.claude/agents/amplihack/[^/]+/([^/]+)\.md", # @.claude/agents/amplihack/core/architect.md
- r"@\.claude/agents/([^/]+)\.md", # @.claude/agents/architect.md
- r"Include\s+@\.claude/agents/[^/]+/([^/]+)\.md", # Include @.claude/agents/...
- r"Use\s+([a-z-]+)\.md\s+agent", # Use architect.md agent
- r"/([a-z-]+)\s", # Slash commands that invoke agents (e.g., /ultrathink, /fix)
-]
-
-# Map slash commands to agent types
-SLASH_COMMAND_AGENTS = {
- "ultrathink": "orchestrator",
- "fix": "fix-agent",
- "analyze": "analyzer",
- "improve": "reviewer",
- "socratic": "ambiguity",
- "debate": "multi-agent-debate",
- "reflect": "reflection",
- "xpia": "xpia-defense",
-}
-
-
-def detect_agent_references(prompt: str) -> list[str]:
- """Detect agent references in a prompt.
-
- Args:
- prompt: The user prompt to analyze
-
- Returns:
- List of agent type names detected (e.g., ["architect", "builder"])
- """
- agents = set()
-
- # Check each pattern
- for pattern in AGENT_REFERENCE_PATTERNS:
- matches = re.finditer(pattern, prompt, re.IGNORECASE)
- for match in matches:
- agent_name = match.group(1).lower()
- # Normalize agent names
- agent_name = agent_name.replace("_", "-")
- agents.add(agent_name)
-
- return list(agents)
-
-
-def detect_slash_command_agent(prompt: str) -> str | None:
- """Detect if prompt starts with a slash command that invokes an agent.
-
- Args:
- prompt: The user prompt to analyze
-
- Returns:
- Agent type name if slash command detected, None otherwise
- """
- # Check if prompt starts with a slash command
- prompt_clean = prompt.strip()
- if not prompt_clean.startswith("/"):
- return None
-
- # Extract command name
- match = re.match(r"^/([a-z-]+)", prompt_clean)
- if not match:
- return None
-
- command = match.group(1)
- return SLASH_COMMAND_AGENTS.get(command)
-
-
-async def inject_memory_for_agents(
- prompt: str, agent_types: list[str], session_id: str | None = None
-) -> tuple[str, dict[str, Any]]:
- """Inject memory context for detected agents into prompt.
-
- Args:
- prompt: Original user prompt
- agent_types: List of agent types detected
- session_id: Optional session ID for logging
-
- Returns:
- Tuple of (enhanced_prompt, metadata_dict)
- """
- if not agent_types:
- return prompt, {}
-
- try:
- # Import memory coordinator (lazy import to avoid startup overhead)
- from amplihack.memory.coordinator import MemoryCoordinator, RetrievalQuery
- from amplihack.memory.types import MemoryType
-
- # Initialize coordinator with session_id
- coordinator = MemoryCoordinator(session_id=session_id or "hook_session")
-
- # Inject memory for each agent type
- memory_sections = []
- metadata = {"agents": agent_types, "memories_injected": 0, "memory_available": True}
-
- for agent_type in agent_types:
- # Normalize agent type (lowercase, replace spaces with hyphens)
- normalized_type = agent_type.lower().replace(" ", "-")
-
- # Get memory context for this agent
- try:
- # Retrieve relevant memories using query
- query_text = prompt[:500] # Use first 500 chars as query
-
- # Build retrieval query with comprehensive context
- query = RetrievalQuery(
- query_text=query_text,
- token_budget=2000,
- memory_types=[MemoryType.EPISODIC, MemoryType.SEMANTIC, MemoryType.PROCEDURAL],
- )
-
- memories = await coordinator.retrieve(query)
-
- if memories:
- # Format memories for injection
- memory_lines = [f"\n## Memory for {normalized_type} Agent\n"]
- for mem in memories:
- memory_lines.append(f"- {mem.content} (relevance: {mem.score:.2f})")
-
- memory_sections.append("\n".join(memory_lines))
- metadata["memories_injected"] += len(memories)
-
- except Exception as e:
- logger.warning(f"Failed to inject memory for {normalized_type}: {e}")
- continue
-
- # Build enhanced prompt
- if memory_sections:
- enhanced_prompt = "\n".join(memory_sections) + "\n\n---\n\n" + prompt
- return enhanced_prompt, metadata
-
- return prompt, metadata
-
- except ImportError as e:
- logger.warning(f"Memory system not available: {e}")
- return prompt, {"memory_available": False, "error": "import_failed"}
-
- except Exception as e:
- logger.error(f"Failed to inject memory: {e}")
- return prompt, {"memory_available": False, "error": str(e)}
-
-
-async def extract_learnings_from_conversation(
- conversation_text: str, agent_types: list[str], session_id: str | None = None
-) -> dict[str, Any]:
- """Extract and store learnings from conversation after agent execution.
-
- Args:
- conversation_text: Full conversation text (including agent responses)
- agent_types: List of agent types that were involved
- session_id: Optional session ID for tracking
-
- Returns:
- Metadata about learnings stored
- """
- if not agent_types:
- return {"learnings_stored": 0, "agents": []}
-
- try:
- # Import memory coordinator (lazy import)
- from amplihack.memory.coordinator import MemoryCoordinator, StorageRequest
- from amplihack.memory.types import MemoryType
-
- # Initialize coordinator with session_id
- coordinator = MemoryCoordinator(session_id=session_id or "hook_session")
-
- # Extract and store learnings for each agent
- metadata = {
- "agents": agent_types,
- "learnings_stored": 0,
- "memory_available": True,
- "memory_ids": [],
- }
-
- for agent_type in agent_types:
- # Normalize agent type (lowercase, replace spaces with hyphens)
- normalized_type = agent_type.lower().replace(" ", "-")
-
- try:
- # Store learning as SEMANTIC memory (reusable knowledge)
- # Extract key learnings from conversation text (simplified extraction)
- # In production, you might want more sophisticated extraction
- learning_content = f"Agent {normalized_type}: {conversation_text[:500]}"
-
- # Build storage request with context and metadata
- request = StorageRequest(
- content=learning_content,
- memory_type=MemoryType.SEMANTIC,
- context={"agent_type": normalized_type},
- metadata={
- "tags": ["learning", "conversation"],
- "task": "Conversation with user",
- "success": True,
- },
- )
-
- memory_id = await coordinator.store(request)
-
- if memory_id:
- metadata["learnings_stored"] += 1
- metadata["memory_ids"].append(memory_id)
- logger.info(f"Stored 1 learning from {normalized_type} conversation")
-
- except Exception as e:
- logger.warning(f"Failed to extract learnings for {normalized_type}: {e}")
- continue
-
- return metadata
-
- except ImportError as e:
- logger.warning(f"Memory system not available: {e}")
- return {"memory_available": False, "error": "import_failed"}
-
- except Exception as e:
- logger.error(f"Failed to extract learnings: {e}")
- return {"memory_available": False, "error": str(e)}
-
-
-def format_memory_injection_notice(metadata: dict[str, Any]) -> str:
- """Format a notice about memory injection for logging/display.
-
- Args:
- metadata: Metadata from inject_memory_for_agents
-
- Returns:
- Formatted notice string
- """
- if not metadata.get("memory_available"):
- return ""
-
- agents = metadata.get("agents", [])
- count = metadata.get("memories_injected", 0)
-
- if count > 0:
- agent_list = ", ".join(agents)
- return f"🧠 Injected {count} relevant memories for agents: {agent_list}"
-
- return ""
-
-
-def format_learning_extraction_notice(metadata: dict[str, Any]) -> str:
- """Format a notice about learning extraction for logging/display.
-
- Args:
- metadata: Metadata from extract_learnings_from_conversation
-
- Returns:
- Formatted notice string
- """
- if not metadata.get("memory_available"):
- return ""
-
- count = metadata.get("learnings_stored", 0)
-
- if count > 0:
- agents = metadata.get("agents", [])
- agent_list = ", ".join(agents)
- return f"🧠 Stored {count} new learnings from agents: {agent_list}"
-
- return ""
-
-
-# ============================================================================
-# SYNC WRAPPERS - Solution for Issue #1960
-# ============================================================================
-# These sync wrapper functions safely handle async functions in synchronous
-# contexts (like hooks). They handle three critical edge cases:
-# 1. No event loop exists (create new loop)
-# 2. Event loop already running (use thread to avoid nested loop)
-# 3. Import errors or exceptions (fail-open gracefully)
-
-
-def inject_memory_for_agents_sync(
- prompt: str, agent_types: list[str], session_id: str | None = None
-) -> tuple[str, dict[str, Any]]:
- """Synchronous wrapper for inject_memory_for_agents.
-
- Safely calls async inject_memory_for_agents from synchronous context.
- Handles three edge cases:
- 1. No event loop - creates new loop
- 2. Running event loop - uses thread to avoid nesting
- 3. Errors - fails open (returns original prompt)
-
- Args:
- prompt: Original user prompt
- agent_types: List of agent types detected
- session_id: Optional session ID for logging
-
- Returns:
- Tuple of (enhanced_prompt, metadata_dict)
- """
- # Handle empty agent_types early
- if not agent_types:
- return prompt, {}
-
- try:
- import asyncio
-
- # Try to get running loop
- try:
- loop = asyncio.get_running_loop()
- # Loop is running - must use thread to avoid nested loop error
- import threading
-
- result = [None]
- error = [None]
-
- def run_in_thread():
- try:
- # Create new loop in thread
- new_loop = asyncio.new_event_loop()
- asyncio.set_event_loop(new_loop)
- try:
- result[0] = new_loop.run_until_complete(
- inject_memory_for_agents(prompt, agent_types, session_id)
- )
- finally:
- new_loop.close()
- except Exception as e:
- error[0] = e
-
- thread = threading.Thread(target=run_in_thread)
- thread.start()
- thread.join(timeout=30) # 30 second timeout
-
- if error[0]:
- raise error[0]
-
- if result[0]:
- return result[0]
- # Timeout or no result
- logger.warning("Memory injection timed out in thread")
- return prompt, {"memory_available": False, "error": "timeout"}
-
- except RuntimeError:
- # No running loop - safe to create one
- loop = asyncio.new_event_loop()
- asyncio.set_event_loop(loop)
- try:
- result = loop.run_until_complete(
- inject_memory_for_agents(prompt, agent_types, session_id)
- )
- return result
- finally:
- loop.close()
-
- except ImportError as e:
- logger.warning(f"Memory system not available: {e}")
- return prompt, {"memory_available": False, "error": "import_failed"}
-
- except Exception as e:
- logger.error(f"Failed to inject memory (sync wrapper): {e}")
- return prompt, {"memory_available": False, "error": str(e)}
-
-
-def extract_learnings_from_conversation_sync(
- conversation_text: str, agent_types: list[str], session_id: str | None = None
-) -> dict[str, Any]:
- """Synchronous wrapper for extract_learnings_from_conversation.
-
- Safely calls async extract_learnings_from_conversation from synchronous context.
- Handles three edge cases:
- 1. No event loop - creates new loop
- 2. Running event loop - uses thread to avoid nesting
- 3. Errors - fails open (returns minimal metadata)
-
- Args:
- conversation_text: Full conversation text
- agent_types: List of agent types involved
- session_id: Optional session ID for tracking
-
- Returns:
- Metadata about learnings stored
- """
- # Handle empty agent_types early
- if not agent_types:
- return {"learnings_stored": 0, "agents": []}
-
- try:
- import asyncio
-
- # Try to get running loop
- try:
- loop = asyncio.get_running_loop()
- # Loop is running - must use thread to avoid nested loop error
- import threading
-
- result = [None]
- error = [None]
-
- def run_in_thread():
- try:
- # Create new loop in thread
- new_loop = asyncio.new_event_loop()
- asyncio.set_event_loop(new_loop)
- try:
- result[0] = new_loop.run_until_complete(
- extract_learnings_from_conversation(
- conversation_text, agent_types, session_id
- )
- )
- finally:
- new_loop.close()
- except Exception as e:
- error[0] = e
-
- thread = threading.Thread(target=run_in_thread)
- thread.start()
- thread.join(timeout=30) # 30 second timeout
-
- if error[0]:
- raise error[0]
-
- if result[0]:
- return result[0]
- # Timeout or no result
- logger.warning("Learning extraction timed out in thread")
- return {"memory_available": False, "error": "timeout", "learnings_stored": 0}
-
- except RuntimeError:
- # No running loop - safe to create one
- loop = asyncio.new_event_loop()
- asyncio.set_event_loop(loop)
- try:
- result = loop.run_until_complete(
- extract_learnings_from_conversation(conversation_text, agent_types, session_id)
- )
- return result
- finally:
- loop.close()
-
- except ImportError as e:
- logger.warning(f"Memory system not available: {e}")
- return {"memory_available": False, "error": "import_failed", "learnings_stored": 0}
-
- except Exception as e:
- logger.error(f"Failed to extract learnings (sync wrapper): {e}")
- return {"memory_available": False, "error": str(e), "learnings_stored": 0}
diff --git a/amplifier-bundle/tools/amplihack/hooks/claude_power_steering.py b/amplifier-bundle/tools/amplihack/hooks/claude_power_steering.py
deleted file mode 100755
index 5c5eb8b72..000000000
--- a/amplifier-bundle/tools/amplihack/hooks/claude_power_steering.py
+++ /dev/null
@@ -1,1198 +0,0 @@
-#!/usr/bin/env python3
-"""
-Claude SDK-based power-steering analysis with graceful shutdown support.
-
-Uses Claude Agent SDK to intelligently analyze session transcripts against
-considerations, replacing heuristic pattern matching with AI-powered analysis.
-
-Shutdown Behavior:
- During application shutdown (AMPLIHACK_SHUTDOWN_IN_PROGRESS=1), all sync
- wrapper functions immediately return safe defaults to prevent asyncio
- event loop hangs. This enables clean 2-3 second exits without Ctrl-C.
-
- Fail-Open Philosophy: If shutdown is in progress, bypass async operations
- and return values that never block users:
- - analyze_claims_sync() → [] (no claims detected)
- - analyze_if_addressed_sync() → None (no evidence found)
- - analyze_consideration_sync() → (True, None) (assume satisfied)
-
-Optional Dependencies:
- claude-agent-sdk: Required for AI-powered analysis
- Install: pip install claude-agent-sdk
-
- When unavailable, the system gracefully falls back to keyword-based
- heuristics (see fallback_heuristics.py). This ensures power steering
- always works, even without the SDK.
-
-Philosophy:
-- Ruthlessly Simple: Single-purpose module with clear contract
-- Fail-Open: Never block users due to bugs - always allow stop on errors
-- Zero-BS: No stubs, every function works or doesn't exist
-- Modular: Self-contained brick that plugs into power_steering_checker
-- Clean Shutdown: Detect shutdown in progress, bypass async, return safe defaults
-"""
-
-import asyncio
-import os
-import re
-from pathlib import Path
-
-# Try to import Claude SDK
-try:
- from claude_agent_sdk import ClaudeAgentOptions, query # type: ignore[import-not-found]
-
- CLAUDE_SDK_AVAILABLE = True
-except ImportError:
- CLAUDE_SDK_AVAILABLE = False
-
-# Template paths (relative to this file)
-TEMPLATE_DIR = Path(__file__).parent / "templates"
-POWER_STEERING_PROMPT_TEMPLATE = TEMPLATE_DIR / "power_steering_prompt.txt"
-
-# Security constants
-MAX_SDK_RESPONSE_LENGTH = 5000
-MAX_CONVERSATION_SUMMARY_LENGTH = (
- 512_000 # Max chars for SDK conversation context (1M token window)
-)
-SUSPICIOUS_PATTERNS = [
-    r"<script>",
-    r"<script[^>]*>",
-    r"</script>",
-    r"<iframe>",
-    r"