diff --git a/.planning/PROJECT.md b/.planning/PROJECT.md index 36358c2..6733e94 100644 --- a/.planning/PROJECT.md +++ b/.planning/PROJECT.md @@ -29,7 +29,7 @@ Users can manage all their AI assistants from one place with consistent configur ### Out of Scope - Cloud services — fully local, no external dependencies -- Non-Ubuntu distros — Ubuntu only for v1, other distros later +- Non-Debian-based distros — Ubuntu and Debian for v1, other distros later - Other claw types — OpenClaw only for v1, ZeroClaw/NemoClaw later - GUI — CLI/TUI only, no web interface - Multi-user/auth — single user for v1 @@ -59,7 +59,7 @@ Users can manage all their AI assistants from one place with consistent configur - **Tech stack**: Python + Typer CLI, ansible-runner for execution, uv/uvx for packaging - **Security**: No sudo permissions — Clawrium prompts user when privileged commands needed -- **Platform**: Ubuntu only for v1 +- **Platform**: Ubuntu and Debian for v1 - **Claw support**: OpenClaw only for v1 - **Deployment**: Fully local, no cloud dependencies diff --git a/.planning/REQUIREMENTS.md b/.planning/REQUIREMENTS.md index f60e1f7..113305a 100644 --- a/.planning/REQUIREMENTS.md +++ b/.planning/REQUIREMENTS.md @@ -28,10 +28,10 @@ Requirements for initial release: OpenClaw on single Ubuntu host. 
### Claw Installation -- [ ] **INST-01**: User can install OpenClaw via interactive flow (`clm install`) -- [ ] **INST-02**: Installation validates compatibility before proceeding -- [ ] **INST-03**: Installation streams progress in real-time -- [ ] **INST-04**: Installation fails fast with clear error messages +- [x] **INST-01**: User can install OpenClaw via interactive flow (`clm install`) +- [x] **INST-02**: Installation validates compatibility before proceeding +- [x] **INST-03**: Installation streams progress in real-time +- [x] **INST-04**: Installation fails fast with clear error messages ### Secrets Management @@ -41,7 +41,7 @@ Requirements for initial release: OpenClaw on single Ubuntu host. ### Fleet Status -- [ ] **STAT-01**: User can view fleet status (`clm status`) +- [x] **STAT-01**: User can view fleet status (`clm status`) ## v2 Requirements @@ -99,14 +99,14 @@ Which phases cover which requirements. Updated during roadmap creation. | REG-01 | Phase 3 | Complete | | REG-02 | Phase 3 | Complete | | REG-03 | Phase 3 | Complete | -| SEC-01 | Phase 4 | Pending | -| SEC-02 | Phase 4 | Pending | -| SEC-03 | Phase 4 | Pending | -| INST-01 | Phase 5 | Pending | -| INST-02 | Phase 5 | Pending | -| INST-03 | Phase 5 | Pending | -| INST-04 | Phase 5 | Pending | -| STAT-01 | Phase 5 | Pending | +| INST-01 | Phase 4 | Complete | +| INST-02 | Phase 4 | Complete | +| INST-03 | Phase 4 | Complete | +| INST-04 | Phase 4 | Complete | +| STAT-01 | Phase 4 | Complete | +| SEC-01 | Phase 5 | Pending | +| SEC-02 | Phase 5 | Pending | +| SEC-03 | Phase 5 | Pending | **Coverage:** - v1 requirements: 17 total @@ -115,4 +115,4 @@ Which phases cover which requirements. Updated during roadmap creation. 
--- *Requirements defined: 2026-03-20* -*Last updated: 2026-03-20 after initial definition* +*Last updated: 2026-03-21 after phase 4 planning* diff --git a/.planning/ROADMAP.md b/.planning/ROADMAP.md index 2cc1617..773c0b3 100644 --- a/.planning/ROADMAP.md +++ b/.planning/ROADMAP.md @@ -14,9 +14,9 @@ Decimal phases appear between their surrounding integers in numeric order. - [x] **Phase 1: Foundation Setup** - Initialize Clawrium configuration and verify dependencies (completed 2026-03-21) - [x] **Phase 2: Host Management** - Add, list, remove hosts with hardware capability detection (completed 2026-03-21) -- [ ] **Phase 3: Registry & Compatibility** - Load claw manifests and validate hardware compatibility -- [ ] **Phase 4: Secrets Management** - Secure storage and retrieval of API keys and credentials -- [ ] **Phase 5: Installation & Fleet Status** - Install OpenClaw instances and view fleet status +- [x] **Phase 3: Registry & Compatibility** - Load claw manifests and validate hardware compatibility (completed 2026-03-21) +- [ ] **Phase 4: Installation & Fleet Status** - Install OpenClaw instances and view fleet status +- [ ] **Phase 5: Secrets Management** - Secure storage and retrieval of API keys and credentials ## Phase Details @@ -68,29 +68,32 @@ Plans: - [x] 03-03-PLAN.md — Implement compatibility checking function (REG-03) - [x] 03-04-PLAN.md — Registry CLI commands (list, show) (REG-02) -### Phase 4: Secrets Management -**Goal**: Users can securely store and manage secrets for claw instances -**Depends on**: Phase 1 -**Requirements**: SEC-01, SEC-02, SEC-03 -**Success Criteria** (what must be TRUE): - 1. User can set a secret with `clm secret set` and it's stored with mode 600 - 2. User can list secret keys with `clm secret list` and values are never displayed - 3. 
Secrets file is created with correct permissions (600) on first write -**Plans**: TBD - -Plans: -- [ ] 04-01: TBD - -### Phase 5: Installation & Fleet Status +### Phase 4: Installation & Fleet Status **Goal**: Users can install OpenClaw on Ubuntu hosts and view fleet status -**Depends on**: Phase 2, Phase 3, Phase 4 +**Depends on**: Phase 2, Phase 3 **Requirements**: INST-01, INST-02, INST-03, INST-04, STAT-01 **Success Criteria** (what must be TRUE): - 1. User runs `clm install` and flows through: pick claw → pick host → validate compatibility → configure → install + 1. User runs `clm install` and flows through: pick claw → pick host → validate compatibility → install 2. Installation validates compatibility before proceeding and fails fast if host is incompatible 3. User sees real-time progress during installation (base setup, dependencies, claw installation) 4. Installation fails fast with clear error messages if any step fails 5. User runs `clm status` and sees all hosts with their claw instances, agents, and status +**Plans**: 4 plans + +Plans: +- [x] 04-01-PLAN.md — Core install module and Ansible playbooks (INST-02, INST-04) +- [x] 04-02-PLAN.md — Install CLI with interactive flow and progress (INST-01, INST-03) +- [x] 04-03-PLAN.md — Install state tracking and health check module (INST-04, STAT-01) +- [x] 04-04-PLAN.md — Fleet status CLI command (STAT-01) + +### Phase 5: Secrets Management +**Goal**: Users can securely store and manage secrets for claw instances +**Depends on**: Phase 1 +**Requirements**: SEC-01, SEC-02, SEC-03 +**Success Criteria** (what must be TRUE): + 1. User can set a secret with `clm secret set` and it's stored with mode 600 + 2. User can list secret keys with `clm secret list` and values are never displayed + 3. Secrets file is created with correct permissions (600) on first write **Plans**: TBD Plans: @@ -105,6 +108,6 @@ Phases execute in numeric order: 1 → 2 → 3 → 4 → 5 |-------|----------------|--------|-----------| | 1. 
Foundation Setup | 2/2 | Complete | 2026-03-21 | | 2. Host Management | 4/4 | Complete | 2026-03-21 | -| 3. Registry & Compatibility | 0/4 | Planned | - | -| 4. Secrets Management | 0/0 | Not started | - | -| 5. Installation & Fleet Status | 0/0 | Not started | - | +| 3. Registry & Compatibility | 4/4 | Complete | 2026-03-21 | +| 4. Installation & Fleet Status | 4/4 | Complete | 2026-03-22 | +| 5. Secrets Management | 0/0 | Not started | - | diff --git a/.planning/STATE.md b/.planning/STATE.md index e6a9ef0..3be778d 100644 --- a/.planning/STATE.md +++ b/.planning/STATE.md @@ -3,14 +3,14 @@ gsd_state_version: 1.0 milestone: v1.0 milestone_name: milestone status: unknown -stopped_at: Completed 03-04-PLAN.md -last_updated: "2026-03-21T22:42:48.527Z" -last_activity: 2026-03-21 +stopped_at: Completed 04-04-PLAN.md +last_updated: "2026-03-22T04:38:50.124Z" +last_activity: 2026-03-22 progress: total_phases: 5 - completed_phases: 3 - total_plans: 10 - completed_plans: 10 + completed_phases: 4 + total_plans: 14 + completed_plans: 14 --- # Project State @@ -20,11 +20,11 @@ progress: See: .planning/PROJECT.md (updated 2026-03-20) **Core value:** Users can manage all their AI assistants from one place with consistent configuration and security practices.
-**Current focus:** Phase 03 — registry-compatibility +**Current focus:** Phase 04 — installation-fleet-status ## Current Position -Phase: 4 +Phase: 5 Plan: Not started ## Performance Metrics @@ -51,6 +51,10 @@ Plan: Not started | Phase 03 P02 | 127 | 1 task | 2 files | | Phase 03 P03 | 128 | 1 task | 2 files | | Phase 03 P04 | 156 | 2 tasks | 3 files | +| Phase 04 P01 | 237 | 2 tasks | 5 files | +| Phase 04 P02 | 280 | 1 task | 3 files | +| Phase 04 P03 | 336 | 2 tasks | 5 files | +| Phase 04 P04 | 128 | 1 task | 3 files | ## Accumulated Context @@ -77,6 +81,14 @@ Recent decisions affecting current work: - [Phase 03]: OS names normalized to lowercase (Ubuntu → ubuntu) for consistent compatibility checking - [Phase 03-03]: Sparse matrix compatibility matching: only explicit manifest entries valid, no partial matches - [Phase 03-04]: Implemented both list and show commands in single module following existing CLI patterns +- [Phase 04-01]: Base playbook located at project root (platform/) not in src/ for easier discovery +- [Phase 04-01]: OpenClaw user naming pattern: opc-{{ inventory_hostname }} using the inventory_hostname variable +- [Phase 04-02]: Hybrid invocation pattern for CLI commands (interactive prompts when flags missing, direct with flags) +- [Phase 04-02]: Rich Panel for confirmation dialogs, Rich Progress for long-running operations +- [Phase 04-03]: Use ISO 8601 timestamps for installed_at field in claw tracking +- [Phase 04-03]: Use pgrep for process detection in health checks (simple, portable) +- [Phase 04-04]: Claw-centric grouping: display organized by claw type rather than by host for better fleet visibility +- [Phase 04-04]: Rich Progress spinner for health checks provides UX feedback on potentially slow SSH operations ### Pending Todos @@ -97,6 +109,6 @@ None yet.
## Session Continuity -Last activity: 2026-03-21 -Stopped at: Completed 03-04-PLAN.md +Last activity: 2026-03-22 +Stopped at: Completed 04-04-PLAN.md Resume file: None diff --git a/.planning/phases/04-installation-fleet-status/04-01-PLAN.md b/.planning/phases/04-installation-fleet-status/04-01-PLAN.md new file mode 100644 index 0000000..c97724c --- /dev/null +++ b/.planning/phases/04-installation-fleet-status/04-01-PLAN.md @@ -0,0 +1,415 @@ +--- +phase: 04-installation-fleet-status +plan: 01 +type: execute +wave: 1 +depends_on: [] +files_modified: + - platform/playbooks/base.yaml + - src/clawrium/platform/registry/openclaw/playbooks/install.yaml + - src/clawrium/core/install.py + - tests/test_install.py +autonomous: true +requirements: [INST-02, INST-04] + +must_haves: + truths: + - "Compatibility is validated before any installation starts" + - "Installation fails with clear error if host is incompatible" + - "Base playbook installs system dependencies without claw-specific code" + - "OpenClaw playbook installs claw-specific components" + artifacts: + - path: "platform/playbooks/base.yaml" + provides: "System dependency installation (Node.js, build tools)" + contains: "- hosts:" + - path: "src/clawrium/platform/registry/openclaw/playbooks/install.yaml" + provides: "OpenClaw-specific installation tasks" + contains: "- hosts:" + - path: "src/clawrium/core/install.py" + provides: "Installation orchestration with validation" + exports: ["run_installation", "InstallationError"] + - path: "tests/test_install.py" + provides: "Installation module tests" + min_lines: 50 + key_links: + - from: "src/clawrium/core/install.py" + to: "src/clawrium/core/registry.py" + via: "check_compatibility import" + pattern: "from clawrium.core.registry import check_compatibility" + - from: "src/clawrium/core/install.py" + to: "platform/playbooks/base.yaml" + via: "ansible-runner playbook execution" + pattern: "ansible_runner.run" +--- + + +Create the core installation infrastructure: 
Ansible playbooks for base system setup and OpenClaw installation, plus the Python orchestration module that validates compatibility and runs playbooks in sequence. + +Purpose: Provides the foundation for the install command - playbooks define WHAT to install, core module orchestrates HOW. +Output: Working playbooks and install.py that can be called by the CLI layer. + + + +@/home/devashish/workspace/ric03uec/clawrium/.claude/get-shit-done/workflows/execute-plan.md +@/home/devashish/workspace/ric03uec/clawrium/.claude/get-shit-done/templates/summary.md + + + +@.planning/PROJECT.md +@.planning/ROADMAP.md +@.planning/STATE.md +@.planning/phases/04-installation-fleet-status/04-CONTEXT.md + + + + +From src/clawrium/core/registry.py: +```python +class CompatibilityResult(TypedDict): + compatible: bool + matched_entry: ManifestEntry | None + reasons: list[str] + +def check_compatibility(claw_name: str, hardware: dict, version: str | None = None) -> CompatibilityResult +def load_manifest(claw_name: str) -> ClawManifest +def get_claw_info(claw_name: str) -> dict +``` + +From src/clawrium/core/hosts.py: +```python +def load_hosts() -> list[dict] +def get_host(identifier: str) -> dict | None +def update_host(hostname: str, updater: Callable[[dict], dict]) -> bool +``` + +From src/clawrium/core/hardware.py: +```python +def gather_hardware(hostname: str, user: str = "xclm", port: int = 22, ssh_key: str | None = None) -> HardwareInfo +``` + +From src/clawrium/platform/registry/openclaw/manifest.yaml: +```yaml +name: openclaw +entries: + - version: "0.1.0" + os: ubuntu + os_version: "24.04" + arch: x86_64 + requirements: + min_memory_mb: 2048 + gpu_required: false + dependencies: + nodejs: ">=20.0.0" +``` + + + + + + + Task 1: Create base system playbook and OpenClaw install playbook + platform/playbooks/base.yaml, src/clawrium/platform/registry/openclaw/playbooks/install.yaml + + - src/clawrium/platform/registry/openclaw/manifest.yaml (requirements to fulfill) + - 
src/clawrium/core/hardware.py (understand Ansible facts patterns used) + + + - base.yaml creates directory structure for playbook + - base.yaml installs Node.js 20+ via NodeSource repo on Ubuntu + - base.yaml installs build-essential for native modules + - base.yaml is idempotent (safe to rerun) + - openclaw/install.yaml creates opc- user + - openclaw/install.yaml clones OpenClaw repo to user home + - openclaw/install.yaml runs npm install + + +Create `platform/playbooks/` directory and base.yaml with these tasks: +1. Ensure apt cache is updated (cache_valid_time: 3600) +2. Install required packages: curl, ca-certificates, gnupg +3. Add NodeSource GPG key and repository for Node.js 20 +4. Install nodejs package +5. Install build-essential + +All tasks use `become: yes` since xclm has passwordless sudo (per D-08). + +Create `src/clawrium/platform/registry/openclaw/playbooks/` directory and install.yaml with these tasks: +1. Create claw user `opc-{{ inventory_hostname }}` with home directory +2. Clone OpenClaw repo (https://github.com/openclaw/openclaw.git) to /home/opc-{{ inventory_hostname }}/openclaw +3. Run npm install in cloned directory as claw user +4. Create workspace directory at /home/opc-{{ inventory_hostname }}/workspace + +Note: Use `inventory_hostname` variable for claw user naming per D-07. +Both playbooks target `all` hosts and will be run against single hosts at invocation time. 
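To make the intent concrete, the base.yaml steps above might be sketched as follows. This is a minimal illustration, not the committed playbook: the module names are standard Ansible builtins, but the NodeSource key and repository URLs are assumptions based on NodeSource's documented layout and may differ.

```yaml
# Illustrative sketch of platform/playbooks/base.yaml (assumptions noted above).
- hosts: all
  become: yes
  tasks:
    - name: Update apt cache (reuse if refreshed within the last hour)
      ansible.builtin.apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Install prerequisite packages
      ansible.builtin.apt:
        name: [curl, ca-certificates, gnupg]
        state: present

    - name: Ensure apt keyring directory exists
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: "0755"

    - name: Download NodeSource signing key
      ansible.builtin.get_url:
        url: https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key
        dest: /etc/apt/keyrings/nodesource.asc
        mode: "0644"

    - name: Add NodeSource repository for Node.js 20
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/nodesource.asc] https://deb.nodesource.com/node_20.x nodistro main"
        state: present

    - name: Install Node.js and build tools
      ansible.builtin.apt:
        name: [nodejs, build-essential]
        state: present
        update_cache: yes
```

Every task here is declarative and idempotent, so rerunning the playbook is safe, matching the done-when criteria above.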
+ + + ls -la platform/playbooks/base.yaml src/clawrium/platform/registry/openclaw/playbooks/install.yaml && grep -q "hosts:" platform/playbooks/base.yaml && grep -q "hosts:" src/clawrium/platform/registry/openclaw/playbooks/install.yaml + + + - platform/playbooks/base.yaml exists and contains "- hosts:" + - platform/playbooks/base.yaml contains "nodejs" installation task + - platform/playbooks/base.yaml contains "build-essential" + - platform/playbooks/base.yaml contains "become: yes" (requires sudo) + - src/clawrium/platform/registry/openclaw/playbooks/install.yaml exists + - openclaw install.yaml contains "opc-" user creation + - openclaw install.yaml contains "npm install" task + - openclaw install.yaml contains git clone for openclaw repo + + Both playbooks exist with complete task definitions + + + + Task 2: Create core install module with validation and orchestration + src/clawrium/core/install.py, tests/test_install.py + + - src/clawrium/core/registry.py (check_compatibility function signature and return type) + - src/clawrium/core/hardware.py (gather_hardware pattern and ansible_runner usage) + - src/clawrium/core/hosts.py (get_host, update_host patterns) + - src/clawrium/core/keys.py (get_host_private_key for SSH key retrieval) + - tests/test_hardware.py (ansible_runner mocking patterns) + + + - Test: run_installation("invalid_claw", host) raises InstallationError with "not found" + - Test: run_installation("openclaw", incompatible_host) raises InstallationError with compatibility reasons + - Test: run_installation("openclaw", compatible_host) returns success with playbook paths executed + - Test: run_installation streams events via callback function + + +Create `src/clawrium/core/install.py` with: + +```python +"""Installation orchestration for claw deployment. + +This module handles the end-to-end installation flow: +1. Validate claw exists in registry +2. Check host compatibility +3. Run base playbook (system dependencies) +4. 
Run claw-specific playbook +""" + +import logging +import os +import tempfile +from pathlib import Path +from typing import Callable, TypedDict + +import ansible_runner + +from clawrium.core.hosts import get_host, update_host +from clawrium.core.keys import get_host_private_key +from clawrium.core.registry import ( + check_compatibility, + load_manifest, + ManifestNotFoundError, +) + +logger = logging.getLogger(__name__) + + +class InstallationError(Exception): + """Raised when installation fails.""" + pass + + +class InstallResult(TypedDict): + """Result of installation operation.""" + success: bool + claw: str + version: str + host: str + playbooks_run: list[str] + error: str | None + + +def _get_base_playbook_path() -> Path: + """Get path to base system playbook.""" + # Use package-relative path + return Path(__file__).parent.parent / "platform" / "playbooks" / "base.yaml" + + +def _get_claw_playbook_path(claw_name: str) -> Path: + """Get path to claw-specific install playbook.""" + return ( + Path(__file__).parent.parent + / "platform" + / "registry" + / claw_name + / "playbooks" + / "install.yaml" + ) + + +def run_installation( + claw_name: str, + hostname: str, + on_event: Callable[[str, str], None] | None = None, +) -> InstallResult: + """Run full installation of a claw on a host. 
+ + Args: + claw_name: Name of claw to install (e.g., "openclaw") + hostname: Hostname or alias of target host + on_event: Optional callback for progress events (stage, message) + + Returns: + InstallResult with success status and details + + Raises: + InstallationError: If validation fails or playbook execution fails + """ + def emit(stage: str, message: str) -> None: + if on_event: + on_event(stage, message) + logger.info("[%s] %s", stage, message) + + # Step 1: Load manifest (validates claw exists) + emit("validate", f"Checking {claw_name} manifest...") + try: + manifest = load_manifest(claw_name) + except ManifestNotFoundError as e: + raise InstallationError(f"Claw '{claw_name}' not found in registry") from e + + # Step 2: Get host record + emit("validate", f"Loading host {hostname}...") + host = get_host(hostname) + if not host: + raise InstallationError(f"Host '{hostname}' not found. Run 'clm host add' first.") + + # Step 3: Check compatibility + emit("validate", "Checking compatibility...") + hardware = host.get("hardware", {}) + compat = check_compatibility(claw_name, hardware) + + if not compat["compatible"]: + reasons = ", ".join(compat["reasons"]) + raise InstallationError(f"Host is incompatible: {reasons}") + + matched_version = compat["matched_entry"]["version"] + emit("validate", f"Compatible with {claw_name} v{matched_version}") + + # Step 4: Get SSH credentials + key_id = host.get("key_id") or host["hostname"] + ssh_key = get_host_private_key(key_id) + if not ssh_key: + raise InstallationError(f"No SSH key found for host. 
Run 'clm host init {key_id}'.") + + # Step 5: Build inventory + inventory = { + "all": { + "hosts": { + host["hostname"]: { + "ansible_user": host.get("user", "xclm"), + "ansible_port": host.get("port", 22), + "ansible_ssh_private_key_file": str(ssh_key), + } + } + } + } + + # Step 6: Run base playbook + base_playbook = _get_base_playbook_path() + if not base_playbook.exists(): + raise InstallationError(f"Base playbook not found: {base_playbook}") + + emit("base", "Installing system dependencies...") + playbooks_run = [] + + with tempfile.TemporaryDirectory() as tmpdir: + os.chmod(tmpdir, 0o700) + + result = ansible_runner.run( + private_data_dir=tmpdir, + inventory=inventory, + playbook=str(base_playbook), + quiet=True, + timeout=300, # 5 min timeout for base install + ) + + if result.status != "successful": + raise InstallationError( + f"Base playbook failed: {result.status}. " + f"Check logs at {tmpdir}/artifacts/" + ) + playbooks_run.append(str(base_playbook)) + emit("base", "System dependencies installed") + + # Step 7: Run claw playbook + claw_playbook = _get_claw_playbook_path(claw_name) + if not claw_playbook.exists(): + raise InstallationError(f"Claw playbook not found: {claw_playbook}") + + emit("claw", f"Installing {claw_name}...") + + with tempfile.TemporaryDirectory() as tmpdir: + os.chmod(tmpdir, 0o700) + + result = ansible_runner.run( + private_data_dir=tmpdir, + inventory=inventory, + playbook=str(claw_playbook), + quiet=True, + timeout=600, # 10 min timeout for claw install + ) + + if result.status != "successful": + raise InstallationError( + f"Claw playbook failed: {result.status}. " + f"Check logs at {tmpdir}/artifacts/" + ) + playbooks_run.append(str(claw_playbook)) + emit("claw", f"{claw_name} installed successfully") + + return { + "success": True, + "claw": claw_name, + "version": matched_version, + "host": host["hostname"], + "playbooks_run": playbooks_run, + "error": None, + } +``` + +Create `tests/test_install.py` with tests: +1. 
test_install_invalid_claw - raises InstallationError with "not found" +2. test_install_incompatible_host - raises InstallationError with compatibility reasons +3. test_install_success - mocks ansible_runner, returns success result +4. test_install_emits_events - verifies on_event callback is called with stages +5. test_install_base_playbook_fails - raises InstallationError on base failure +6. test_install_host_not_found - raises InstallationError for unknown host + + + cd /home/devashish/workspace/ric03uec/clawrium && python -m pytest tests/test_install.py -v --tb=short 2>&1 | tail -30 + + + - src/clawrium/core/install.py exists + - install.py contains "class InstallationError" + - install.py contains "def run_installation" + - install.py contains "from clawrium.core.registry import check_compatibility" + - install.py contains "ansible_runner.run" + - tests/test_install.py exists + - tests/test_install.py contains at least 4 test functions + - All tests in test_install.py pass + + Core install module with validation, playbook orchestration, and passing tests + + + + + +- platform/playbooks/base.yaml exists with Node.js and build-essential tasks +- src/clawrium/platform/registry/openclaw/playbooks/install.yaml exists with user creation and npm install +- src/clawrium/core/install.py provides run_installation function +- tests/test_install.py has at least 4 tests, all passing +- Compatibility validation happens before any playbook runs +- Error messages include reasons for failure + + + +- Playbooks define complete installation steps for base system and OpenClaw +- run_installation validates compatibility and fails fast with clear errors +- Tests verify both success and failure paths +- Module ready for CLI integration in Plan 02 + + + +After completion, create `.planning/phases/04-installation-fleet-status/04-01-SUMMARY.md` + diff --git a/.planning/phases/04-installation-fleet-status/04-01-SUMMARY.md b/.planning/phases/04-installation-fleet-status/04-01-SUMMARY.md 
new file mode 100644 index 0000000..7d2f5d9 --- /dev/null +++ b/.planning/phases/04-installation-fleet-status/04-01-SUMMARY.md @@ -0,0 +1,193 @@ +--- +phase: 04-installation-fleet-status +plan: 01 +subsystem: installation +tags: [ansible, playbooks, orchestration, tdd] +dependency_graph: + requires: [registry-compatibility, host-management, ssh-keys] + provides: [installation-playbooks, installation-orchestration] + affects: [cli-install-command] +tech_stack: + added: [ansible-playbooks] + patterns: [tdd-red-green-refactor, event-streaming] +key_files: + created: + - platform/playbooks/base.yaml + - src/clawrium/platform/registry/openclaw/playbooks/install.yaml + - src/clawrium/core/install.py + - tests/test_playbooks.py + - tests/test_install.py + modified: [] +decisions: + - "Base playbook located at project root (platform/) not in src/ for easier discovery" + - "Ansible playbooks use become:yes - assumes xclm user has passwordless sudo per D-08" + - "OpenClaw user naming pattern: opc- using inventory_hostname variable" + - "Event streaming via optional callback for progress tracking in CLI" +metrics: + duration_minutes: 3 + tasks_completed: 2 + files_created: 5 + test_coverage: 11 + commits: 2 + completed: 2026-03-22 +--- + +# Phase 04 Plan 01: Installation Infrastructure Summary + +**One-liner:** Created Ansible playbooks for base system setup and OpenClaw installation, plus Python orchestration module with compatibility validation and event streaming + +## What Was Built + +### Task 1: Ansible Playbooks +Created two idempotent playbooks following Ansible best practices: + +**Base playbook** (`platform/playbooks/base.yaml`): +- Updates apt cache with validity check +- Installs NodeSource GPG key and Node.js 20 repository +- Installs nodejs package +- Installs build-essential for native module compilation +- All tasks use `become: yes` (assumes xclm has passwordless sudo) + +**OpenClaw install playbook** 
(`src/clawrium/platform/registry/openclaw/playbooks/install.yaml`): +- Creates claw user `opc-{{ inventory_hostname }}` with home directory +- Clones OpenClaw repository from GitHub +- Runs npm install as claw user +- Creates workspace directory with correct permissions +- Uses `inventory_hostname` variable for per-host user naming (D-07) + +### Task 2: Installation Orchestration +Created `install.py` core module with full validation and error handling: + +**Validation flow:** +1. Load claw manifest (validates claw exists) +2. Get host record (validates host exists) +3. Check compatibility (validates hardware requirements) +4. Get SSH key (validates credentials available) + +**Execution flow:** +1. Build Ansible inventory with SSH credentials +2. Run base playbook (system dependencies) +3. Run claw-specific playbook (application installation) +4. Return structured result with playbooks executed + +**Error handling:** +- `InstallationError` with clear messages for all failure modes +- Compatibility reasons included in error messages +- Playbook failure status captured and reported + +**Event streaming:** +- Optional `on_event(stage, message)` callback for progress tracking +- Stages: validate, base, claw +- Ready for CLI integration with live progress display + +## Tests Written + +**test_playbooks.py** (4 tests): +- Verifies both playbooks exist and are valid YAML +- Checks for required elements (hosts, become, nodejs, opc-, etc.) + +**test_install.py** (7 tests): +- Invalid claw raises InstallationError +- Host not found raises InstallationError +- Incompatible host raises InstallationError with reasons +- Successful installation returns correct result +- Event callback receives progress updates +- Base playbook failure raises InstallationError +- Missing SSH key raises InstallationError + +All tests use mocking patterns from existing hardware.py tests. + +## Deviations from Plan + +### Auto-fixed Issues + +**1. 
[Rule 1 - Bug] Fixed base playbook path calculation** +- **Found during:** Task 2 testing (GREEN phase) +- **Issue:** `_get_base_playbook_path()` calculated wrong path (src/platform/ instead of platform/) +- **Fix:** Changed from `parent.parent.parent` to `parent.parent.parent.parent` to reach project root +- **Files modified:** src/clawrium/core/install.py +- **Commit:** ec2cd9f (part of Task 2) +- **Reason:** Bug - tests failed because playbook path was incorrect + +No other deviations - plan executed exactly as written. + +## Key Decisions + +1. **Base playbook location:** Placed at project root `platform/playbooks/` instead of `src/clawrium/platform/playbooks/` for easier discovery and separation of concerns (playbooks are deployment artifacts, not Python code) + +2. **Sudo assumption:** Playbooks use `become: yes` without password prompts, following Decision D-08 that xclm user has passwordless sudo configured during host setup + +3. **Claw user naming:** Using `opc-{{ inventory_hostname }}` pattern ensures unique users per host per claw type, following Decision D-07 + +4. 
**Event streaming design:** Optional callback pattern allows CLI to display live progress while keeping core module decoupled from UI concerns + +## Integration Points + +**Ready for CLI integration** (Plan 02): +- `run_installation(claw_name, hostname, on_event)` provides complete installation flow +- Returns structured `InstallResult` with success status, version, playbooks run +- Raises `InstallationError` with user-friendly messages for all failure modes + +**Depends on:** +- `clawrium.core.registry.check_compatibility()` - Phase 03 +- `clawrium.core.hosts.get_host()` - Phase 02 +- `clawrium.core.keys.get_host_private_key()` - Phase 02 + +**Provides for:** +- Phase 04 Plan 02: CLI install command implementation +- Phase 04 Plan 03: Fleet status tracking (installed claw versions) + +## Verification + +### Must-Have Truths +- ✅ Compatibility is validated before any installation starts +- ✅ Installation fails with clear error if host is incompatible +- ✅ Base playbook installs system dependencies without claw-specific code +- ✅ OpenClaw playbook installs claw-specific components + +### Artifacts Verified +- ✅ `platform/playbooks/base.yaml` exists with "- hosts:" and nodejs tasks +- ✅ `src/clawrium/platform/registry/openclaw/playbooks/install.yaml` exists with opc- user and npm install +- ✅ `src/clawrium/core/install.py` exports `run_installation` and `InstallationError` +- ✅ Tests cover both success and failure paths +- ✅ All 11 tests passing (4 playbook + 7 install) + +### Key Links Verified +- ✅ install.py imports `check_compatibility` from registry.py +- ✅ install.py uses `ansible_runner.run` for playbook execution +- ✅ Playbooks use correct variable references (inventory_hostname) + +## Self-Check: PASSED + +**Files created:** +- ✅ platform/playbooks/base.yaml exists +- ✅ src/clawrium/platform/registry/openclaw/playbooks/install.yaml exists +- ✅ src/clawrium/core/install.py exists +- ✅ tests/test_playbooks.py exists +- ✅ tests/test_install.py exists + 
+**Commits verified:** +- ✅ d3cd802: feat(04-01): create base and openclaw installation playbooks +- ✅ ec2cd9f: feat(04-01): create installation orchestration module + +**Tests verified:** +```bash +make test +# 166 passed in 2.45s +``` + +All claims verified. Plan execution successful. + +## Known Stubs + +None - all functionality is fully implemented and wired. No placeholder data or hardcoded stubs. + +## Next Steps + +Plan 02 will create the CLI `install` command that: +1. Parses user input (claw name, host identifier) +2. Calls `run_installation()` with event callback for progress display +3. Displays results in user-friendly format +4. Handles errors gracefully with actionable messages + +The infrastructure is ready for integration. diff --git a/.planning/phases/04-installation-fleet-status/04-02-PLAN.md b/.planning/phases/04-installation-fleet-status/04-02-PLAN.md new file mode 100644 index 0000000..f8df7f8 --- /dev/null +++ b/.planning/phases/04-installation-fleet-status/04-02-PLAN.md @@ -0,0 +1,377 @@ +--- +phase: 04-installation-fleet-status +plan: 02 +type: execute +wave: 2 +depends_on: [04-01] +files_modified: + - src/clawrium/cli/install.py + - src/clawrium/cli/main.py + - tests/test_cli_install.py +autonomous: true +requirements: [INST-01, INST-03] + +must_haves: + truths: + - "User can run clm install and complete installation via prompts" + - "User can run clm install --claw openclaw --host kevin to skip prompts" + - "User sees step-by-step progress with spinners during installation" + - "User sees confirmation prompt with summary before installation starts" + artifacts: + - path: "src/clawrium/cli/install.py" + provides: "Interactive install command with progress display" + exports: ["install"] + - path: "src/clawrium/cli/main.py" + provides: "Main CLI with install command registered" + contains: "import install" + - path: "tests/test_cli_install.py" + provides: "CLI install command tests" + min_lines: 80 + key_links: + - from: 
"src/clawrium/cli/install.py" + to: "src/clawrium/core/install.py" + via: "run_installation import" + pattern: "from clawrium.core.install import run_installation" + - from: "src/clawrium/cli/main.py" + to: "src/clawrium/cli/install.py" + via: "command registration" + pattern: "@app.command" +--- + + +Create the `clm install` CLI command with interactive prompts, flag overrides, confirmation dialog, and Rich spinner progress display during installation. + +Purpose: User-facing installation flow that wraps core install module with polished UX. +Output: Working `clm install` command that guides users through installation or runs directly with flags. + + + +@/home/devashish/workspace/ric03uec/clawrium/.claude/get-shit-done/workflows/execute-plan.md +@/home/devashish/workspace/ric03uec/clawrium/.claude/get-shit-done/templates/summary.md + + + +@.planning/PROJECT.md +@.planning/ROADMAP.md +@.planning/phases/04-installation-fleet-status/04-CONTEXT.md + + + +```python +class InstallationError(Exception): + """Raised when installation fails.""" + pass + +class InstallResult(TypedDict): + success: bool + claw: str + version: str + host: str + playbooks_run: list[str] + error: str | None + +def run_installation( + claw_name: str, + hostname: str, + on_event: Callable[[str, str], None] | None = None, +) -> InstallResult +``` + + +From src/clawrium/cli/host.py: +```python +console = Console() +host_app = typer.Typer(name="host", help="...", no_args_is_help=True) + +@host_app.command() +def add( + hostname: str = typer.Argument(..., help="..."), + port: Optional[int] = typer.Option(None, "--port", "-p", help="..."), +) -> None: +``` + +From src/clawrium/core/registry.py: +```python +def list_claws() -> list[str] +def get_claw_info(claw_name: str) -> dict # returns {name, description, latest_version, supported_platforms} +def check_compatibility(claw_name: str, hardware: dict, version: str | None = None) -> CompatibilityResult +``` + +From src/clawrium/core/hosts.py: +```python 
+def load_hosts() -> list[dict] +def get_host(identifier: str) -> dict | None +``` + + + + + + + Task 1: Create install CLI command with interactive flow and progress display + src/clawrium/cli/install.py, src/clawrium/cli/main.py, tests/test_cli_install.py + + - src/clawrium/cli/host.py (CLI patterns, Rich console usage, Typer options) + - src/clawrium/cli/registry.py (list_claws, get_claw_info usage) + - src/clawrium/cli/main.py (how to register new commands) + - src/clawrium/core/install.py (run_installation signature from Plan 01) + - tests/test_cli_host.py (CLI testing patterns with CliRunner) + + + - Test: clm install with no args prompts for claw selection from list_claws() + - Test: clm install with no host prompts for host selection from load_hosts() + - Test: clm install --claw openclaw --host kevin skips prompts + - Test: clm install shows confirmation with claw name, version, host before proceeding + - Test: clm install shows Rich spinner during installation phases + - Test: clm install exits 1 on InstallationError with error message + + +Create `src/clawrium/cli/install.py`: + +```python +"""Install command for deploying claws to hosts.""" + +from typing import Optional + +import typer +from rich.console import Console +from rich.markup import escape +from rich.panel import Panel +from rich.progress import Progress, SpinnerColumn, TextColumn + +from clawrium.core.hosts import load_hosts, get_host, HostsFileCorruptedError +from clawrium.core.install import run_installation, InstallationError +from clawrium.core.registry import ( + list_claws, + get_claw_info, + check_compatibility, + ManifestNotFoundError, +) + +__all__ = ["install"] + +console = Console() + + +def _select_claw() -> str: + """Prompt user to select a claw from registry.""" + claws = list_claws() + if not claws: + console.print("[red]Error:[/red] No claws available in registry") + raise typer.Exit(code=1) + + console.print("\n[bold]Available claws:[/bold]") + for i, claw in 
enumerate(claws, 1): + try: + info = get_claw_info(claw) + console.print(f" {i}. {escape(claw)} (v{info['latest_version']}) - {escape(info['description'])}") + except ManifestNotFoundError: + console.print(f" {i}. {escape(claw)} (manifest error)") + + console.print() + choice = typer.prompt("Select claw", type=int) + if choice < 1 or choice > len(claws): + console.print("[red]Invalid selection[/red]") + raise typer.Exit(code=1) + + return claws[choice - 1] + + +def _select_host() -> str: + """Prompt user to select a host from registered hosts.""" + try: + hosts = load_hosts() + except HostsFileCorruptedError as e: + console.print(f"[red]Error:[/red] {e}") + raise typer.Exit(code=1) + + if not hosts: + console.print("[red]Error:[/red] No hosts registered. Run 'clm host add' first.") + raise typer.Exit(code=1) + + console.print("\n[bold]Available hosts:[/bold]") + for i, host in enumerate(hosts, 1): + name = host.get("alias") or host["hostname"] + hw = host.get("hardware", {}) + arch = hw.get("architecture", "?") + mem_gb = round(hw.get("memtotal_mb", 0) / 1024, 1) if hw.get("memtotal_mb") else "?" + console.print(f" {i}. {escape(name)} ({arch}, {mem_gb}GB)") + + console.print() + choice = typer.prompt("Select host", type=int) + if choice < 1 or choice > len(hosts): + console.print("[red]Invalid selection[/red]") + raise typer.Exit(code=1) + + # Return hostname for lookup + selected = hosts[choice - 1] + return selected.get("alias") or selected["hostname"] + + +def install( + claw: Optional[str] = typer.Option( + None, "--claw", "-c", help="Claw type to install (e.g., openclaw)" + ), + host: Optional[str] = typer.Option( + None, "--host", "-H", help="Target host (hostname or alias)" + ), + yes: bool = typer.Option( + False, "--yes", "-y", help="Skip confirmation prompt" + ), +) -> None: + """Install a claw on a host. + + Without flags, prompts for claw and host selection interactively. + With --claw and --host flags, runs directly (per D-01 hybrid invocation). 
+ """ + # Step 1: Get claw (prompt if not provided per D-01) + selected_claw = claw or _select_claw() + + # Step 2: Validate claw exists + try: + claw_info = get_claw_info(selected_claw) + except ManifestNotFoundError: + console.print(f"[red]Error:[/red] Claw '{escape(selected_claw)}' not found in registry") + raise typer.Exit(code=1) + + # Step 3: Get host (prompt if not provided per D-01) + selected_host = host or _select_host() + + # Step 4: Load host and check compatibility + host_record = get_host(selected_host) + if not host_record: + console.print(f"[red]Error:[/red] Host '{escape(selected_host)}' not found") + raise typer.Exit(code=1) + + hardware = host_record.get("hardware", {}) + compat = check_compatibility(selected_claw, hardware) + + if not compat["compatible"]: + console.print(f"[red]Error:[/red] Host is incompatible with {selected_claw}:") + for reason in compat["reasons"]: + console.print(f" - {reason}") + raise typer.Exit(code=1) + + matched_version = compat["matched_entry"]["version"] + display_host = host_record.get("alias") or host_record["hostname"] + + # Step 5: Show confirmation summary (per D-03) + summary = Panel( + f"[bold]Claw:[/bold] {selected_claw}\n" + f"[bold]Version:[/bold] {matched_version}\n" + f"[bold]Host:[/bold] {display_host}\n" + f"[bold]Architecture:[/bold] {hardware.get('architecture', 'unknown')}\n" + f"[bold]Memory:[/bold] {round(hardware.get('memtotal_mb', 0) / 1024, 1)}GB", + title="Installation Summary", + border_style="cyan", + ) + console.print(summary) + + if not yes and not typer.confirm("\nProceed with installation?", default=False): + console.print("Installation cancelled.") + raise typer.Exit(code=0) + + # Step 6: Run installation with progress spinner (per D-02) + current_stage = [""] # Mutable to update from callback + + def on_event(stage: str, message: str) -> None: + current_stage[0] = f"[{stage}] {message}" + + console.print() # Blank line before progress + + try: + with Progress( + SpinnerColumn(), + 
TextColumn("[progress.description]{task.description}"), + console=console, + transient=True, + ) as progress: + task = progress.add_task("Starting installation...", total=None) + + def update_progress(stage: str, message: str) -> None: + progress.update(task, description=f"[{stage}] {message}") + + result = run_installation( + claw_name=selected_claw, + hostname=selected_host, + on_event=update_progress, + ) + + # Success + console.print(f"[green]Success![/green] {selected_claw} v{result['version']} installed on {display_host}") + + except InstallationError as e: + # Error display per D-10 + console.print(f"[red]Installation failed:[/red] {e}") + raise typer.Exit(code=1) +``` + +Update `src/clawrium/cli/main.py` to register the install command: + +```python +# Add at imports: +from clawrium.cli.install import install as install_command + +# Add after registry_app registration: +@app.command() +def install( + claw: Optional[str] = typer.Option(None, "--claw", "-c", help="Claw type to install"), + host: Optional[str] = typer.Option(None, "--host", "-H", help="Target host"), + yes: bool = typer.Option(False, "--yes", "-y", help="Skip confirmation"), +) -> None: + """Install a claw on a host.""" + install_command(claw=claw, host=host, yes=yes) +``` + +Create `tests/test_cli_install.py` with tests: +1. test_install_prompts_for_claw - no --claw flag triggers prompt +2. test_install_prompts_for_host - no --host flag triggers prompt +3. test_install_with_flags_skips_prompts - --claw and --host go direct +4. test_install_shows_confirmation - output contains "Installation Summary" +5. test_install_yes_skips_confirmation - --yes proceeds without confirm +6. test_install_cancelled_exits_0 - declining confirm exits cleanly +7. test_install_error_exits_1 - InstallationError shows error, exits 1 +8. 
test_install_incompatible_exits_1 - incompatible host shows reasons, exits 1 + + + cd /home/devashish/workspace/ric03uec/clawrium && python -m pytest tests/test_cli_install.py -v --tb=short 2>&1 | tail -40 + + + - src/clawrium/cli/install.py exists + - install.py contains "def install" function + - install.py contains "def _select_claw" for interactive selection + - install.py contains "def _select_host" for interactive selection + - install.py contains "from clawrium.core.install import run_installation" + - install.py contains "Progress" from rich.progress for spinner + - install.py contains "Panel" for confirmation summary + - src/clawrium/cli/main.py contains "from clawrium.cli.install import" + - main.py contains "@app.command" decorator for install + - tests/test_cli_install.py exists with at least 6 test functions + - All tests in test_cli_install.py pass + - clm install --help shows --claw, --host, --yes options + + clm install command with interactive prompts, progress display, and confirmation + + + + + +- clm install --help shows all options +- clm install (no args) prompts for claw and host selection +- clm install --claw openclaw --host myhost shows confirmation and proceeds +- clm install --claw openclaw --host myhost --yes skips confirmation +- Progress spinner visible during installation +- Error messages clear and actionable +- All CLI tests pass + + + +- Users can complete installation interactively or via flags (D-01) +- Step-by-step progress shown during install (D-02) +- Confirmation with summary displayed before install (D-03) +- Tests cover success, error, and cancellation paths + + + +After completion, create `.planning/phases/04-installation-fleet-status/04-02-SUMMARY.md` + diff --git a/.planning/phases/04-installation-fleet-status/04-02-SUMMARY.md b/.planning/phases/04-installation-fleet-status/04-02-SUMMARY.md new file mode 100644 index 0000000..00f6f3a --- /dev/null +++ b/.planning/phases/04-installation-fleet-status/04-02-SUMMARY.md @@ 
-0,0 +1,181 @@ +--- +phase: 04-installation-fleet-status +plan: 02 +subsystem: cli +tags: [installation, cli, interactive, progress-display] +requires: [04-01] +provides: [install-command, interactive-install-flow] +affects: [cli-ux, install-experience] +tech-stack: + added: [typer-prompts, rich-panel, rich-progress] + patterns: [hybrid-invocation, confirmation-dialog, progress-spinner] +key-files: + created: + - src/clawrium/cli/install.py + - tests/test_cli_install.py + modified: + - src/clawrium/cli/main.py +decisions: + - Hybrid invocation pattern: interactive prompts when flags missing, direct execution with flags + - Rich Panel for confirmation summary with installation details + - Rich Progress spinner for real-time installation feedback + - Exit 0 on cancellation to distinguish from errors (exit 1) + - Compatibility checking before confirmation to fail fast +metrics: + duration_seconds: 280 + tasks_completed: 1 + tests_added: 8 + commits: 3 +completed_at: "2026-03-22T04:27:24Z" +--- + +# Phase 04 Plan 02: Install CLI Command Summary + +**One-liner:** Interactive install command with Rich progress display and confirmation dialog wrapping core install module + +## What Was Built + +Created `clm install` CLI command providing polished UX for claw installation with: +- Interactive prompts for claw and host selection when flags not provided +- Confirmation dialog showing installation summary (claw, version, host, specs) +- Rich spinner progress display during installation phases +- Support for `--claw`, `--host`, `--yes` flags for non-interactive automation +- Clear error messages for incompatibility and installation failures +- Proper exit codes (0 for cancellation, 1 for errors) + +The command bridges user interaction with the core install module, implementing the D-01 hybrid invocation pattern and D-02/D-03 UX requirements. + +## Implementation Notes + +### CLI Flow +1. 
**Selection Phase**: Prompt for claw/host if flags not provided + - Claw list from registry with version and description + - Host list with architecture and memory + - Input validation with clear error messages + +2. **Validation Phase**: Check compatibility before showing confirmation + - Load claw manifest + - Get host record + - Run compatibility check + - Fail fast with clear incompatibility reasons + +3. **Confirmation Phase**: Show summary panel and confirm + - Panel displays claw, version, host, architecture, memory + - `--yes` flag skips confirmation for automation + - Cancellation exits cleanly with code 0 + +4. **Installation Phase**: Run with progress feedback + - Rich spinner with stage/message updates + - Progress updates via callback from core install + - Success message on completion + - Error display on failure (exit 1) + +### Testing Strategy + +All tests follow CliRunner pattern from test_cli_host.py: +- Mock isolation using `isolated_config` fixture +- Test helper `create_host()` for host record setup +- Test helper `create_test_keypair()` for SSH key setup +- Patch `run_installation` to avoid actual playbook execution +- Input simulation via `input` parameter for prompts + +8 test scenarios covering: +- Claw selection prompt when `--claw` not provided +- Host selection prompt when `--host` not provided +- Flag overrides skip prompts +- Confirmation summary display +- `--yes` skips confirmation +- Cancellation exits 0 +- InstallationError handling (exit 1) +- Incompatibility detection and rejection + +## Deviations from Plan + +### Auto-fixed Issues + +**1. 
[Rule 3 - Bug] Fixed indentation error in core/install.py** +- **Found during:** Test execution (GREEN phase) +- **Issue:** Line 164 had malformed `try:` block from previous task, causing ImportError +- **Fix:** Removed empty try block, adjusted indentation for Step 7/8 comments +- **Files modified:** src/clawrium/core/install.py +- **Commit:** (part of GREEN phase, linter fixed it) +- **Reason:** Blocking issue preventing test execution (Deviation Rule 3) + +**2. [Refactor - REFACTOR phase] Removed dead code** +- **Found during:** Code review after GREEN phase +- **Issue:** `current_stage` variable and `on_event` callback defined but never used +- **Fix:** Removed unused variables, simplified to use `update_progress` directly +- **Files modified:** src/clawrium/cli/install.py +- **Commit:** c36d3f3 +- **Reason:** Code cleanup during TDD REFACTOR phase + +## Verification + +### Automated Tests +```bash +.venv/bin/pytest tests/test_cli_install.py -v +# Result: 8/8 passed +``` + +### Manual Verification +```bash +# Help display +.venv/bin/clm install --help +# Shows: --claw, --host, --yes options + +# Interactive flow (requires actual hosts/keys, not run) +# .venv/bin/clm install +# Expected: Prompts for claw, then host, shows confirmation + +# Direct invocation (requires actual hosts/keys, not run) +# .venv/bin/clm install --claw openclaw --host myhost +# Expected: Shows confirmation, installs on acceptance +``` + +## Success Criteria + +- ✓ Users can complete installation interactively or via flags (D-01) +- ✓ Step-by-step progress shown during install (D-02) +- ✓ Confirmation with summary displayed before install (D-03) +- ✓ Tests cover success, error, and cancellation paths +- ✓ All 8 CLI install tests pass +- ✓ Install command registered in main CLI +- ✓ Help text shows all options + +## Files Modified + +### Created +- `src/clawrium/cli/install.py` (171 lines) - Install CLI command implementation +- `tests/test_cli_install.py` (254 lines) - Comprehensive CLI 
tests + +### Modified +- `src/clawrium/cli/main.py` - Added install command registration and import + +## Commits + +| Hash | Type | Message | +|------|------|---------| +| 0b05173 | test | Add failing tests for install CLI command (RED phase) | +| b64b56d | feat | Implement install CLI command with interactive flow (GREEN phase) | +| c36d3f3 | refactor | Remove unused callback variable in install command (REFACTOR phase) | + +## Known Limitations + +None identified. Command follows existing CLI patterns and integrates cleanly with core install module. + +## Next Steps + +Per roadmap, next plans are: +- 04-03: Fleet status command for viewing installed claws +- 04-04: E2E installation test for OpenClaw on real/mock host + +The install command is now ready for integration testing in 04-04. + +## Self-Check: PASSED + +All files created and commits verified: +- ✓ src/clawrium/cli/install.py exists +- ✓ tests/test_cli_install.py exists +- ✓ Commit 0b05173 (test) exists +- ✓ Commit b64b56d (feat) exists +- ✓ Commit c36d3f3 (refactor) exists diff --git a/.planning/phases/04-installation-fleet-status/04-03-PLAN.md b/.planning/phases/04-installation-fleet-status/04-03-PLAN.md new file mode 100644 index 0000000..3480ad1 --- /dev/null +++ b/.planning/phases/04-installation-fleet-status/04-03-PLAN.md @@ -0,0 +1,567 @@ +--- +phase: 04-installation-fleet-status +plan: 03 +type: execute +wave: 2 +depends_on: [04-01] +files_modified: + - src/clawrium/core/hosts.py + - src/clawrium/core/install.py + - src/clawrium/core/health.py + - tests/test_hosts.py + - tests/test_health.py +autonomous: true +requirements: [INST-04, STAT-01] + +must_haves: + truths: + - "Install state is tracked in host records (installed claws, status)" + - "Partial/failed installations are recorded and visible" + - "Health check determines if a claw process is running on host" + - "Health check uses SSH to query process state, not cached data" + artifacts: + - path: "src/clawrium/core/hosts.py" + provides: 
"Extended host records with claw installation tracking" + contains: "claws" + - path: "src/clawrium/core/health.py" + provides: "Live health checking via SSH" + exports: ["check_claw_health", "ClawStatus"] + - path: "tests/test_health.py" + provides: "Health check tests" + min_lines: 40 + key_links: + - from: "src/clawrium/core/install.py" + to: "src/clawrium/core/hosts.py" + via: "update_host for install state" + pattern: "update_host.*claws" + - from: "src/clawrium/core/health.py" + to: "ansible_runner" + via: "remote process check" + pattern: "ansible_runner.run" +--- + + +Extend host records to track installed claws with their state, and add live health checking to determine if claw processes are running. This completes INST-04 (error tracking) and provides foundation for STAT-01 (fleet status). + +Purpose: Track installation state for error visibility and enable live health checks for status display. +Output: Updated hosts.py with claw tracking, new health.py module for process checking. 
+ + + +@/home/devashish/workspace/ric03uec/clawrium/.claude/get-shit-done/workflows/execute-plan.md +@/home/devashish/workspace/ric03uec/clawrium/.claude/get-shit-done/templates/summary.md + + + +@.planning/PROJECT.md +@.planning/ROADMAP.md +@.planning/phases/04-installation-fleet-status/04-CONTEXT.md + + + +```python +def load_hosts() -> list[dict] +def save_hosts(hosts: list[dict]) -> None +def update_host(hostname: str, updater: Callable[[dict], dict]) -> bool +def get_host(identifier: str) -> dict | None + +# Current host record structure: +{ + "hostname": "192.168.1.100", + "key_id": "myhost", + "port": 22, + "user": "xclm", + "auth_method": "key", + "hardware": {...}, + "metadata": {"added_at": "...", "last_seen": "...", "tags": []} +} +``` + + +```python +class InstallResult(TypedDict): + success: bool + claw: str + version: str + host: str + playbooks_run: list[str] + error: str | None +``` + + +```python +def gather_hardware(hostname: str, user: str, port: int, ssh_key: str | None) -> HardwareInfo +# Uses ansible_runner.run with module="shell" for command execution +``` + + + + + + + Task 1: Add claw installation tracking to host records + src/clawrium/core/hosts.py, src/clawrium/core/install.py, tests/test_hosts.py + + - src/clawrium/core/hosts.py (current host record structure, update_host function) + - src/clawrium/core/install.py (run_installation function to modify) + - tests/test_hosts.py (existing test patterns) + + + - Test: After successful install, host record contains claws[claw_name] with status="installed" + - Test: After failed install, host record contains claws[claw_name] with status="failed" and error message + - Test: install.py updates host record on success via update_host + - Test: install.py updates host record on failure via update_host + + +Update `src/clawrium/core/hosts.py` to document the extended host schema (no code changes needed - update_host handles arbitrary dict updates): + +Add docstring update to document the claw 
tracking schema: +```python +# Host record schema (extended): +# { +# "hostname": str, +# "claws": { +# "openclaw": { +# "version": "0.1.0", +# "status": "installed" | "failed" | "installing", +# "installed_at": "ISO timestamp", +# "error": str | None, +# "user": "opc-hostname" # per D-07 +# } +# }, +# ...existing fields... +# } +``` + +Update `src/clawrium/core/install.py` run_installation to track state: + +1. Before running playbooks, update host with status="installing": +```python +def set_installing(h: dict) -> dict: + if "claws" not in h: + h["claws"] = {} + h["claws"][claw_name] = { + "version": matched_version, + "status": "installing", + "installed_at": None, + "error": None, + "user": f"opc-{h['hostname'].split('.')[0]}" # Simple hostname + } + return h + +update_host(host["hostname"], set_installing) +``` + +2. On success, update status to "installed" with timestamp: +```python +from datetime import datetime, timezone + +def set_installed(h: dict) -> dict: + h["claws"][claw_name]["status"] = "installed" + h["claws"][claw_name]["installed_at"] = datetime.now(timezone.utc).isoformat() + return h + +update_host(host["hostname"], set_installed) +``` + +3. On failure (in except block), update status to "failed" with error: +```python +def set_failed(h: dict) -> dict: + if "claws" not in h: + h["claws"] = {} + if claw_name not in h["claws"]: + h["claws"][claw_name] = {"version": matched_version, "user": None} + h["claws"][claw_name]["status"] = "failed" + h["claws"][claw_name]["error"] = str(e) + h["claws"][claw_name]["installed_at"] = datetime.now(timezone.utc).isoformat() + return h + +update_host(host["hostname"], set_failed) +``` + +Add tests to `tests/test_hosts.py`: +1. test_host_claw_tracking_installed - verify claws dict structure after install +2. test_host_claw_tracking_failed - verify error is recorded on failure + +Update `tests/test_install.py` to verify state tracking: +1. 
test_install_updates_host_on_success - check update_host called with installed status +2. test_install_updates_host_on_failure - check update_host called with failed status + + + cd /home/devashish/workspace/ric03uec/clawrium && python -m pytest tests/test_hosts.py tests/test_install.py -v -k "claw_tracking or updates_host" --tb=short 2>&1 | tail -30 + + + - src/clawrium/core/install.py contains "update_host" calls for state tracking + - install.py contains status assignment: "installing", "installed", "failed" + - install.py contains "claws" dict manipulation in update callbacks + - install.py imports datetime for timestamp + - tests/test_hosts.py contains test_host_claw_tracking tests (or tests in test_install.py) + - Tests verify claws dict contains version, status, installed_at, error fields + + Host records track claw installation state with success/failure status + + + + Task 2: Create health check module for live claw status + src/clawrium/core/health.py, tests/test_health.py + + - src/clawrium/core/hardware.py (ansible_runner pattern for remote shell commands) + - src/clawrium/core/hosts.py (host record structure with claws field) + - src/clawrium/core/keys.py (get_host_private_key for SSH key) + + + - Test: check_claw_health returns "running" if process found + - Test: check_claw_health returns "stopped" if process not found + - Test: check_claw_health returns "unknown" if SSH fails + - Test: check_claw_health uses SSH to query remote host (not cached) + + +Create `src/clawrium/core/health.py`: + +```python +"""Live health checking for claw instances. + +This module provides functions to check if claw processes are running +on remote hosts via SSH. Per D-13, this performs live checks, not cached data. 
+""" + +import logging +import os +import tempfile +from enum import Enum +from typing import TypedDict + +import ansible_runner + +from clawrium.core.keys import get_host_private_key + +logger = logging.getLogger(__name__) + + +class ClawStatus(str, Enum): + """Status of a claw instance.""" + RUNNING = "running" + STOPPED = "stopped" + UNKNOWN = "unknown" + NOT_INSTALLED = "not_installed" + + +class HealthResult(TypedDict): + """Result of health check for a claw on a host.""" + claw: str + host: str + status: ClawStatus + user: str | None + error: str | None + + +def check_claw_health( + claw_name: str, + host: dict, +) -> HealthResult: + """Check if a claw process is running on a host. + + Performs live SSH check per D-13. Does not use cached data. + + Args: + claw_name: Name of claw to check (e.g., "openclaw") + host: Host record dict with hostname, port, user, key_id, claws + + Returns: + HealthResult with status and any error message + """ + hostname = host["hostname"] + port = host.get("port", 22) + user = host.get("user", "xclm") + + # Get claw record from host + claws = host.get("claws", {}) + claw_record = claws.get(claw_name) + + if not claw_record: + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.NOT_INSTALLED, + "user": None, + "error": None, + } + + claw_user = claw_record.get("user") + if not claw_user: + # No user set - can't check + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.UNKNOWN, + "user": None, + "error": "No claw user recorded", + } + + # Get SSH key + key_id = host.get("key_id") or hostname + ssh_key = get_host_private_key(key_id) + if not ssh_key: + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.UNKNOWN, + "user": claw_user, + "error": "SSH key not found", + } + + # Build inventory + inventory = { + "all": { + "hosts": { + hostname: { + "ansible_user": user, + "ansible_port": port, + "ansible_ssh_private_key_file": str(ssh_key), + } + } + } + } + + # Check for 
node process owned by claw user + # OpenClaw runs as a Node.js process + check_cmd = f"pgrep -u {claw_user} node >/dev/null 2>&1 && echo RUNNING || echo STOPPED" + + with tempfile.TemporaryDirectory() as tmpdir: + os.chmod(tmpdir, 0o700) + + result = ansible_runner.run( + private_data_dir=tmpdir, + inventory=inventory, + host_pattern=hostname, + module="shell", + module_args=check_cmd, + quiet=True, + timeout=15, + ) + + if result.status == "timeout": + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.UNKNOWN, + "user": claw_user, + "error": "Health check timed out", + } + + if result.status != "successful": + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.UNKNOWN, + "user": claw_user, + "error": f"SSH failed: {result.status}", + } + + # Parse output from events + output = "" + for event in result.events: + if event.get("event") == "runner_on_ok": + output = event.get("event_data", {}).get("res", {}).get("stdout", "") + break + + if "RUNNING" in output: + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.RUNNING, + "user": claw_user, + "error": None, + } + else: + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.STOPPED, + "user": claw_user, + "error": None, + } + + +def check_all_claws_on_host(host: dict) -> list[HealthResult]: + """Check health of all installed claws on a host. 
+ + Args: + host: Host record dict + + Returns: + List of HealthResult for each installed claw + """ + results = [] + claws = host.get("claws", {}) + + for claw_name in claws: + result = check_claw_health(claw_name, host) + results.append(result) + + return results +``` + +Create `tests/test_health.py`: + +```python +"""Tests for claw health checking.""" + +import pytest +from unittest.mock import patch, MagicMock + +from clawrium.core.health import ( + check_claw_health, + check_all_claws_on_host, + ClawStatus, +) + + +@pytest.fixture +def mock_host(): + """Host record with installed claw.""" + return { + "hostname": "192.168.1.100", + "port": 22, + "user": "xclm", + "key_id": "testhost", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installed", + "user": "opc-testhost", + } + }, + } + + +def test_health_check_running(mock_host): + """Process running returns RUNNING status.""" + mock_runner = MagicMock() + mock_runner.status = "successful" + mock_runner.events = [ + {"event": "runner_on_ok", "event_data": {"res": {"stdout": "RUNNING"}}} + ] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.RUNNING + assert result["claw"] == "openclaw" + assert result["user"] == "opc-testhost" + assert result["error"] is None + + +def test_health_check_stopped(mock_host): + """Process not running returns STOPPED status.""" + mock_runner = MagicMock() + mock_runner.status = "successful" + mock_runner.events = [ + {"event": "runner_on_ok", "event_data": {"res": {"stdout": "STOPPED"}}} + ] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.STOPPED 
+ + +def test_health_check_ssh_fails(mock_host): + """SSH failure returns UNKNOWN status with error.""" + mock_runner = MagicMock() + mock_runner.status = "failed" + mock_runner.events = [] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.UNKNOWN + assert "SSH failed" in result["error"] + + +def test_health_check_not_installed(mock_host): + """Claw not in host record returns NOT_INSTALLED.""" + result = check_claw_health("zeroclaw", mock_host) + + assert result["status"] == ClawStatus.NOT_INSTALLED + + +def test_health_check_no_ssh_key(mock_host): + """Missing SSH key returns UNKNOWN.""" + with patch("clawrium.core.health.get_host_private_key", return_value=None): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.UNKNOWN + assert "SSH key not found" in result["error"] + + +def test_health_check_timeout(mock_host): + """Timeout returns UNKNOWN status.""" + mock_runner = MagicMock() + mock_runner.status = "timeout" + mock_runner.events = [] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.UNKNOWN + assert "timed out" in result["error"] + + +def test_check_all_claws_on_host(mock_host): + """check_all_claws_on_host returns results for each claw.""" + mock_runner = MagicMock() + mock_runner.status = "successful" + mock_runner.events = [ + {"event": "runner_on_ok", "event_data": {"res": {"stdout": "RUNNING"}}} + ] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + results = 
check_all_claws_on_host(mock_host) + + assert len(results) == 1 + assert results[0]["claw"] == "openclaw" + assert results[0]["status"] == ClawStatus.RUNNING +``` + + + cd /home/devashish/workspace/ric03uec/clawrium && python -m pytest tests/test_health.py -v --tb=short 2>&1 | tail -30 + + + - src/clawrium/core/health.py exists + - health.py contains "class ClawStatus" enum with RUNNING, STOPPED, UNKNOWN, NOT_INSTALLED + - health.py contains "def check_claw_health" + - health.py contains "def check_all_claws_on_host" + - health.py contains "ansible_runner.run" for SSH execution + - health.py contains "pgrep" command for process detection + - tests/test_health.py exists with at least 6 test functions + - All tests in test_health.py pass + + Health check module with live SSH-based process status verification + + + + + +- Host records contain claws dict after installation +- Failed installations have status="failed" with error message +- check_claw_health performs live SSH check +- Health check returns correct status for running/stopped/unknown states +- All new tests pass + + + +- Installation state tracked in host records (D-11) +- Partial/failed states visible via claws dict +- Live health check via SSH (D-13) +- Tests cover all health check scenarios + + + +After completion, create `.planning/phases/04-installation-fleet-status/04-03-SUMMARY.md` + diff --git a/.planning/phases/04-installation-fleet-status/04-03-SUMMARY.md b/.planning/phases/04-installation-fleet-status/04-03-SUMMARY.md new file mode 100644 index 0000000..b52e2cb --- /dev/null +++ b/.planning/phases/04-installation-fleet-status/04-03-SUMMARY.md @@ -0,0 +1,185 @@ +--- +phase: 04-installation-fleet-status +plan: 03 +subsystem: core +tags: [installation, health-check, state-tracking] +requirements: [INST-04, STAT-01] + +dependency_graph: + requires: + - 04-01 (base playbook and installation orchestration) + provides: + - claw installation state tracking in host records + - live health checking via SSH 
+ affects: + - host records (now include claws dict) + - installation flow (tracks state transitions) + +tech_stack: + added: + - datetime (Python stdlib) for ISO timestamps + patterns: + - State machine (installing → installed/failed) + - Live SSH checks via ansible_runner + - Process detection with pgrep + +key_files: + created: + - src/clawrium/core/health.py (live health checking module) + - tests/test_health.py (health check tests) + modified: + - src/clawrium/core/install.py (state tracking in run_installation) + - tests/test_hosts.py (claw tracking tests) + - tests/test_install.py (install state update tests) + +decisions: + - Use ISO 8601 timestamps for installed_at field + - Track claw user in host record for process ownership checking + - Use pgrep for process detection (simple, portable) + - Health check timeout set to 15s (shorter than install timeout) + - Return NOT_INSTALLED status when claw not in host record + +metrics: + duration_seconds: 336 + tasks_completed: 2 + files_created: 2 + files_modified: 3 + tests_added: 9 + completed_at: "2026-03-22T04:28:14Z" +--- + +# Phase 04 Plan 03: Installation State Tracking and Health Checks Summary + +Claw installation state tracking with success/failure recording and live SSH-based health checking for process status. + +## Objective Achieved + +Extended host records to track installed claws with their state (installing/installed/failed), and implemented live health checking to determine if claw processes are running on remote hosts. 
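

A rough sketch of these state transitions (the helper names match the summary below, but the signatures and in-memory updates are illustrative, not the actual implementation — the real code persists via `update_host()`):

```python
from datetime import datetime, timezone


def _utc_now() -> str:
    # ISO 8601 timestamp, per the installed_at decision
    return datetime.now(timezone.utc).isoformat()


def set_installing(host: dict, claw: str, version: str, user: str) -> None:
    # Mark the claw as in-progress before the playbook runs
    host.setdefault("claws", {})[claw] = {
        "version": version,
        "status": "installing",
        "installed_at": None,
        "error": None,
        "user": user,
    }


def set_installed(host: dict, claw: str) -> None:
    # Success: record timestamp, clear any error
    record = host["claws"][claw]
    record["status"] = "installed"
    record["installed_at"] = _utc_now()
    record["error"] = None


def set_failed(host: dict, claw: str, error: str) -> None:
    # Failure: keep the record so `clm status` can surface it
    record = host["claws"][claw]
    record["status"] = "failed"
    record["error"] = error


host = {"hostname": "192.168.1.100"}
set_installing(host, "openclaw", "0.1.0", "opc-testhost")
set_installed(host, "openclaw")
print(host["claws"]["openclaw"]["status"])  # installed
```
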
+ +## Execution Flow + +### Task 1: Add claw installation tracking to host records +**Status:** Complete ✓ +**Commit:** 8e9c0f2 + +**Approach:** +- Extended host record schema with `claws` dict containing per-claw state +- Modified `run_installation()` to set `installing` status before playbook execution +- Wrapped playbook execution in try-except to track failures +- Set `installed` status on success with ISO timestamp +- Set `failed` status on exception with error message + +**Schema added:** +```python +{ + "hostname": str, + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installed" | "failed" | "installing", + "installed_at": "ISO timestamp", + "error": str | None, + "user": "opc-hostname" # per D-07 + } + } +} +``` + +**Key changes:** +- `install.py` imports `datetime` and `timezone` for timestamps +- Three state update functions: `set_installing`, `set_installed`, `set_failed` +- All use `update_host()` for atomic updates +- Claw user extracted from hostname (e.g., `opc-testhost` for `testhost`) + +**Tests added:** +- `test_host_claw_tracking_installed` - verify installed state structure +- `test_host_claw_tracking_failed` - verify failed state with error +- `test_install_updates_host_on_success` - verify state transitions +- `test_install_updates_host_on_failure` - verify failure tracking + +**Test challenges:** +- Mock needed to simulate persistent state between `update_host` calls +- Captured before/after status to verify transitions +- All tests pass (185 total) + +### Task 2: Create health check module for live claw status +**Status:** Complete ✓ +**Commit:** 288e82a + +**Approach:** +- Created new module `health.py` with live SSH-based health checking +- Implemented `ClawStatus` enum (RUNNING, STOPPED, UNKNOWN, NOT_INSTALLED) +- Used ansible_runner with shell module to run `pgrep` for process detection +- Returns structured `HealthResult` with status, user, error + +**Implementation:** +```python +def check_claw_health(claw_name: str, 
host: dict) -> HealthResult +def check_all_claws_on_host(host: dict) -> list[HealthResult] +``` + +**Process detection:** +- Command: `pgrep -u {claw_user} node >/dev/null 2>&1 && echo RUNNING || echo STOPPED` +- Checks for node process owned by claw user +- 15-second timeout for SSH check + +**Error handling:** +- NOT_INSTALLED: claw not in host record +- UNKNOWN: missing SSH key, SSH timeout, SSH failure, no claw user +- Errors include descriptive messages for debugging + +**Tests added:** +- `test_health_check_running` - process found +- `test_health_check_stopped` - process not found +- `test_health_check_ssh_fails` - SSH error handling +- `test_health_check_not_installed` - claw not tracked +- `test_health_check_no_ssh_key` - missing credentials +- `test_health_check_timeout` - timeout handling +- `test_check_all_claws_on_host` - bulk checking + +**All tests use mocks for:** +- `get_host_private_key` (returns fake path) +- `ansible_runner.run` (returns mock result with events) + +## Deviations from Plan + +None - plan executed exactly as written. 
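

The process-detection mapping above (ansible-runner status plus `pgrep` echo output → `ClawStatus`) reduces to a small pure function. This is an illustrative sketch of the parsing logic, not the module's actual code:

```python
from enum import Enum


class ClawStatus(str, Enum):
    RUNNING = "running"
    STOPPED = "stopped"
    UNKNOWN = "unknown"


def parse_health(runner_status: str, stdout: str) -> ClawStatus:
    # ansible-runner reports "successful"/"failed"/"timeout"; the shell
    # task echoes RUNNING or STOPPED based on pgrep's exit code.
    if runner_status != "successful":
        return ClawStatus.UNKNOWN
    out = stdout.strip()
    if out == "RUNNING":
        return ClawStatus.RUNNING
    if out == "STOPPED":
        return ClawStatus.STOPPED
    return ClawStatus.UNKNOWN
```
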
+ +## Verification + +- [x] Host records contain `claws` dict after installation +- [x] Failed installations have `status="failed"` with error message +- [x] `check_claw_health` performs live SSH check (not cached) +- [x] Health check returns correct status for all scenarios +- [x] All 185 tests pass + +## Requirements Completed + +- **INST-04:** Error tracking - ✓ Failed installations recorded in host record with error details +- **STAT-01:** Fleet status foundation - ✓ Health checking infrastructure in place + +## Next Steps + +These modules provide the foundation for: +- Plan 04-04: Fleet status display (will use `check_all_claws_on_host`) +- Future claw lifecycle commands (start/stop/restart - will use health checks) +- Installation error diagnostics (query failed installations from host records) + +## Known Limitations + +- Health check only detects node processes (OpenClaw-specific) +- No differentiation between multiple node processes owned by same user +- No version verification (assumes any node process is the claw) + +These limitations are acceptable for v1 (OpenClaw-only). 
Future enhancements can add: +- Process name matching (`openclaw` in command line) +- PID file validation +- Version query via claw API + +## Self-Check: PASSED + +All files created and commits verified: +- FOUND: src/clawrium/core/health.py +- FOUND: tests/test_health.py +- FOUND: 8e9c0f2 (Task 1 commit) +- FOUND: 288e82a (Task 2 commit) diff --git a/.planning/phases/04-installation-fleet-status/04-04-PLAN.md b/.planning/phases/04-installation-fleet-status/04-04-PLAN.md new file mode 100644 index 0000000..e0673ef --- /dev/null +++ b/.planning/phases/04-installation-fleet-status/04-04-PLAN.md @@ -0,0 +1,508 @@ +--- +phase: 04-installation-fleet-status +plan: 04 +type: execute +wave: 3 +depends_on: [04-02, 04-03] +files_modified: + - src/clawrium/cli/status.py + - src/clawrium/cli/main.py + - tests/test_cli_status.py +autonomous: true +requirements: [STAT-01] + +must_haves: + truths: + - "User can run clm status and see all claws across all hosts" + - "Status display is claw-centric, grouped by claw type" + - "Status shows live health check results (running/stopped/unknown)" + - "Status includes claw name, version, host, and status" + artifacts: + - path: "src/clawrium/cli/status.py" + provides: "Fleet status command with claw-centric display" + exports: ["status"] + - path: "src/clawrium/cli/main.py" + provides: "Main CLI with status command registered" + contains: "from clawrium.cli.status" + - path: "tests/test_cli_status.py" + provides: "CLI status command tests" + min_lines: 60 + key_links: + - from: "src/clawrium/cli/status.py" + to: "src/clawrium/core/health.py" + via: "check_claw_health import" + pattern: "from clawrium.core.health import" + - from: "src/clawrium/cli/status.py" + to: "src/clawrium/core/hosts.py" + via: "load_hosts for fleet enumeration" + pattern: "from clawrium.core.hosts import load_hosts" +--- + + +Create the `clm status` CLI command that displays fleet-wide claw status with live health checks, grouped by claw type per D-12. 
+ +Purpose: Users can see at a glance what's running across their entire fleet. +Output: Working `clm status` command showing claw instances with live status. + + + +@/home/devashish/workspace/ric03uec/clawrium/.claude/get-shit-done/workflows/execute-plan.md +@/home/devashish/workspace/ric03uec/clawrium/.claude/get-shit-done/templates/summary.md + + + +@.planning/PROJECT.md +@.planning/ROADMAP.md +@.planning/phases/04-installation-fleet-status/04-CONTEXT.md + + + +```python +class ClawStatus(str, Enum): + RUNNING = "running" + STOPPED = "stopped" + UNKNOWN = "unknown" + NOT_INSTALLED = "not_installed" + +class HealthResult(TypedDict): + claw: str + host: str + status: ClawStatus + user: str | None + error: str | None + +def check_claw_health(claw_name: str, host: dict) -> HealthResult +def check_all_claws_on_host(host: dict) -> list[HealthResult] +``` + + +```python +# Host record with claws tracking: +{ + "hostname": "192.168.1.100", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installed", + "installed_at": "...", + "user": "opc-192" + } + }, + ... 
+} + +def load_hosts() -> list[dict] +``` + + +From src/clawrium/cli/host.py: +```python +console = Console() +table = Table(title="...") +table.add_column("Name", style="cyan") +# Rich table output pattern +``` + + + + + + + Task 1: Create status CLI command with claw-centric fleet view + src/clawrium/cli/status.py, src/clawrium/cli/main.py, tests/test_cli_status.py + + - src/clawrium/cli/host.py (Rich table patterns, console usage) + - src/clawrium/cli/registry.py (simple command structure) + - src/clawrium/cli/main.py (command registration pattern) + - src/clawrium/core/health.py (check_claw_health, ClawStatus from Plan 03) + - src/clawrium/core/hosts.py (load_hosts to enumerate fleet) + - tests/test_cli_host.py (CLI testing patterns) + + + - Test: clm status with no claws shows "No claws installed" message + - Test: clm status shows table grouped by claw type (per D-12) + - Test: clm status shows claw name, version, host, status columns (per D-14) + - Test: clm status performs live health check (mocked) + - Test: clm status colors: green for running, red for stopped, yellow for unknown + - Test: clm status with --host filter shows only that host's claws + + +Create `src/clawrium/cli/status.py`: + +```python +"""Fleet status command for viewing claw instances across hosts.""" + +from collections import defaultdict +from typing import Optional + +import typer +from rich.console import Console +from rich.markup import escape +from rich.progress import Progress, SpinnerColumn, TextColumn +from rich.table import Table + +from clawrium.core.hosts import load_hosts, HostsFileCorruptedError +from clawrium.core.health import check_claw_health, ClawStatus + +__all__ = ["status"] + +console = Console() + + +def status( + host: Optional[str] = typer.Option( + None, "--host", "-H", help="Filter to specific host (hostname or alias)" + ), +) -> None: + """Show fleet status across all hosts. 
+ + Displays claw instances grouped by claw type (per D-12) with live + health check (per D-13). Shows name, version, host, status (per D-14). + """ + # Load all hosts + try: + hosts = load_hosts() + except HostsFileCorruptedError as e: + console.print(f"[red]Error:[/red] {e}") + raise typer.Exit(code=1) + + if not hosts: + console.print("No hosts registered. Run 'clm host add' to add a host.") + return + + # Filter to specific host if requested + if host: + hosts = [h for h in hosts if h.get("hostname") == host or h.get("alias") == host] + if not hosts: + console.print(f"[red]Error:[/red] Host '{escape(host)}' not found") + raise typer.Exit(code=1) + + # Collect all claws across hosts, grouped by claw type + # Structure: {claw_name: [(host_record, claw_record), ...]} + claws_by_type: dict[str, list[tuple[dict, dict]]] = defaultdict(list) + + for h in hosts: + for claw_name, claw_record in h.get("claws", {}).items(): + claws_by_type[claw_name].append((h, claw_record)) + + if not claws_by_type: + console.print("No claws installed on any host.") + console.print("Run 'clm install' to install a claw.") + return + + # Perform live health checks with progress spinner + health_results: dict[tuple[str, str], ClawStatus] = {} # (claw, hostname) -> status + + with Progress( + SpinnerColumn(), + TextColumn("[progress.description]{task.description}"), + console=console, + transient=True, + ) as progress: + task = progress.add_task("Checking fleet health...", total=None) + + for claw_name, instances in claws_by_type.items(): + for h, claw_record in instances: + progress.update(task, description=f"Checking {claw_name} on {h.get('alias') or h['hostname']}...") + result = check_claw_health(claw_name, h) + health_results[(claw_name, h["hostname"])] = result["status"] + + console.print() # Blank line after progress + + # Display claw-centric view (per D-12) + for claw_name in sorted(claws_by_type.keys()): + instances = claws_by_type[claw_name] + + table = Table(title=f"[bold 
cyan]{escape(claw_name)}[/bold cyan]") + table.add_column("Host", style="white") + table.add_column("Version", style="green") + table.add_column("User", style="dim") + table.add_column("Status") + table.add_column("Installed", style="dim") + + for h, claw_record in instances: + display_host = h.get("alias") or h["hostname"] + version = claw_record.get("version", "?") + user = claw_record.get("user", "-") + installed_at = claw_record.get("installed_at", "-") + if installed_at and installed_at != "-": + # Format as date only for readability + installed_at = installed_at.split("T")[0] + + # Get live status with color coding + live_status = health_results.get((claw_name, h["hostname"]), ClawStatus.UNKNOWN) + + if live_status == ClawStatus.RUNNING: + status_display = "[green]running[/green]" + elif live_status == ClawStatus.STOPPED: + status_display = "[red]stopped[/red]" + elif live_status == ClawStatus.NOT_INSTALLED: + status_display = "[yellow]not installed[/yellow]" + else: + status_display = "[yellow]unknown[/yellow]" + + # Also show install state if failed + install_status = claw_record.get("status", "") + if install_status == "failed": + status_display = f"[red]install failed[/red]" + elif install_status == "installing": + status_display = "[yellow]installing...[/yellow]" + + table.add_row( + escape(display_host), + version, + escape(user) if user else "-", + status_display, + installed_at, + ) + + console.print(table) + console.print() # Space between claw types +``` + +Update `src/clawrium/cli/main.py` to add status command: + +```python +# Add at imports: +from clawrium.cli.status import status as status_command + +# Add after install command: +@app.command() +def status( + host: Optional[str] = typer.Option(None, "--host", "-H", help="Filter to specific host"), +) -> None: + """Show fleet status across all hosts.""" + status_command(host=host) +``` + +Create `tests/test_cli_status.py`: + +```python +"""Tests for fleet status CLI command.""" + +import pytest 
+from unittest.mock import patch, MagicMock +from typer.testing import CliRunner + +from clawrium.cli.main import app +from clawrium.core.health import ClawStatus + + +runner = CliRunner() + + +@pytest.fixture +def mock_hosts_with_claws(): + """Hosts with installed claws.""" + return [ + { + "hostname": "192.168.1.100", + "alias": "server1", + "port": 22, + "user": "xclm", + "key_id": "server1", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installed", + "installed_at": "2026-03-21T10:00:00Z", + "user": "opc-server1", + } + }, + }, + { + "hostname": "192.168.1.101", + "alias": "server2", + "port": 22, + "user": "xclm", + "key_id": "server2", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installed", + "installed_at": "2026-03-21T11:00:00Z", + "user": "opc-server2", + } + }, + }, + ] + + +def test_status_no_hosts(): + """No hosts shows message to add hosts.""" + with patch("clawrium.cli.status.load_hosts", return_value=[]): + result = runner.invoke(app, ["status"]) + + assert result.exit_code == 0 + assert "No hosts registered" in result.output + + +def test_status_no_claws(): + """Hosts with no claws shows install message.""" + hosts = [{"hostname": "192.168.1.100", "claws": {}}] + + with patch("clawrium.cli.status.load_hosts", return_value=hosts): + result = runner.invoke(app, ["status"]) + + assert result.exit_code == 0 + assert "No claws installed" in result.output + + +def test_status_shows_claw_table(mock_hosts_with_claws): + """Status shows table grouped by claw type.""" + mock_health = MagicMock(return_value={ + "claw": "openclaw", + "host": "192.168.1.100", + "status": ClawStatus.RUNNING, + "user": "opc-server1", + "error": None, + }) + + with patch("clawrium.cli.status.load_hosts", return_value=mock_hosts_with_claws): + with patch("clawrium.cli.status.check_claw_health", mock_health): + result = runner.invoke(app, ["status"]) + + assert result.exit_code == 0 + assert "openclaw" in result.output + assert "server1" in 
result.output + assert "0.1.0" in result.output + + +def test_status_shows_running_status(mock_hosts_with_claws): + """Running claw shows green status.""" + mock_health = MagicMock(return_value={ + "claw": "openclaw", + "host": "192.168.1.100", + "status": ClawStatus.RUNNING, + "user": "opc-server1", + "error": None, + }) + + with patch("clawrium.cli.status.load_hosts", return_value=mock_hosts_with_claws): + with patch("clawrium.cli.status.check_claw_health", mock_health): + result = runner.invoke(app, ["status"]) + + assert "running" in result.output + + +def test_status_shows_stopped_status(mock_hosts_with_claws): + """Stopped claw shows red status.""" + mock_health = MagicMock(return_value={ + "claw": "openclaw", + "host": "192.168.1.100", + "status": ClawStatus.STOPPED, + "user": "opc-server1", + "error": None, + }) + + with patch("clawrium.cli.status.load_hosts", return_value=mock_hosts_with_claws): + with patch("clawrium.cli.status.check_claw_health", mock_health): + result = runner.invoke(app, ["status"]) + + assert "stopped" in result.output + + +def test_status_host_filter(mock_hosts_with_claws): + """--host flag filters to specific host.""" + mock_health = MagicMock(return_value={ + "claw": "openclaw", + "host": "192.168.1.100", + "status": ClawStatus.RUNNING, + "user": "opc-server1", + "error": None, + }) + + with patch("clawrium.cli.status.load_hosts", return_value=mock_hosts_with_claws): + with patch("clawrium.cli.status.check_claw_health", mock_health): + result = runner.invoke(app, ["status", "--host", "server1"]) + + assert result.exit_code == 0 + assert "server1" in result.output + # Health check should only be called once (for server1) + assert mock_health.call_count == 1 + + +def test_status_host_filter_not_found(mock_hosts_with_claws): + """--host with unknown host shows error.""" + with patch("clawrium.cli.status.load_hosts", return_value=mock_hosts_with_claws): + result = runner.invoke(app, ["status", "--host", "unknown"]) + + assert 
result.exit_code == 1 + assert "not found" in result.output + + +def test_status_shows_failed_install(): + """Failed installation shows install failed status.""" + hosts = [{ + "hostname": "192.168.1.100", + "alias": "server1", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "failed", + "error": "Playbook failed", + "user": "opc-server1", + } + }, + }] + + # Health check returns unknown since not really installed + mock_health = MagicMock(return_value={ + "claw": "openclaw", + "host": "192.168.1.100", + "status": ClawStatus.STOPPED, + "user": "opc-server1", + "error": None, + }) + + with patch("clawrium.cli.status.load_hosts", return_value=hosts): + with patch("clawrium.cli.status.check_claw_health", mock_health): + result = runner.invoke(app, ["status"]) + + assert "install failed" in result.output +``` + + + cd /home/devashish/workspace/ric03uec/clawrium && python -m pytest tests/test_cli_status.py -v --tb=short 2>&1 | tail -40 + + + - src/clawrium/cli/status.py exists + - status.py contains "def status" function + - status.py contains "from clawrium.core.health import check_claw_health" + - status.py contains "from clawrium.core.hosts import load_hosts" + - status.py contains "Table" from rich.table for display + - status.py contains color coding: "[green]running", "[red]stopped", "[yellow]unknown" + - status.py groups output by claw type (claws_by_type dict) + - src/clawrium/cli/main.py contains "from clawrium.cli.status import" + - main.py contains "@app.command" for status + - tests/test_cli_status.py exists with at least 6 test functions + - All tests in test_cli_status.py pass + - clm status --help shows --host option + + clm status command with claw-centric view and live health checks + + + + + +- clm status --help shows available options +- clm status (no args) shows all claws across all hosts +- clm status --host myhost filters to that host +- Output is grouped by claw type (D-12) +- Shows claw name, version, host, status (D-14) +- Live 
health check performed (D-13) +- Color coding for running/stopped/unknown +- All tests pass + + + +- Users can view fleet status with `clm status` (STAT-01) +- Claw-centric grouping (D-12) +- Live health checks (D-13) +- Essential info displayed (D-14) +- Tests cover all display scenarios + + + +After completion, create `.planning/phases/04-installation-fleet-status/04-04-SUMMARY.md` + diff --git a/.planning/phases/04-installation-fleet-status/04-04-SUMMARY.md b/.planning/phases/04-installation-fleet-status/04-04-SUMMARY.md new file mode 100644 index 0000000..7edb4bd --- /dev/null +++ b/.planning/phases/04-installation-fleet-status/04-04-SUMMARY.md @@ -0,0 +1,136 @@ +--- +phase: 04-installation-fleet-status +plan: 04 +subsystem: cli +tags: [typer, rich, fleet-management, health-check, status-display] + +# Dependency graph +requires: + - phase: 04-03 + provides: "Live health check implementation with ClawStatus enum and HealthResult types" + - phase: 04-02 + provides: "Install command that tracks claw installations in host records" +provides: + - "Fleet status command showing all claws across all hosts" + - "Claw-centric grouping pattern (D-12) for multi-host display" + - "Live health check integration (D-13) showing running/stopped/unknown states" + - "Host filter capability for single-host status view" +affects: [fleet-management, monitoring, operational-visibility] + +# Tech tracking +tech-stack: + added: [rich.progress.Progress, rich.progress.SpinnerColumn, collections.defaultdict] + patterns: + - "Claw-centric grouping: organize fleet view by claw type, not by host" + - "Live health check integration with progress spinner for UX" + - "Color-coded status display: green (running), red (stopped), yellow (unknown)" + +key-files: + created: + - src/clawrium/cli/status.py + - tests/test_cli_status.py + modified: + - src/clawrium/cli/main.py + +key-decisions: + - "Display claw-centric view (grouped by claw type) per D-12 instead of host-centric view" + - "Show install 
status (failed, installing) alongside live health check status" + - "Use Rich Progress with spinner for health check operation feedback" + - "Format installed_at as date-only (YYYY-MM-DD) for readability" + +patterns-established: + - "Fleet status commands group by entity type (claw-centric) for multi-host visibility" + - "Live operations use Rich Progress with transient spinners for UX" + - "Status displays combine static metadata (version, install state) with live checks (process status)" + +requirements-completed: [STAT-01] + +# Metrics +duration: 128s +completed: 2026-03-22 +--- + +# Phase 04 Plan 04: Fleet Status Command Summary + +**Fleet status command with claw-centric display, live health checks, and color-coded process status across all hosts** + +## Performance + +- **Duration:** 2min 8s +- **Started:** 2026-03-22T04:31:22Z +- **Completed:** 2026-03-22T04:33:30Z +- **Tasks:** 1 +- **Files modified:** 3 + +## Accomplishments +- Users can view fleet-wide claw status with `clm status` command (STAT-01) +- Claw-centric grouping shows all instances of each claw type across hosts (D-12) +- Live health checks performed via SSH for real-time process status (D-13) +- Display includes host, version, user, status, and install date (D-14) +- Host filter (`--host`) enables single-host status view +- Color-coded status: green (running), red (stopped), yellow (unknown/not installed) + +## Task Commits + +Each task was committed atomically: + +1. 
**Task 1: Create status CLI command with claw-centric fleet view** - `7b36995` (feat) + - Created `src/clawrium/cli/status.py` with claw-centric grouping + - Integrated live health checks via `check_claw_health` + - Added Rich Progress spinner for health check feedback + - Implemented `--host` filter for single-host view + - Added 8 comprehensive test cases covering all scenarios + +## Files Created/Modified +- `src/clawrium/cli/status.py` - Fleet status command with claw-centric display and live health checks +- `src/clawrium/cli/main.py` - Registered status command in main CLI +- `tests/test_cli_status.py` - 8 test cases covering empty fleet, claw display, status colors, host filtering + +## Decisions Made + +1. **Claw-centric grouping**: Display organized by claw type (openclaw, zeroclaw, etc.) rather than by host, making it easy to see all instances of each claw type at a glance (per D-12) + +2. **Install status priority**: Show install state (failed, installing) even when live health check returns different status, since install failures need immediate visibility + +3. **Date-only formatting**: Display installed_at as YYYY-MM-DD instead of full ISO timestamp for better readability in table view + +4. **Progress spinner UX**: Use Rich Progress with transient spinner during health checks to provide feedback on potentially slow SSH operations + +## Deviations from Plan + +None - plan executed exactly as written. + +## Issues Encountered + +None + +## User Setup Required + +None - no external service configuration required. + +## Next Phase Readiness + +- Fleet status command complete and functional +- All health check infrastructure from Plan 03 working correctly +- Ready for Phase 5 or additional fleet management features +- No blockers + +## Stub Tracking + +No stubs present. 
All data is sourced from: +- Host records from `load_hosts()` +- Live health checks via `check_claw_health()` +- All display fields populated from actual data + +## Self-Check: PASSED + +Files verified: +- FOUND: src/clawrium/cli/status.py +- FOUND: tests/test_cli_status.py + +Commits verified: +- FOUND: 7b36995 + +--- +*Phase: 04-installation-fleet-status* +*Completed: 2026-03-22* diff --git a/.planning/phases/04-installation-fleet-status/04-CONTEXT.md b/.planning/phases/04-installation-fleet-status/04-CONTEXT.md new file mode 100644 index 0000000..68391e6 --- /dev/null +++ b/.planning/phases/04-installation-fleet-status/04-CONTEXT.md @@ -0,0 +1,122 @@ +# Phase 4: Installation & Fleet Status - Context + +**Gathered:** 2026-03-21 +**Status:** Ready for planning + + +## Phase Boundary + +Install OpenClaw on Ubuntu hosts and view fleet status. Users run `clm install`, flow through claw selection → host selection → compatibility validation → installation. They can view fleet-wide claw status with `clm status`. Configuration of secrets/API keys is deferred to Phase 5. + + + + +## Implementation Decisions + +### Install Flow UX +- **D-01:** Hybrid invocation — flags override prompts. `clm install` prompts for missing values, `clm install --claw openclaw --host kevin` runs directly. +- **D-02:** Step-by-step progress with Rich spinners — show each phase: "Installing dependencies...", "Creating user...", "Configuring OpenClaw..." +- **D-03:** Confirmation required before install — display summary (claw, version, host, capabilities) and ask "Proceed? [y/N]" + +### Ansible Playbook Structure +- **D-04:** Two-layer playbook architecture: + - Base layer: OS/hardware packages (Node.js, Rust, etc.) 
— runs as `xclm` with sudo
+ - Claw layer: Claw-specific setup (npm install, workspace) — runs as claw user (`opc-<hostname>`)
+- **D-05:** Single base playbook with Ansible conditionals handles OS+arch variations via `when:` conditions based on facts
+- **D-06:** Playbook directory structure:
+ - `platform/playbooks/base.yaml` — shared base system setup
+ - `platform/registry/<claw>/playbooks/install.yaml` — claw-specific installation
+- **D-07:** Dedicated user per claw — create `opc-<hostname>` user for OpenClaw (isolates claw from system)
+- **D-08:** xclm user assumed to have passwordless sudo on hosts (pre-configured by user)
+
+### Error Handling
+- **D-09:** Fail fast, no rollback — stop on first error, leave system in partial state. Playbooks are idempotent so user can retry.
+- **D-10:** Error display: summary message + path to full Ansible log for debugging
+- **D-11:** Track install state in host record — mark as 'install_failed' or 'partial'. `clm status` shows it.
+
+### Fleet Status Display
+- **D-12:** Claw-centric view — list claws across all hosts, grouped by claw type
+- **D-13:** Live health check — SSH to hosts and check if claw process is running (not cached)
+- **D-14:** Essential info per claw: name, version, host, status (running/stopped/unknown)
+
+### Claude's Discretion
+- Exact Ansible task structure and module choices
+- Progress spinner styling and timing
+- Table column layout and widths
+- Log file location and format
+
+
+
+
+## Canonical References
+
+**Downstream agents MUST read these before planning or implementing.**
+
+### Requirements
+- `.planning/REQUIREMENTS.md` — INST-01 through INST-04, STAT-01 specifications
+
+### Existing Code
+- `src/clawrium/core/registry.py` — `check_compatibility()` for pre-install validation
+- `src/clawrium/core/hosts.py` — Host storage, will need install state tracking
+- `src/clawrium/core/hardware.py` — `gather_hardware()` pattern for Ansible facts
+- `src/clawrium/platform/registry/openclaw/manifest.yaml` — OpenClaw 
requirements
+
+### Prior Context
+- `.planning/phases/02-host-management/02-CONTEXT.md` — D-12 (two-user model), D-14 (Ansible facts pattern)
+- `.planning/phases/03-registry-compatibility/03-CONTEXT.md` — D-10 (binary compatibility), D-13 (no separate check command)
+
+### Project Constraints
+- `.planning/PROJECT.md` — Tech stack (Typer, ansible-runner), no-sudo policy in Clawrium itself, Ubuntu and Debian for v1
+
+
+
+
+## Existing Code Insights
+
+### Reusable Assets
+- `core/registry.py`: `check_compatibility()` — validates host against manifest before install
+- `core/hosts.py`: `load_hosts()`, `save_hosts()`, `update_host()` — extend for install state
+- `core/hardware.py`: `gather_hardware()` — Ansible facts pattern to reuse for process checks
+- `cli/host.py`: Rich table output pattern for status display
+
+### Established Patterns
+- Typer subcommand structure: `clm <command>` (e.g., `clm install`, `clm status`)
+- ansible-runner for remote execution
+- Rich spinners and tables for CLI output
+- JSON for user data storage
+
+### Integration Points
+- New `core/install.py` for installation orchestration
+- New `cli/install.py` command with hybrid prompts
+- New `cli/status.py` command for fleet view
+- Extend `core/hosts.py` with install state fields
+- New `platform/playbooks/base.yaml` for system setup
+- New `platform/registry/openclaw/playbooks/install.yaml` for OpenClaw setup
+
+
+
+
+## Specific Ideas
+
+- Two-layer playbook keeps concerns separate: OS team can update base.yaml, claw maintainers update their install.yaml
+- Base playbook is idempotent — safe to rerun if installing second claw on same host
+- Install state tracking prevents "is it installed?" confusion in multi-claw scenarios
+- Claw-centric status view answers "what's running in my fleet?" 
at a glance + + + + +## Deferred Ideas + +- Secrets/API key configuration — Phase 5 +- `--yes` flag to skip confirmation — v2 feature for scripting +- Rollback on failure — complexity not worth it for v1, playbooks are idempotent +- Uptime/restart tracking — nice-to-have, not essential for v1 +- Install from specific version — use latest for v1 + + + +--- + +*Phase: 04-installation-fleet-status* +*Context gathered: 2026-03-21* diff --git a/.planning/phases/04-installation-fleet-status/04-DISCUSSION-LOG.md b/.planning/phases/04-installation-fleet-status/04-DISCUSSION-LOG.md new file mode 100644 index 0000000..822a243 --- /dev/null +++ b/.planning/phases/04-installation-fleet-status/04-DISCUSSION-LOG.md @@ -0,0 +1,144 @@ +# Phase 4: Installation & Fleet Status - Discussion Log + +> **Audit trail only.** Do not use as input to planning, research, or execution agents. +> Decisions are captured in CONTEXT.md — this log preserves the alternatives considered. + +**Date:** 2026-03-21 +**Phase:** 04-installation-fleet-status +**Areas discussed:** Install flow UX, Ansible playbook structure, Error handling, Fleet status display + +--- + +## Install Flow UX + +| Option | Description | Selected | +|--------|-------------|----------| +| Fully interactive | Run `clm install`, get prompted for claw type and host. | | +| Flags only | `clm install --claw openclaw --host kevin`. Fails if missing required args. | | +| Hybrid | Flags override prompts. `clm install --claw openclaw` prompts only for host. | ✓ | + +**User's choice:** Hybrid +**Notes:** Allows both scripting and exploration. + +| Option | Description | Selected | +|--------|-------------|----------| +| Step-by-step with spinners | Show each phase with Rich spinners. | ✓ | +| Single progress bar | One progress bar with percentage. | | +| Verbose log stream | Show all Ansible output in real-time. 
| | + +**User's choice:** Step-by-step with spinners + +| Option | Description | Selected | +|--------|-------------|----------| +| Yes, show summary and confirm | Display summary and ask 'Proceed? [y/N]' | ✓ | +| No confirmation | Start immediately after validation. | | +| Only with --yes flag to skip | Default confirms, `--yes` bypasses. | | + +**User's choice:** Yes, show summary and confirm + +--- + +## Ansible Playbook Structure + +| Option | Description | Selected | +|--------|-------------|----------| +| Full setup | Install deps, create user, install claw, create workspace. | ✓ | +| Minimal — deps only | Just install dependencies. | | +| Deps + claw only | No user creation or workspace setup. | | + +**User's choice:** Full setup + +| Option | Description | Selected | +|--------|-------------|----------| +| Dedicated user per claw | Create `opc-` for OpenClaw. | ✓ | +| Use existing xclm user | Run claw under system admin user. | | +| User provides username | Prompt for username during install. | | + +**User's choice:** Dedicated user per claw + +**User clarification on playbook structure:** +- Two-layer architecture: base (OS/hardware packages) + claw (claw-specific setup) +- Base layer runs as xclm with sudo, claw layer runs as claw user +- Node.js install differs by OS+arch (e.g., Ubuntu x86 vs arm/Raspberry Pi) +- Single base playbook with Ansible conditionals handles variations via `when:` conditions + +| Option | Description | Selected | +|--------|-------------|----------| +| Assume xclm has sudo | xclm expected to have passwordless sudo. | ✓ | +| Prompt user when needed | Pause and ask user to run commands manually. | | +| Separate privileged playbook | Generate script user runs with sudo first. | | + +**User's choice:** Assume xclm has sudo + +--- + +## Error Handling + +| Option | Description | Selected | +|--------|-------------|----------| +| Fail fast, no rollback | Stop on first error. Leave system in partial state. 
| ✓ | +| Fail fast with rollback | Stop on error, attempt to undo completed steps. | | +| Retry then fail | Retry failed tasks 2-3 times before giving up. | | + +**User's choice:** Fail fast, no rollback + +| Option | Description | Selected | +|--------|-------------|----------| +| Summary + log path | Show summary + path to full Ansible log. | ✓ | +| Full Ansible output | Stream all Ansible output including errors. | | +| Summary only | Just 'Installation failed'. | | + +**User's choice:** Summary + log path + +| Option | Description | Selected | +|--------|-------------|----------| +| Record in host state | Mark host as 'install_failed' or 'partial'. | ✓ | +| No tracking | Don't track failures. | | +| Separate failed installs list | Keep a `failed_installs.json` log. | | + +**User's choice:** Record in host state + +--- + +## Fleet Status Display + +| Option | Description | Selected | +|--------|-------------|----------| +| Host-centric view | List hosts, each showing installed claws. | | +| Claw-centric view | List claws across all hosts, group by claw type. | ✓ | +| Combined dashboard | Summary stats + detailed table. | | + +**User's choice:** Claw-centric view + +| Option | Description | Selected | +|--------|-------------|----------| +| Process check only | Check if claw process is running. | | +| SSH + process check | Verify SSH connectivity, then check process. | ✓ | +| Cached status | Show last-known status, use --refresh for live. | | + +**User's choice:** SSH + process check + +| Option | Description | Selected | +|--------|-------------|----------| +| Essential only | Claw name, version, host, status. | ✓ | +| With uptime | Add uptime, last restart time. | | +| Full details | Include user, install path, port, PID. 
| | + +**User's choice:** Essential only + +--- + +## Claude's Discretion + +- Exact Ansible task structure and module choices +- Progress spinner styling and timing +- Table column layout and widths +- Log file location and format + +## Deferred Ideas + +- Secrets/API key configuration — Phase 5 +- `--yes` flag to skip confirmation — v2 feature +- Rollback on failure — not needed for v1 +- Uptime/restart tracking — nice-to-have +- Install from specific version — use latest for v1 diff --git a/.planning/phases/04-installation-fleet-status/04-VERIFICATION.md b/.planning/phases/04-installation-fleet-status/04-VERIFICATION.md new file mode 100644 index 0000000..d1b3c0d --- /dev/null +++ b/.planning/phases/04-installation-fleet-status/04-VERIFICATION.md @@ -0,0 +1,139 @@ +--- +phase: 04-installation-fleet-status +verified: 2026-03-21T21:45:00Z +status: passed +score: 5/5 must-haves verified +re_verification: false +--- + +# Phase 4: Installation & Fleet Status Verification Report + +**Phase Goal:** Users can install OpenClaw on Ubuntu hosts and view fleet status +**Verified:** 2026-03-21T21:45:00Z +**Status:** passed +**Re-verification:** No — initial verification + +## Goal Achievement + +### Observable Truths + +| # | Truth | Status | Evidence | +|---|-------|--------|----------| +| 1 | User runs `clm install` and flows through: pick claw → pick host → validate compatibility → install | ✓ VERIFIED | CLI command exists with interactive prompts (_select_claw, _select_host) and flag overrides (--claw, --host). Tests verify both flows. Compatibility checked before installation (line 118 install.py). | +| 2 | Installation validates compatibility before proceeding and fails fast if host is incompatible | ✓ VERIFIED | check_compatibility called at line 118 in install.py. Raises InstallationError with reasons if incompatible (lines 120-122). Tests verify incompatibility detection. 
| +| 3 | User sees real-time progress during installation (base setup, dependencies, claw installation) | ✓ VERIFIED | Rich Progress spinner implemented with on_event callback. Stages: validate, base, claw. Progress updated via callback at lines 278-297 in cli/install.py. Tests verify event streaming. | +| 4 | Installation fails fast with clear error messages if any step fails | ✓ VERIFIED | InstallationError raised at validation (lines 108, 113, 120-122, 148), base playbook failure (line 187), claw playbook failure (line 211). Error messages include specific reasons. Tests verify all error paths. | +| 5 | User runs `clm status` and sees all hosts with their claw instances, agents, and status | ✓ VERIFIED | Status command groups by claw type (D-12), shows host/version/user/status/installed_at. Live health checks performed via check_claw_health (line 75 status.py). Tests verify display and filtering. | + +**Score:** 5/5 truths verified + +### Required Artifacts + +| Artifact | Expected | Status | Details | +|----------|----------|--------|---------| +| `platform/playbooks/base.yaml` | System dependency installation (Node.js, build tools) | ✓ VERIFIED | 39 lines. Contains hosts, become:yes, nodejs installation via NodeSource, build-essential. All tasks substantive. | +| `src/clawrium/platform/registry/openclaw/playbooks/install.yaml` | OpenClaw-specific installation tasks | ✓ VERIFIED | 32 lines. Contains user creation (opc-{{inventory_hostname}}), git clone, npm install, workspace creation. All tasks substantive. | +| `src/clawrium/core/install.py` | Installation orchestration with validation | ✓ VERIFIED | 256 lines. Exports run_installation, InstallationError. Implements validation, state tracking, playbook execution. Wired to registry.check_compatibility, hosts.update_host, ansible_runner. | +| `src/clawrium/core/health.py` | Live health checking via SSH | ✓ VERIFIED | 177 lines. Exports ClawStatus enum, check_claw_health, check_all_claws_on_host. 
Uses ansible_runner with pgrep for process detection. Live SSH checks, not cached. | +| `src/clawrium/cli/install.py` | Interactive install command with progress display | ✓ VERIFIED | 171 lines. Exports install command. Implements interactive prompts, confirmation dialog, Rich progress spinner. Wired to core.install.run_installation. | +| `src/clawrium/cli/status.py` | Fleet status command with claw-centric display | ✓ VERIFIED | 130 lines. Exports status command. Groups by claw type, performs live health checks, displays results with color coding. Wired to core.health.check_claw_health. | +| `tests/test_install.py` | Installation module tests | ✓ VERIFIED | 9 tests covering validation, compatibility, success, failure, state tracking. All passing. | +| `tests/test_cli_install.py` | CLI install command tests | ✓ VERIFIED | 8 tests covering prompts, flags, confirmation, cancellation, errors. All passing. | +| `tests/test_health.py` | Health check tests | ✓ VERIFIED | 7 tests covering running, stopped, unknown, SSH failures, timeouts. All passing. | +| `tests/test_cli_status.py` | CLI status command tests | ✓ VERIFIED | 8 tests covering empty fleet, display, filtering, color coding. All passing. | + +### Key Link Verification + +| From | To | Via | Status | Details | +|------|----|----|--------|---------| +| install.py | registry.py | check_compatibility import | ✓ WIRED | Line 37: `from clawrium.core.registry import check_compatibility`. Used at line 118 before installation. | +| install.py | ansible_runner | playbook execution | ✓ WIRED | Lines 176, 202: ansible_runner.run with playbook paths. Both base and claw playbooks executed. | +| install.py | hosts.py | update_host for state tracking | ✓ WIRED | Lines 142, 225, 253: update_host called with set_installing, set_installed, set_failed callbacks. State persisted to hosts.yaml. | +| health.py | ansible_runner | remote process check | ✓ WIRED | Line 112: ansible_runner.run with shell module and pgrep command. 
Live SSH execution. | +| cli/install.py | core/install.py | run_installation import | ✓ WIRED | Line 12: import. Called at line 294 with on_event callback for progress. | +| cli/status.py | core/health.py | check_claw_health import | ✓ WIRED | Line 13: import. Called at line 75 in loop over all claws. Results displayed in table. | +| cli/main.py | cli/install.py | command registration | ✓ WIRED | Line 9: import install_command. Line 38: @app.command() decorator. Command shows in --help. | +| cli/main.py | cli/status.py | command registration | ✓ WIRED | Line 11: import status_command. Line 48: @app.command() decorator. Command shows in --help. | + +### Requirements Coverage + +| Requirement | Source Plan | Description | Status | Evidence | +|-------------|------------|-------------|--------|----------| +| INST-01 | 04-02 | User can install OpenClaw via interactive flow | ✓ SATISFIED | `clm install` command with interactive claw/host selection. Tests verify prompts and flag overrides work. | +| INST-02 | 04-01 | Installation validates compatibility before proceeding | ✓ SATISFIED | check_compatibility called at line 118. Raises InstallationError if incompatible with reasons. Tests verify rejection. | +| INST-03 | 04-02 | Installation streams progress in real-time | ✓ SATISFIED | Rich Progress spinner with on_event callback. Stages: validate, base, claw. Tests verify event streaming. | +| INST-04 | 04-01, 04-03 | Installation fails fast with clear error messages | ✓ SATISFIED | InstallationError raised at all validation points and playbook failures. Failed state tracked in host record with error message. | +| STAT-01 | 04-03, 04-04 | User can view fleet status | ✓ SATISFIED | `clm status` command shows all claws across hosts. Claw-centric grouping, live health checks, color-coded status display. Tests verify all scenarios. | + +**Coverage:** 5/5 requirements satisfied (100%) + +### Anti-Patterns Found + +No anti-patterns detected. 
+ +**Scan results:** +- No TODO/FIXME/XXX/HACK/PLACEHOLDER comments in any module +- No empty implementations (return null, return {}, return []) +- No hardcoded empty data flowing to user-visible output +- No console.log-only implementations +- All data structures properly initialized and populated +- All functions have substantive implementations + +### Human Verification Required + +#### 1. End-to-End Installation Test + +**Test:** +1. Set up Ubuntu 24.04 host with xclm user and passwordless sudo +2. Run `clm host add` to register host with SSH key +3. Run `clm install --claw openclaw --host ` +4. Verify installation completes successfully +5. SSH to host and verify: + - opc- user exists + - /home/opc-/openclaw directory exists + - npm dependencies installed + - Node.js 20 installed + - build-essential installed + +**Expected:** Installation completes without errors. All components installed correctly. User can start OpenClaw. + +**Why human:** Requires actual Ubuntu host with network access. Ansible playbooks execute real system changes. Cannot be fully mocked. + +#### 2. Fleet Status Live Health Check + +**Test:** +1. Install OpenClaw on host (per test 1) +2. Start OpenClaw process as opc- user +3. Run `clm status` +4. Verify status shows "running" in green +5. Stop OpenClaw process +6. Run `clm status` again +7. Verify status shows "stopped" in red + +**Expected:** Status accurately reflects live process state. SSH checks execute without timeout. Display updates correctly. + +**Why human:** Requires running OpenClaw process. Live SSH execution to remote host. Real-time process state detection. + +#### 3. Installation Error Handling + +**Test:** +1. Attempt to install OpenClaw on incompatible host (e.g., 32-bit arch, Ubuntu 22.04, insufficient memory) +2. Verify installation rejected with clear compatibility reasons before any playbook runs +3. Attempt installation with missing SSH key +4. Verify clear error message about missing key +5. 
Simulate playbook failure (e.g., network timeout during npm install) +6. Verify failed state recorded in host record +7. Run `clm status` and verify "install failed" shown in red + +**Expected:** All error paths provide clear, actionable messages. No partial installations leave system in broken state. Status correctly shows failed installations. + +**Why human:** Requires testing edge cases and failure scenarios. Network conditions, permission issues, and system state variations difficult to fully simulate. + +### Gaps Summary + +No gaps found. All must-haves verified, all requirements satisfied, all artifacts substantive and wired. + +--- + +_Verified: 2026-03-21T21:45:00Z_ +_Verifier: Claude (gsd-verifier)_ diff --git a/CLAUDE.md b/CLAUDE.md index d866d6b..4589bcf 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -13,7 +13,7 @@ Clawrium is a CLI/TUI tool for managing AI assistant fleets on local networks. I - **Tech stack**: Python + Typer CLI, ansible-runner for execution, uv/uvx for packaging - **Security**: No sudo permissions — Clawrium prompts user when privileged commands needed -- **Platform**: Ubuntu only for v1 +- **Platform**: Ubuntu and Debian for v1 - **Claw support**: OpenClaw only for v1 - **Deployment**: Fully local, no cloud dependencies diff --git a/platform/playbooks/base.yaml b/platform/playbooks/base.yaml new file mode 100644 index 0000000..a7e5b2d --- /dev/null +++ b/platform/playbooks/base.yaml @@ -0,0 +1,45 @@ +--- +- hosts: all + become: yes + tasks: + - name: Update apt cache + ansible.builtin.apt: + update_cache: yes + cache_valid_time: 3600 + + - name: Install required packages for NodeSource repository + ansible.builtin.apt: + name: + - curl + - ca-certificates + - gnupg + state: present + + - name: Create keyrings directory + ansible.builtin.file: + path: /etc/apt/keyrings + state: directory + mode: "0755" + + - name: Download NodeSource GPG key + ansible.builtin.get_url: + url: https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key + 
dest: /etc/apt/keyrings/nodesource.asc + mode: "0644" + + - name: Add NodeSource repository for Node.js 20 + ansible.builtin.apt_repository: + repo: "deb [signed-by=/etc/apt/keyrings/nodesource.asc] https://deb.nodesource.com/node_20.x nodistro main" + state: present + filename: nodesource + + - name: Install Node.js + ansible.builtin.apt: + name: nodejs + state: present + update_cache: yes + + - name: Install build-essential + ansible.builtin.apt: + name: build-essential + state: present diff --git a/src/clawrium/cli/init.py b/src/clawrium/cli/init.py index aa8cfda..508bd61 100644 --- a/src/clawrium/cli/init.py +++ b/src/clawrium/cli/init.py @@ -23,7 +23,7 @@ def init() -> None: """ # Create config directory config_dir = init_config_dir() - console.print(f"[green]Clawrium initialized![/green]") + console.print("[green]Clawrium initialized![/green]") console.print(f"Config directory: {config_dir}") console.print() diff --git a/src/clawrium/cli/install.py b/src/clawrium/cli/install.py new file mode 100644 index 0000000..3db0a63 --- /dev/null +++ b/src/clawrium/cli/install.py @@ -0,0 +1,175 @@ +"""Install command for deploying claws to hosts.""" + +from typing import Optional + +import typer +from rich.console import Console +from rich.markup import escape +from rich.panel import Panel +from rich.progress import Progress, SpinnerColumn, TextColumn + +from clawrium.core.hosts import load_hosts, get_host, HostsFileCorruptedError +from clawrium.core.install import run_installation, InstallationError +from clawrium.core.registry import ( + list_claws, + get_claw_info, + check_compatibility, + ManifestNotFoundError, +) + +__all__ = ["install"] + +console = Console() + + +def _select_claw() -> str: + """Prompt user to select a claw from registry.""" + claws = list_claws() + if not claws: + console.print("[red]Error:[/red] No claws available in registry") + raise typer.Exit(code=1) + + console.print("\n[bold]Available claws:[/bold]") + for i, claw in enumerate(claws, 1): + 
try: + info = get_claw_info(claw) + console.print(f" {i}. {escape(claw)} (v{info['latest_version']}) - {escape(info['description'])}") + except ManifestNotFoundError: + console.print(f" {i}. {escape(claw)} (manifest error)") + + console.print() + choice = typer.prompt("Select claw", type=int) + if choice < 1 or choice > len(claws): + console.print("[red]Invalid selection[/red]") + raise typer.Exit(code=1) + + return claws[choice - 1] + + +def _select_host() -> str: + """Prompt user to select a host from registered hosts.""" + try: + hosts = load_hosts() + except HostsFileCorruptedError as e: + console.print(f"[red]Error:[/red] {e}") + raise typer.Exit(code=1) + + if not hosts: + console.print("[red]Error:[/red] No hosts registered. Run 'clm host add' first.") + raise typer.Exit(code=1) + + console.print("\n[bold]Available hosts:[/bold]") + for i, host in enumerate(hosts, 1): + name = host.get("alias") or host["hostname"] + hw = host.get("hardware", {}) + arch = hw.get("architecture", "?") + mem_gb = round(hw.get("memtotal_mb", 0) / 1024, 1) if hw.get("memtotal_mb") else "?" + console.print(f" {i}. {escape(name)} ({arch}, {mem_gb}GB)") + + console.print() + choice = typer.prompt("Select host", type=int) + if choice < 1 or choice > len(hosts): + console.print("[red]Invalid selection[/red]") + raise typer.Exit(code=1) + + # Return hostname for lookup + selected = hosts[choice - 1] + return selected.get("alias") or selected["hostname"] + + +def install( + claw: Optional[str] = typer.Option( + None, "--claw", "-c", help="Claw type to install (e.g., openclaw)" + ), + host: Optional[str] = typer.Option( + None, "--host", "-H", help="Target host (hostname or alias)" + ), + yes: bool = typer.Option( + False, "--yes", "-y", help="Skip confirmation prompt" + ), +) -> None: + """Install a claw on a host. + + Without flags, prompts for claw and host selection interactively. + With --claw and --host flags, runs directly (per D-01 hybrid invocation). 
+ """ + # Step 1: Get claw (prompt if not provided per D-01) + selected_claw = claw or _select_claw() + + # Step 2: Validate claw exists + try: + get_claw_info(selected_claw) # Validates claw exists + except ManifestNotFoundError: + console.print(f"[red]Error:[/red] Claw '{escape(selected_claw)}' not found in registry") + raise typer.Exit(code=1) + + # Step 3: Get host (prompt if not provided per D-01) + selected_host = host or _select_host() + + # Step 4: Load host and check compatibility + try: + host_record = get_host(selected_host) + except HostsFileCorruptedError as e: + console.print(f"[red]Error:[/red] {e}") + raise typer.Exit(code=1) + + if not host_record: + console.print(f"[red]Error:[/red] Host '{escape(selected_host)}' not found") + raise typer.Exit(code=1) + + hardware = host_record.get("hardware", {}) + compat = check_compatibility(selected_claw, hardware) + + if not compat["compatible"]: + console.print(f"[red]Error:[/red] Host is incompatible with {selected_claw}:") + for reason in compat["reasons"]: + console.print(f" - {reason}") + raise typer.Exit(code=1) + + matched_version = compat["matched_entry"]["version"] + display_host = host_record.get("alias") or host_record["hostname"] + + # Step 5: Show confirmation summary (per D-03) + summary = Panel( + f"[bold]Claw:[/bold] {selected_claw}\n" + f"[bold]Version:[/bold] {matched_version}\n" + f"[bold]Host:[/bold] {display_host}\n" + f"[bold]Architecture:[/bold] {hardware.get('architecture', 'unknown')}\n" + f"[bold]Memory:[/bold] {round(hardware.get('memtotal_mb', 0) / 1024, 1)}GB", + title="Installation Summary", + border_style="cyan", + ) + console.print(summary) + + if not yes and not typer.confirm("\nProceed with installation?", default=False): + console.print("Installation cancelled.") + raise typer.Exit(code=0) + + # Step 6: Run installation with progress spinner (per D-02) + console.print() # Blank line before progress + + try: + with Progress( + SpinnerColumn(), + 
TextColumn("[progress.description]{task.description}"), + console=console, + transient=True, + ) as progress: + task = progress.add_task("Starting installation...", total=None) + + def update_progress(stage: str, message: str) -> None: + progress.update(task, description=f"[{stage}] {message}") + + result = run_installation( + claw_name=selected_claw, + hostname=selected_host, + on_event=update_progress, + ) + + # Success + console.print(f"[green]Success![/green] {selected_claw} v{result['version']} installed on {display_host}") + + except InstallationError as e: + # Error display per D-10 + console.print(f"[red]Installation failed:[/red] {e}") + raise typer.Exit(code=1) diff --git a/src/clawrium/cli/main.py b/src/clawrium/cli/main.py index caf557e..48cfd68 100644 --- a/src/clawrium/cli/main.py +++ b/src/clawrium/cli/main.py @@ -1,10 +1,14 @@ """Main CLI entry point for Clawrium.""" +from typing import Optional + import typer from clawrium.cli.init import init as init_command from clawrium.cli.host import host_app +from clawrium.cli.install import install as install_command from clawrium.cli.registry import registry_app +from clawrium.cli.status import status as status_command __all__ = ["app"] @@ -30,6 +34,24 @@ def init() -> None: init_command() +@app.command() +def install( + claw: Optional[str] = typer.Option(None, "--claw", "-c", help="Claw type to install"), + host: Optional[str] = typer.Option(None, "--host", "-H", help="Target host"), + yes: bool = typer.Option(False, "--yes", "-y", help="Skip confirmation"), +) -> None: + """Install a claw on a host.""" + install_command(claw=claw, host=host, yes=yes) + + +@app.command() +def status( + host: Optional[str] = typer.Option(None, "--host", "-H", help="Filter to specific host"), +) -> None: + """Show fleet status across all hosts.""" + status_command(host=host) + + # Register host subcommands app.add_typer(host_app, name="host") diff --git a/src/clawrium/cli/status.py b/src/clawrium/cli/status.py new file mode 
100644 index 0000000..123c09b --- /dev/null +++ b/src/clawrium/cli/status.py @@ -0,0 +1,128 @@ +"""Fleet status command for viewing claw instances across hosts.""" + +from collections import defaultdict +from typing import Optional + +import typer +from rich.console import Console +from rich.markup import escape +from rich.progress import Progress, SpinnerColumn, TextColumn +from rich.table import Table + +from clawrium.core.hosts import load_hosts, HostsFileCorruptedError +from clawrium.core.health import check_claw_health, ClawStatus + +__all__ = ["status"] + +console = Console() + + +def status( + host: Optional[str] = typer.Option( + None, "--host", "-H", help="Filter to specific host (hostname or alias)" + ), +) -> None: + """Show fleet status across all hosts. + + Displays claw instances grouped by claw type (per D-12) with live + health check (per D-13). Shows name, version, host, status (per D-14). + """ + # Load all hosts + try: + hosts = load_hosts() + except HostsFileCorruptedError as e: + console.print(f"[red]Error:[/red] {e}") + raise typer.Exit(code=1) + + if not hosts: + console.print("No hosts registered. 
Run 'clm host add' to add a host.") + return + + # Filter to specific host if requested + if host: + hosts = [h for h in hosts if h.get("hostname") == host or h.get("alias") == host] + if not hosts: + console.print(f"[red]Error:[/red] Host '{escape(host)}' not found") + raise typer.Exit(code=1) + + # Collect all claws across hosts, grouped by claw type + # Structure: {claw_name: [(host_record, claw_record), ...]} + claws_by_type: dict[str, list[tuple[dict, dict]]] = defaultdict(list) + + for h in hosts: + for claw_name, claw_record in h.get("claws", {}).items(): + claws_by_type[claw_name].append((h, claw_record)) + + if not claws_by_type: + console.print("No claws installed on any host.") + console.print("Run 'clm install' to install a claw.") + return + + # Perform live health checks with progress spinner + health_results: dict[tuple[str, str], ClawStatus] = {} # (claw, hostname) -> status + + with Progress( + SpinnerColumn(), + TextColumn("[progress.description]{task.description}"), + console=console, + transient=True, + ) as progress: + task = progress.add_task("Checking fleet health...", total=None) + + for claw_name, instances in claws_by_type.items(): + for h, claw_record in instances: + progress.update(task, description=f"Checking {claw_name} on {h.get('alias') or h['hostname']}...") + result = check_claw_health(claw_name, h) + health_results[(claw_name, h["hostname"])] = result["status"] + + console.print() # Blank line after progress + + # Display claw-centric view (per D-12) + for claw_name in sorted(claws_by_type.keys()): + instances = claws_by_type[claw_name] + + table = Table(title=f"[bold cyan]{escape(claw_name)}[/bold cyan]") + table.add_column("Host", style="white") + table.add_column("Version", style="green") + table.add_column("User", style="dim") + table.add_column("Status") + table.add_column("Installed", style="dim") + + for h, claw_record in instances: + display_host = h.get("alias") or h["hostname"] + version = claw_record.get("version", "?") 
+ user = claw_record.get("user", "-") + installed_at = claw_record.get("installed_at", "-") + if installed_at and installed_at != "-": + # Format as date only for readability + installed_at = installed_at.split("T")[0] + + # Get live status with color coding + live_status = health_results.get((claw_name, h["hostname"]), ClawStatus.UNKNOWN) + + if live_status == ClawStatus.RUNNING: + status_display = "[green]running[/green]" + elif live_status == ClawStatus.STOPPED: + status_display = "[red]stopped[/red]" + elif live_status == ClawStatus.NOT_INSTALLED: + status_display = "[yellow]not installed[/yellow]" + else: + status_display = "[yellow]unknown[/yellow]" + + # Also show install state if failed + install_status = claw_record.get("status", "") + if install_status == "failed": + status_display = "[red]install failed[/red]" + elif install_status == "installing": + status_display = "[yellow]installing...[/yellow]" + + table.add_row( + escape(display_host), + version, + escape(user) if user else "-", + status_display, + installed_at, + ) + + console.print(table) + console.print() # Space between claw types diff --git a/src/clawrium/core/health.py b/src/clawrium/core/health.py new file mode 100644 index 0000000..5690863 --- /dev/null +++ b/src/clawrium/core/health.py @@ -0,0 +1,215 @@ +"""Live health checking for claw instances. + +This module provides functions to check if claw processes are running +on remote hosts via SSH. Per D-13, this performs live checks, not cached data. +""" + +import logging +import os +import re +import tempfile +from enum import Enum +from typing import TypedDict + +import ansible_runner + +from clawrium.core.keys import get_host_private_key + +logger = logging.getLogger(__name__) + +# Valid Linux username pattern: starts with lowercase letter, followed by +# lowercase letters, digits, underscores, or hyphens. Max 32 chars total. 
+VALID_USERNAME_PATTERN = re.compile(r"^[a-z][a-z0-9_-]{0,31}$") + + +class ClawStatus(str, Enum): + """Status of a claw instance.""" + RUNNING = "running" + STOPPED = "stopped" + UNKNOWN = "unknown" + NOT_INSTALLED = "not_installed" + + +class HealthResult(TypedDict): + """Result of health check for a claw on a host.""" + claw: str + host: str + status: ClawStatus + user: str | None + error: str | None + + +def check_claw_health( + claw_name: str, + host: dict, +) -> HealthResult: + """Check if a claw process is running on a host. + + Performs live SSH check per D-13. Does not use cached data. + + Args: + claw_name: Name of claw to check (e.g., "openclaw") + host: Host record dict with hostname, port, user, key_id, claws + + Returns: + HealthResult with status and any error message + """ + hostname = host["hostname"] + port = host.get("port", 22) + user = host.get("user", "xclm") + + # Get claw record from host + claws = host.get("claws", {}) + claw_record = claws.get(claw_name) + + if not claw_record: + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.NOT_INSTALLED, + "user": None, + "error": None, + } + + claw_user = claw_record.get("user") + if not claw_user: + # No user set - can't check + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.UNKNOWN, + "user": None, + "error": "No claw user recorded", + } + + # Validate username to prevent command injection + if not VALID_USERNAME_PATTERN.match(claw_user): + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.UNKNOWN, + "user": claw_user, + "error": f"Invalid claw user format: {claw_user}", + } + + # Get SSH key + key_id = host.get("key_id") or hostname + ssh_key = get_host_private_key(key_id) + if not ssh_key: + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.UNKNOWN, + "user": claw_user, + "error": "SSH key not found", + } + + # Build inventory + inventory = { + "all": { + "hosts": { + hostname: { + "ansible_user": 
user, + "ansible_port": port, + "ansible_ssh_private_key_file": str(ssh_key), + } + } + } + } + + # Check for node process owned by claw user + # OpenClaw runs as a Node.js process + check_cmd = f"pgrep -u {claw_user} node >/dev/null 2>&1 && echo RUNNING || echo STOPPED" + + with tempfile.TemporaryDirectory() as tmpdir: + os.chmod(tmpdir, 0o700) + + result = ansible_runner.run( + private_data_dir=tmpdir, + inventory=inventory, + host_pattern=hostname, + module="shell", + module_args=check_cmd, + quiet=True, + timeout=15, + ) + + if result.status == "timeout": + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.UNKNOWN, + "user": claw_user, + "error": "Health check timed out", + } + + if result.status != "successful": + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.UNKNOWN, + "user": claw_user, + "error": f"SSH failed: {result.status}", + } + + # Parse output from events + output = "" + for event in result.events: + event_type = event.get("event") + if event_type == "runner_on_unreachable": + # Host unreachable - network issue, not process status + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.UNKNOWN, + "user": claw_user, + "error": "Host unreachable", + } + if event_type == "runner_on_ok": + output = event.get("event_data", {}).get("res", {}).get("stdout", "") + break + + if "RUNNING" in output: + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.RUNNING, + "user": claw_user, + "error": None, + } + elif "STOPPED" in output: + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.STOPPED, + "user": claw_user, + "error": None, + } + else: + # Unexpected output - treat as unknown + return { + "claw": claw_name, + "host": hostname, + "status": ClawStatus.UNKNOWN, + "user": claw_user, + "error": f"Unexpected output: {output[:50]}" if output else "No output", + } + + +def check_all_claws_on_host(host: dict) -> list[HealthResult]: + """Check health of 
all installed claws on a host. + + Args: + host: Host record dict + + Returns: + List of HealthResult for each installed claw + """ + results = [] + claws = host.get("claws", {}) + + for claw_name in claws: + result = check_claw_health(claw_name, host) + results.append(result) + + return results diff --git a/src/clawrium/core/hosts.py b/src/clawrium/core/hosts.py index 21ea084..538de58 100644 --- a/src/clawrium/core/hosts.py +++ b/src/clawrium/core/hosts.py @@ -19,6 +19,7 @@ "update_host", "HOSTS_FILE", "HostsFileCorruptedError", + "DuplicateHostError", ] HOSTS_FILE = "hosts.json" @@ -155,19 +156,57 @@ def update_host(hostname: str, updater: Callable[[dict], dict]) -> bool: return found +class DuplicateHostError(Exception): + """Raised when trying to add a host that already exists.""" + + pass + + def add_host(host: dict) -> None: - """Add a host to the registry. + """Add a host to the registry atomically. + + Acquires exclusive lock for the entire load-modify-save operation + to prevent TOCTOU races from concurrent add_host calls. Args: host: Host dictionary to add. + + Raises: + DuplicateHostError: If hostname already exists in registry. """ - hosts = load_hosts() - hosts.append(host) - save_hosts(hosts) + hostname = host.get("hostname") + with _hosts_lock(): + hosts = load_hosts() + + # Check for duplicate + for existing in hosts: + if existing.get("hostname") == hostname: + raise DuplicateHostError(f"Host '{hostname}' already exists") + + hosts.append(host) + + # Save without re-acquiring lock + config_dir = init_config_dir() + hosts_path = config_dir / HOSTS_FILE + fd, tmp_path = tempfile.mkstemp(dir=config_dir, suffix=".tmp") + try: + os.fchmod(fd, 0o600) + with os.fdopen(fd, "w") as f: + json.dump(hosts, f, indent=2) + os.replace(tmp_path, hosts_path) + except Exception: + try: + os.unlink(tmp_path) + except OSError: + pass + raise def remove_host(hostname: str) -> bool: - """Remove a host by hostname. + """Remove a host by hostname atomically. 
+ + Acquires exclusive lock for the entire load-modify-save operation + to prevent TOCTOU races from concurrent remove_host calls. Args: hostname: The hostname to remove. @@ -175,15 +214,31 @@ def remove_host(hostname: str) -> bool: Returns: True if host was found and removed, False otherwise. """ - hosts = load_hosts() - filtered = [h for h in hosts if h.get("hostname") != hostname] + with _hosts_lock(): + hosts = load_hosts() + filtered = [h for h in hosts if h.get("hostname") != hostname] - if len(filtered) == len(hosts): - # No host was removed - return False + if len(filtered) == len(hosts): + # No host was removed + return False + + # Save without re-acquiring lock + config_dir = init_config_dir() + hosts_path = config_dir / HOSTS_FILE + fd, tmp_path = tempfile.mkstemp(dir=config_dir, suffix=".tmp") + try: + os.fchmod(fd, 0o600) + with os.fdopen(fd, "w") as f: + json.dump(filtered, f, indent=2) + os.replace(tmp_path, hosts_path) + except Exception: + try: + os.unlink(tmp_path) + except OSError: + pass + raise - save_hosts(filtered) - return True + return True def get_host(identifier: str) -> dict | None: diff --git a/src/clawrium/core/install.py b/src/clawrium/core/install.py new file mode 100644 index 0000000..da35a9a --- /dev/null +++ b/src/clawrium/core/install.py @@ -0,0 +1,300 @@ +"""Installation orchestration for claw deployment. + +This module handles the end-to-end installation flow: +1. Validate claw exists in registry +2. Check host compatibility +3. Run base playbook (system dependencies) +4. Run claw-specific playbook + +Host record schema (extended): +{ + "hostname": str, + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installed" | "failed" | "installing", + "installed_at": "ISO timestamp", + "error": str | None, + "user": "opc-hostname" # per D-07 + } + }, + ...existing fields... 
+} +""" + +import logging +import os +from datetime import datetime, timezone +from pathlib import Path +from typing import Callable, TypedDict + +import ansible_runner + +from clawrium.core.config import get_config_dir +from clawrium.core.hosts import get_host, update_host +from clawrium.core.keys import get_host_private_key +from clawrium.core.registry import ( + check_compatibility, + load_manifest, + ManifestNotFoundError, +) + +logger = logging.getLogger(__name__) + + +class InstallationError(Exception): + """Raised when installation fails.""" + pass + + +class InstallResult(TypedDict): + """Result of installation operation.""" + success: bool + claw: str + version: str + host: str + playbooks_run: list[str] + error: str | None + + +def _get_base_playbook_path() -> Path: + """Get path to base system playbook.""" + # Base playbook is at project root/platform/playbooks/base.yaml + # From src/clawrium/core/install.py: parent.parent.parent.parent gets to project root + return Path(__file__).parent.parent.parent.parent / "platform" / "playbooks" / "base.yaml" + + +def _get_claw_playbook_path(claw_name: str) -> Path: + """Get path to claw-specific install playbook.""" + return ( + Path(__file__).parent.parent + / "platform" + / "registry" + / claw_name + / "playbooks" + / "install.yaml" + ) + + +def _get_logs_dir() -> Path: + """Get logs directory, creating if needed.""" + logs_dir = get_config_dir() / "logs" + logs_dir.mkdir(parents=True, exist_ok=True) + return logs_dir + + +def _get_claw_user(claw_name: str, host: dict) -> str: + """Generate claw user name from host alias or key_id. + + Uses alias if available, otherwise key_id. Never uses IP address. + Prefix depends on claw type (zc- for zeroclaw, opc- for openclaw, etc.) 
+ """ + # Claw prefixes + prefixes = { + "zeroclaw": "zc", + "openclaw": "opc", + "nemoclaw": "nc", + } + prefix = prefixes.get(claw_name, claw_name[:3]) + + # Use alias first, then key_id (which should be set during host init) + host_name = host.get("alias") or host.get("key_id") or host["hostname"] + + # Sanitize: only allow alphanumeric and hyphen, no dots + sanitized = "".join(c if c.isalnum() or c == "-" else "-" for c in host_name) + + return f"{prefix}-{sanitized}" + + +def run_installation( + claw_name: str, + hostname: str, + on_event: Callable[[str, str], None] | None = None, +) -> InstallResult: + """Run full installation of a claw on a host. + + Args: + claw_name: Name of claw to install (e.g., "openclaw") + hostname: Hostname or alias of target host + on_event: Optional callback for progress events (stage, message) + + Returns: + InstallResult with success status and details + + Raises: + InstallationError: If validation fails or playbook execution fails + """ + def emit(stage: str, message: str) -> None: + if on_event: + on_event(stage, message) + logger.info("[%s] %s", stage, message) + + # Step 1: Validate claw exists + emit("validate", f"Checking {claw_name} manifest...") + try: + load_manifest(claw_name) # Validates claw exists + except ManifestNotFoundError as e: + raise InstallationError(f"Claw '{claw_name}' not found in registry") from e + + # Step 2: Get host record + emit("validate", f"Loading host {hostname}...") + host = get_host(hostname) + if not host: + raise InstallationError(f"Host '{hostname}' not found. 
Run 'clm host add' first.") + + # Step 3: Check compatibility + emit("validate", "Checking compatibility...") + hardware = host.get("hardware", {}) + compat = check_compatibility(claw_name, hardware) + + if not compat["compatible"]: + reasons = ", ".join(compat["reasons"]) + raise InstallationError(f"Host is incompatible: {reasons}") + + matched_version = compat["matched_entry"]["version"] + emit("validate", f"Compatible with {claw_name} v{matched_version}") + + # Step 4: Generate claw user and set installing status + claw_user = _get_claw_user(claw_name, host) + + def set_installing(h: dict) -> dict: + if "claws" not in h: + h["claws"] = {} + h["claws"][claw_name] = { + "version": matched_version, + "status": "installing", + "installed_at": None, + "error": None, + "user": claw_user + } + return h + + update_host(host["hostname"], set_installing) + emit("validate", f"Installation state tracked (user: {claw_user})") + + # Step 5: Get SSH credentials + key_id = host.get("key_id") or host["hostname"] + ssh_key = get_host_private_key(key_id) + if not ssh_key: + raise InstallationError(f"No SSH key found for host. 
Run 'clm host init {key_id}'.") + + # Step 6: Build inventory with extra vars for playbook + inventory = { + "all": { + "hosts": { + host["hostname"]: { + "ansible_user": host.get("user", "xclm"), + "ansible_port": host.get("port", 22), + "ansible_ssh_private_key_file": str(ssh_key), + } + }, + "vars": { + "claw_user": claw_user, + "claw_version": f"v{matched_version}", + } + } + } + + # Step 7: Setup persistent logs directory + logs_dir = _get_logs_dir() + timestamp = datetime.now().strftime("%Y%m%d-%H%M%S") + host_display = host.get("alias") or host.get("key_id") or host["hostname"] + install_log_dir = logs_dir / f"install-{claw_name}-{host_display}-{timestamp}" + install_log_dir.mkdir(parents=True, exist_ok=True) + os.chmod(install_log_dir, 0o700) + + try: + # Step 8: Run base playbook + base_playbook = _get_base_playbook_path() + if not base_playbook.exists(): + raise InstallationError(f"Base playbook not found: {base_playbook}") + + emit("base", "Installing system dependencies...") + playbooks_run = [] + + base_data_dir = install_log_dir / "base" + base_data_dir.mkdir(exist_ok=True) + + result = ansible_runner.run( + private_data_dir=str(base_data_dir), + inventory=inventory, + playbook=str(base_playbook), + quiet=False, # Show output + timeout=300, # 5 min timeout for base install + ) + + if result.status != "successful": + raise InstallationError( + f"Base playbook failed: {result.status}. 
" + f"Check logs at {base_data_dir}/artifacts/" + ) + playbooks_run.append(str(base_playbook)) + emit("base", "System dependencies installed") + + # Step 9: Run claw playbook + claw_playbook = _get_claw_playbook_path(claw_name) + if not claw_playbook.exists(): + raise InstallationError(f"Claw playbook not found: {claw_playbook}") + + emit("claw", f"Installing {claw_name}...") + + claw_data_dir = install_log_dir / "claw" + claw_data_dir.mkdir(exist_ok=True) + + result = ansible_runner.run( + private_data_dir=str(claw_data_dir), + inventory=inventory, + playbook=str(claw_playbook), + quiet=False, # Show output + timeout=600, # 10 min timeout for claw install + ) + + if result.status != "successful": + raise InstallationError( + f"Claw playbook failed: {result.status}. " + f"Check logs at {claw_data_dir}/artifacts/" + ) + playbooks_run.append(str(claw_playbook)) + emit("claw", f"{claw_name} installed successfully") + + # Step 10: Update host with success status + def set_installed(h: dict) -> dict: + if "claws" in h and claw_name in h["claws"]: + h["claws"][claw_name]["status"] = "installed" + h["claws"][claw_name]["installed_at"] = datetime.now(timezone.utc).isoformat() + return h + + update_host(host["hostname"], set_installed) + emit("complete", f"Installation complete. 
Logs at {install_log_dir}") + + return { + "success": True, + "claw": claw_name, + "version": matched_version, + "host": host["hostname"], + "playbooks_run": playbooks_run, + "error": None, + } + + except Exception as e: + # Step 11: Update host with failure status + error_msg = str(e) + + def set_failed(h: dict) -> dict: + if "claws" not in h: + h["claws"] = {} + if claw_name not in h["claws"]: + h["claws"][claw_name] = { + "version": matched_version, + "user": claw_user + } + h["claws"][claw_name]["status"] = "failed" + h["claws"][claw_name]["error"] = error_msg + h["claws"][claw_name]["installed_at"] = datetime.now(timezone.utc).isoformat() + return h + + update_host(host["hostname"], set_failed) + emit("error", f"Installation failed. Logs at {install_log_dir}") + + # Re-raise the exception + raise diff --git a/src/clawrium/platform/registry/openclaw/playbooks/install.yaml b/src/clawrium/platform/registry/openclaw/playbooks/install.yaml new file mode 100644 index 0000000..9e365cb --- /dev/null +++ b/src/clawrium/platform/registry/openclaw/playbooks/install.yaml @@ -0,0 +1,31 @@ +--- +- hosts: all + become: yes + tasks: + - name: Create claw user + ansible.builtin.user: + name: "opc-{{ inventory_hostname }}" + state: present + create_home: yes + shell: /bin/bash + + - name: Clone OpenClaw repository + ansible.builtin.git: + repo: https://github.com/openclaw/openclaw.git + dest: "/home/opc-{{ inventory_hostname }}/openclaw" + version: "{{ claw_version | default('v1.0.0') }}" + become_user: "opc-{{ inventory_hostname }}" + + - name: Install npm dependencies + ansible.builtin.command: + cmd: npm install + chdir: "/home/opc-{{ inventory_hostname }}/openclaw" + become_user: "opc-{{ inventory_hostname }}" + + - name: Create workspace directory + ansible.builtin.file: + path: "/home/opc-{{ inventory_hostname }}/workspace" + state: directory + owner: "opc-{{ inventory_hostname }}" + group: "opc-{{ inventory_hostname }}" + mode: "0700" diff --git 
a/src/clawrium/platform/registry/zeroclaw/manifest.yaml b/src/clawrium/platform/registry/zeroclaw/manifest.yaml index 2614054..132d09c 100644 --- a/src/clawrium/platform/registry/zeroclaw/manifest.yaml +++ b/src/clawrium/platform/registry/zeroclaw/manifest.yaml @@ -1,43 +1,55 @@ name: zeroclaw description: "Lightweight AI assistant for edge devices and Raspberry Pi" entries: - - version: "0.2.0" + # Raspberry Pi 2/3 (armv7l) - Debian 13 (trixie) + - version: "0.5.7" os: debian - os_version: "11" + os_version: "13" arch: armv7l + requirements: + min_memory_mb: 512 # Pi 2 has ~920MB usable (1GB - video mem) + gpu_required: false + dependencies: + python: ">=3.9" + # Raspberry Pi 4/5 (aarch64) - Ubuntu + - version: "0.5.7" + os: ubuntu + os_version: "22.04" + arch: aarch64 requirements: min_memory_mb: 1024 gpu_required: false dependencies: python: ">=3.9" - nodejs: ">=18.0.0" - - version: "0.2.0" - os: debian - os_version: "12" - arch: armv7l + nodejs: ">=20.0.0" + - version: "0.5.7" + os: ubuntu + os_version: "24.04" + arch: aarch64 requirements: min_memory_mb: 1024 gpu_required: false dependencies: python: ">=3.9" - nodejs: ">=18.0.0" - - version: "0.2.0" + nodejs: ">=20.0.0" + # Desktop/Server x86_64 - Ubuntu + - version: "0.5.7" os: ubuntu os_version: "22.04" - arch: aarch64 + arch: x86_64 requirements: - min_memory_mb: 2048 + min_memory_mb: 1024 gpu_required: false dependencies: python: ">=3.9" - nodejs: ">=18.0.0" - - version: "0.2.0" + nodejs: ">=20.0.0" + - version: "0.5.7" os: ubuntu os_version: "24.04" - arch: aarch64 + arch: x86_64 requirements: - min_memory_mb: 2048 + min_memory_mb: 1024 gpu_required: false dependencies: python: ">=3.9" - nodejs: ">=18.0.0" + nodejs: ">=20.0.0" diff --git a/src/clawrium/platform/registry/zeroclaw/playbooks/install.yaml b/src/clawrium/platform/registry/zeroclaw/playbooks/install.yaml new file mode 100644 index 0000000..cc06c20 --- /dev/null +++ b/src/clawrium/platform/registry/zeroclaw/playbooks/install.yaml @@ -0,0 +1,54 
@@ +--- +- hosts: all + become: yes + vars: + # Map ansible architecture to zeroclaw release naming + arch_map: + armv7l: "armv7-unknown-linux-gnueabihf" + aarch64: "aarch64-unknown-linux-gnu" + x86_64: "x86_64-unknown-linux-gnu" + release_arch: "{{ arch_map[ansible_architecture] }}" + release_url: "https://github.com/zeroclaw-labs/zeroclaw/releases/download/{{ claw_version }}/zeroclaw-{{ release_arch }}.tar.gz" + + tasks: + - name: Create claw user + ansible.builtin.user: + name: "{{ claw_user }}" + state: present + create_home: yes + shell: /bin/bash + + - name: Create bin directory + ansible.builtin.file: + path: "/home/{{ claw_user }}/bin" + state: directory + owner: "{{ claw_user }}" + group: "{{ claw_user }}" + mode: "0755" + + - name: Download zeroclaw binary + ansible.builtin.get_url: + url: "{{ release_url }}" + dest: "/tmp/zeroclaw-{{ release_arch }}.tar.gz" + mode: "0644" + + - name: Extract zeroclaw binary + ansible.builtin.unarchive: + src: "/tmp/zeroclaw-{{ release_arch }}.tar.gz" + dest: "/home/{{ claw_user }}/bin" + remote_src: yes + owner: "{{ claw_user }}" + group: "{{ claw_user }}" + + - name: Cleanup tarball + ansible.builtin.file: + path: "/tmp/zeroclaw-{{ release_arch }}.tar.gz" + state: absent + + - name: Create workspace directory + ansible.builtin.file: + path: "/home/{{ claw_user }}/workspace" + state: directory + owner: "{{ claw_user }}" + group: "{{ claw_user }}" + mode: "0700" diff --git a/tests/test_cli_init.py b/tests/test_cli_init.py index 430f886..267ce20 100644 --- a/tests/test_cli_init.py +++ b/tests/test_cli_init.py @@ -77,8 +77,6 @@ def test_init_exits_1_when_deps_missing( from clawrium.core import deps # Mock ansible as missing - original_check = deps.check_ansible - def mock_check_ansible(): return deps.DependencyStatus( name="ansible", diff --git a/tests/test_cli_install.py b/tests/test_cli_install.py new file mode 100644 index 0000000..a5d4eb3 --- /dev/null +++ b/tests/test_cli_install.py @@ -0,0 +1,270 @@ +"""Tests for CLI 
install command.""" + +import json +import os +from pathlib import Path +from unittest.mock import patch + +from typer.testing import CliRunner + +from clawrium.cli.main import app + +runner = CliRunner() + + +def create_test_keypair(config_dir: Path, key_id: str) -> None: + """Create a test keypair for a host (required before install).""" + key_dir = config_dir / "keys" / key_id + key_dir.mkdir(parents=True, exist_ok=True) + (key_dir / "xclm_ed25519").write_text("test-private-key") + (key_dir / "xclm_ed25519").chmod(0o600) + (key_dir / "xclm_ed25519.pub").write_text("ssh-ed25519 AAAA... clawrium") + + +def create_host(config_dir: Path, hostname: str, alias: str | None = None, key_id: str | None = None) -> None: + """Create a test host entry.""" + hosts_file = config_dir / "hosts.json" + config_dir.mkdir(parents=True, exist_ok=True) + + host_data = { + "hostname": hostname, + "key_id": key_id or hostname, + "port": 22, + "user": "xclm", + "auth_method": "key", + "hardware": { + "architecture": "x86_64", + "processor_cores": 4, + "memtotal_mb": 8192, + "os": "ubuntu", + "os_version": "24.04", + "distribution": "ubuntu", + "distribution_version": "24.04", + "gpu": {"present": False}, + }, + "metadata": { + "added_at": "2026-03-21T00:00:00Z", + "last_seen": "2026-03-21T00:00:00Z", + "tags": [], + }, + } + + if alias: + host_data["alias"] = alias + + # Load existing hosts or create new + if hosts_file.exists(): + hosts = json.loads(hosts_file.read_text()) + else: + hosts = [] + + hosts.append(host_data) + hosts_file.write_text(json.dumps(hosts, indent=2)) + + +def test_install_prompts_for_claw(isolated_config: Path): + """clm install with no --claw flag triggers claw selection prompt.""" + # Setup: create host and keypair + create_test_keypair(isolated_config, "testhost") + create_host(isolated_config, "192.168.1.100", alias="testhost", key_id="testhost") + + # Run without --claw, answer prompts with EOF to cancel + result = runner.invoke(app, ["install", "--host", 
"testhost"], input="\n", env=os.environ) + + # Should show claw selection prompt + assert "available claw" in result.output.lower() or "select claw" in result.output.lower() + + +def test_install_prompts_for_host(isolated_config: Path): + """clm install with no --host flag triggers host selection prompt.""" + # Setup: create host and keypair + create_test_keypair(isolated_config, "testhost") + create_host(isolated_config, "192.168.1.100", alias="testhost", key_id="testhost") + + # Run without --host, answer prompts with EOF to cancel + result = runner.invoke(app, ["install", "--claw", "openclaw"], input="\n", env=os.environ) + + # Should show host selection prompt + assert "available host" in result.output.lower() or "select host" in result.output.lower() + + +def test_install_with_flags_skips_prompts(isolated_config: Path): + """clm install --claw openclaw --host testhost skips prompts and goes to confirmation.""" + # Setup: create host and keypair + create_test_keypair(isolated_config, "testhost") + create_host(isolated_config, "192.168.1.100", alias="testhost", key_id="testhost") + + # Mock run_installation to avoid actual execution + with patch("clawrium.cli.install.run_installation") as mock_install: + mock_install.return_value = { + "success": True, + "claw": "openclaw", + "version": "1.0.0", + "host": "192.168.1.100", + "playbooks_run": [], + "error": None, + } + + # Run with both flags, cancel at confirmation + result = runner.invoke( + app, + ["install", "--claw", "openclaw", "--host", "testhost"], + input="n\n", + env=os.environ + ) + + # Should NOT show claw/host selection prompts + # Should show confirmation (cancelled) + assert "proceed" in result.output.lower() or "install" in result.output.lower() + # Should not have called install due to cancellation + mock_install.assert_not_called() + + +def test_install_shows_confirmation(isolated_config: Path): + """clm install shows confirmation summary before proceeding.""" + # Setup: create host and keypair + 
create_test_keypair(isolated_config, "testhost") + create_host(isolated_config, "192.168.1.100", alias="testhost", key_id="testhost") + + # Run with flags, cancel at confirmation + result = runner.invoke( + app, + ["install", "--claw", "openclaw", "--host", "testhost"], + input="n\n", + env=os.environ, + ) + + # Should show installation summary panel + assert "installation summary" in result.output.lower() or "claw:" in result.output.lower() + assert "openclaw" in result.output.lower() + assert "cancelled" in result.output.lower() + + +def test_install_yes_skips_confirmation(isolated_config: Path): + """clm install --yes proceeds without confirmation prompt.""" + # Setup: create host and keypair + create_test_keypair(isolated_config, "testhost") + create_host(isolated_config, "192.168.1.100", alias="testhost", key_id="testhost") + + # Mock run_installation + with patch("clawrium.cli.install.run_installation") as mock_install: + mock_install.return_value = { + "success": True, + "claw": "openclaw", + "version": "1.0.0", + "host": "192.168.1.100", + "playbooks_run": [], + "error": None, + } + + # Run with --yes flag + result = runner.invoke( + app, + ["install", "--claw", "openclaw", "--host", "testhost", "--yes"], + env=os.environ, + ) + + # Should proceed directly to installation + assert result.exit_code == 0 + assert "success" in result.output.lower() + mock_install.assert_called_once() + + +def test_install_cancelled_exits_0(isolated_config: Path): + """Declining confirmation exits cleanly with code 0.""" + # Setup: create host and keypair + create_test_keypair(isolated_config, "testhost") + create_host(isolated_config, "192.168.1.100", alias="testhost", key_id="testhost") + + # Run and decline confirmation + result = runner.invoke( + app, + ["install", "--claw", "openclaw", "--host", "testhost"], + input="n\n", + env=os.environ, + ) + + assert result.exit_code == 0 + assert "cancelled" in result.output.lower() + + +def 
test_install_error_exits_1(isolated_config: Path): + """InstallationError shows error message and exits 1.""" + # Setup: create host and keypair + create_test_keypair(isolated_config, "testhost") + create_host(isolated_config, "192.168.1.100", alias="testhost", key_id="testhost") + + # Mock run_installation to raise error + from clawrium.core.install import InstallationError + + with patch("clawrium.cli.install.run_installation") as mock_install: + mock_install.side_effect = InstallationError("Playbook failed") + + # Run with --yes to skip confirmation + result = runner.invoke( + app, + ["install", "--claw", "openclaw", "--host", "testhost", "--yes"], + env=os.environ, + ) + + assert result.exit_code == 1 + assert "failed" in result.output.lower() + assert "playbook" in result.output.lower() + + +def test_install_incompatible_exits_1(isolated_config: Path): + """Incompatible host shows reasons and exits 1.""" + # Setup: create host with incompatible hardware (ARM instead of x86_64) + create_test_keypair(isolated_config, "armhost") + + hosts_file = isolated_config / "hosts.json" + isolated_config.mkdir(parents=True, exist_ok=True) + + incompatible_host = { + "hostname": "192.168.1.200", + "alias": "armhost", + "key_id": "armhost", + "port": 22, + "user": "xclm", + "auth_method": "key", + "hardware": { + "architecture": "aarch64", # OpenClaw requires x86_64 + "processor_cores": 4, + "memtotal_mb": 8192, + "distribution": "ubuntu", + "distribution_version": "22.04", + "gpu": {"present": False}, + }, + "metadata": { + "added_at": "2026-03-21T00:00:00Z", + "last_seen": "2026-03-21T00:00:00Z", + "tags": [], + }, + } + + hosts_file.write_text(json.dumps([incompatible_host], indent=2)) + + # Try to install openclaw (requires x86_64) + result = runner.invoke( + app, + ["install", "--claw", "openclaw", "--host", "armhost"], + env=os.environ, + ) + + assert result.exit_code == 1 + assert "incompatible" in result.output.lower() or "architecture" in result.output.lower() + + 
+def test_install_hosts_file_corrupted(isolated_config: Path): + """HostsFileCorruptedError shows error and exits 1.""" + from clawrium.core.hosts import HostsFileCorruptedError + + with patch("clawrium.cli.install.load_hosts", side_effect=HostsFileCorruptedError("JSON parse error")): + result = runner.invoke( + app, + ["install", "--claw", "openclaw", "--host", "testhost"], + env=os.environ, + ) + + assert result.exit_code == 1 + assert "corrupted" in result.output.lower() or "error" in result.output.lower() diff --git a/tests/test_cli_status.py b/tests/test_cli_status.py new file mode 100644 index 0000000..d94a135 --- /dev/null +++ b/tests/test_cli_status.py @@ -0,0 +1,216 @@ +"""Tests for fleet status CLI command.""" + +import pytest +from unittest.mock import patch, MagicMock +from typer.testing import CliRunner + +from clawrium.cli.main import app +from clawrium.core.health import ClawStatus + + +runner = CliRunner() + + +@pytest.fixture +def mock_hosts_with_claws(): + """Hosts with installed claws.""" + return [ + { + "hostname": "192.168.1.100", + "alias": "server1", + "port": 22, + "user": "xclm", + "key_id": "server1", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installed", + "installed_at": "2026-03-21T10:00:00Z", + "user": "opc-server1", + } + }, + }, + { + "hostname": "192.168.1.101", + "alias": "server2", + "port": 22, + "user": "xclm", + "key_id": "server2", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installed", + "installed_at": "2026-03-21T11:00:00Z", + "user": "opc-server2", + } + }, + }, + ] + + +def test_status_no_hosts(): + """No hosts shows message to add hosts.""" + with patch("clawrium.cli.status.load_hosts", return_value=[]): + result = runner.invoke(app, ["status"]) + + assert result.exit_code == 0 + assert "No hosts registered" in result.output + + +def test_status_no_claws(): + """Hosts with no claws shows install message.""" + hosts = [{"hostname": "192.168.1.100", "claws": {}}] + + with 
patch("clawrium.cli.status.load_hosts", return_value=hosts): + result = runner.invoke(app, ["status"]) + + assert result.exit_code == 0 + assert "No claws installed" in result.output + + +def test_status_shows_claw_table(mock_hosts_with_claws): + """Status shows table grouped by claw type.""" + mock_health = MagicMock(return_value={ + "claw": "openclaw", + "host": "192.168.1.100", + "status": ClawStatus.RUNNING, + "user": "opc-server1", + "error": None, + }) + + with patch("clawrium.cli.status.load_hosts", return_value=mock_hosts_with_claws): + with patch("clawrium.cli.status.check_claw_health", mock_health): + result = runner.invoke(app, ["status"]) + + assert result.exit_code == 0 + assert "openclaw" in result.output + assert "server1" in result.output + assert "0.1.0" in result.output + + +def test_status_shows_running_status(mock_hosts_with_claws): + """Running claw shows green status.""" + mock_health = MagicMock(return_value={ + "claw": "openclaw", + "host": "192.168.1.100", + "status": ClawStatus.RUNNING, + "user": "opc-server1", + "error": None, + }) + + with patch("clawrium.cli.status.load_hosts", return_value=mock_hosts_with_claws): + with patch("clawrium.cli.status.check_claw_health", mock_health): + result = runner.invoke(app, ["status"]) + + assert "running" in result.output + + +def test_status_shows_stopped_status(mock_hosts_with_claws): + """Stopped claw shows red status.""" + mock_health = MagicMock(return_value={ + "claw": "openclaw", + "host": "192.168.1.100", + "status": ClawStatus.STOPPED, + "user": "opc-server1", + "error": None, + }) + + with patch("clawrium.cli.status.load_hosts", return_value=mock_hosts_with_claws): + with patch("clawrium.cli.status.check_claw_health", mock_health): + result = runner.invoke(app, ["status"]) + + assert "stopped" in result.output + + +def test_status_host_filter(mock_hosts_with_claws): + """--host flag filters to specific host.""" + mock_health = MagicMock(return_value={ + "claw": "openclaw", + "host": 
"192.168.1.100", + "status": ClawStatus.RUNNING, + "user": "opc-server1", + "error": None, + }) + + with patch("clawrium.cli.status.load_hosts", return_value=mock_hosts_with_claws): + with patch("clawrium.cli.status.check_claw_health", mock_health): + result = runner.invoke(app, ["status", "--host", "server1"]) + + assert result.exit_code == 0 + assert "server1" in result.output + # Health check should only be called once (for server1) + assert mock_health.call_count == 1 + + +def test_status_host_filter_not_found(mock_hosts_with_claws): + """--host with unknown host shows error.""" + with patch("clawrium.cli.status.load_hosts", return_value=mock_hosts_with_claws): + result = runner.invoke(app, ["status", "--host", "unknown"]) + + assert result.exit_code == 1 + assert "not found" in result.output + + +def test_status_shows_failed_install(): + """Failed installation shows install failed status.""" + hosts = [{ + "hostname": "192.168.1.100", + "alias": "server1", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "failed", + "error": "Playbook failed", + "user": "opc-server1", + } + }, + }] + + # Health check returns unknown since not really installed + mock_health = MagicMock(return_value={ + "claw": "openclaw", + "host": "192.168.1.100", + "status": ClawStatus.STOPPED, + "user": "opc-server1", + "error": None, + }) + + with patch("clawrium.cli.status.load_hosts", return_value=hosts): + with patch("clawrium.cli.status.check_claw_health", mock_health): + result = runner.invoke(app, ["status"]) + + assert "install failed" in result.output + + +def test_status_shows_installing_status(): + """Installing status shows installing indicator.""" + hosts = [{ + "hostname": "192.168.1.100", + "alias": "server1", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installing", + "user": "opc-server1", + } + }, + }] + + # Health check not called for installing status - skip health check + with patch("clawrium.cli.status.load_hosts", return_value=hosts): + 
with patch("clawrium.cli.status.check_claw_health"): + result = runner.invoke(app, ["status"]) + + assert result.exit_code == 0 + assert "installing" in result.output.lower() + + +def test_status_hosts_file_corrupted(): + """HostsFileCorruptedError shows error and exits 1.""" + from clawrium.core.hosts import HostsFileCorruptedError + + with patch("clawrium.cli.status.load_hosts", side_effect=HostsFileCorruptedError("JSON parse error")): + result = runner.invoke(app, ["status"]) + + assert result.exit_code == 1 + assert "corrupted" in result.output.lower() or "error" in result.output.lower() diff --git a/tests/test_config.py b/tests/test_config.py index 19de68d..6d82461 100644 --- a/tests/test_config.py +++ b/tests/test_config.py @@ -1,6 +1,5 @@ """Tests for config directory management.""" -import os from pathlib import Path import pytest @@ -44,7 +43,7 @@ class TestInitConfigDir: def test_creates_directory(self, isolated_config: Path) -> None: """Should create the config directory.""" assert not isolated_config.exists() - result = init_config_dir() + init_config_dir() assert isolated_config.exists() assert isolated_config.is_dir() diff --git a/tests/test_deps.py b/tests/test_deps.py index 330dbef..04b3dd7 100644 --- a/tests/test_deps.py +++ b/tests/test_deps.py @@ -1,9 +1,7 @@ """Tests for dependency detection.""" -from pathlib import Path from unittest.mock import patch -import pytest from clawrium.core.deps import ( DependencyStatus, diff --git a/tests/test_health.py b/tests/test_health.py new file mode 100644 index 0000000..8cfb749 --- /dev/null +++ b/tests/test_health.py @@ -0,0 +1,192 @@ +"""Tests for claw health checking.""" + +import pytest +from unittest.mock import patch, MagicMock + +from clawrium.core.health import ( + check_claw_health, + check_all_claws_on_host, + ClawStatus, +) + + +@pytest.fixture +def mock_host(): + """Host record with installed claw.""" + return { + "hostname": "192.168.1.100", + "port": 22, + "user": "xclm", + "key_id": 
"testhost", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installed", + "user": "opc-testhost", + } + }, + } + + +def test_health_check_running(mock_host): + """Process running returns RUNNING status.""" + mock_runner = MagicMock() + mock_runner.status = "successful" + mock_runner.events = [ + {"event": "runner_on_ok", "event_data": {"res": {"stdout": "RUNNING"}}} + ] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.RUNNING + assert result["claw"] == "openclaw" + assert result["user"] == "opc-testhost" + assert result["error"] is None + + +def test_health_check_stopped(mock_host): + """Process not running returns STOPPED status.""" + mock_runner = MagicMock() + mock_runner.status = "successful" + mock_runner.events = [ + {"event": "runner_on_ok", "event_data": {"res": {"stdout": "STOPPED"}}} + ] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.STOPPED + + +def test_health_check_ssh_fails(mock_host): + """SSH failure returns UNKNOWN status with error.""" + mock_runner = MagicMock() + mock_runner.status = "failed" + mock_runner.events = [] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.UNKNOWN + assert "SSH failed" in result["error"] + + +def test_health_check_not_installed(mock_host): + """Claw not in host record returns NOT_INSTALLED.""" + result = check_claw_health("zeroclaw", mock_host) + + assert 
result["status"] == ClawStatus.NOT_INSTALLED + + +def test_health_check_no_ssh_key(mock_host): + """Missing SSH key returns UNKNOWN.""" + with patch("clawrium.core.health.get_host_private_key", return_value=None): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.UNKNOWN + assert "SSH key not found" in result["error"] + + +def test_health_check_timeout(mock_host): + """Timeout returns UNKNOWN status.""" + mock_runner = MagicMock() + mock_runner.status = "timeout" + mock_runner.events = [] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.UNKNOWN + assert "timed out" in result["error"] + + +def test_check_all_claws_on_host(mock_host): + """check_all_claws_on_host returns results for each claw.""" + mock_runner = MagicMock() + mock_runner.status = "successful" + mock_runner.events = [ + {"event": "runner_on_ok", "event_data": {"res": {"stdout": "RUNNING"}}} + ] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + results = check_all_claws_on_host(mock_host) + + assert len(results) == 1 + assert results[0]["claw"] == "openclaw" + assert results[0]["status"] == ClawStatus.RUNNING + + +def test_health_check_no_claw_user(): + """Missing claw user returns UNKNOWN status with error.""" + host = { + "hostname": "192.168.1.100", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installed", + # No "user" field + } + }, + } + + result = check_claw_health("openclaw", host) + + assert result["status"] == ClawStatus.UNKNOWN + assert "No claw user recorded" in result["error"] + + +def test_health_check_invalid_claw_user(): + """Invalid claw user format returns UNKNOWN status with error.""" + host 
= { + "hostname": "192.168.1.100", + "claws": { + "openclaw": { + "version": "0.1.0", + "status": "installed", + "user": "root; rm -rf /", # Command injection attempt + } + }, + } + + result = check_claw_health("openclaw", host) + + assert result["status"] == ClawStatus.UNKNOWN + assert "Invalid claw user format" in result["error"] + + +def test_health_check_host_unreachable(mock_host): + """Host unreachable returns UNKNOWN status.""" + mock_runner = MagicMock() + mock_runner.status = "successful" # ansible-runner returns successful even for unreachable + mock_runner.events = [ + {"event": "runner_on_unreachable", "event_data": {}} + ] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.UNKNOWN + assert "unreachable" in result["error"].lower() + + +def test_health_check_unexpected_output(mock_host): + """Unexpected output returns UNKNOWN status.""" + mock_runner = MagicMock() + mock_runner.status = "successful" + mock_runner.events = [ + {"event": "runner_on_ok", "event_data": {"res": {"stdout": "UNEXPECTED_OUTPUT"}}} + ] + + with patch("clawrium.core.health.get_host_private_key", return_value="/fake/key"): + with patch("clawrium.core.health.ansible_runner.run", return_value=mock_runner): + result = check_claw_health("openclaw", mock_host) + + assert result["status"] == ClawStatus.UNKNOWN + assert "Unexpected output" in result["error"] diff --git a/tests/test_hosts.py b/tests/test_hosts.py index c6d462e..a326898 100644 --- a/tests/test_hosts.py +++ b/tests/test_hosts.py @@ -2,15 +2,17 @@ import json import pytest -from pathlib import Path from clawrium.core.hosts import ( load_hosts, save_hosts, add_host, remove_host, get_host, + get_host_by_key_id, + update_host, HOSTS_FILE, HostsFileCorruptedError, + DuplicateHostError, ) @@ -212,3 +214,138 @@ def 
test_load_hosts_list_with_non_dict_items(isolated_config): with pytest.raises(HostsFileCorruptedError) as exc_info: load_hosts() assert "invalid entries" in str(exc_info.value).lower() + + +def test_host_claw_tracking_installed(isolated_config): + """After successful install, host record contains claws[claw_name] with status='installed'.""" + # Setup: create host + isolated_config.mkdir(parents=True, exist_ok=True) + test_host = { + "hostname": "192.168.1.10", + "port": 22, + "user": "xclm", + "key_id": "testhost", + } + save_hosts([test_host]) + + # Simulate install success by updating host with claw tracking + def add_claw_tracking(h: dict) -> dict: + if "claws" not in h: + h["claws"] = {} + h["claws"]["openclaw"] = { + "version": "0.1.0", + "status": "installed", + "installed_at": "2024-01-01T00:00:00Z", + "error": None, + "user": "opc-testhost", + } + return h + + result = update_host("192.168.1.10", add_claw_tracking) + assert result is True + + # Verify host record contains claw tracking + hosts = load_hosts() + assert len(hosts) == 1 + assert "claws" in hosts[0] + assert "openclaw" in hosts[0]["claws"] + assert hosts[0]["claws"]["openclaw"]["status"] == "installed" + assert hosts[0]["claws"]["openclaw"]["version"] == "0.1.0" + assert hosts[0]["claws"]["openclaw"]["installed_at"] == "2024-01-01T00:00:00Z" + assert hosts[0]["claws"]["openclaw"]["error"] is None + assert hosts[0]["claws"]["openclaw"]["user"] == "opc-testhost" + + +def test_host_claw_tracking_failed(isolated_config): + """After failed install, host record contains claws[claw_name] with status='failed' and error message.""" + # Setup: create host + isolated_config.mkdir(parents=True, exist_ok=True) + test_host = {"hostname": "192.168.1.10", "port": 22, "user": "xclm"} + save_hosts([test_host]) + + # Simulate install failure by updating host with failed status + def add_failed_claw(h: dict) -> dict: + if "claws" not in h: + h["claws"] = {} + h["claws"]["openclaw"] = { + "version": "0.1.0", + 
"status": "failed", + "installed_at": "2024-01-01T00:00:00Z", + "error": "Base playbook failed: timeout", + "user": None, + } + return h + + result = update_host("192.168.1.10", add_failed_claw) + assert result is True + + # Verify host record contains failure tracking + hosts = load_hosts() + assert len(hosts) == 1 + assert "claws" in hosts[0] + assert "openclaw" in hosts[0]["claws"] + assert hosts[0]["claws"]["openclaw"]["status"] == "failed" + assert hosts[0]["claws"]["openclaw"]["error"] == "Base playbook failed: timeout" + + +def test_update_host_not_found(isolated_config): + """update_host returns False when hostname not found.""" + isolated_config.mkdir(parents=True, exist_ok=True) + test_hosts = [{"hostname": "192.168.1.10", "port": 22, "user": "xclm"}] + save_hosts(test_hosts) + + def noop(h: dict) -> dict: + return h + + result = update_host("nonexistent-host", noop) + + assert result is False + # Original hosts unchanged + hosts = load_hosts() + assert len(hosts) == 1 + assert hosts[0]["hostname"] == "192.168.1.10" + + +def test_add_host_duplicate_raises(isolated_config): + """add_host raises DuplicateHostError when hostname already exists.""" + isolated_config.mkdir(parents=True, exist_ok=True) + test_hosts = [{"hostname": "192.168.1.10", "port": 22, "user": "xclm"}] + save_hosts(test_hosts) + + # Try to add duplicate hostname + duplicate = {"hostname": "192.168.1.10", "port": 22, "user": "different"} + + with pytest.raises(DuplicateHostError) as exc_info: + add_host(duplicate) + + assert "already exists" in str(exc_info.value).lower() + # Original hosts unchanged + hosts = load_hosts() + assert len(hosts) == 1 + + +def test_get_host_by_key_id_found(isolated_config): + """get_host_by_key_id finds host by key_id field.""" + isolated_config.mkdir(parents=True, exist_ok=True) + test_hosts = [ + {"hostname": "192.168.1.10", "port": 22, "user": "xclm", "key_id": "server1-key"}, + {"hostname": "192.168.1.20", "port": 22, "user": "xclm", "key_id": 
"server2-key"}, + ] + save_hosts(test_hosts) + + host = get_host_by_key_id("server1-key") + + assert host is not None + assert host["hostname"] == "192.168.1.10" + assert host["key_id"] == "server1-key" + + +def test_get_host_by_key_id_not_found(isolated_config): + """get_host_by_key_id returns None when key_id not found.""" + isolated_config.mkdir(parents=True, exist_ok=True) + test_hosts = [{"hostname": "192.168.1.10", "port": 22, "user": "xclm", "key_id": "known-key"}] + save_hosts(test_hosts) + + host = get_host_by_key_id("unknown-key") + + assert host is None diff --git a/tests/test_install.py b/tests/test_install.py new file mode 100644 index 0000000..8462117 --- /dev/null +++ b/tests/test_install.py @@ -0,0 +1,595 @@ +"""Tests for installation orchestration.""" + +import pytest +from unittest.mock import Mock + + +def test_install_invalid_claw_raises(): + """Test that install with invalid claw raises InstallationError.""" + from clawrium.core.install import run_installation, InstallationError + + with pytest.raises(InstallationError, match="not found"): + run_installation("nonexistent_claw", "test-host") + + +def test_install_host_not_found_raises(monkeypatch): + """Test that install with unknown host raises InstallationError.""" + from clawrium.core.install import run_installation, InstallationError + + # Mock load_manifest to succeed + mock_manifest = { + "name": "openclaw", + "entries": [ + { + "version": "0.1.0", + "os": "ubuntu", + "os_version": "24.04", + "arch": "x86_64", + "requirements": { + "min_memory_mb": 2048, + "gpu_required": False, + "dependencies": {"nodejs": ">=20.0.0"}, + }, + } + ], + } + + import clawrium.core.install + monkeypatch.setattr(clawrium.core.install, "load_manifest", lambda x: mock_manifest) + + # Mock get_host to return None + monkeypatch.setattr(clawrium.core.install, "get_host", lambda x: None) + + with pytest.raises(InstallationError, match="not found"): + run_installation("openclaw", "unknown-host") + + +def 
test_install_incompatible_host_raises(monkeypatch): + """Test that install with incompatible host raises InstallationError.""" + from clawrium.core.install import run_installation, InstallationError + + # Mock load_manifest + mock_manifest = { + "name": "openclaw", + "entries": [ + { + "version": "0.1.0", + "os": "ubuntu", + "os_version": "24.04", + "arch": "x86_64", + "requirements": { + "min_memory_mb": 2048, + "gpu_required": False, + "dependencies": {"nodejs": ">=20.0.0"}, + }, + } + ], + } + + import clawrium.core.install + monkeypatch.setattr(clawrium.core.install, "load_manifest", lambda x: mock_manifest) + + # Mock get_host with incompatible hardware + incompatible_host = { + "hostname": "test-host", + "user": "xclm", + "port": 22, + "hardware": { + "architecture": "arm64", # Wrong arch + "os": "ubuntu", + "os_version": "24.04", + "memtotal_mb": 4096, + }, + } + monkeypatch.setattr(clawrium.core.install, "get_host", lambda x: incompatible_host) + + # Mock check_compatibility to return incompatible + compat_result = { + "compatible": False, + "matched_entry": None, + "reasons": ["Requires x86_64, host has arm64"], + } + monkeypatch.setattr( + clawrium.core.install, "check_compatibility", lambda *args, **kwargs: compat_result + ) + + with pytest.raises(InstallationError, match="incompatible.*arm64"): + run_installation("openclaw", "test-host") + + +def test_install_success(monkeypatch, tmp_path): + """Test successful installation flow.""" + from clawrium.core.install import run_installation + + # Mock load_manifest + mock_manifest = { + "name": "openclaw", + "entries": [ + { + "version": "0.1.0", + "os": "ubuntu", + "os_version": "24.04", + "arch": "x86_64", + "requirements": { + "min_memory_mb": 2048, + "gpu_required": False, + "dependencies": {"nodejs": ">=20.0.0"}, + }, + } + ], + } + + import clawrium.core.install + monkeypatch.setattr(clawrium.core.install, "load_manifest", lambda x: mock_manifest) + + # Create a mock SSH key + key_file = tmp_path / 
"test_key" + key_file.write_text("fake key") + + # Mock get_host + compatible_host = { + "hostname": "test-host", + "user": "xclm", + "port": 22, + "key_id": "test-host", + "hardware": { + "architecture": "x86_64", + "os": "ubuntu", + "os_version": "24.04", + "memtotal_mb": 4096, + }, + } + monkeypatch.setattr(clawrium.core.install, "get_host", lambda x: compatible_host) + + # Mock check_compatibility + compat_result = { + "compatible": True, + "matched_entry": mock_manifest["entries"][0], + "reasons": [], + } + monkeypatch.setattr( + clawrium.core.install, "check_compatibility", lambda *args, **kwargs: compat_result + ) + + # Mock get_host_private_key + monkeypatch.setattr( + clawrium.core.install, "get_host_private_key", lambda x: key_file + ) + + # Mock ansible_runner.run + class SuccessfulResult: + status = "successful" + + mock_run = Mock(return_value=SuccessfulResult()) + + import ansible_runner + monkeypatch.setattr(ansible_runner, "run", mock_run) + + # Run installation + result = run_installation("openclaw", "test-host") + + # Verify result + assert result["success"] is True + assert result["claw"] == "openclaw" + assert result["version"] == "0.1.0" + assert result["host"] == "test-host" + assert len(result["playbooks_run"]) == 2 + assert result["error"] is None + + # Verify ansible_runner.run was called twice (base + claw playbook) + assert mock_run.call_count == 2 + + +def test_install_emits_events(monkeypatch, tmp_path): + """Test that installation emits progress events.""" + from clawrium.core.install import run_installation + + # Mock dependencies (same as test_install_success) + mock_manifest = { + "name": "openclaw", + "entries": [ + { + "version": "0.1.0", + "os": "ubuntu", + "os_version": "24.04", + "arch": "x86_64", + "requirements": { + "min_memory_mb": 2048, + "gpu_required": False, + "dependencies": {"nodejs": ">=20.0.0"}, + }, + } + ], + } + + import clawrium.core.install + monkeypatch.setattr(clawrium.core.install, "load_manifest", lambda x: 
mock_manifest) + + key_file = tmp_path / "test_key" + key_file.write_text("fake key") + + compatible_host = { + "hostname": "test-host", + "user": "xclm", + "port": 22, + "key_id": "test-host", + "hardware": { + "architecture": "x86_64", + "os": "ubuntu", + "os_version": "24.04", + "memtotal_mb": 4096, + }, + } + monkeypatch.setattr(clawrium.core.install, "get_host", lambda x: compatible_host) + + compat_result = { + "compatible": True, + "matched_entry": mock_manifest["entries"][0], + "reasons": [], + } + monkeypatch.setattr( + clawrium.core.install, "check_compatibility", lambda *args, **kwargs: compat_result + ) + + monkeypatch.setattr( + clawrium.core.install, "get_host_private_key", lambda x: key_file + ) + + class SuccessfulResult: + status = "successful" + + mock_run = Mock(return_value=SuccessfulResult()) + + import ansible_runner + monkeypatch.setattr(ansible_runner, "run", mock_run) + + # Capture events + events = [] + + def on_event(stage, message): + events.append((stage, message)) + + # Run installation with event callback + run_installation("openclaw", "test-host", on_event=on_event) + + # Verify events were emitted + assert len(events) > 0 + stages = [stage for stage, _ in events] + assert "validate" in stages + assert "base" in stages + assert "claw" in stages + + +def test_install_base_playbook_fails(monkeypatch, tmp_path): + """Test that base playbook failure raises InstallationError.""" + from clawrium.core.install import run_installation, InstallationError + + # Mock dependencies + mock_manifest = { + "name": "openclaw", + "entries": [ + { + "version": "0.1.0", + "os": "ubuntu", + "os_version": "24.04", + "arch": "x86_64", + "requirements": { + "min_memory_mb": 2048, + "gpu_required": False, + "dependencies": {"nodejs": ">=20.0.0"}, + }, + } + ], + } + + import clawrium.core.install + monkeypatch.setattr(clawrium.core.install, "load_manifest", lambda x: mock_manifest) + + key_file = tmp_path / "test_key" + key_file.write_text("fake key") + + 
compatible_host = { + "hostname": "test-host", + "user": "xclm", + "port": 22, + "key_id": "test-host", + "hardware": { + "architecture": "x86_64", + "os": "ubuntu", + "os_version": "24.04", + "memtotal_mb": 4096, + }, + } + monkeypatch.setattr(clawrium.core.install, "get_host", lambda x: compatible_host) + + compat_result = { + "compatible": True, + "matched_entry": mock_manifest["entries"][0], + "reasons": [], + } + monkeypatch.setattr( + clawrium.core.install, "check_compatibility", lambda *args, **kwargs: compat_result + ) + + monkeypatch.setattr( + clawrium.core.install, "get_host_private_key", lambda x: key_file + ) + + # Mock ansible_runner.run to fail + class FailedResult: + status = "failed" + + mock_run = Mock(return_value=FailedResult()) + + import ansible_runner + monkeypatch.setattr(ansible_runner, "run", mock_run) + + with pytest.raises(InstallationError, match="Base playbook failed"): + run_installation("openclaw", "test-host") + + +def test_install_missing_ssh_key_raises(monkeypatch): + """Test that missing SSH key raises InstallationError.""" + from clawrium.core.install import run_installation, InstallationError + + # Mock dependencies + mock_manifest = { + "name": "openclaw", + "entries": [ + { + "version": "0.1.0", + "os": "ubuntu", + "os_version": "24.04", + "arch": "x86_64", + "requirements": { + "min_memory_mb": 2048, + "gpu_required": False, + "dependencies": {"nodejs": ">=20.0.0"}, + }, + } + ], + } + + import clawrium.core.install + monkeypatch.setattr(clawrium.core.install, "load_manifest", lambda x: mock_manifest) + + compatible_host = { + "hostname": "test-host", + "user": "xclm", + "port": 22, + "key_id": "test-host", + "hardware": { + "architecture": "x86_64", + "os": "ubuntu", + "os_version": "24.04", + "memtotal_mb": 4096, + }, + } + monkeypatch.setattr(clawrium.core.install, "get_host", lambda x: compatible_host) + + compat_result = { + "compatible": True, + "matched_entry": mock_manifest["entries"][0], + "reasons": [], + } + 
monkeypatch.setattr( + clawrium.core.install, "check_compatibility", lambda *args, **kwargs: compat_result + ) + + # Mock get_host_private_key to return None + monkeypatch.setattr(clawrium.core.install, "get_host_private_key", lambda x: None) + + with pytest.raises(InstallationError, match="No SSH key found"): + run_installation("openclaw", "test-host") + + +def test_install_updates_host_on_success(monkeypatch, tmp_path): + """Test that install.py calls update_host with installed status on success.""" + from clawrium.core.install import run_installation + + # Mock dependencies + mock_manifest = { + "name": "openclaw", + "entries": [ + { + "version": "0.1.0", + "os": "ubuntu", + "os_version": "24.04", + "arch": "x86_64", + "requirements": { + "min_memory_mb": 2048, + "gpu_required": False, + "dependencies": {"nodejs": ">=20.0.0"}, + }, + } + ], + } + + import clawrium.core.install + monkeypatch.setattr(clawrium.core.install, "load_manifest", lambda x: mock_manifest) + + key_file = tmp_path / "test_key" + key_file.write_text("fake key") + + compatible_host = { + "hostname": "test-host", + "user": "xclm", + "port": 22, + "key_id": "test-host", + "hardware": { + "architecture": "x86_64", + "os": "ubuntu", + "os_version": "24.04", + "memtotal_mb": 4096, + }, + } + monkeypatch.setattr(clawrium.core.install, "get_host", lambda x: compatible_host) + + compat_result = { + "compatible": True, + "matched_entry": mock_manifest["entries"][0], + "reasons": [], + } + monkeypatch.setattr( + clawrium.core.install, "check_compatibility", lambda *args, **kwargs: compat_result + ) + + monkeypatch.setattr( + clawrium.core.install, "get_host_private_key", lambda x: key_file + ) + + class SuccessfulResult: + status = "successful" + + mock_run = Mock(return_value=SuccessfulResult()) + + import ansible_runner + monkeypatch.setattr(ansible_runner, "run", mock_run) + + # Mock update_host to track calls and simulate persistent state + update_calls = [] + persistent_host = 
compatible_host.copy() + + def mock_update_host(hostname, updater): + nonlocal persistent_host + # Capture before state + before_status = None + if "claws" in persistent_host and "openclaw" in persistent_host.get("claws", {}): + before_status = persistent_host["claws"]["openclaw"].get("status") + + # Apply updater to persistent host state (simulates real update_host behavior) + persistent_host = updater(persistent_host) + + # Capture after state + after_status = None + if "claws" in persistent_host and "openclaw" in persistent_host.get("claws", {}): + after_status = persistent_host["claws"]["openclaw"].get("status") + + # Store the before/after snapshot + update_calls.append((hostname, before_status, after_status, persistent_host.copy())) + return True + + monkeypatch.setattr(clawrium.core.install, "update_host", mock_update_host) + + # Run installation + run_installation("openclaw", "test-host") + + # Verify update_host was called with installing and installed status + assert len(update_calls) >= 2 + + # Extract after-statuses from calls (what was set by each update) + after_statuses = [call[2] for call in update_calls] + + # Should have: installing -> installed + assert after_statuses[0] == "installing", f"First update should set 'installing', got {after_statuses}" + assert after_statuses[-1] == "installed", f"Last update should set 'installed', got {after_statuses}" + + # Verify final state has all required fields + last_call = update_calls[-1] + last_hostname = last_call[0] + last_updated = last_call[3] + + assert last_hostname == "test-host" + assert "claws" in last_updated + assert "openclaw" in last_updated["claws"] + assert last_updated["claws"]["openclaw"]["status"] == "installed" + assert last_updated["claws"]["openclaw"]["version"] == "0.1.0" + assert last_updated["claws"]["openclaw"]["installed_at"] is not None + + +def test_install_updates_host_on_failure(monkeypatch, tmp_path): + """Test that install.py calls update_host with failed status on 
failure.""" + from clawrium.core.install import run_installation, InstallationError + + # Mock dependencies + mock_manifest = { + "name": "openclaw", + "entries": [ + { + "version": "0.1.0", + "os": "ubuntu", + "os_version": "24.04", + "arch": "x86_64", + "requirements": { + "min_memory_mb": 2048, + "gpu_required": False, + "dependencies": {"nodejs": ">=20.0.0"}, + }, + } + ], + } + + import clawrium.core.install + monkeypatch.setattr(clawrium.core.install, "load_manifest", lambda x: mock_manifest) + + key_file = tmp_path / "test_key" + key_file.write_text("fake key") + + compatible_host = { + "hostname": "test-host", + "user": "xclm", + "port": 22, + "key_id": "test-host", + "hardware": { + "architecture": "x86_64", + "os": "ubuntu", + "os_version": "24.04", + "memtotal_mb": 4096, + }, + } + monkeypatch.setattr(clawrium.core.install, "get_host", lambda x: compatible_host) + + compat_result = { + "compatible": True, + "matched_entry": mock_manifest["entries"][0], + "reasons": [], + } + monkeypatch.setattr( + clawrium.core.install, "check_compatibility", lambda *args, **kwargs: compat_result + ) + + monkeypatch.setattr( + clawrium.core.install, "get_host_private_key", lambda x: key_file + ) + + # Mock ansible_runner.run to fail + class FailedResult: + status = "failed" + + mock_run = Mock(return_value=FailedResult()) + + import ansible_runner + monkeypatch.setattr(ansible_runner, "run", mock_run) + + # Mock update_host to track calls + update_calls = [] + + def mock_update_host(hostname, updater): + # Call the updater to capture the update + test_host = compatible_host.copy() + if "claws" not in test_host: + test_host["claws"] = {} + updated = updater(test_host) + update_calls.append((hostname, updated)) + return True + + monkeypatch.setattr(clawrium.core.install, "update_host", mock_update_host) + + # Run installation (should fail and update host with error) + with pytest.raises(InstallationError): + run_installation("openclaw", "test-host") + + # Verify 
update_host was called with failed status + assert len(update_calls) >= 1 + + # Check if any call has failed status + found_failed = False + for hostname, updated in update_calls: + if "claws" in updated and "openclaw" in updated["claws"]: + if updated["claws"]["openclaw"]["status"] == "failed": + found_failed = True + assert updated["claws"]["openclaw"]["error"] is not None + assert "failed" in updated["claws"]["openclaw"]["error"].lower() + break + + assert found_failed, "Expected update_host to be called with failed status" diff --git a/tests/test_names.py b/tests/test_names.py index cf4e1f9..eb685c1 100644 --- a/tests/test_names.py +++ b/tests/test_names.py @@ -1,8 +1,6 @@ """Tests for clawrium.core.names module.""" -import re -import pytest from clawrium.core.names import generate_random_name, is_ip_address diff --git a/tests/test_playbooks.py b/tests/test_playbooks.py new file mode 100644 index 0000000..d099bbc --- /dev/null +++ b/tests/test_playbooks.py @@ -0,0 +1,69 @@ +"""Tests for Ansible playbooks structure and content.""" + +from pathlib import Path + + +def test_base_playbook_exists(): + """Test that base playbook exists.""" + + # Base playbook is at platform/playbooks/base.yaml (project root) + project_root = Path(__file__).parent.parent + base_playbook = project_root / "platform" / "playbooks" / "base.yaml" + + assert base_playbook.exists(), "base.yaml playbook should exist" + + +def test_base_playbook_structure(): + """Test that base playbook has required structure.""" + import yaml + + project_root = Path(__file__).parent.parent + base_playbook = project_root / "platform" / "playbooks" / "base.yaml" + + content = base_playbook.read_text() + + # Check for required elements + assert "- hosts:" in content, "Should have hosts directive" + assert "become: yes" in content or "become: true" in content, "Should require sudo" + assert "nodejs" in content.lower(), "Should install nodejs" + assert "build-essential" in content, "Should install build-essential" 
+ + # Parse YAML to ensure it's valid + data = yaml.safe_load(content) + assert isinstance(data, list), "Playbook should be a list of plays" + assert len(data) > 0, "Playbook should have at least one play" + + +def test_openclaw_install_playbook_exists(): + """Test that openclaw install playbook exists.""" + from importlib.resources import files + + openclaw_package = files("clawrium.platform.registry.openclaw") + playbook_dir = openclaw_package / "playbooks" + install_playbook = playbook_dir / "install.yaml" + + # Since we're using importlib.resources, check if it's readable + assert install_playbook.is_file(), "install.yaml playbook should exist" + + +def test_openclaw_install_playbook_structure(): + """Test that openclaw install playbook has required structure.""" + from importlib.resources import files + import yaml + + openclaw_package = files("clawrium.platform.registry.openclaw") + playbook_path = openclaw_package / "playbooks" / "install.yaml" + + content = playbook_path.read_text() + + # Check for required elements + assert "- hosts:" in content, "Should have hosts directive" + assert "opc-" in content, "Should create opc- user" + assert "inventory_hostname" in content, "Should use inventory_hostname variable" + assert "npm install" in content, "Should run npm install" + assert "openclaw" in content.lower(), "Should reference openclaw repository" + + # Parse YAML to ensure it's valid + data = yaml.safe_load(content) + assert isinstance(data, list), "Playbook should be a list of plays" + assert len(data) > 0, "Playbook should have at least one play" diff --git a/tests/test_registry.py b/tests/test_registry.py index 12c8fec..538283f 100644 --- a/tests/test_registry.py +++ b/tests/test_registry.py @@ -2,7 +2,6 @@ import pytest from clawrium.core.registry import ( - ClawManifest, ManifestNotFoundError, ManifestParseError, get_claw_info, @@ -42,21 +41,56 @@ def test_load_manifest_nonexistent(): load_manifest("nonexistent") -def 
test_load_manifest_malformed(tmp_path): +def test_load_manifest_path_traversal(): + """Test path traversal attempt triggers InvalidClawNameError.""" + from clawrium.core.registry import InvalidClawNameError + with pytest.raises(InvalidClawNameError): + load_manifest("../etc/passwd") + + +def test_load_manifest_malformed_yaml(monkeypatch): """Test loading malformed YAML raises ManifestParseError.""" - # This will be tested with a malformed manifest file - # For now, we test that the exception type exists - with pytest.raises(ManifestParseError): - raise ManifestParseError("test") + import yaml as yaml_module + + def raise_yaml_error(*args, **kwargs): + raise yaml_module.YAMLError("test parse error") + + # Monkeypatch yaml.safe_load in the registry module + from clawrium.core import registry + monkeypatch.setattr(registry.yaml, "safe_load", raise_yaml_error) + + with pytest.raises(ManifestParseError, match="Failed to parse"): + load_manifest("openclaw") + + +def test_load_manifest_not_dict(monkeypatch): + """Test manifest that parses to non-dict raises ManifestParseError.""" + # Monkeypatch yaml.safe_load to return a list instead of dict + from clawrium.core import registry + monkeypatch.setattr(registry.yaml, "safe_load", lambda x: ["item1", "item2"]) + + with pytest.raises(ManifestParseError, match="not a valid YAML dict"): + load_manifest("openclaw") + + +def test_load_manifest_missing_required_fields(monkeypatch): + """Test manifest missing name/entries raises ManifestParseError.""" + # Monkeypatch yaml.safe_load to return dict missing 'entries' + from clawrium.core import registry + monkeypatch.setattr(registry.yaml, "safe_load", lambda x: {"name": "incomplete"}) + + with pytest.raises(ManifestParseError, match="missing required fields"): + load_manifest("openclaw") def test_list_claws(): - """Test list_claws returns openclaw.""" + """Test list_claws returns openclaw and zeroclaw.""" claws = list_claws() assert isinstance(claws, list) assert "openclaw" in claws 
-    assert len(claws) > 0
+    assert "zeroclaw" in claws
+    assert len(claws) >= 2
 
 
 def test_get_claw_info_openclaw():
@@ -270,3 +304,127 @@ def test_check_compatibility_wrong_os_version():
     assert len(result["reasons"]) > 0
     # Should mention a supported version and the host version
     assert any("20.04" in r for r in result["reasons"])
+
+
+def test_load_manifest_zeroclaw():
+    """Test loading zeroclaw manifest returns valid ClawManifest."""
+    manifest = load_manifest("zeroclaw")
+
+    assert isinstance(manifest, dict)
+    assert manifest["name"] == "zeroclaw"
+    assert "entries" in manifest
+    assert len(manifest["entries"]) > 0
+
+    # Should have armv7l entries for Pi 2/3
+    archs = [e["arch"] for e in manifest["entries"]]
+    assert "armv7l" in archs
+
+
+def test_check_compatibility_zeroclaw_armv7l():
+    """Test zeroclaw compatibility with Raspberry Pi 2 hardware (Debian 13)."""
+    from clawrium.core.registry import check_compatibility
+
+    hardware = {
+        "os": "debian",
+        "os_version": "13",
+        "architecture": "armv7l",
+        "memtotal_mb": 921,  # Pi 2 has ~920MB usable
+        "gpu": {"present": False, "vendor": None, "error": None},
+        "processor_cores": 4,
+        "processor_count": 1,
+        "mounts": [],
+    }
+
+    result = check_compatibility("zeroclaw", hardware)
+
+    assert result["compatible"] is True
+    assert result["matched_entry"] is not None
+    assert result["matched_entry"]["arch"] == "armv7l"
+
+
+def test_check_compatibility_zeroclaw_low_memory():
+    """Test zeroclaw compatibility with very low memory fails."""
+    from clawrium.core.registry import check_compatibility
+
+    hardware = {
+        "os": "debian",
+        "os_version": "13",
+        "architecture": "armv7l",
+        "memtotal_mb": 256,  # Below 512MB minimum
+        "gpu": {"present": False, "vendor": None, "error": None},
+        "processor_cores": 4,
+        "processor_count": 1,
+        "mounts": [],
+    }
+
+    result = check_compatibility("zeroclaw", hardware)
+
+    assert result["compatible"] is False
+    assert any("memory" in r.lower() or "ram" in r.lower() for r in result["reasons"])
+
+
+def test_check_compatibility_zeroclaw_debian12_incompatible():
+    """Test zeroclaw does not match Debian 12 (only 13 supported)."""
+    from clawrium.core.registry import check_compatibility
+
+    hardware = {
+        "os": "debian",
+        "os_version": "12",
+        "architecture": "armv7l",
+        "memtotal_mb": 921,
+        "gpu": {"present": False, "vendor": None, "error": None},
+        "processor_cores": 4,
+        "processor_count": 1,
+        "mounts": [],
+    }
+
+    result = check_compatibility("zeroclaw", hardware)
+
+    assert result["compatible"] is False
+    assert any("debian 13" in r.lower() and "debian 12" in r.lower() for r in result["reasons"])
+
+
+def test_check_compatibility_zeroclaw_ubuntu_aarch64():
+    """Test zeroclaw compatibility with Ubuntu aarch64 (Pi 4/5)."""
+    from clawrium.core.registry import check_compatibility
+
+    hardware = {
+        "os": "ubuntu",
+        "os_version": "24.04",
+        "architecture": "aarch64",
+        "memtotal_mb": 4096,
+        "gpu": {"present": False, "vendor": None, "error": None},
+        "processor_cores": 4,
+        "processor_count": 1,
+        "mounts": [],
+    }
+
+    result = check_compatibility("zeroclaw", hardware)
+
+    assert result["compatible"] is True
+    assert result["matched_entry"] is not None
+    assert result["matched_entry"]["arch"] == "aarch64"
+    assert result["matched_entry"]["os"] == "ubuntu"
+
+
+def test_check_compatibility_zeroclaw_ubuntu_x86_64():
+    """Test zeroclaw compatibility with Ubuntu x86_64."""
+    from clawrium.core.registry import check_compatibility
+
+    hardware = {
+        "os": "ubuntu",
+        "os_version": "22.04",
+        "architecture": "x86_64",
+        "memtotal_mb": 8192,
+        "gpu": {"present": False, "vendor": None, "error": None},
+        "processor_cores": 8,
+        "processor_count": 1,
+        "mounts": [],
+    }
+
+    result = check_compatibility("zeroclaw", hardware)
+
+    assert result["compatible"] is True
+    assert result["matched_entry"] is not None
+    assert result["matched_entry"]["arch"] == "x86_64"
+    assert result["matched_entry"]["os"] == "ubuntu"
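The zeroclaw compatibility tests in this diff each rebuild the same `hardware` facts dict by hand, varying only one or two fields per test. A follow-up refactor could consolidate that duplication with a small factory helper. The sketch below is hypothetical and not part of the diff: `make_hardware` and its Pi 2 / Debian 13 defaults are illustrative only.

```python
# Hypothetical helper (not in the diff) to cut duplication in the
# zeroclaw compatibility tests: defaults mirror the Pi 2 / Debian 13
# dict used above, and keyword overrides replace individual fields.

def make_hardware(**overrides):
    """Return a hardware-facts dict; overrides win over the defaults."""
    hardware = {
        "os": "debian",
        "os_version": "13",
        "architecture": "armv7l",
        "memtotal_mb": 921,
        "gpu": {"present": False, "vendor": None, "error": None},
        "processor_cores": 4,
        "processor_count": 1,
        "mounts": [],
    }
    hardware.update(overrides)
    return hardware


# Example: the low-memory case from the diff, expressed via the helper.
low_mem = make_hardware(memtotal_mb=256)
print(low_mem["memtotal_mb"])  # only the overridden field changes
print(low_mem["os_version"])   # defaults are preserved
```

Each test would then state only the fields it actually varies (e.g. `make_hardware(os="ubuntu", os_version="24.04", architecture="aarch64", memtotal_mb=4096)`), making the intent of each case easier to read.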