Conversation
…ript modules

Move ~210 lines of pure, side-effect-free functions from the 3,800-line onboard.js into five typed TypeScript modules under src/lib/:

- gateway-state.ts: gateway/sandbox state classification
- validation.ts: failure classification, API key validation, model ID checks
- url-utils.ts: URL normalization, text compaction, env formatting
- build-context.ts: Docker build context filtering, recovery hints
- dashboard.ts: dashboard URL resolution and construction

Infrastructure:

- tsconfig.src.json compiles src/ → dist/ as CJS (dist/ already gitignored)
- tsconfig.cli.json updated to type-check src/
- npm run build:cli added to package.json
- Pre-commit test-cli hook builds TS and includes dist/lib/ in coverage

onboard.js imports from compiled dist/lib/ output. All 542 CLI tests pass. No user-facing behavior changes.

Closes #1237. Relates to #924 (shell consolidation).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
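To illustrate the kind of pure, side-effect-free helper that moves cleanly into url-utils.ts, here is a minimal sketch of a loopback-hostname check. This is hypothetical — the actual signature and rules in src/lib/url-utils.ts may differ:

```javascript
// Hypothetical sketch of an isLoopbackHostname-style pure helper.
// The real implementation in src/lib/url-utils.ts may differ.
function isLoopbackHostname(hostname) {
  // Strip IPv6 brackets as they appear in URLs, e.g. "[::1]".
  const h = String(hostname).toLowerCase().replace(/^\[|\]$/g, "");
  if (h === "localhost" || h === "::1") return true;
  // The entire 127.0.0.0/8 range is loopback, not just 127.0.0.1.
  return /^127(\.\d{1,3}){3}$/.test(h);
}
```

Helpers of this shape take a value in and return a value out with no subprocess or filesystem access, which is exactly what makes them easy to lift out of onboard.js and unit-test in isolation.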
56 new tests across three co-located test files:

- src/lib/validation.test.ts: classifyValidationFailure, classifyApplyFailure, classifySandboxCreateFailure, validateNvidiaApiKeyValue, isSafeModelId
- src/lib/dashboard.test.ts: resolveDashboardForwardTarget, buildControlUiUrls
- src/lib/url-utils.test.ts: compactText, stripEndpointSuffix, normalizeProviderBaseUrl, isLoopbackHostname, formatEnvAssignment, parsePolicyPresetEnv

vitest.config.ts updated to include src/**/*.test.ts in the CLI project.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
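A guard like isSafeModelId typically whitelists the characters that are safe to interpolate into shell commands and URLs. The sketch below is illustrative only — the actual rules in src/lib/validation.ts may differ:

```javascript
// Hypothetical sketch of an isSafeModelId-style guard: accept only
// characters safe to pass into docker commands and endpoint URLs.
// The real rules in src/lib/validation.ts may differ.
function isSafeModelId(modelId) {
  if (typeof modelId !== "string") return false;
  if (modelId.length === 0 || modelId.length > 256) return false;
  // Must start alphanumeric; then allow dots, dashes, slashes, underscores.
  return /^[A-Za-z0-9][A-Za-z0-9._/-]*$/.test(modelId);
}
```

Rejecting early at the validation layer means downstream code can interpolate the ID without re-escaping at every call site.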
The existing call site in printDashboard calls buildControlUiUrls() with no arguments when no token is available.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The tsc-js pre-push hook (jsconfig.json) type-checks bin/lib/onboard.js, which requires dist/lib/ to exist. CI was not running npm run build:cli before the prek checks, causing TS2307 "Cannot find module" errors.

- Add npm run build:cli step to .github/actions/basic-checks
- Update tsc-js hook to run build:cli before tsc

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…law into cv/extract-onboard-pure-fns
## Summary

- Convert `bin/lib/preflight.js` (357 lines) to `src/lib/preflight.ts` with full type definitions
- Typed interfaces for all opts objects and return types: `PortProbeResult`, `MemoryInfo`, `SwapResult`, `CheckPortOpts`, `GetMemoryInfoOpts`, `EnsureSwapOpts`
- Extract `parseLsofLines` helper to reduce duplication in `checkPortAvailable`
- Incorporate #1227 fix: `sudo` → `sudo -n` (non-interactive) for lsof retry
- `bin/lib/preflight.js` becomes a thin re-export shim — existing consumers unaffected
- Co-locate tests: `test/preflight.test.js` → `src/lib/preflight.test.ts`
- Add real net probe tests (EADDRINUSE detection on occupied ports)
- Fix all co-located test imports to use `dist/` paths for coverage attribution
- Add targeted dashboard/validation branch tests to maintain ratchet

Stacked on #1240. Not touched by any #924 blocker PR.

## Test plan

- [x] 612 CLI tests pass (601 existing + 11 new)
- [x] `tsc -p tsconfig.src.json` compiles cleanly
- [x] `tsc -p tsconfig.cli.json` type-checks cleanly
- [x] `tsc -p jsconfig.json` type-checks cleanly (the pre-push check that caught the union issue)
- [x] Coverage ratchet passes

Relates to #924 (shell consolidation). Supersedes #1227.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
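The `parseLsofLines` extraction mentioned above is a good example of pulling a pure parser out of subprocess-heavy code. A hedged sketch of what such a helper might look like — the name matches the PR, but the body here is illustrative and the real helper in src/lib/preflight.ts may parse more fields:

```javascript
// Illustrative sketch of a parseLsofLines-style helper: turn `lsof -i :PORT`
// output into { command, pid } records, skipping the header row.
function parseLsofLines(output) {
  return String(output)
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line && !line.startsWith("COMMAND")) // drop header + blanks
    .map((line) => {
      const [command, pid] = line.split(/\s+/);
      return { command, pid: Number(pid) };
    })
    .filter((entry) => Number.isFinite(entry.pid)); // discard malformed rows
}
```

Because the parser never shells out itself, both the plain-lsof and `sudo -n` retry paths in `checkPortAvailable` can feed it captured output, which is where the deduplication comes from.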
Convert bin/lib/nim.js (259 lines) to src/lib/nim.ts with typed interfaces for NimModel, GpuDetection, and NimStatus. Co-locates tests. Mock pattern updated to patch dist/ cache path. 615 CLI tests pass. Coverage ratchet passes. Relates to #924 (shell consolidation). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough

The PR adds a new TypeScript implementation for NIM container management and GPU detection.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Consumer/Test
    participant Detect as detectGpu()
    participant NVIDIA as nvidia-smi
    participant Sys as system_profiler
    participant OS as Host OS
    Client->>Detect: detectGpu()
    Detect->>NVIDIA: run nvidia-smi query
    alt NVIDIA available
        NVIDIA-->>Detect: VRAM & GPU info
        Detect-->>Client: {type: 'nvidia', totalMemoryMB, perGpuMB, nimCapable}
    else NVIDIA unavailable
        Detect->>Detect: check unified-memory GPU names
        alt Unified-memory detected
            Detect->>OS: read system RAM
            Detect-->>Client: {type: 'nvidia', unifiedMemory: true, perGpuMB}
        else macOS fallback
            Detect->>Sys: run system_profiler
            Sys-->>Detect: Apple GPU info
            Detect-->>Client: {type: 'apple', cores, totalMemoryMB}
        else
            Detect-->>Client: null
        end
    end
```
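The "NVIDIA available" branch in the diagram boils down to parsing nvidia-smi's CSV output into per-GPU megabyte counts. A minimal sketch of that parsing step — the helper name is invented for illustration; in `detectGpu()` the logic is inlined:

```javascript
// Sketch of the CSV parsing inside detectGpu(): each line of
// `nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits`
// is one GPU's VRAM in MB; non-numeric lines are discarded.
function parsePerGpuMemoryMB(output) {
  return String(output)
    .split("\n")
    .map((line) => parseInt(line.trim(), 10))
    .filter((n) => !Number.isNaN(n));
}

// A dual-GPU host reporting 40 GiB per card:
const perGpuMB = parsePerGpuMemoryMB("40960\n40960\n");
const totalMemoryMB = perGpuMB.reduce((a, b) => a + b, 0);
```

Summing across GPUs gives `totalMemoryMB`, which is what the `nimCapable` check compares against each model's `minGpuMemoryMB`.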
```mermaid
sequenceDiagram
    participant Caller as Test/Consumer
    participant Start as startNimContainer()
    participant Docker as Docker CLI
    participant Health as waitForNimHealth()
    participant HTTP as /v1/models endpoint
    Caller->>Start: startNimContainer(sandbox, model)
    Start->>Docker: docker rm --force existing container (if any)
    Start->>Docker: docker pull <image>
    Start->>Docker: docker run --gpus all -d -p hostPort:8000 <image>
    Docker-->>Start: container_id
    Start-->>Caller: container_id
    Caller->>Health: waitForNimHealth(port, timeout)
    loop poll until timeout
        Health->>HTTP: curl localhost:port/v1/models
        alt 200 OK
            HTTP-->>Health: models list
            Health-->>Caller: true
        else error / no response
            Health-->>Caller: retry until timeout
        end
    end
```
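The health-wait loop in the diagram is a poll-until-timeout pattern. The real `waitForNimHealth` polls curl synchronously with a sleep between attempts; the generic async helper below is an illustrative sketch of the same shape, not the actual implementation:

```javascript
// Generic poll-until-timeout helper mirroring waitForNimHealth's shape:
// retry `check` every `intervalMs` until it returns truthy or the
// deadline passes; errors count as "not ready yet".
async function pollUntil(check, { timeoutMs = 5000, intervalMs = 100 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      if (await check()) return true;
    } catch {
      /* keep polling */
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false;
}
```

Returning `false` on timeout (rather than throwing) matches how `waitForNimHealth` reports failure, letting the caller decide whether an unhealthy NIM is fatal.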
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 2 passed
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/lib/nim.test.ts (2)
204-223: ⚠️ Potential issue | 🟡 Minor

Keep the `container` contract under test in these `nimStatusByName()` cases.

These assertions now only check `running`/`healthy`, but `src/lib/nim.ts` still returns `container` on every path and `bin/nemoclaw.js` prints `nimStat.container`. Dropping that field from the expectations makes a CLI-visible regression invisible to this suite.

🧪 Suggested assertion updates

```diff
-      expect(st).toMatchObject({ running: true, healthy: true });
+      expect(st).toMatchObject({
+        running: true,
+        healthy: true,
+        container: "foo",
+        state: "running",
+      });
 ...
-      expect(st).toMatchObject({ running: true, healthy: true });
+      expect(st).toMatchObject({
+        running: true,
+        healthy: true,
+        container: "foo",
+        state: "running",
+      });
 ...
-      expect(st).toMatchObject({ running: false, healthy: false, state: "exited" });
+      expect(st).toMatchObject({
+        running: false,
+        healthy: false,
+        container: "foo",
+        state: "exited",
+      });
```

Also applies to: 226-241, 243-253
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/nim.test.ts` around lines 204 - 223, The tests for nimStatusByName drop asserting the returned container shape which causes regressions because src/lib/nim.ts still returns a container field and bin/nemoclaw.js prints nimStat.container; update the expectations in the test cases that call nimModule.nimStatusByName("foo") (including the blocks at the shown range and the other noted ranges 226-241 and 243-253) to assert the container value/shape is present (e.g., include container in the object passed to toMatchObject or add an explicit expect(nimStat.container).toEqual(...) or toBeDefined()) so the suite verifies the container contract alongside running/healthy.
95-118: ⚠️ Potential issue | 🟡 Minor

Please add a mocked test for the primary NVIDIA detection path too.

These new cases only exercise the unified-memory fallback. The regular `nvidia-smi --query-gpu=memory.total` branch in `src/lib/nim.ts` is still effectively covered only by the live-host `nvidia type is nimCapable` check above, which will fail on low-VRAM NVIDIA machines. The Xavier case here already shows `type: "nvidia"` does not imply `nimCapable: true`, so the suite is still hardware-dependent.

Also applies to: 120-143, 145-166
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/nim.test.ts` around lines 95 - 118, Add a mocked unit test that exercises the primary NVIDIA detection branch (the path that parses `nvidia-smi --query-gpu=memory.total`) by using loadNimWithMockedRunner and a runCapture mock that returns a non-empty value when cmd.includes("memory.total") (e.g. "16384"), returns "NVIDIA <model>" for query-gpu=name, and returns an appropriate free -m value; call detectGpu() and assert the resulting object has type "nvidia", unifiedMemory: false, nimCapable set according to the parsed memory, and perGpuMB/totalMemoryMB matching the mocked values. Ensure you add the same style of mocked tests for the other cases noted (lines around 120-143 and 145-166) so the nvidia-smi primary branch is covered deterministically without relying on host hardware.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 55acb9a0-3c8c-485c-824c-7680ec514038
📒 Files selected for processing (3)

- bin/lib/nim.js
- src/lib/nim.test.ts
- src/lib/nim.ts
```js
// Thin re-export shim — the implementation lives in src/lib/nim.ts,
// compiled to dist/lib/nim.js.

const { run, runCapture, shellQuote } = require("./runner");
const nimImages = require("./nim-images.json");
const UNIFIED_MEMORY_GPU_TAGS = ["GB10", "Thor", "Orin", "Xavier"];

function containerName(sandboxName) {
  return `nemoclaw-nim-${sandboxName}`;
}

function getImageForModel(modelName) {
  const entry = nimImages.models.find((m) => m.name === modelName);
  return entry ? entry.image : null;
}

function listModels() {
  return nimImages.models.map((m) => ({
    name: m.name,
    image: m.image,
    minGpuMemoryMB: m.minGpuMemoryMB,
  }));
}

function canRunNimWithMemory(totalMemoryMB) {
  return nimImages.models.some((m) => m.minGpuMemoryMB <= totalMemoryMB);
}

function detectGpu() {
  // Try NVIDIA first — query VRAM
  try {
    const output = runCapture("nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits", {
      ignoreError: true,
    });
    if (output) {
      const lines = output.split("\n").filter((l) => l.trim());
      const perGpuMB = lines.map((l) => parseInt(l.trim(), 10)).filter((n) => !isNaN(n));
      if (perGpuMB.length > 0) {
        const totalMemoryMB = perGpuMB.reduce((a, b) => a + b, 0);
        return {
          type: "nvidia",
          count: perGpuMB.length,
          totalMemoryMB,
          perGpuMB: perGpuMB[0],
          nimCapable: canRunNimWithMemory(totalMemoryMB),
        };
      }
    }
  } catch {
    /* ignored */
  }

  // Fallback: unified-memory NVIDIA devices where discrete VRAM is not queryable.
  try {
    const nameOutput = runCapture("nvidia-smi --query-gpu=name --format=csv,noheader,nounits", {
      ignoreError: true,
    });
    const gpuNames = nameOutput
      .split("\n")
      .map((line) => line.trim())
      .filter(Boolean);
    const unifiedGpuNames = gpuNames.filter((name) =>
      UNIFIED_MEMORY_GPU_TAGS.some((tag) => new RegExp(tag, "i").test(name)),
    );
    if (unifiedGpuNames.length > 0) {
      let totalMemoryMB = 0;
      try {
        const memLine = runCapture("free -m | awk '/Mem:/ {print $2}'", { ignoreError: true });
        if (memLine) totalMemoryMB = parseInt(memLine.trim(), 10) || 0;
      } catch {
        /* ignored */
      }
      const count = unifiedGpuNames.length;
      const perGpuMB = count > 0 ? Math.floor(totalMemoryMB / count) : totalMemoryMB;
      const isSpark = unifiedGpuNames.some((name) => /GB10/i.test(name));
      return {
        type: "nvidia",
        name: unifiedGpuNames[0],
        count,
        totalMemoryMB,
        perGpuMB: perGpuMB || totalMemoryMB,
        nimCapable: canRunNimWithMemory(totalMemoryMB),
        unifiedMemory: true,
        spark: isSpark,
      };
    }
  } catch {
    /* ignored */
  }

  // macOS: detect Apple Silicon or discrete GPU
  if (process.platform === "darwin") {
    try {
      const spOutput = runCapture("system_profiler SPDisplaysDataType 2>/dev/null", {
        ignoreError: true,
      });
      if (spOutput) {
        const chipMatch = spOutput.match(/Chipset Model:\s*(.+)/);
        const vramMatch = spOutput.match(/VRAM.*?:\s*(\d+)\s*(MB|GB)/i);
        const coresMatch = spOutput.match(/Total Number of Cores:\s*(\d+)/);

        if (chipMatch) {
          const name = chipMatch[1].trim();
          let memoryMB = 0;

          if (vramMatch) {
            memoryMB = parseInt(vramMatch[1], 10);
            if (vramMatch[2].toUpperCase() === "GB") memoryMB *= 1024;
          } else {
            // Apple Silicon shares system RAM — read total memory
            try {
              const memBytes = runCapture("sysctl -n hw.memsize", { ignoreError: true });
              if (memBytes) memoryMB = Math.floor(parseInt(memBytes, 10) / 1024 / 1024);
            } catch {
              /* ignored */
            }
          }

          return {
            type: "apple",
            name,
            count: 1,
            cores: coresMatch ? parseInt(coresMatch[1], 10) : null,
            totalMemoryMB: memoryMB,
            perGpuMB: memoryMB,
            nimCapable: false,
          };
        }
      }
    } catch {
      /* ignored */
    }
  }

  return null;
}

function pullNimImage(model) {
  const image = getImageForModel(model);
  if (!image) {
    console.error(`  Unknown model: ${model}`);
    process.exit(1);
  }
  console.log(`  Pulling NIM image: ${image}`);
  run(`docker pull ${shellQuote(image)}`);
  return image;
}

function startNimContainer(sandboxName, model, port = 8000) {
  const name = containerName(sandboxName);
  return startNimContainerByName(name, model, port);
}

function startNimContainerByName(name, model, port = 8000) {
  const image = getImageForModel(model);
  if (!image) {
    console.error(`  Unknown model: ${model}`);
    process.exit(1);
  }

  // Stop any existing container with same name
  const qn = shellQuote(name);
  run(`docker rm -f ${qn} 2>/dev/null || true`, { ignoreError: true });

  console.log(`  Starting NIM container: ${name}`);
  run(
    `docker run -d --gpus all -p ${Number(port)}:8000 --name ${qn} --shm-size 16g ${shellQuote(image)}`,
  );
  return name;
}

function waitForNimHealth(port = 8000, timeout = 300) {
  const start = Date.now();
  const intervalSec = 5;
  const hostPort = Number(port);
  console.log(`  Waiting for NIM health on port ${hostPort} (timeout: ${timeout}s)...`);

  while ((Date.now() - start) / 1000 < timeout) {
    try {
      const result = runCapture(`curl -sf http://localhost:${hostPort}/v1/models`, {
        ignoreError: true,
      });
      if (result) {
        console.log("  NIM is healthy.");
        return true;
      }
    } catch {
      /* ignored */
    }
    require("child_process").spawnSync("sleep", [String(intervalSec)]);
  }
  console.error(`  NIM did not become healthy within ${timeout}s.`);
  return false;
}

function stopNimContainer(sandboxName) {
  const name = containerName(sandboxName);
  stopNimContainerByName(name);
}

function stopNimContainerByName(name) {
  const qn = shellQuote(name);
  console.log(`  Stopping NIM container: ${name}`);
  run(`docker stop ${qn} 2>/dev/null || true`, { ignoreError: true });
  run(`docker rm ${qn} 2>/dev/null || true`, { ignoreError: true });
}

function nimStatus(sandboxName, port) {
  const name = containerName(sandboxName);
  return nimStatusByName(name, port);
}

function nimStatusByName(name, port) {
  try {
    const qn = shellQuote(name);
    const state = runCapture(`docker inspect --format '{{.State.Status}}' ${qn} 2>/dev/null`, {
      ignoreError: true,
    });
    if (!state) return { running: false, container: name };

    let healthy = false;
    if (state === "running") {
      let resolvedHostPort = port != null ? Number(port) : 0;
      if (!resolvedHostPort) {
        const mapping = runCapture(`docker port ${qn} 8000 2>/dev/null`, {
          ignoreError: true,
        });
        const m = mapping && mapping.match(/:(\d+)\s*$/);
        resolvedHostPort = m ? Number(m[1]) : 8000;
      }
      const health = runCapture(
        `curl -sf http://localhost:${resolvedHostPort}/v1/models 2>/dev/null`,
        { ignoreError: true },
      );
      healthy = !!health;
    }
    return { running: state === "running", healthy, container: name, state };
  } catch {
    return { running: false, container: name };
  }
}

module.exports = {
  containerName,
  getImageForModel,
  listModels,
  canRunNimWithMemory,
  detectGpu,
  pullNimImage,
  startNimContainer,
  startNimContainerByName,
  waitForNimHealth,
  stopNimContainer,
  stopNimContainerByName,
  nimStatus,
  nimStatusByName,
};
module.exports = require("../../dist/lib/nim");
```
This shim now breaks bin/ execution before a build.
bin/lib/onboard.js and bin/nemoclaw.js require this module at runtime, so redirecting it to ../../dist/lib/nim turns a clean source checkout into MODULE_NOT_FOUND unless the build has already run. That regresses the CLI entry path this shim is supposed to keep stable. As per coding guidelines, "The bin/ directory uses CommonJS intentionally — it's the CLI entry point that must work without a build step."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/lib/nim.js` around lines 4 - 7, The shim currently hard-codes
module.exports = require("../../dist/lib/nim") which breaks runtime usage before
a build; update the shim to try requiring the compiled dist module first and
fall back to the source implementation when dist is missing (i.e., wrap the
require in a try/catch and require the source module on failure), so
module.exports still resolves for the CLI entrypoints (referenced by
bin/lib/onboard.js and bin/nemoclaw.js) without forcing a prior build.
Add container and state fields back to toMatchObject assertions so the test suite catches regressions in the CLI-visible nimStat.container property. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (1)
src/lib/nim.test.ts (1)
187-198: Redundant type annotations in callback.

The `commands` variable is already typed as `string[]` from the map operation, so the `(cmd: string)` annotations in the `.some()` callbacks are redundant. TypeScript infers the callback parameter type automatically.

♻️ Optional simplification

```diff
-    expect(commands.some((cmd: string) => cmd.includes("docker port"))).toBe(false);
-    expect(commands.some((cmd: string) => cmd.includes("http://localhost:9000/v1/models"))).toBe(
-      true,
-    );
+    expect(commands.some((cmd) => cmd.includes("docker port"))).toBe(false);
+    expect(commands.some((cmd) => cmd.includes("http://localhost:9000/v1/models"))).toBe(true);
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/lib/nim.test.ts` around lines 187-198, the test uses redundant type annotations in the `.some()` callbacks; remove the unnecessary `(cmd: string)` annotations so the callbacks use inferred parameter types (e.g., change calls like `commands.some((cmd: string) => ...)` to `commands.some(cmd => ...)`). Update both occurrences that check for "docker port" and "http://localhost:9000/v1/models", which are derived from `runCapture.mock.calls` mapped into the `commands` array.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2317deee-e0e2-44b1-a6ef-6ea3acc0535a
📒 Files selected for processing (1)
src/lib/nim.test.ts
## Summary

- Convert `bin/lib/nim.js` (259 lines) to `src/lib/nim.ts`
- Typed interfaces: `NimModel`, `GpuDetection`, `NimStatus`
- 4 pure functions + subprocess-heavy I/O functions (docker, nvidia-smi)
- Co-locate tests (16 tests) with updated mock pattern for dist/ paths

Stacked on #1240. 615 CLI tests pass. Coverage ratchet passes.

Relates to #924 (shell consolidation).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

## Summary by CodeRabbit

- **New Features**
  - GPU detection with capability assessment for NIM containers
  - Container lifecycle management (start, stop, status monitoring) with automatic health checks
  - Model enumeration and image selection for launching containers
- **Refactor**
  - Simplified module surface and improved internal architecture for maintainability
- **Tests**
  - Updated test suite to align with the refactored module behavior and stronger typings

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>