refactor(cli): migrate local-inference.js to TypeScript (#1270)
Conversation
…ript modules

Move ~210 lines of pure, side-effect-free functions from the 3,800-line onboard.js into five typed TypeScript modules under src/lib/:
- gateway-state.ts: gateway/sandbox state classification
- validation.ts: failure classification, API key validation, model ID checks
- url-utils.ts: URL normalization, text compaction, env formatting
- build-context.ts: Docker build context filtering, recovery hints
- dashboard.ts: dashboard URL resolution and construction

Infrastructure:
- tsconfig.src.json compiles src/ → dist/ as CJS (dist/ already gitignored)
- tsconfig.cli.json updated to type-check src/
- npm run build:cli added to package.json
- Pre-commit test-cli hook builds TS and includes dist/lib/ in coverage

onboard.js imports from compiled dist/lib/ output. All 542 CLI tests pass. No user-facing behavior changes.

Closes #1237. Relates to #924 (shell consolidation).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
56 new tests across three co-located test files:
- src/lib/validation.test.ts: classifyValidationFailure, classifyApplyFailure, classifySandboxCreateFailure, validateNvidiaApiKeyValue, isSafeModelId
- src/lib/dashboard.test.ts: resolveDashboardForwardTarget, buildControlUiUrls
- src/lib/url-utils.test.ts: compactText, stripEndpointSuffix, normalizeProviderBaseUrl, isLoopbackHostname, formatEnvAssignment, parsePolicyPresetEnv

vitest.config.ts updated to include src/**/*.test.ts in the CLI project.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The existing callsite in printDashboard calls buildControlUiUrls() with no arguments when no token is available. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The tsc-js pre-push hook (jsconfig.json) type-checks bin/lib/onboard.js, which requires dist/lib/ to exist. CI was not running npm run build:cli before the prek checks, causing TS2307 "Cannot find module" errors.

- Add npm run build:cli step to .github/actions/basic-checks
- Update tsc-js hook to run build:cli before tsc

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…law into cv/extract-onboard-pure-fns
## Summary
- Convert `bin/lib/preflight.js` (357 lines) to `src/lib/preflight.ts` with full type definitions
- Typed interfaces for all opts objects and return types: `PortProbeResult`, `MemoryInfo`, `SwapResult`, `CheckPortOpts`, `GetMemoryInfoOpts`, `EnsureSwapOpts`
- Extract `parseLsofLines` helper to reduce duplication in `checkPortAvailable`
- Incorporate #1227 fix: `sudo` → `sudo -n` (non-interactive) for lsof retry
- `bin/lib/preflight.js` becomes a thin re-export shim — existing consumers unaffected
- Co-locate tests: `test/preflight.test.js` → `src/lib/preflight.test.ts`
- Add real net probe tests (EADDRINUSE detection on occupied ports)
- Fix all co-located test imports to use `dist/` paths for coverage attribution
- Add targeted dashboard/validation branch tests to maintain ratchet

Stacked on #1240. Not touched by any #924 blocker PR.

## Test plan
- [x] 612 CLI tests pass (601 existing + 11 new)
- [x] `tsc -p tsconfig.src.json` compiles cleanly
- [x] `tsc -p tsconfig.cli.json` type-checks cleanly
- [x] `tsc -p jsconfig.json` type-checks cleanly (the pre-push check that caught the union issue)
- [x] Coverage ratchet passes

Relates to #924 (shell consolidation). Supersedes #1227.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
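The `parseLsofLines` extraction mentioned above could look roughly like this; a hypothetical sketch only (the `PortHolder` interface name and the exact columns handled are assumptions, not the PR's actual code):

```typescript
// Sketch of a parseLsofLines-style helper: pull command/pid pairs out of
// `lsof -i :PORT` output so checkPortAvailable can report what holds a port.
interface PortHolder {
  command: string;
  pid: number;
}

function parseLsofLines(output: string): PortHolder[] {
  const holders: PortHolder[] = [];
  for (const line of output.split("\n").slice(1)) { // skip the header row
    const cols = line.trim().split(/\s+/);
    if (cols.length < 2) continue;          // blank or truncated line
    const pid = Number(cols[1]);
    if (!Number.isInteger(pid)) continue;   // not a real PID column
    holders.push({ command: cols[0], pid });
  }
  return holders;
}

const sample = [
  "COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME",
  "node    31337 dev   23u  IPv4 0x0      0t0  TCP 127.0.0.1:3000 (LISTEN)",
].join("\n");

console.log(parseLsofLines(sample)); // one holder: command "node", pid 31337
```

Keeping the parser pure like this is what makes it trivially unit-testable without spawning `lsof`.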
Convert bin/lib/local-inference.js (228 lines) to src/lib/local-inference.ts with typed interfaces for ValidationResult and GpuInfo. Typed RunCaptureFn for the DI pattern used by validateLocalProvider, getOllamaModelOptions, getDefaultOllamaModel, and validateOllamaModel.

616 CLI tests pass. Coverage ratchet passes. Relates to #924 (shell consolidation).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
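The `RunCaptureFn` DI pattern described here can be sketched as follows; the type alias mirrors the signature quoted in this PR, while `detectOllamaModels` and its `ollama list` parsing are illustrative assumptions rather than the repo's actual functions:

```typescript
// The injected shell runner: callers pass the real one in production and a
// stub in tests, so no function here ever spawns a process directly.
type RunCaptureFn = (cmd: string, opts?: { ignoreError?: boolean }) => string;

// Hypothetical consumer: list installed Ollama model names.
function detectOllamaModels(runCapture: RunCaptureFn): string[] {
  const out = runCapture("ollama list", { ignoreError: true });
  return out
    .split("\n")
    .slice(1) // skip the NAME/ID/SIZE header line
    .map((line) => line.trim().split(/\s+/)[0])
    .filter((name) => name.length > 0);
}

// In tests, the shell runner is replaced with a canned-output stub:
const stub: RunCaptureFn = () =>
  "NAME            ID      SIZE\nqwen2.5:7b      abc123  4.7 GB\n";
console.log(detectOllamaModels(stub)); // ["qwen2.5:7b"]
```

The same injection point is what lets the co-located vitest suite exercise provider validation without Docker or Ollama installed.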
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info: Configuration used: .coderabbit.yaml · Review profile: CHILL · Plan: Pro · Files selected for processing: 2
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Replaces the in-repo JavaScript local-inference implementation with a thin shim exporting from the compiled module, and adds a new TypeScript implementation with co-located tests.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
🧹 Nitpick comments (4)
src/lib/local-inference.test.ts (1)
117-125: Test name doesn't match behavior.

The test is named "reports generic container unreachability for unknown providers" but actually verifies that unknown providers return `{ ok: true }` immediately without any health check (since `getLocalProviderHealthCheck("custom-provider")` returns `null`). The `runCapture` stub is never called in this path.

Consider renaming to align with the existing test on line 113 ("treats unknown local providers as already valid") or combining them, since both test the same code path.
Suggested rename:

```diff
- it("reports generic container unreachability for unknown providers", () => {
+ it("skips runCapture entirely for unknown providers with no health check", () => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/local-inference.test.ts` around lines 117 - 125: Rename or consolidate the test so its name reflects the actual behavior: the case exercises `validateLocalProvider("custom-provider", ...)` when `getLocalProviderHealthCheck("custom-provider")` returns `null`, so the provider is treated as already valid and `runCapture` is not invoked; update the test name to something like "treats unknown local providers as already valid" (or merge it with the existing test covering the same path) and keep the callCount stub as-is to show the health-check is not executed.

src/lib/local-inference.ts (3)
196-208: `timeoutSeconds` interpolated directly into shell command.

The `timeoutSeconds` parameter (default 120) is interpolated directly into the curl command at line 207. While TypeScript ensures it's a number at compile time, if this function is called from JavaScript without type checking, a malicious string could be injected.

Given this is an internal function and the codebase uses TypeScript type checking, the risk is low. However, consider adding runtime validation if this becomes part of a public API.
Optional defensive validation:

```diff
 export function getOllamaProbeCommand(
   model: string,
   timeoutSeconds = 120,
   keepAlive = "15m",
 ): string {
+  const timeout = Math.floor(Number(timeoutSeconds)) || 120;
   const payload = JSON.stringify({
     model,
     prompt: "hello",
     stream: false,
     keep_alive: keepAlive,
   });
-  return `curl -sS --max-time ${timeoutSeconds} http://localhost:11434/api/generate -H 'Content-Type: application/json' -d ${shellQuote(payload)} 2>/dev/null`;
+  return `curl -sS --max-time ${timeout} http://localhost:11434/api/generate -H 'Content-Type: application/json' -d ${shellQuote(payload)} 2>/dev/null`;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/local-inference.ts` around lines 196 - 208, The getOllamaProbeCommand function interpolates timeoutSeconds directly into a shell command; add a runtime validation/coercion step for the timeoutSeconds parameter (e.g., use Number(timeoutSeconds), verify Number.isFinite and Number.isInteger and that it's >0, or clamp to a safe range) and fall back to the default 120 (or throw) when validation fails before building the curl string; keep using shellQuote for payload as-is and reference timeoutSeconds and getOllamaProbeCommand so the check is applied immediately prior to constructing the return string.
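As a self-contained sketch of the clamping this comment asks for (the inline `shellQuote` stub and the trimmed command string are simplifications for illustration, not the repo's code):

```typescript
// Stand-in for the repo's shellQuote, included only for self-containment.
const shellQuote = (s: string): string => `'${s.replace(/'/g, `'\\''`)}'`;

function getOllamaProbeCommand(
  model: string,
  timeoutSeconds: number = 120,
): string {
  // Coerce, validate, and clamp before interpolating into the shell string.
  const n = Math.floor(Number(timeoutSeconds));
  const timeout = Number.isFinite(n) && n > 0 ? Math.min(n, 3600) : 120;
  const payload = JSON.stringify({ model, prompt: "hello", stream: false });
  return `curl -sS --max-time ${timeout} http://localhost:11434/api/generate -d ${shellQuote(payload)}`;
}

// A non-numeric value sneaking in from untyped JS falls back to 120.
console.log(getOllamaProbeCommand("llama3", NaN as unknown as number));
```

Only the validated `timeout` ever reaches the command string, so the untrusted parameter can no longer inject shell syntax.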
18-18: Consider exporting `RunCaptureFn` type.

The `RunCaptureFn` type is used by multiple exported functions (`validateLocalProvider`, `getOllamaModelOptions`, `getDefaultOllamaModel`, `validateOllamaModel`) but is not exported. Consumers implementing or mocking this function would benefit from having access to the type definition.

Suggested change:
```diff
-type RunCaptureFn = (cmd: string, opts?: { ignoreError?: boolean }) => string;
+export type RunCaptureFn = (cmd: string, opts?: { ignoreError?: boolean }) => string;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/local-inference.ts` at line 18, Export the RunCaptureFn type so consumers can import and implement/mock it: change the local declaration "type RunCaptureFn = (cmd: string, opts?: { ignoreError?: boolean }) => string;" to an exported type (export type RunCaptureFn = ...). Ensure references in functions validateLocalProvider, getOllamaModelOptions, getDefaultOllamaModel, and validateOllamaModel continue to use the same symbol and update any external imports/usages to import { RunCaptureFn } where needed.
9-10: Consider moving `shellQuote` to `src/lib/` to eliminate reverse dependency.

The `require("../../bin/lib/runner")` on line 10 creates a reverse dependency: library code in `src/lib/` depends on CLI-layer code in `bin/lib/`, which violates typical architectural layering. While this works in practice (`shellQuote` is a pure utility), it prevents using `src/lib/local-inference.ts` outside the CLI context.

Moving `shellQuote` to `src/lib/` (e.g., `src/lib/shell-quote.ts`) and updating imports across `bin/`, `scripts/`, and `src/lib/` would fix this. Alternatively, accept `shellQuote` as a dependency-injected parameter like `runCapture`.

This is acceptable for now given migration scope, but address as technical debt.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/local-inference.ts` around lines 9 - 10, The file local-inference.ts currently requires shellQuote from bin/lib/runner creating a reverse dependency; move the shellQuote implementation into a new module under src/lib (e.g., src/lib/shell-quote.ts) and export it, then update all imports that currently pull shellQuote from bin/lib/runner (including local-inference.ts and any files in bin/ and scripts/) to import from the new src/lib/shell-quote module; alternatively, refactor functions that call shellQuote (e.g., runCapture or the functions in local-inference.ts) to accept shellQuote as a dependency-injected parameter and wire callers accordingly so src/lib no longer depends on bin/.
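For illustration, a minimal POSIX-style quoting function of the kind such a `src/lib/shell-quote.ts` module could export; the repo's actual `shellQuote` in `bin/lib/runner` may differ:

```typescript
// Quote an arbitrary string for a POSIX shell: wrap in single quotes, and
// turn each embedded single quote into '\'' (close quote, escaped quote,
// reopen quote). Everything inside single quotes is taken literally.
export function shellQuote(arg: string): string {
  return `'${arg.replace(/'/g, `'\\''`)}'`;
}

console.log(shellQuote(`{"prompt":"it's fine"}`));
```

Because the module would have no dependencies of its own, both `bin/` and `src/lib/` could import it without reintroducing the layering problem.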
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5bb8e2f8-6466-435d-a45b-e83a1272a2ee
📒 Files selected for processing (3)
- bin/lib/local-inference.js
- src/lib/local-inference.test.ts
- src/lib/local-inference.ts
- Export RunCaptureFn so consumers can import and mock the type
- Rename "reports generic container unreachability" test to "skips health check entirely" to match actual behavior

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
- Convert `bin/lib/local-inference.js` (228 lines) to `src/lib/local-inference.ts`
- Typed interfaces: `ValidationResult`, `GpuInfo`, `RunCaptureFn`
- 9 pure functions + 4 with DI-injected `runCapture`
- Co-locate tests (30 tests)

Stacked on #1240. 616 CLI tests pass. Coverage ratchet passes.

Relates to #924 (shell consolidation).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

## Summary by CodeRabbit

* **New Features**
  * Added local inference provider validation and connectivity diagnostics for vllm-local and ollama-local.
  * Implemented Ollama model discovery, warmup/probe commands, and intelligent model selection based on GPU memory.
  * Added health-check and model readiness checks to surface configuration or networking issues earlier.

* **Tests**
  * Enhanced test coverage for provider validation, model detection, and unknown-provider handling; updated test targets accordingly.

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>