Releases · m-marinucci/LANCompute
LANCompute v0.1.0 — LM Studio/Ollama integration, CLI helper, tests, and docs
Summary
- Adds a minimal OpenAI-compatible workflow to exercise a local LLM service (LM Studio API server or Ollama).
- Introduces a tiny CLI for quick prompts and model discovery.
- Includes an integration test to verify `/v1/models` and `/v1/chat/completions`.
- Adds environment templates, updates docs, and improves developer ergonomics.
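As a rough sketch of the OpenAI-compatible surface these two endpoints expose (standard library only here; the helper script itself uses `requests`, and the function names below are illustrative, not the script's API):

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:1234"  # default; override with LM_STUDIO_BASE_URL

def models_url(base_url: str) -> str:
    """Endpoint that lists the models the server currently serves."""
    return f"{base_url.rstrip('/')}/v1/models"

def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request(BASE_URL, "mistral:latest", "Give me one fun fact.")
print(req.full_url)  # http://127.0.0.1:1234/v1/chat/completions
```

Sending `req` with `urllib.request.urlopen(req)` (or the equivalent `requests.post`) returns a JSON body whose reply text lives under `choices[0].message.content`, per the OpenAI chat-completions convention.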
Highlights
- CLI helper: `scripts/lmstudio_chat.py`
  - Lists models and sends quick chat prompts to an OpenAI-compatible endpoint.
  - Reads `.env` automatically and respects `LM_STUDIO_BASE_URL`.
  - Works out of the box against `http://127.0.0.1:1234` (override via env or flag).
- Makefile targets:
  - `make models` lists models via the CLI helper.
  - `make chat MODEL=<id> PROMPT="<text>"` sends a quick prompt.
  - `make setup` upgrades pip and ensures `requests` is present.
- Integration test: `tests/test_lmstudio_integration.py`
  - Discovers available models, prefers lightweight IDs, and runs a short chat completion.
  - Environment overrides: `LM_STUDIO_BASE_URL`, `LM_STUDIO_TEST_MODEL`, `LM_STUDIO_TEST_PROMPT`.
- Configuration and docs:
  - `.env.example` for public-safe defaults.
  - `.envrc` auto-activates `.venv` and loads `.env` (with direnv).
  - `.gitignore` ignores `.env` and `.direnv/`.
  - README adds integration guide, CLI usage, env setup, and test instructions.
Getting Started
```bash
python -m venv .venv
source .venv/bin/activate
pip install -U pip requests pytest
# optional: direnv allow
cp .env.example .env
# set LM_STUDIO_BASE_URL (default: http://127.0.0.1:1234)
```

Usage
```bash
python scripts/lmstudio_chat.py --list-models
make models
python scripts/lmstudio_chat.py --model mistral:latest --prompt "Give me one fun fact."
make chat MODEL=mistral:latest PROMPT="Give me one fun fact."
```

Integration Test
```bash
pytest -q tests/test_lmstudio_integration.py
```

Environment overrides:

- `LM_STUDIO_BASE_URL`: API base URL
- `LM_STUDIO_TEST_MODEL`: preferred model id (optional)
- `LM_STUDIO_TEST_PROMPT`: short test prompt (optional)
Security Notes
- Only expose your LLM service to a trusted network.
- Prefer binding locally (`127.0.0.1`) and use SSH tunnels for development.
- If you bind `0.0.0.0`, ensure you control access (firewall/VLAN/reverse proxy).
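For the tunnel approach, a local forward of this shape keeps the server bound to `127.0.0.1` on the remote machine while making it reachable on your workstation (`user@llm-host` is a placeholder for your own host):

```bash
# Forward local port 1234 to the LLM server's loopback-only port 1234.
ssh -N -L 1234:127.0.0.1:1234 user@llm-host
```

With the tunnel up, the default `LM_STUDIO_BASE_URL` of `http://127.0.0.1:1234` works unchanged.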
Changelog
- Add `scripts/lmstudio_chat.py` CLI for models and chat
- Add Makefile targets: `models`, `chat`, `setup`
- Add `tests/test_lmstudio_integration.py` with env overrides
- Add `.env.example`; load `.env` in CLI and tests
- Update `.envrc` to source `.env`
- Update `.gitignore` to exclude `.env` and `.direnv/`
- Expand README with integration guide and usage