Validation Issue #517
Description
Version: spacebot 0.3.3 (Docker)
Setup: LM Studio with qwen2.5-vl-7b-instruct (vision model) via OpenAI-compatible API at http://host.docker.internal:1234/v1
Issue: Images fail with "Cannot read image.png (this model does not support image input)" but the model DOES support images - verified manually with curl
What works: Text-only messages work fine
What doesn't: Sending images in the web UI fails
Observation: The /api/providers endpoint reports no providers configured, despite a valid [llm.provider.lmstudio] section in config.toml
Config used:

```toml
[llm.provider.lmstudio]
api_type = "openai_chat_completions"
base_url = "http://host.docker.internal:1234/v1"
api_key = ""
```
This looks like a validation issue: the web UI checks for vision support but fails to detect it for custom OpenAI-compatible providers.
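For reference, the manual verification mentioned above sent an OpenAI-style chat-completions request with an inline image. A minimal sketch of that payload shape (model name and base URL taken from the config above; the helper function and placeholder image bytes are illustrative, not from spacebot):

```python
import base64
import json

# Endpoint and model from the report; adjust for your own setup.
BASE_URL = "http://host.docker.internal:1234/v1"
MODEL = "qwen2.5-vl-7b-instruct"

def build_image_request(image_bytes: bytes, prompt: str) -> dict:
    """Build a chat-completions body with a text part and an inline
    base64-encoded image part, per the OpenAI-compatible message format."""
    data_uri = "data:image/png;base64," + base64.b64encode(image_bytes).decode()
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": data_uri}},
                ],
            }
        ],
    }

# Placeholder bytes stand in for a real PNG file.
body = build_image_request(b"\x89PNG", "What is in this image?")
print(json.dumps(body, indent=2))
```

POSTing this body to `{BASE_URL}/chat/completions` (e.g. with curl) returns a normal completion from LM Studio, which is why the "this model does not support image input" error from the UI appears to come from spacebot's own validation rather than from the backend.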