Validation Issue #517

@AzebanTV

Description

Version: spacebot 0.3.3 (Docker)
Setup: LM Studio with qwen2.5-vl-7b-instruct (vision model) via OpenAI-compatible API at http://host.docker.internal:1234/v1
Issue: Sending an image fails with "Cannot read image.png (this model does not support image input)", but the model does support image input (verified manually with curl against the same endpoint)
What works: Text-only messages work fine
What doesn't: Sending images in the web UI fails
Observation: The /api/providers endpoint reports no providers configured, despite a valid [llm.provider.lmstudio] section in config.toml
Config used:

```toml
[llm.provider.lmstudio]
api_type = "openai_chat_completions"
base_url = "http://host.docker.internal:1234/v1"
api_key = ""
```
This looks like a validation issue: the UI checks for vision support but doesn't detect it correctly for custom OpenAI-compatible providers.
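For reference, the manual check mentioned above can be reproduced without curl. This is a minimal sketch that builds the same kind of request: an OpenAI-style chat-completions payload with an `image_url` content part, pointed at the LM Studio endpoint from the config. The placeholder image bytes and the prompt are illustrative; only the base URL and model name come from this report.

```python
import base64
import json

# Endpoint and model from the setup described in this issue.
BASE_URL = "http://host.docker.internal:1234/v1"
MODEL = "qwen2.5-vl-7b-instruct"

# Placeholder 1x1 PNG; any real image bytes would do for this check.
png_bytes = base64.b64decode(
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ"
    "AAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg=="
)

def vision_payload(image_bytes: bytes, prompt: str) -> dict:
    """OpenAI-style chat completion payload with an image_url part."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            }
        ],
    }

payload = vision_payload(png_bytes, "What is in this image?")
# POST this JSON to f"{BASE_URL}/chat/completions"; a vision-capable model
# returns a normal completion rather than an image-input error.
print(json.dumps(payload)[:60])
```

Since this request succeeds when sent directly to LM Studio, the capability detection inside spacebot (rather than the model or the endpoint) appears to be at fault.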
