Replies: 3 comments
There's an open PR for Ollama support (#94) which would cover local LLMs. For local TTS,
You can already use local LLMs today. In Settings > LLM, click "+ Add" to create a custom provider, set the base URL to your local server (e.g. http://127.0.0.1:1234/v1 for LM Studio), uncheck "Requires API Key", add your model name, and save. This works with LM Studio, Ollama, or any other OpenAI-compatible local server.
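To sanity-check that setup outside the app, you can hit the same endpoint directly. A minimal sketch, assuming LM Studio's default port and a hypothetical model id (check your server's /v1/models for the real one); most local servers ignore the API key entirely:

```python
# Minimal sketch: query a local OpenAI-compatible server (LM Studio, Ollama, ...)
# using only the standard library. BASE_URL and MODEL are example values --
# substitute whatever your local server actually reports.
import json
import urllib.request

BASE_URL = "http://127.0.0.1:1234/v1"  # LM Studio default; Ollama exposes http://127.0.0.1:11434/v1
MODEL = "qwen2.5-coder-13b"            # hypothetical model id; list yours via GET {BASE_URL}/models

def build_request(prompt: str) -> urllib.request.Request:
    """Build a /chat/completions request; no Authorization header is needed
    for most local servers, matching the unchecked 'Requires API Key' setting."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Requires a local server to actually be running on BASE_URL.
    with urllib.request.urlopen(build_request("Hello!")) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If this round-trips, the same base URL and model name should work in the Settings > LLM custom provider.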
It's a good project, but with so many different APIs involved, can it be used in a commercially viable product at scale? What is the future plan for reducing these costs?
Is there a way I can use a local LLM like OSS 20B or Qwen2.5 Code 13B instead of the Gemini or OpenAI API? I would also like to use a local TTS and local speech-to-text; I don't have an API key for everything.