
how run as desktop application? #38

Closed
sussyboiiii opened this issue Apr 11, 2023 · 2 comments
Comments

@sussyboiiii

Hello,
I have tried the web GUI and found it quite nice, but I was wondering if it would be possible to run even just one model as an application, completely outside of a web server and fully "locally". I don't know how to explain this better: basically, the ability to run models with a GUI as a desktop application (not a web server) on a computer, to get a ChatGPT-like experience entirely outside the browser, a bit like searching in Explorer or Finder, even if that's a terrible example.
Thanks!

@AlexanderLourenco
Collaborator

Hi!

Technically you can already achieve this since there's no special authentication logic for the API: https://github.com/nat/openplayground/tree/main/server/lib/api

For example, if you visit http://localhost:5432/api/models, it will return a JSON array of all models on record, whereas http://localhost:5432/api/models-enabled returns an object mapping {provider:model-name} -> model details for models that are ready to go (the first call also includes models that are not currently ready for completions, such as HF models that have yet to be downloaded or have been disabled).

Completions are streamed as Server-Sent Events, but if you don't need streaming we could introduce a POST request that returns the entire completion.
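
If you just want to poke at those two endpoints from a script, a minimal sketch might look like this (assuming the server is running locally on port 5432 as in the URLs above; the exact response shapes may differ from what's sketched here):

```python
# Minimal sketch: query the openplayground API directly.
# Assumes the server is running locally on port 5432, as in the URLs above.
import requests

BASE_URL = "http://localhost:5432"

# All models on record, including ones not yet ready for completions
# (e.g. HF models that haven't been downloaded yet, or disabled models).
all_models = requests.get(f"{BASE_URL}/api/models").json()

# Only the models that are enabled and ready to generate completions.
enabled_models = requests.get(f"{BASE_URL}/api/models-enabled").json()

print(f"{len(all_models)} models on record, {len(enabled_models)} enabled")
```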

I will assign myself the task of documenting the API but if someone else wants to take charge of that do let me know! :)

@sussyboiiii
Author

Thank you for your response,

I believe my first message was a bit all over the place.

So, what I meant is a desktop application to run different models, e.g. via llama.cpp or the OpenAI API.
Not to compare models, just to set it up and have a desktop application with "your own" assistant, or whatever you want to call it.
The parameters could be modified in the main file or in a .txt file.
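
Something roughly like this is what I have in mind (just a hypothetical sketch to illustrate the idea; the file name, parameter names, and defaults are made up):

```python
# Hypothetical sketch of a tiny "desktop assistant" loop that reads its
# settings from a plain text file and calls the OpenAI chat completions API.
import requests

# params.txt might contain lines like:
#   api_key=sk-...
#   model=gpt-3.5-turbo
#   temperature=0.7
params = {}
with open("params.txt") as f:
    for line in f:
        key, _, value = line.strip().partition("=")
        if key:
            params[key] = value

history = []
while True:
    prompt = input("you> ")
    history.append({"role": "user", "content": prompt})
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {params['api_key']}"},
        json={
            "model": params.get("model", "gpt-3.5-turbo"),
            "temperature": float(params.get("temperature", 0.7)),
            "messages": history,
        },
    )
    answer = resp.json()["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    print(answer)
```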

@sussyboiiii changed the title from "add capability to run ai as an application instead of a server" to "how run as desktop application?" on Apr 21, 2023
@sussyboiiii closed this as not planned on Jun 14, 2023