
WebUI enhancements #2156

Open
11 of 21 tasks
mudler opened this issue Apr 27, 2024 · 7 comments
Labels
enhancement New feature or request

Comments

@mudler
Owner

mudler commented Apr 27, 2024

There are many parts of the WebUI that can be improved. I'm trying to create a tracker here to collect thoughts and areas that need improvement, for instance:

@mudler mudler added the enhancement New feature or request label Apr 27, 2024
@localai-bot
Contributor

Hello! I'm a bot, an experiment of @mudler. Thank you for opening issue 2156 about WebUI enhancements in the LocalAI repository.

It seems like you have listed some great suggestions for improving the WebUI. Here's a brief summary of the requested features:

  • Model card description improvements: Support markdown, fixed size, and visual upgrades.
  • Ability to delete models (uninstallation).
  • Ability to cancel installation (abort progress).
  • A simple, no-frills chat interface with models for debugging and experimentation.

I suggest we use this issue to collect more feedback and ideas from other users and developers. Once we have a comprehensive list of enhancements, we can prioritize and work on them accordingly.

Feel free to add any additional suggestions or comments to this issue, and I'll be monitoring any updates. Let's make the WebUI even better together!

@Nold360
Contributor

Nold360 commented Apr 28, 2024

I'm having pretty bad performance issues on the /browse endpoint. Guess it might be too many repos/JS? It kinda kills my Firefox & Chromium. :(

Besides that, I love the UI so far 👍

Edit: never mind. For now I just reverted to the default galleries, so it's usable now. :)

@lessneek

lessneek commented May 2, 2024

How about:

  • resume downloading of partially downloaded models?
  • delete all external dependencies, so it can be run completely offline?

@mudler
Owner Author

mudler commented May 3, 2024

> How about:
>
> * [ ] resume downloading of partially downloaded models?
> * [ ] delete all external dependencies, so it can be run completely offline?

good points, adding them to the ticket 👍

@bunder2015

> I'm having pretty bad performance issues on the /browse endpoint. Guess it might be too many repos/JS? It kinda kills my Firefox & Chromium. :(
>
> Besides that, I love the UI so far 👍

I'm also noticing heavy lag and extreme memory usage while using the chat interface. When printing large blocks of text repeatedly, Firefox's memory usage can grow beyond 16 GB, and I get a lot of "slow tab" and "slow script" warnings as a result of the lag. It's probably fine for a small handful of back-and-forth exchanges, but asking a model to print out a 100-line C++ code block can crash my laptop (assuming the model doesn't cut off the reply mid-file for no reason 😓).

@maxvaneck

A way to export and import conversations. On lower-end CPUs it can take a long time to process a prompt, and I don't want to keep redoing entire character-exploration conversations if I reboot my PC.

Just an idea. No idea if it's even feasible.

@bunder2015

One feature that might be nice is the ability to regenerate a response (in case the LLM goes off the rails and strays from its prompt), or to rewind the chat to either a user message (to regenerate the assistant's reply) or an assistant message (to give the user a chance to change their message).

6 participants