Issues: mudler/LocalAI
Hello, can't connect with NextCloud
  [bug, unconfirmed] #3608, opened Sep 20, 2024 by userbox020
feat(multimodal): Audio understanding
  [enhancement, roadmap] #3602, opened Sep 19, 2024 by mudler

Transformers backend: Auto-downloaded model can not be used
  [bug, unconfirmed] #3594, opened Sep 18, 2024 by mcd01

LocalAI Docker Container won't start on Windows 11
  [bug, unconfirmed] #3588, opened Sep 17, 2024 by FMarazzi

LocalAI P2P connection failure between Linux server and Windows 11 WSL2 instances
  [bug, unconfirmed] #3546, opened Sep 15, 2024 by berry-13

feat: automatically adjust default gpu_layers by available GPU memory
  [enhancement, roadmap] #3541, opened Sep 13, 2024 by mudler

Add the new Multi-Modal model of mistral AI: pixtral-12b
  [enhancement, roadmap] #3535, opened Sep 12, 2024 by SuperPat45
Confusing finish_reason when using max_tokens property in 'v1/chat/completions' endpoint
  [bug, confirmed] #3533, opened Sep 10, 2024 by daJuels
Cannot select models from dropdown
  [bug, unconfirmed] #3493, opened Sep 7, 2024 by jwaresoft

Identify model by type (tts, text, ...)
  [enhancement, roadmap] #3488, opened Sep 6, 2024 by ecyht2

gpu_layers is not effective
  [bug, unconfirmed] #3479, opened Sep 3, 2024 by msameer

Flux GGUF (Replicate, Document, Add to Model)
  [bug, unconfirmed] #3447, opened Sep 1, 2024 by sfxworks

Intel ARC GPU - llama_model_load: can not find preferred GPU platform
  [bug, unconfirmed] #3437, opened Aug 30, 2024 by Xav-v

Completion endpoint does not count tokens when using vLLM backend
  [area/backends, area/vllm, bug, python, roadmap] #3436, opened Aug 30, 2024 by ephraimrothschild
gpu + transformers-musicgen
  [enhancement] #3420, opened Aug 28, 2024 by dave-gray101

Can't build LocalAI with llama.cpp with CUDA
  [bug, unconfirmed] #3418, opened Aug 28, 2024 by dimazig

Only use 4 CPU threads in P2P worker cluster
  [bug, unconfirmed] #3410, opened Aug 26, 2024 by titogrima
Intel iGPU not working
  [bug, unconfirmed] #3382, opened Aug 26, 2024 by maxvaneck
change chat colors
  [enhancement, roadmap, ux] #3381, opened Aug 26, 2024 by maxvaneck

Ability to get a list of loaded models and unload a model by request
  [enhancement] #3378, opened Aug 25, 2024 by Nyralei

integrate whisperX
  [enhancement, roadmap] #3375, opened Aug 25, 2024 by hlzhangxt

whisper-diarization
  [enhancement, roadmap] #3374, opened Aug 25, 2024 by hlzhangxt

Error message for custom embedding model: 'NoneType' object has no attribute 'tokenize'
  [bug, unconfirmed] #3369, opened Aug 24, 2024 by Ccccx

Can't start LocalAI (with REBUILD) on Xeon X5570 - Unwanted AVX dependency?
  [bug, unconfirmed] #3367, opened Aug 24, 2024 by chris-hatton