fix: ensure correct version of torch is always installed based on BUILD_TYPE #2890
Conversation
Signed-off-by: Chris Jowett <[email protected]>
✅ Deploy Preview for localai ready!
@cryptk I think this is a good idea. However, I think that dependabot may not be picking up our "accelerator specific" files, and torch is obviously pretty important to track (even if, in most cases, we're probably stuck on whatever version the project requires). Based on https://stackoverflow.com/questions/75899186/how-to-configure-dependabot-to-check-multiple-files it looks like we might be better off renaming these files, simply to invert the naming to something like
Do you mind double-checking me here, and potentially doing the rename as part of this PR since it's already basically touching everything? If you think that's a risky enough change to make its own PR, that's obviously fine too.
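For reference, dependabot's `pip` ecosystem discovers manifests per configured directory rather than by arbitrary filename, which is the crux of the renaming discussion above. A minimal, hypothetical `.github/dependabot.yml` sketch (the directory path and schedule are illustrative, not taken from this repo):

```yaml
# Hypothetical sketch: one pip entry per backend directory so dependabot
# scans that directory's recognized requirements manifests. Accelerator-
# specific names may need a recognizable prefix to be picked up at all.
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/backend/python/diffusers"   # illustrative path
    schedule:
      interval: "weekly"
```

Whether this covers renamed accelerator-specific files depends on dependabot's manifest-matching rules, which is exactly what the Stack Overflow link above discusses.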
@dave-gray101 once I have some time I'll take a look at your recommendation; I've been pretty swamped lately.
This would be great, because at the moment I cannot use this project on anything but CUDA-powered hardware. CPU-only AMD 8700GE, or CPU & iGPU, won't work with the current docker images, even after rebuilding.
Hi @gymnae, if you are using docker, try using sha-e676809-hipblas-ffmpeg as the image name... I can't find which gfx#### code the 8700GE is at the moment. If I have some spare time this week I might try applying this patch to my local instance, but I've had issues with other changes in the past, so no guarantees until the patch can get merged to the repo. Cheers
Thank you, I will try that. The 8700GE has a 780M iGPU, which is gfx1103 and currently not natively supported by ROCm, but an override works, at least insofar as ROCm is happy.
EDIT: This does not improve the situation for me, but I managed to get the pure-CPU version running by passing the entire iGPU to the VM running LocalAI, which tells me that I had issues with the host before.
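The "override" mentioned above is usually done with ROCm's `HSA_OVERRIDE_GFX_VERSION` environment variable, which forces an unsupported ISA like gfx1103 onto the nearest officially supported RDNA3 target. A hypothetical compose fragment (the image tag, override value, and device passthrough are illustrative and not validated on this hardware):

```yaml
# Hypothetical sketch: force ROCm to treat gfx1103 (Radeon 780M) as a
# supported RDNA3 ISA. /dev/kfd and /dev/dri must reach the container.
services:
  localai:
    image: quay.io/go-skynet/local-ai:sha-e676809-hipblas-ffmpeg  # illustrative tag
    environment:
      - HSA_OVERRIDE_GFX_VERSION=11.0.0  # assumed nearest supported target
    devices:
      - /dev/kfd
      - /dev/dri
```

Note that an override only makes ROCm willing to run; it does not guarantee kernels compiled for the overridden target behave correctly on the actual silicon.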
Seems to break on
Signed-off-by: mudler <[email protected]>
force-pushed from 714ffdb to 962fb0b
I think I've got most of the fixes out; let's see if CI likes it now.
Signed-off-by: mudler <[email protected]>
force-pushed from 54e33f2 to 1303285
Should be available in master images once CI builds and publishes them - @bunder2015 @gymnae if you can test that everything now works as expected, that'd be appreciated!
Hi @mudler, I waited for quay to output the ed322bf hipblas images, but they didn't seem to get uploaded. I tried to build them manually with
The sliver of logs suggested that BUILD_TYPE was blank despite setting it in the docker-compose.yaml file... So I tried adding
Cheers
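One plausible cause of a blank BUILD_TYPE in a manual build (this is an assumption about the failure above, not a confirmed diagnosis): BUILD_TYPE is consumed at image build time, so it has to be passed as a build *arg*; an `environment:` entry in docker-compose.yaml only affects the running container, never the Dockerfile. A hypothetical fragment showing the difference:

```yaml
# Hypothetical sketch: build args vs. runtime environment.
services:
  localai:
    build:
      context: .
      args:
        BUILD_TYPE: hipblas   # reaches the Dockerfile's ARG at build time
    environment:
      - DEBUG=true            # runtime only; a BUILD_TYPE here is ignored by the build
```

Equivalently, `docker build --build-arg BUILD_TYPE=hipblas .` passes it on the command line.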
Hi @mudler, I also wanted to try the new images, but it appears the hipblas image build failed.
sha-ad5978b-hipblas-ffmpeg popped up on quay, but I'm still getting the "operator torchvision::nms does not exist" message... (still no idea why I ran into issues building the image manually) Cheers
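The "operator torchvision::nms does not exist" error typically means the installed torch and torchvision wheels were built against different backends or versions, which is exactly the mismatch this PR targets. A small illustrative check (helper names are hypothetical, not part of LocalAI) that compares the local version tags of the two wheels:

```python
# Hypothetical helpers: torch/torchvision wheels carry a "local version"
# tag after '+', such as "2.1.2+rocm5.6" or "0.16.2+cu121". When the two
# tags disagree, torchvision's C++ ops (e.g. torchvision::nms) fail to
# register against the loaded torch runtime.
def local_tag(version: str) -> str:
    """Return the build tag after '+', or '' for a plain CPU wheel."""
    _, _, tag = version.partition("+")
    return tag

def builds_match(torch_version: str, torchvision_version: str) -> bool:
    """True when both wheels target the same backend build."""
    return local_tag(torch_version) == local_tag(torchvision_version)

if __name__ == "__main__":
    # A CUDA torch paired with a ROCm torchvision is the mismatch
    # pattern behind "operator torchvision::nms does not exist".
    print(builds_match("2.1.2+cu121", "0.16.2+rocm5.6"))   # mismatched pair
    print(builds_match("2.1.2+rocm5.6", "0.16.2+rocm5.6"))  # matched pair
```

In a live container the same check would compare `torch.__version__` against `torchvision.__version__`.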
Ok cool - I could finally reproduce it here, and #3194 should fix it. I tested manually, but I need to double-check after CI builds the images, as I jumped into the container I had here and recreated the venv manually.
…0.0 by renovate (#25426)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.19.4-aio-cpu` -> `v2.20.0-aio-cpu` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.19.4-aio-gpu-nvidia-cuda-12` -> `v2.20.0-aio-gpu-nvidia-cuda-12` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.19.4-cublas-cuda11-ffmpeg-core` -> `v2.20.0-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.19.4-cublas-cuda11-core` -> `v2.20.0-cublas-cuda11-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.19.4-cublas-cuda12-core` -> `v2.20.0-cublas-cuda12-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.19.4-ffmpeg-core` -> `v2.20.0-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.19.4` -> `v2.20.0` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.20.0`](https://togithub.com/mudler/LocalAI/releases/tag/v2.20.0)

[Compare Source](https://togithub.com/mudler/LocalAI/compare/v2.19.4...v2.20.0)

![local-ai-release-2 20-shadow4](https://togithub.com/user-attachments/assets/cdcc0f8f-953b-4346-be50-b7542e15def6)

##### **TL;DR**

- 🌍 **Explorer & Community:** Explore global community pools at [explorer.localai.io](https://explorer.localai.io)
- 👀 **Demo instance available:** Test out LocalAI at [demo.localai.io](https://demo.localai.io)
- 🤗 **Integration:** Hugging Face Local apps now include LocalAI
- 🐛 **Bug Fixes:** Diffusers and hipblas issues resolved
- 🎨 **New Feature:** FLUX-1 image generation support
- 🏎️ **Strict Mode:** Stay compliant with OpenAI’s latest API changes
- 💪 **Multiple P2P Clusters:** Run multiple clusters within the same network
- 🧪 **Deprecation Notice:** `gpt4all.cpp` and `petals` backends deprecated

***

##### 🌍 **Explorer and Global Community Pools**

Now you can share your LocalAI instance with the global community, or explore available instances, by visiting [explorer.localai.io](https://explorer.localai.io). This decentralized network powers our demo instance, creating a truly collaborative AI experience.

<p align="center">
<img width="638" alt="Explorer Global Community Pools" src="https://github.com/user-attachments/assets/048fc0e7-58f7-4c8a-8076-874f7469e802">
</p>

##### **How It Works**

Using the Explorer, you can easily share or connect to clusters. For detailed instructions on creating new clusters or connecting to existing ones, check out our [documentation](https://localai.io/features/distribute/).

##### 👀 **Demo Instance Now Available**

Curious about what LocalAI can do? Dive right in with our live demo at [demo.localai.io](https://demo.localai.io)! Thanks to our generous sponsors, this instance is publicly available and configured via peer-to-peer (P2P) networks. If you'd like to connect, follow the [instructions here](https://explorer.localai.io).

##### 🤗 **Hugging Face Integration**

I am excited to announce that LocalAI is now integrated within Hugging Face’s local apps! This means you can select LocalAI directly within Hugging Face to build and deploy models with the power and flexibility of our platform. Experience seamless integration with a single click!

<p align="center">
<img width="303" alt="Hugging Face Integration Screenshot" src="https://github.com/user-attachments/assets/53804387-2e1d-47f4-91ab-bb2923bc5625">
<img width="638" alt="Hugging Face Integration in Action" src="https://github.com/user-attachments/assets/2a2a2498-c463-458e-96d0-33dc9cfadd60">
</p>

This integration was made possible through [this PR](https://togithub.com/huggingface/huggingface.js/pull/833).

##### 🎨 **FLUX-1 Image Generation Support**

**FLUX-1** lands in LocalAI! With this update, LocalAI can now generate stunning images using FLUX-1, even in federated mode. Whether you're experimenting with new designs or creating production-quality visuals, FLUX-1 has you covered. Try it out at [demo.localai.io](https://demo.localai.io) and see what LocalAI + FLUX-1 can do!

<p align="center">
<img width="638" alt="FLUX-1 Image Generation Example" src="https://github.com/user-attachments/assets/02b70603-21e7-4d7e-ad81-3246c8b70161">
</p>

##### 🐛 **Diffusers and hipblas Fixes**

Great news for AMD users! If you’ve encountered issues with the Diffusers backend or hipblas, those bugs have been resolved. We’ve transitioned to `uv` for managing Python dependencies, ensuring a smoother experience. For more details, check out [Issue #​1592](https://togithub.com/mudler/LocalAI/issues/1592).

##### 🏎️ **Strict Mode for API Compliance**

To stay up to date with OpenAI’s latest changes, LocalAI now also supports **Strict Mode** (https://openai.com/index/introducing-structured-outputs-in-the-api/). This new feature ensures compatibility with the most recent API updates, enforcing stricter JSON outputs using BNF grammar rules. To activate, simply set `strict: true` in your API calls, even if it’s disabled in your configuration.

##### **Key Notes:**

- Setting `strict: true` enables grammar enforcement, even if disabled in your config.
- If `format_type` is set to `json_schema`, BNF grammars will be automatically generated from the schema.

##### 🛑 **Disable Gallery**

Need to streamline your setup? You can now disable the gallery endpoint using `LOCALAI_DISABLE_GALLERY_ENDPOINT`. For more options, check out the full list of commands with `--help`.

##### 🌞 **P2P and Federation Enhancements**

Several enhancements have been made to improve your experience with P2P and federated clusters:

- **Load Balancing by Default:** This feature is now enabled by default (disable it with `LOCALAI_RANDOM_WORKER` if needed).
- **Target Specific Workers:** Directly target workers in federated mode using `LOCALAI_TARGET_WORKER`.

##### 💪 **Run Multiple P2P Clusters in the Same Network**

You can now run multiple clusters within the same network by specifying a network ID via CLI. This allows you to logically separate clusters while using the same shared token. Just set `LOCALAI_P2P_NETWORK_ID` to a UUID that matches across instances. Please note, while this offers segmentation, it’s not fully secure: anyone with the network token can view available services within the network.

##### 🧪 **Deprecation Notice: `gpt4all.cpp` and `petals` Backends**

As we continue to evolve, we are officially deprecating the `gpt4all.cpp` and `petals` backends. The newer `llama.cpp` offers a superior set of features and better performance, making it the preferred choice moving forward. From this release onward, `gpt4all` models in `ggml` format are no longer compatible. Additionally, the `petals` backend has been deprecated as well.
LocalAI’s new P2P capabilities now offer a comprehensive replacement for these features.

<!-- Release notes generated using configuration in .github/release.yml at master -->

##### What's Changed

##### Breaking Changes 🛠

- chore: drop gpt4all.cpp by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3106](https://togithub.com/mudler/LocalAI/pull/3106)
- chore: drop petals by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3316](https://togithub.com/mudler/LocalAI/pull/3316)

##### Bug fixes 🐛

- fix(ui): do not show duplicate entries if not installed by gallery by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3107](https://togithub.com/mudler/LocalAI/pull/3107)
- fix: be consistent in downloading files, check for scanner errors by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3108](https://togithub.com/mudler/LocalAI/pull/3108)
- fix: ensure correct version of torch is always installed based on BUI… by [@​cryptk](https://togithub.com/cryptk) in [https://github.com/mudler/LocalAI/pull/2890](https://togithub.com/mudler/LocalAI/pull/2890)
- fix(python): move accelerate and GPU-specific libs to build-type by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3194](https://togithub.com/mudler/LocalAI/pull/3194)
- fix(apple): disable BUILD_TYPE metal on fallback by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3199](https://togithub.com/mudler/LocalAI/pull/3199)
- fix(vall-e-x): pin hipblas deps by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3201](https://togithub.com/mudler/LocalAI/pull/3201)
- fix(diffusers): use nightly rocm for hipblas builds by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3202](https://togithub.com/mudler/LocalAI/pull/3202)
- fix(explorer): reset counter when network is active by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3213](https://togithub.com/mudler/LocalAI/pull/3213)
- fix(p2p): allocate tunnels only when needed by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3259](https://togithub.com/mudler/LocalAI/pull/3259)
- fix(gallery): be consistent and disable UI routes as well by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3262](https://togithub.com/mudler/LocalAI/pull/3262)
- fix(parler-tts): bump and require after build type deps by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3272](https://togithub.com/mudler/LocalAI/pull/3272)
- fix: add llvm to extra images by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3321](https://togithub.com/mudler/LocalAI/pull/3321)
- fix(p2p): re-use p2p host when running federated mode by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3341](https://togithub.com/mudler/LocalAI/pull/3341)
- fix(ci): pin to llvmlite 0.43 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3342](https://togithub.com/mudler/LocalAI/pull/3342)
- fix(p2p): avoid starting the node twice by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3349](https://togithub.com/mudler/LocalAI/pull/3349)
- fix(chat): re-generated uuid, created, and text on each request by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3359](https://togithub.com/mudler/LocalAI/pull/3359)

##### Exciting New Features 🎉

- feat(guesser): add gemma2 by [@​sozercan](https://togithub.com/sozercan) in [https://github.com/mudler/LocalAI/pull/3118](https://togithub.com/mudler/LocalAI/pull/3118)
- feat(venv): shared env by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3195](https://togithub.com/mudler/LocalAI/pull/3195)
- feat(openai): add `json_schema` format type and strict mode by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3193](https://togithub.com/mudler/LocalAI/pull/3193)
- feat(p2p): allow to run multiple clusters in the same p2p network by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3128](https://togithub.com/mudler/LocalAI/pull/3128)
- feat(p2p): add network explorer and community pools by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3125](https://togithub.com/mudler/LocalAI/pull/3125)
- feat(explorer): relax token deletion with error threshold by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3211](https://togithub.com/mudler/LocalAI/pull/3211)
- feat(diffusers): support flux models by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3129](https://togithub.com/mudler/LocalAI/pull/3129)
- feat(explorer): make possible to run sync in a separate process by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3224](https://togithub.com/mudler/LocalAI/pull/3224)
- feat(federated): allow to pickup a specific worker, improve loadbalancing by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3243](https://togithub.com/mudler/LocalAI/pull/3243)
- feat: Initial Version of vscode DevContainer by [@​dave-gray101](https://togithub.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/3217](https://togithub.com/mudler/LocalAI/pull/3217)
- feat(explorer): visual improvements by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3247](https://togithub.com/mudler/LocalAI/pull/3247)
- feat(gallery): lazy load images by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3246](https://togithub.com/mudler/LocalAI/pull/3246)
- chore(explorer): add join instructions by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3255](https://togithub.com/mudler/LocalAI/pull/3255)
- chore: allow to disable gallery endpoints, improve p2p connection handling by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3256](https://togithub.com/mudler/LocalAI/pull/3256)
- chore(ux): add animated header with anime.js in p2p sections by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3271](https://togithub.com/mudler/LocalAI/pull/3271)
- chore(p2p): make commands easier to copy-paste by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3273](https://togithub.com/mudler/LocalAI/pull/3273)
- chore(ux): allow to create and drag dots in the animation by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3287](https://togithub.com/mudler/LocalAI/pull/3287)
- feat(federation): do not allocate local services for load balancing by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3337](https://togithub.com/mudler/LocalAI/pull/3337)
- feat(p2p): allow to set intervals by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3353](https://togithub.com/mudler/LocalAI/pull/3353)

##### 🧠 Models

- models(gallery): add meta-llama-3.1-instruct-9.99b-brainstorm-10x-form-3 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3103](https://togithub.com/mudler/LocalAI/pull/3103)
- models(gallery): add mn-12b-celeste-v1.9 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3104](https://togithub.com/mudler/LocalAI/pull/3104)
- models(gallery): add shieldgemma by [@​mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3105](https://togithub.com/mudler/LocalAI/pull/3105)
- models(gallery): add llama-3.1-techne-rp-8b-v1 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3112](https://togithub.com/mudler/LocalAI/pull/3112)
- models(gallery): add llama-spark by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3116](https://togithub.com/mudler/LocalAI/pull/3116)
- models(gallery): add glitz by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3119](https://togithub.com/mudler/LocalAI/pull/3119)
- models(gallery): add gemmasutra-mini by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3120](https://togithub.com/mudler/LocalAI/pull/3120)
- models(gallery): add kumiho-v1-rp-uwu-8b by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3121](https://togithub.com/mudler/LocalAI/pull/3121)
- models(gallery): add humanish-roleplay-llama-3.1-8b-i1 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3126](https://togithub.com/mudler/LocalAI/pull/3126)
- chore(model-gallery): ⬆️ update checksum by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3167](https://togithub.com/mudler/LocalAI/pull/3167)
- models(gallery): add calme-2.2-qwen2-72b by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3185](https://togithub.com/mudler/LocalAI/pull/3185)
- models(gallery): add calme-2.3-legalkit-8b by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3200](https://togithub.com/mudler/LocalAI/pull/3200)
- chore(model-gallery): ⬆️ update checksum by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3210](https://togithub.com/mudler/LocalAI/pull/3210)
- models(gallery): add flux.1-dev and flux.1-schnell by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3215](https://togithub.com/mudler/LocalAI/pull/3215)
- models(gallery): add infinity-instruct-7m-gen-llama3\_1-70b by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3220](https://togithub.com/mudler/LocalAI/pull/3220)
- models(gallery): add cathallama-70b by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3221](https://togithub.com/mudler/LocalAI/pull/3221)
- models(gallery): add edgerunner-tactical-7b by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3249](https://togithub.com/mudler/LocalAI/pull/3249)
- models(gallery): add hermes-3 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3252](https://togithub.com/mudler/LocalAI/pull/3252)
- models(gallery): add SmolLM by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3265](https://togithub.com/mudler/LocalAI/pull/3265)
- models(gallery): add mahou-1.3-llama3.1-8b by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3266](https://togithub.com/mudler/LocalAI/pull/3266)
- models(gallery): add fireball-llama-3.11-8b-v1orpo by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3267](https://togithub.com/mudler/LocalAI/pull/3267)
- models(gallery): add rocinante-12b-v1.1 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3268](https://togithub.com/mudler/LocalAI/pull/3268)
- models(gallery): add pantheon-rp-1.6-12b-nemo by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3269](https://togithub.com/mudler/LocalAI/pull/3269)
- models(gallery): add llama-3.1-storm-8b-q4\_k_m by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3270](https://togithub.com/mudler/LocalAI/pull/3270)

##### 📖 Documentation and examples

- docs: ⬆️ update docs version mudler/LocalAI by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3109](https://togithub.com/mudler/LocalAI/pull/3109)
- fix(docs): Refer to the OpenAI documentation to update the openai-functions docu… by [@​jermeyhu](https://togithub.com/jermeyhu) in [https://github.com/mudler/LocalAI/pull/3317](https://togithub.com/mudler/LocalAI/pull/3317)
- chore(docs): update p2p env var documentation by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3350](https://togithub.com/mudler/LocalAI/pull/3350)

##### 👒 Dependencies

- chore: ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3110](https://togithub.com/mudler/LocalAI/pull/3110)
- chore: ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3115](https://togithub.com/mudler/LocalAI/pull/3115)
- chore: ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3117](https://togithub.com/mudler/LocalAI/pull/3117)
- chore: ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3123](https://togithub.com/mudler/LocalAI/pull/3123)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/autogptq by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3130](https://togithub.com/mudler/LocalAI/pull/3130)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/common/template by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3131](https://togithub.com/mudler/LocalAI/pull/3131)
- chore(deps): Bump langchain from 0.2.10 to 0.2.12 in /examples/functions by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3132](https://togithub.com/mudler/LocalAI/pull/3132)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/openvoice by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3137](https://togithub.com/mudler/LocalAI/pull/3137)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/coqui by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3138](https://togithub.com/mudler/LocalAI/pull/3138)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/transformers-musicgen by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3140](https://togithub.com/mudler/LocalAI/pull/3140)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/diffusers by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3141](https://togithub.com/mudler/LocalAI/pull/3141)
- chore(deps): Bump llama-index from 0.10.56 to 0.10.59 in /examples/chainlit by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3143](https://togithub.com/mudler/LocalAI/pull/3143)
- chore(deps): Bump docs/themes/hugo-theme-relearn from `7aec99b` to `8b14837` by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3142](https://togithub.com/mudler/LocalAI/pull/3142)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/exllama2 by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3146](https://togithub.com/mudler/LocalAI/pull/3146)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/bark by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3144](https://togithub.com/mudler/LocalAI/pull/3144)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/rerankers by
[@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3147](https://togithub.com/mudler/LocalAI/pull/3147)
- chore(deps): Bump langchain from 0.2.10 to 0.2.12 in /examples/langchain-chroma by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3148](https://togithub.com/mudler/LocalAI/pull/3148)
- chore(deps): Bump streamlit from 1.37.0 to 1.37.1 in /examples/streamlit-bot by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3151](https://togithub.com/mudler/LocalAI/pull/3151)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/vllm by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3152](https://togithub.com/mudler/LocalAI/pull/3152)
- chore(deps): Bump langchain from 0.2.11 to 0.2.12 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3155](https://togithub.com/mudler/LocalAI/pull/3155)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/transformers by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3161](https://togithub.com/mudler/LocalAI/pull/3161)
- chore(deps): Bump grpcio from 1.65.1 to 1.65.4 in /backend/python/vall-e-x by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3156](https://togithub.com/mudler/LocalAI/pull/3156)
- chore(deps): Bump sqlalchemy from 2.0.31 to 2.0.32 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3157](https://togithub.com/mudler/LocalAI/pull/3157)
- chore: ⬆️ Update ggerganov/whisper.cpp by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3164](https://togithub.com/mudler/LocalAI/pull/3164)
- chore(deps): Bump openai from 1.37.0 to 1.39.0 in /examples/functions by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3134](https://togithub.com/mudler/LocalAI/pull/3134)
- chore(deps): Bump openai from 1.37.0 to 1.39.0 in /examples/langchain-chroma by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3149](https://togithub.com/mudler/LocalAI/pull/3149)
- chore(deps): Bump openai from 1.37.1 to 1.39.0 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3158](https://togithub.com/mudler/LocalAI/pull/3158)
- chore: ⬆️ Update ggerganov/llama.cpp by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3166](https://togithub.com/mudler/LocalAI/pull/3166)
- chore(deps): Bump tqdm from 4.66.4 to 4.66.5 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3159](https://togithub.com/mudler/LocalAI/pull/3159)
- chore(deps): Bump llama-index from 0.10.56 to 0.10.61 in /examples/langchain-chroma by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3168](https://togithub.com/mudler/LocalAI/pull/3168)
- chore: ⬆️ Update ggerganov/llama.cpp to `1e6f6554aa11fa10160a5fda689e736c3c34169f` by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3189](https://togithub.com/mudler/LocalAI/pull/3189)
- chore: ⬆️ Update ggerganov/llama.cpp to `15fa07a5c564d3ed7e7eb64b73272cedb27e73ec` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3197](https://togithub.com/mudler/LocalAI/pull/3197)
- chore: ⬆️ Update ggerganov/whisper.cpp to `6eac06759b87b50132a01be019e9250a3ffc8969` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3203](https://togithub.com/mudler/LocalAI/pull/3203)
- chore: ⬆️ Update ggerganov/llama.cpp to `3a14e00366399040a139c67dd5951177a8cb5695` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3204](https://togithub.com/mudler/LocalAI/pull/3204)
- chore(deps): Bump aiohttp from 3.9.5 to 3.10.2 in /examples/langchain/langchainpy-localai-example in the pip group by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3207](https://togithub.com/mudler/LocalAI/pull/3207)
- chore: ⬆️ Update ggerganov/llama.cpp to `b72942fac998672a79a1ae3c03b340f7e629980b` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3208](https://togithub.com/mudler/LocalAI/pull/3208)
- chore: ⬆️ Update ggerganov/whisper.cpp to `81c999fe0a25c4ebbfef10ed8a1a96df9cfc10fd` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3209](https://togithub.com/mudler/LocalAI/pull/3209)
- chore: ⬆️ Update ggerganov/llama.cpp to `6e02327e8b7837358e0406bf90a4632e18e27846` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3212](https://togithub.com/mudler/LocalAI/pull/3212)
- chore(deps): update edgevpn by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3214](https://togithub.com/mudler/LocalAI/pull/3214)
- chore: ⬆️ Update ggerganov/llama.cpp to `4134999e01f31256b15342b41c4de9e2477c4a6c` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3218](https://togithub.com/mudler/LocalAI/pull/3218)
- chore(deps): Bump llama-index from 0.10.61 to 0.10.65 in /examples/langchain-chroma by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3225](https://togithub.com/mudler/LocalAI/pull/3225)
- chore(deps): Bump langchain-community from 0.2.9 to 0.2.11 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3230](https://togithub.com/mudler/LocalAI/pull/3230)
- chore(deps): Bump attrs from 23.2.0 to 24.2.0 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3232](https://togithub.com/mudler/LocalAI/pull/3232)
- chore(deps): Bump pyyaml from 6.0.1 to 6.0.2 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3231](https://togithub.com/mudler/LocalAI/pull/3231)
- chore(deps): Bump llama-index from 0.10.59 to 0.10.65 in /examples/chainlit by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3238](https://togithub.com/mudler/LocalAI/pull/3238)
- chore: ⬆️ Update ggerganov/llama.cpp to `fc4ca27b25464a11b3b86c9dbb5b6ed6065965c2` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3240](https://togithub.com/mudler/LocalAI/pull/3240)
- chore(deps): Bump openai from 1.39.0 to 1.40.5 in /examples/langchain-chroma by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3241](https://togithub.com/mudler/LocalAI/pull/3241)
- chore: ⬆️ Update ggerganov/whisper.cpp to `22fcd5fd110ba1ff592b4e23013d870831756259` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3239](https://togithub.com/mudler/LocalAI/pull/3239)
- chore(deps): Bump aiohttp from 3.10.2 to 3.10.3 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3234](https://togithub.com/mudler/LocalAI/pull/3234)
- chore(deps): Bump openai from 1.39.0 to 1.40.6 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3244](https://togithub.com/mudler/LocalAI/pull/3244)
- chore: ⬆️ Update ggerganov/llama.cpp to `06943a69f678fb32829ff06d9c18367b17d4b361` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3245](https://togithub.com/mudler/LocalAI/pull/3245)
- chore(deps): Bump openai from 1.39.0 to 1.40.4 in /examples/functions by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3235](https://togithub.com/mudler/LocalAI/pull/3235)
- chore: ⬆️ Update ggerganov/llama.cpp to `5fd89a70ead34d1a17015ddecad05aaa2490ca46` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3248](https://togithub.com/mudler/LocalAI/pull/3248)
- chore(deps): bump llama.cpp, rename `llama_add_bos_token` by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3253](https://togithub.com/mudler/LocalAI/pull/3253)
- chore: ⬆️ Update ggerganov/llama.cpp to `8b3befc0e2ed8fb18b903735831496b8b0c80949` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3257](https://togithub.com/mudler/LocalAI/pull/3257)
- chore: ⬆️ Update ggerganov/llama.cpp to `2fb9267887d24a431892ce4dccc75c7095b0d54d` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3260](https://togithub.com/mudler/LocalAI/pull/3260)
- chore: ⬆️ Update ggerganov/llama.cpp to `554b049068de24201d19dde2fa83e35389d4585d` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3263](https://togithub.com/mudler/LocalAI/pull/3263)
- chore(deps): Bump langchain from 0.2.12 to 0.2.14 in /examples/langchain-chroma by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3275](https://togithub.com/mudler/LocalAI/pull/3275)
- chore(deps): Bump grpcio from 1.65.4
to 1.65.5 in /backend/python/openvoice by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3282](https://togithub.com/mudler/LocalAI/pull/3282) - chore(deps): Bump docs/themes/hugo-theme-relearn from `8b14837` to `82a5e98` by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3274](https://togithub.com/mudler/LocalAI/pull/3274) - chore(deps): Bump grpcio from 1.65.4 to 1.65.5 in /backend/python/bark by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3285](https://togithub.com/mudler/LocalAI/pull/3285) - chore(deps): Bump grpcio from 1.65.1 to 1.65.5 in /backend/python/parler-tts by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3283](https://togithub.com/mudler/LocalAI/pull/3283) - chore(deps): Bump grpcio from 1.65.4 to 1.65.5 in /backend/python/common/template by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3291](https://togithub.com/mudler/LocalAI/pull/3291) - chore(deps): Bump grpcio from 1.65.1 to 1.65.5 in /backend/python/sentencetransformers by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3292](https://togithub.com/mudler/LocalAI/pull/3292) - chore(deps): Bump grpcio from 1.65.4 to 1.65.5 in /backend/python/vall-e-x by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3294](https://togithub.com/mudler/LocalAI/pull/3294) - chore(deps): Bump grpcio from 1.65.4 to 1.65.5 in /backend/python/transformers by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3296](https://togithub.com/mudler/LocalAI/pull/3296) - chore(deps): Bump grpcio from 1.65.0 to 1.65.5 in /backend/python/exllama by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3299](https://togithub.com/mudler/LocalAI/pull/3299) 
- chore(deps): Bump grpcio from 1.65.4 to 1.65.5 in /backend/python/vllm by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3301](https://togithub.com/mudler/LocalAI/pull/3301) - chore(deps): Bump langchain from 0.2.12 to 0.2.14 in /examples/functions by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3304](https://togithub.com/mudler/LocalAI/pull/3304) - chore(deps): Bump numpy from 2.0.1 to 2.1.0 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3310](https://togithub.com/mudler/LocalAI/pull/3310) - chore(deps): Bump grpcio from 1.65.1 to 1.65.5 in /backend/python/mamba by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3313](https://togithub.com/mudler/LocalAI/pull/3313) - chore(deps): Bump grpcio from 1.65.4 to 1.65.5 in /backend/python/coqui by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3306](https://togithub.com/mudler/LocalAI/pull/3306) - chore(deps): Bump grpcio from 1.65.4 to 1.65.5 in /backend/python/transformers-musicgen by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3308](https://togithub.com/mudler/LocalAI/pull/3308) - chore(deps): Bump langchain-community from 0.2.11 to 0.2.12 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3311](https://togithub.com/mudler/LocalAI/pull/3311) - chore: ⬆️ Update ggerganov/llama.cpp to `cfac111e2b3953cdb6b0126e67a2487687646971` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3315](https://togithub.com/mudler/LocalAI/pull/3315) - chore(deps): Bump openai from 1.40.4 to 1.41.1 in /examples/functions by [@​dependabot](https://togithub.com/dependabot) in 
[https://github.com/mudler/LocalAI/pull/3319](https://togithub.com/mudler/LocalAI/pull/3319) - chore(deps): Bump openai from 1.40.6 to 1.41.1 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3320](https://togithub.com/mudler/LocalAI/pull/3320) - chore(deps): Bump llama-index from 0.10.65 to 0.10.67.post1 in /examples/langchain-chroma by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3335](https://togithub.com/mudler/LocalAI/pull/3335) - chore(deps): update edgevpn by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3340](https://togithub.com/mudler/LocalAI/pull/3340) - chore(deps): Bump langchain from 0.2.12 to 0.2.14 in /examples/langchain/langchainpy-localai-example by [@​dependabot](https://togithub.com/dependabot) in [https://github.com/mudler/LocalAI/pull/3307](https://togithub.com/mudler/LocalAI/pull/3307) - chore(deps): update edgevpn by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3346](https://togithub.com/mudler/LocalAI/pull/3346) - chore: ⬆️ Update ggerganov/whisper.cpp to `d65786ea540a5aef21f67cacfa6f134097727780` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3344](https://togithub.com/mudler/LocalAI/pull/3344) - chore: ⬆️ Update ggerganov/llama.cpp to `2f3c1466ff46a2413b0e363a5005c46538186ee6` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3345](https://togithub.com/mudler/LocalAI/pull/3345) - chore: ⬆️ Update ggerganov/llama.cpp to `fc54ef0d1c138133a01933296d50a36a1ab64735` by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3356](https://togithub.com/mudler/LocalAI/pull/3356) - chore: ⬆️ Update ggerganov/whisper.cpp to `9e3c5345cd46ea718209db53464e426c3fe7a25e` by 
[@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3357](https://togithub.com/mudler/LocalAI/pull/3357)

##### Other Changes

- feat(swagger): update swagger by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3196](https://togithub.com/mudler/LocalAI/pull/3196)
- fix: devcontainer part 1 by [@​dave-gray101](https://togithub.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/3254](https://togithub.com/mudler/LocalAI/pull/3254)
- fix: devcontainer pt 2 by [@​dave-gray101](https://togithub.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/3258](https://togithub.com/mudler/LocalAI/pull/3258)
- feat: devcontainer part 3 by [@​dave-gray101](https://togithub.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/3318](https://togithub.com/mudler/LocalAI/pull/3318)
- feat: devcontainer part 4 by [@​dave-gray101](https://togithub.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/3339](https://togithub.com/mudler/LocalAI/pull/3339)
- feat(swagger): update swagger by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3343](https://togithub.com/mudler/LocalAI/pull/3343)
- chore(anime.js): drop unused by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3351](https://togithub.com/mudler/LocalAI/pull/3351)
- chore(p2p): single-node when sharing federated instance by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3354](https://togithub.com/mudler/LocalAI/pull/3354)

##### New Contributors

- [@​jermeyhu](https://togithub.com/jermeyhu) made their first contribution in [https://github.com/mudler/LocalAI/pull/3317](https://togithub.com/mudler/LocalAI/pull/3317)

**Full Changelog**: mudler/LocalAI@v2.19.4...v2.20.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule
defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these updates again.

---

This PR has been generated by [Renovate Bot](https://togithub.com/renovatebot/renovate).
…0.1 by renovate (#25443)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.20.0-aio-cpu` -> `v2.20.1-aio-cpu` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.20.0-aio-gpu-nvidia-cuda-11` -> `v2.20.1-aio-gpu-nvidia-cuda-11` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.20.0-aio-gpu-nvidia-cuda-12` -> `v2.20.1-aio-gpu-nvidia-cuda-12` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.20.0-cublas-cuda11-core` -> `v2.20.1-cublas-cuda11-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.20.0-ffmpeg-core` -> `v2.20.1-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.20.0` -> `v2.20.1` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.20.1`](https://togithub.com/mudler/LocalAI/releases/tag/v2.20.1)

[Compare Source](https://togithub.com/mudler/LocalAI/compare/v2.20.0...v2.20.1)

![local-ai-release-2 20-shadow4](https://togithub.com/user-attachments/assets/cdcc0f8f-953b-4346-be50-b7542e15def6)

##### **TL;DR**

- 🌍 **Explorer & Community:** Explore global community pools at [explorer.localai.io](https://explorer.localai.io)
- 👀 **Demo instance available:** Test out LocalAI at [demo.localai.io](https://demo.localai.io)
- 🤗 **Integration:** Hugging Face Local apps now include LocalAI
- 🐛 **Bug Fixes:** Diffusers and hipblas issues resolved
- 🎨 **New Feature:** FLUX-1 image generation support
- 🏎️ **Strict Mode:** Stay compliant with OpenAI’s latest API changes
- 💪 **Multiple P2P Clusters:** Run multiple clusters within the same network
- 🧪 **Deprecation Notice:** `gpt4all.cpp` and `petals` backends deprecated

***

##### 🌍 **Explorer and Global Community Pools**

Now you can share your LocalAI instance with the global community or explore available instances by visiting [explorer.localai.io](https://explorer.localai.io). This decentralized network powers our demo instance, creating a truly collaborative AI experience.

<p align="center"> <img width="638" alt="Explorer Global Community Pools" src="https://github.com/user-attachments/assets/048fc0e7-58f7-4c8a-8076-874f7469e802"> </p>

##### **How It Works**

Using the Explorer, you can easily share or connect to clusters. For detailed instructions on creating new clusters or connecting to existing ones, check out our [documentation](https://localai.io/features/distribute/).

##### 👀 **Demo Instance Now Available**

Curious about what LocalAI can do? Dive right in with our live demo at [demo.localai.io](https://demo.localai.io)! Thanks to our generous sponsors, this instance is publicly available and configured via peer-to-peer (P2P) networks. If you'd like to connect, follow the [instructions here](https://explorer.localai.io).

##### 🤗 **Hugging Face Integration**

I am excited to announce that LocalAI is now integrated within Hugging Face’s local apps! This means you can select LocalAI directly within Hugging Face to build and deploy models with the power and flexibility of our platform. Experience seamless integration with a single click!

<p align="center"> <img width="303" alt="Hugging Face Integration Screenshot" src="https://github.com/user-attachments/assets/53804387-2e1d-47f4-91ab-bb2923bc5625"> <img width="638" alt="Hugging Face Integration in Action" src="https://github.com/user-attachments/assets/2a2a2498-c463-458e-96d0-33dc9cfadd60"> </p>

This integration was made possible through [this PR](https://togithub.com/huggingface/huggingface.js/pull/833).

##### 🎨 **FLUX-1 Image Generation Support**

**FLUX-1** lands in LocalAI! With this update, LocalAI can now generate stunning images using FLUX-1, even in federated mode. Whether you're experimenting with new designs or creating production-quality visuals, FLUX-1 has you covered. Try it out at [demo.localai.io](https://demo.localai.io) and see what LocalAI + FLUX-1 can do!

<p align="center"> <img width="638" alt="FLUX-1 Image Generation Example" src="https://github.com/user-attachments/assets/02b70603-21e7-4d7e-ad81-3246c8b70161"> </p>

##### 🐛 **Diffusers and hipblas Fixes**

Great news for AMD users! If you’ve encountered issues with the Diffusers backend or hipblas, those bugs have been resolved. We’ve transitioned to `uv` for managing Python dependencies, ensuring a smoother experience. For more details, check out [Issue #​1592](https://togithub.com/mudler/LocalAI/issues/1592).

##### 🏎️ **Strict Mode for API Compliance**

To stay up to date with OpenAI’s latest changes, LocalAI now also supports **Strict Mode** (https://openai.com/index/introducing-structured-outputs-in-the-api/).
This new feature ensures compatibility with the most recent API updates, enforcing stricter JSON outputs using BNF grammar rules. To activate, simply set `strict: true` in your API calls, even if it’s disabled in your configuration.

##### **Key Notes:**

- Setting `strict: true` enables grammar enforcement, even if disabled in your config.
- If `format_type` is set to `json_schema`, BNF grammars will be automatically generated from the schema.

##### 🛑 **Disable Gallery**

Need to streamline your setup? You can now disable the gallery endpoint using `LOCALAI_DISABLE_GALLERY_ENDPOINT`. For more options, check out the full list of commands with `--help`.

##### 🌞 **P2P and Federation Enhancements**

Several enhancements have been made to improve your experience with P2P and federated clusters:

- **Load Balancing by Default:** This feature is now enabled by default (disable it with `LOCALAI_RANDOM_WORKER` if needed).
- **Target Specific Workers:** Directly target workers in federated mode using `LOCALAI_TARGET_WORKER`.

##### 💪 **Run Multiple P2P Clusters in the Same Network**

You can now run multiple clusters within the same network by specifying a network ID via CLI. This allows you to logically separate clusters while using the same shared token. Just set `LOCALAI_P2P_NETWORK_ID` to a UUID that matches across instances. Please note, while this offers segmentation, it is not fully secure: anyone with the network token can view available services within the network.

##### 🧪 **Deprecation Notice: `gpt4all.cpp` and `petals` Backends**

As we continue to evolve, we are officially deprecating the `gpt4all.cpp` and `petals` backends. The newer `llama.cpp` offers a superior set of features and better performance, making it the preferred choice moving forward. From this release onward, `gpt4all` models in `ggml` format are no longer compatible. Additionally, the `petals` backend has been deprecated as well.
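Stepping back to the Strict Mode feature above, here is a minimal sketch of what an opt-in request body could look like. This is an illustration only, assuming an OpenAI-compatible `/v1/chat/completions` endpoint; the model name, the `answer` schema, and the helper function are placeholders, while `strict` and the `json_schema` format type follow the notes above.

```python
import json

def build_strict_request(model: str, prompt: str) -> dict:
    """Build a chat-completion payload that opts into strict mode."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Enables BNF grammar enforcement even if disabled in the config.
        "strict": True,
        "response_format": {
            # A grammar is generated automatically from the schema below.
            "type": "json_schema",
            "json_schema": {
                "name": "answer",
                "schema": {
                    "type": "object",
                    "properties": {"answer": {"type": "string"}},
                    "required": ["answer"],
                },
            },
        },
    }

payload = build_strict_request("gpt-4", "Reply with a JSON object with an 'answer' field.")
body = json.dumps(payload)  # ready to POST to the chat completions endpoint
```

Any HTTP client can then send `body` to a running instance (e.g. `http://localhost:8080/v1/chat/completions`).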
LocalAI’s new P2P capabilities now offer a comprehensive replacement for these features.

<!-- Release notes generated using configuration in .github/release.yml at master -->

##### What's Changed

##### Breaking Changes 🛠

- chore: drop gpt4all.cpp by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3106](https://togithub.com/mudler/LocalAI/pull/3106)
- chore: drop petals by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3316](https://togithub.com/mudler/LocalAI/pull/3316)

##### Bug fixes 🐛

- fix(ui): do not show duplicate entries if not installed by gallery by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3107](https://togithub.com/mudler/LocalAI/pull/3107)
- fix: be consistent in downloading files, check for scanner errors by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3108](https://togithub.com/mudler/LocalAI/pull/3108)
- fix: ensure correct version of torch is always installed based on BUI… by [@​cryptk](https://togithub.com/cryptk) in [https://github.com/mudler/LocalAI/pull/2890](https://togithub.com/mudler/LocalAI/pull/2890)
- fix(python): move accelerate and GPU-specific libs to build-type by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3194](https://togithub.com/mudler/LocalAI/pull/3194)
- fix(apple): disable BUILD_TYPE metal on fallback by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3199](https://togithub.com/mudler/LocalAI/pull/3199)
- fix(vall-e-x): pin hipblas deps by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3201](https://togithub.com/mudler/LocalAI/pull/3201)
- fix(diffusers): use nightly rocm for hipblas builds by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3202](https://togithub.com/mudler/LocalAI/pull/3202)
- fix(explorer): reset counter when network is
active by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3213](https://togithub.com/mudler/LocalAI/pull/3213) - fix(p2p): allocate tunnels only when needed by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3259](https://togithub.com/mudler/LocalAI/pull/3259) - fix(gallery): be consistent and disable UI routes as well by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3262](https://togithub.com/mudler/LocalAI/pull/3262) - fix(parler-tts): bump and require after build type deps by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3272](https://togithub.com/mudler/LocalAI/pull/3272) - fix: add llvm to extra images by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3321](https://togithub.com/mudler/LocalAI/pull/3321) - fix(p2p): re-use p2p host when running federated mode by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3341](https://togithub.com/mudler/LocalAI/pull/3341) - fix(ci): pin to llvmlite 0.43 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3342](https://togithub.com/mudler/LocalAI/pull/3342) - fix(p2p): avoid starting the node twice by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3349](https://togithub.com/mudler/LocalAI/pull/3349) - fix(chat): re-generated uuid, created, and text on each request by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3359](https://togithub.com/mudler/LocalAI/pull/3359) ##### Exciting New Features 🎉 - feat(guesser): add gemma2 by [@​sozercan](https://togithub.com/sozercan) in [https://github.com/mudler/LocalAI/pull/3118](https://togithub.com/mudler/LocalAI/pull/3118) - feat(venv): shared env by [@​mudler](https://togithub.com/mudler) in 
[https://github.com/mudler/LocalAI/pull/3195](https://togithub.com/mudler/LocalAI/pull/3195) - feat(openai): add `json_schema` format type and strict mode by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3193](https://togithub.com/mudler/LocalAI/pull/3193) - feat(p2p): allow to run multiple clusters in the same p2p network by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3128](https://togithub.com/mudler/LocalAI/pull/3128) - feat(p2p): add network explorer and community pools by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3125](https://togithub.com/mudler/LocalAI/pull/3125) - feat(explorer): relax token deletion with error threshold by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3211](https://togithub.com/mudler/LocalAI/pull/3211) - feat(diffusers): support flux models by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3129](https://togithub.com/mudler/LocalAI/pull/3129) - feat(explorer): make possible to run sync in a separate process by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3224](https://togithub.com/mudler/LocalAI/pull/3224) - feat(federated): allow to pickup a specific worker, improve loadbalancing by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3243](https://togithub.com/mudler/LocalAI/pull/3243) - feat: Initial Version of vscode DevContainer by [@​dave-gray101](https://togithub.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/3217](https://togithub.com/mudler/LocalAI/pull/3217) - feat(explorer): visual improvements by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3247](https://togithub.com/mudler/LocalAI/pull/3247) - feat(gallery): lazy load images by [@​mudler](https://togithub.com/mudler) in 
[https://github.com/mudler/LocalAI/pull/3246](https://togithub.com/mudler/LocalAI/pull/3246) - chore(explorer): add join instructions by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3255](https://togithub.com/mudler/LocalAI/pull/3255) - chore: allow to disable gallery endpoints, improve p2p connection handling by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3256](https://togithub.com/mudler/LocalAI/pull/3256) - chore(ux): add animated header with anime.js in p2p sections by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3271](https://togithub.com/mudler/LocalAI/pull/3271) - chore(p2p): make commands easier to copy-paste by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3273](https://togithub.com/mudler/LocalAI/pull/3273) - chore(ux): allow to create and drag dots in the animation by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3287](https://togithub.com/mudler/LocalAI/pull/3287) - feat(federation): do not allocate local services for load balancing by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3337](https://togithub.com/mudler/LocalAI/pull/3337) - feat(p2p): allow to set intervals by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3353](https://togithub.com/mudler/LocalAI/pull/3353) ##### 🧠 Models - models(gallery): add meta-llama-3.1-instruct-9.99b-brainstorm-10x-form-3 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3103](https://togithub.com/mudler/LocalAI/pull/3103) - models(gallery): add mn-12b-celeste-v1.9 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3104](https://togithub.com/mudler/LocalAI/pull/3104) - models(gallery): add shieldgemma by [@​mudler](https://togithub.com/mudler) in 
[https://github.com/mudler/LocalAI/pull/3105](https://togithub.com/mudler/LocalAI/pull/3105) - models(gallery): add llama-3.1-techne-rp-8b-v1 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3112](https://togithub.com/mudler/LocalAI/pull/3112) - models(gallery): add llama-spark by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3116](https://togithub.com/mudler/LocalAI/pull/3116) - models(gallery): add glitz by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3119](https://togithub.com/mudler/LocalAI/pull/3119) - models(gallery): add gemmasutra-mini by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3120](https://togithub.com/mudler/LocalAI/pull/3120) - models(gallery): add kumiho-v1-rp-uwu-8b by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3121](https://togithub.com/mudler/LocalAI/pull/3121) - models(gallery): add humanish-roleplay-llama-3.1-8b-i1 by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3126](https://togithub.com/mudler/LocalAI/pull/3126) - chore(model-gallery): ⬆️ update checksum by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3167](https://togithub.com/mudler/LocalAI/pull/3167) - models(gallery): add calme-2.2-qwen2-72b by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3185](https://togithub.com/mudler/LocalAI/pull/3185) - models(gallery): add calme-2.3-legalkit-8b by [@​mudler](https://togithub.com/mudler) in [https://github.com/mudler/LocalAI/pull/3200](https://togithub.com/mudler/LocalAI/pull/3200) - chore(model-gallery): ⬆️ update checksum by [@​localai-bot](https://togithub.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3210](https://togithub.com/mudler/LocalAI/pull/3210) - models(gallery): add flux.1-dev and flux.1-schnell by 
</details>
Description
Previously, torch was installed from the shared requirements.txt file, which pulled in the CUDA 12 build of torch regardless of how LocalAI was built; that build is not appropriate for other BUILD_TYPE configurations. This PR moves all of the PyTorch-related packages into BUILD_TYPE-specific requirements files, so the torch package that gets installed always matches the BUILD_TYPE.
Hopefully this will help with issues such as #2737 and #1592.
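As an illustration of the split (the file names, index URLs, and the absence of version pins here are hypothetical, not taken from this PR's diff), the per-BUILD_TYPE files might look like:

```text
# requirements-cpu.txt (hypothetical example)
--extra-index-url https://download.pytorch.org/whl/cpu
torch

# requirements-hipblas.txt (hypothetical example)
--extra-index-url https://download.pytorch.org/whl/rocm6.0
torch
```

With torch removed from the base requirements.txt, a plain `pip install -r requirements.txt` can no longer silently drag in the CUDA build on CPU-only or ROCm systems.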
Notes for Reviewers
Given that PyTorch needs special handling based on BUILD_TYPE, this would be a good candidate for pulling the PyTorch install out of the requirements files entirely and into the libbackend.sh library, which could handle the torch, torchaudio, and torchvision installations in one place. Many Python backends carry requirements-TYPE.txt files purely to cope with the complexities of the torch installation, so it is very easy to get wrong.
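A minimal sketch of what such a libbackend.sh helper could look like. The function names, the BUILD_TYPE values matched, and the chosen wheel index URLs are assumptions for illustration, not code from this PR:

```shell
# Hypothetical helper: map a BUILD_TYPE to the PyTorch wheel index to use.
torch_index_url() {
    case "$1" in
        cublas*)  echo "https://download.pytorch.org/whl/cu121" ;;
        hipblas)  echo "https://download.pytorch.org/whl/rocm6.0" ;;
        *)        echo "https://download.pytorch.org/whl/cpu" ;;
    esac
}

# Hypothetical install entry point: every backend would call this once
# instead of listing torch in its own requirements-TYPE.txt files.
installTorch() {
    pip install --extra-index-url "$(torch_index_url "${BUILD_TYPE:-cpu}")" \
        torch torchaudio torchvision
}
```

Centralizing the mapping this way means a backend that needs torch declares that fact once, and the BUILD_TYPE-to-wheel logic lives in a single function rather than being duplicated across every backend's requirements files.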
Signed commits