feat: extract output with regexes from LLMs #3491
Conversation
This changeset adds `extract_regex` to the LLM config. It is a list of regexes that are matched against the LLM output and used to re-extract the final text from it. This is particularly useful for LLMs that output their final results inside tags. Signed-off-by: Ettore Di Giacinto <[email protected]>
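To illustrate the idea, here is a minimal, self-contained Go sketch (not the actual LocalAI implementation; the `extractWithRegexes` helper and the `<output>` tag are illustrative assumptions): each configured pattern is tried in order, and the first capture group of the first match replaces the raw LLM output.

```go
package main

import (
	"fmt"
	"regexp"
)

// extractWithRegexes is a sketch of what an `extract_regex` list could do:
// try each pattern in order and, on the first match with a capture group,
// return that group instead of the raw output. Invalid patterns are skipped.
func extractWithRegexes(patterns []string, output string) string {
	for _, p := range patterns {
		re, err := regexp.Compile(p)
		if err != nil {
			continue // skip invalid user-supplied patterns
		}
		if m := re.FindStringSubmatch(output); len(m) > 1 {
			return m[1]
		}
	}
	return output // no pattern matched: keep the output as-is
}

func main() {
	raw := "thinking...\n<output>The answer is 42.</output>"
	fmt.Println(extractWithRegexes([]string{`(?s)<output>(.*?)</output>`}, raw))
}
```

The `(?s)` flag lets `.` match newlines, which matters when models wrap multi-line answers in tags.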
core/backend/llm.go
Outdated
mu.Lock()
reg, ok := cutstrings[r]
if !ok {
	cutstrings[r] = regexp.MustCompile(r)
}
MustCompile strikes me as risky for user-submitted config files. Can we use Compile here and check for errors? I'd assume we'd rather return an error (or skip regex result selection) than panic?
huh, good point. I didn't give this much thought here, as I mimicked what we already do with cutstrings
here:
Line 184 in d51444d
cutstrings[c] = regexp.MustCompile(c)
On the one hand I think a fatal error should be required here, because it would mean the configuration is invalid and the user should probably be sent back to fix it; on the other hand, we could smooth this over with some configuration to make errors more relaxed.
In any case the error here should be improved: panicking won't give the user a clear explanation of what went wrong. I'm going with that first, and then we can review what the best strategy here would be.
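The suggested approach could look like the following sketch (the `compilePatterns` helper is hypothetical, not the actual LocalAI code): user-supplied patterns are validated with `regexp.Compile`, and a descriptive error naming the offending pattern is returned, rather than letting `regexp.MustCompile` panic at match time.

```go
package main

import (
	"fmt"
	"regexp"
)

// compilePatterns validates all user-supplied patterns up front,
// returning an error that names the offending pattern instead of
// panicking the way regexp.MustCompile would.
func compilePatterns(patterns []string) (map[string]*regexp.Regexp, error) {
	compiled := make(map[string]*regexp.Regexp, len(patterns))
	for _, p := range patterns {
		re, err := regexp.Compile(p)
		if err != nil {
			return nil, fmt.Errorf("invalid regex %q in config: %w", p, err)
		}
		compiled[p] = re
	}
	return compiled, nil
}

func main() {
	// The second pattern is deliberately broken to show the error path.
	if _, err := compilePatterns([]string{`<output>(.*?)</output>`, `([`}); err != nil {
		fmt.Println("configuration error:", err)
	}
}
```

Validating at config-load time surfaces the problem once, with context, instead of crashing mid-request.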
Signed-off-by: Ettore Di Giacinto <[email protected]>
Force-pushed from ccabb21 to 866e8b2
[https://github.com/mudler/LocalAI/pull/3489](https://redirect.github.com/mudler/LocalAI/pull/3489) - chore: ⬆️ Update ggerganov/whisper.cpp to `5caa19240d55bfd6ee316d50fbad32c6e9c39528` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3494](https://redirect.github.com/mudler/LocalAI/pull/3494) - fix: speedup and improve cachability of docker build of `builder-sd` by [@​dave-gray101](https://redirect.github.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/3430](https://redirect.github.com/mudler/LocalAI/pull/3430) - chore: ⬆️ Update ggerganov/whisper.cpp to `a551933542d956ae84634937acd2942eb40efaaf` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3534](https://redirect.github.com/mudler/LocalAI/pull/3534) - chore(deps): update llama.cpp by [@​mudler](https://redirect.github.com/mudler) in [https://github.com/mudler/LocalAI/pull/3497](https://redirect.github.com/mudler/LocalAI/pull/3497) - chore(gosec): fix CI by [@​mudler](https://redirect.github.com/mudler) in [https://github.com/mudler/LocalAI/pull/3537](https://redirect.github.com/mudler/LocalAI/pull/3537) - chore: ⬆️ Update ggerganov/llama.cpp to `feff4aa8461da7c432d144c11da4802e41fef3cf` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3542](https://redirect.github.com/mudler/LocalAI/pull/3542) - chore: ⬆️ Update ggerganov/whisper.cpp to `049b3a0e53c8a8e4c4576c06a1a4fccf0063a73f` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3548](https://redirect.github.com/mudler/LocalAI/pull/3548) - feat: auth v2 - supersedes [#​2894](https://redirect.github.com/mudler/LocalAI/issues/2894) by [@​dave-gray101](https://redirect.github.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/3476](https://redirect.github.com/mudler/LocalAI/pull/3476) - chore: ⬆️ Update ggerganov/llama.cpp to 
`23e0d70bacaaca1429d365a44aa9e7434f17823b` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3581](https://redirect.github.com/mudler/LocalAI/pull/3581) - Revert "chore(deps): Bump setuptools from 69.5.1 to 75.1.0 in /backend/python/transformers" by [@​mudler](https://redirect.github.com/mudler) in [https://github.com/mudler/LocalAI/pull/3586](https://redirect.github.com/mudler/LocalAI/pull/3586) - chore(refactor): drop duplicated shutdown logics by [@​mudler](https://redirect.github.com/mudler) in [https://github.com/mudler/LocalAI/pull/3589](https://redirect.github.com/mudler/LocalAI/pull/3589) - Revert "chore(deps): Bump securego/gosec from 2.21.0 to 2.21.2" by [@​mudler](https://redirect.github.com/mudler) in [https://github.com/mudler/LocalAI/pull/3590](https://redirect.github.com/mudler/LocalAI/pull/3590) - chore: ⬆️ Update ggerganov/llama.cpp to `8b836ae731bbb2c5640bc47df5b0a78ffcb129cb` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3591](https://redirect.github.com/mudler/LocalAI/pull/3591) - chore: ⬆️ Update ggerganov/whisper.cpp to `5b1ce40fa882e9cb8630b48032067a1ed2f1534f` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3592](https://redirect.github.com/mudler/LocalAI/pull/3592) - chore: ⬆️ Update ggerganov/llama.cpp to `64c6af3195c3cd4aa3328a1282d29cd2635c34c9` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3598](https://redirect.github.com/mudler/LocalAI/pull/3598) - feat(swagger): update swagger by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3604](https://redirect.github.com/mudler/LocalAI/pull/3604) - chore: ⬆️ Update ggerganov/llama.cpp to `6026da52d6942b253df835070619775d849d0258` by [@​localai-bot](https://redirect.github.com/localai-bot) in 
[https://github.com/mudler/LocalAI/pull/3605](https://redirect.github.com/mudler/LocalAI/pull/3605) - chore: ⬆️ Update ggerganov/whisper.cpp to `34972dbe221709323714fc8402f2e24041d48213` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3623](https://redirect.github.com/mudler/LocalAI/pull/3623) - chore: ⬆️ Update ggerganov/llama.cpp to `63351143b2ea5efe9f8b9c61f553af8a51f1deff` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3622](https://redirect.github.com/mudler/LocalAI/pull/3622) - chore: ⬆️ Update ggerganov/llama.cpp to `d09770cae71b416c032ec143dda530f7413c4038` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3626](https://redirect.github.com/mudler/LocalAI/pull/3626) - chore: ⬆️ Update ggerganov/llama.cpp to `c35e586ea57221844442c65a1172498c54971cb0` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3629](https://redirect.github.com/mudler/LocalAI/pull/3629) - chore: ⬆️ Update ggerganov/llama.cpp to `f0c7b5edf82aa200656fd88c11ae3a805d7130bf` by [@​localai-bot](https://redirect.github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/3653](https://redirect.github.com/mudler/LocalAI/pull/3653) - test: preliminary tests and merge fix for authv2 by [@​dave-gray101](https://redirect.github.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/3584](https://redirect.github.com/mudler/LocalAI/pull/3584) ##### New Contributors - [@​grant-wilson](https://redirect.github.com/grant-wilson) made their first contribution in [https://github.com/mudler/LocalAI/pull/3373](https://redirect.github.com/mudler/LocalAI/pull/3373) - [@​Nyralei](https://redirect.github.com/Nyralei) made their first contribution in [https://github.com/mudler/LocalAI/pull/3552](https://redirect.github.com/mudler/LocalAI/pull/3552) - 
[@​nyx4ris](https://redirect.github.com/nyx4ris) made their first contribution in [https://github.com/mudler/LocalAI/pull/3621](https://redirect.github.com/mudler/LocalAI/pull/3621) **Full Changelog**: mudler/LocalAI@v2.20.1...v2.21.0 </details> --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about these updates again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR has been generated by [Renovate Bot](https://redirect.github.com/renovatebot/renovate). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzOC45NC4zIiwidXBkYXRlZEluVmVyIjoiMzguOTQuMyIsInRhcmdldEJyYW5jaCI6Im1hc3RlciIsImxhYmVscyI6WyJhdXRvbWVyZ2UiLCJ1cGRhdGUvZG9ja2VyL2dlbmVyYWwvbm9uLW1ham9yIl19-->
This changeset adds `extract_regex` to the LLM config. It is a list of regexes that are matched against the LLM output and used to re-extract the final text from it. This is particularly useful for LLMs that wrap their final results in tags.
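Following the review discussion above, a minimal sketch of how such extraction could work is below. It is an illustration, not the PR's actual code: the function name `extractWithRegexes` and the cache variable are hypothetical, and it uses `regexp.Compile` with an error return instead of `MustCompile`, as suggested in the review, so an invalid user-supplied pattern surfaces as a config error rather than a panic.

```go
package main

import (
	"fmt"
	"regexp"
	"sync"
)

// regexCache caches compiled patterns so each configured regex is
// compiled at most once, mirroring the existing cutstrings cache.
var (
	mu         sync.Mutex
	regexCache = map[string]*regexp.Regexp{}
)

// extractWithRegexes applies each pattern in order and returns the first
// capture group (or the full match, if the pattern has no groups) of the
// first pattern that matches. Compile errors are returned to the caller
// instead of panicking.
func extractWithRegexes(patterns []string, output string) (string, error) {
	for _, p := range patterns {
		mu.Lock()
		re, ok := regexCache[p]
		if !ok {
			var err error
			re, err = regexp.Compile(p)
			if err != nil {
				mu.Unlock()
				return "", fmt.Errorf("invalid extract_regex %q: %w", p, err)
			}
			regexCache[p] = re
		}
		mu.Unlock()

		if m := re.FindStringSubmatch(output); m != nil {
			if len(m) > 1 {
				return m[1], nil // first capture group
			}
			return m[0], nil // full match
		}
	}
	// No pattern matched: fall back to the raw output.
	return output, nil
}

func main() {
	out, err := extractWithRegexes(
		[]string{`(?s)<output>(.*?)</output>`},
		"reasoning...<output>final answer</output>",
	)
	fmt.Println(out, err)
}
```

With a pattern like `(?s)<output>(.*?)</output>`, only the text inside the tags is returned to the client; if no configured pattern matches, the output passes through unchanged.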