chore(deps): update container image docker.io/localai/localai to v2.17.0 by renovate (#23480)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.16.0-aio-cpu` -> `v2.17.0-aio-cpu` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.16.0-cublas-cuda11-ffmpeg-core` -> `v2.17.0-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.16.0-cublas-cuda11-core` -> `v2.17.0-cublas-cuda11-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.16.0-cublas-cuda12-ffmpeg-core` -> `v2.17.0-cublas-cuda12-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.16.0-cublas-cuda12-core` -> `v2.17.0-cublas-cuda12-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.16.0-ffmpeg-core` -> `v2.17.0-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.16.0` -> `v2.17.0` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.17.0`](https://togithub.com/mudler/LocalAI/releases/tag/v2.17.0)

[Compare Source](https://togithub.com/mudler/LocalAI/compare/v2.16.0...v2.17.0)

![local-ai-release-2 17-shadow](https://togithub.com/mudler/LocalAI/assets/2420543/69025f5a-96bd-4ffa-9862-6e651c71345d)
Ahoj! This new release of LocalAI comes with tons of updates and enhancements behind the scenes!

##### 🌟 Highlights TL;DR

-   Automatic identification of GGUF models
-   New WebUI page to talk with an LLM!
-   https://models.localai.io is live! 🚀
-   Better arm64 and Apple Silicon support
-   More models in the gallery!
-   New quickstart installer script
-   Enhancements to mixed grammar support
-   Major improvements to the transformers backend
-   Linux single binary now supports ROCm, NVIDIA, and Intel

##### 🤖 Automatic model identification for llama.cpp-based models

Just drop your GGUF files into the model folders, and let LocalAI handle
the configurations. YAML files are now reserved for those who love to
tinker with advanced setups.
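
As an illustrative sketch (the file name and prompt are hypothetical, the server is assumed to be listening on the default port 8080, and the model is referenced here by its file name), dropping a GGUF file into the models directory is enough to query it through the OpenAI-compatible API:

    # hypothetical file name; drop any GGUF model into the models folder
    cp ./my-model.Q4_K_M.gguf models/
    # start the API (in another terminal, or in the background)
    local-ai run
    # query the model through the OpenAI-compatible endpoint, referencing the file name
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "my-model.Q4_K_M.gguf", "messages": [{"role": "user", "content": "Hello!"}]}'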

##### 🔊 Talk to your LLM!

Introduced a new page that allows direct interaction with the LLM using
audio transcription and TTS capabilities. This feature is so fun - now
you can talk with any LLM in just a couple of clicks.
![Screenshot from 2024-06-08 12-44-41](https://togithub.com/mudler/LocalAI/assets/2420543/c7926eb9-b91f-47dd-be32-68fdb10e6bc7)

##### 🍏 Apple single-binary

Experience enhanced support for the Apple ecosystem with a comprehensive
single binary that packs all necessary libraries, ensuring LocalAI runs
smoothly on macOS and arm64 architectures.

##### ARM64

Expanded our support for ARM64 with new Docker images and single binary
options, ensuring better compatibility and performance on ARM-based
systems.

Note: currently only arm64 core images are supported, for instance:
`localai/localai:master-ffmpeg-core`,
`localai/localai:latest-ffmpeg-core`,
`localai/localai:v2.17.0-ffmpeg-core`.
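
A minimal sketch of running one of these core images (the 8080 port mapping matches LocalAI's default port; the in-container models path is an assumption and may differ for your setup):

    docker pull localai/localai:v2.17.0-ffmpeg-core
    # the /build/models mount point is an assumption; adjust to your deployment
    docker run -p 8080:8080 -v $PWD/models:/build/models localai/localai:v2.17.0-ffmpeg-core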

##### 🐞 Bug Fixes and small enhancements

We’ve ironed out several issues, including image endpoint response types
and other minor problems, boosting the stability and reliability of our
applications. It is now also possible to enable CSRF protection when
starting LocalAI, thanks to
[@&#8203;dave-gray101](https://togithub.com/dave-gray101).

##### 🌐 Models and Galleries

Enhanced the model gallery with new additions like Mirai Nova and Mahou,
plus several updates to existing models, ensuring better performance and
accuracy.

You can now also browse the available models at https://models.localai.io,
without running LocalAI!

##### Installation and Setup

A new install.sh script is now available for quick and hassle-free
installations, streamlining the setup process for new users.

    curl https://localai.io/install.sh | sh

Installation can be configured with environment variables, for example:

    curl https://localai.io/install.sh | VAR=value sh

List of the Environment Variables:

-   DOCKER_INSTALL: Set to "true" to enable the installation of Docker images.
-   USE_AIO: Set to "true" to use the all-in-one LocalAI Docker image.
-   API_KEY: Specify an API key for accessing LocalAI, if required.
-   CORE_IMAGES: Set to "true" to download core LocalAI images.
-   PORT: Specifies the port on which LocalAI will run (default is 8080).
-   THREADS: Number of processor threads the application should use. Defaults to the number of logical cores minus one.
-   VERSION: Specifies the version of LocalAI to install. Defaults to the latest available version.
-   MODELS_PATH: Directory path where LocalAI models are stored (default is /usr/share/local-ai/models).
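
For example, a sketch combining several of the variables above (the values are illustrative):

    curl https://localai.io/install.sh | DOCKER_INSTALL=true USE_AIO=true PORT=9090 THREADS=4 sh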

We are looking into improving the installer, and as this is a first
iteration any feedback is welcome! Open up an
[issue](https://togithub.com/mudler/LocalAI/issues/new/choose) if
something doesn't work for you!

##### Enhancements to mixed grammar support

Mixed grammar support continues receiving improvements behind the
scenes.

##### 🐍  Transformers backend enhancements

-   Temperature = 0 is now correctly handled as greedy search
-   Custom words are handled as stop words
-   Implemented KV cache
-   Phi 3 no longer requires the `trust_remote_code: true` flag

Shout-out to [@&#8203;fakezeta](https://togithub.com/fakezeta) for these
enhancements!
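
As an illustrative sketch of how these land in a model configuration file (the model name, file reference, and stop word are hypothetical; field names follow LocalAI's YAML model-config conventions, so verify them against your version):

    name: my-phi-3                                 # hypothetical model name
    backend: transformers
    parameters:
      model: microsoft/Phi-3-mini-4k-instruct      # hypothetical model reference
      temperature: 0                               # temperature 0 is now treated as greedy search
    stopwords:
      - "<|end|>"                                  # custom words are honored as stop words
    # trust_remote_code: true is no longer required for Phi 3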

##### Install models with the CLI

Now the CLI can install models directly from the gallery. For instance:

    local-ai run <model_name_in_gallery>

This command ensures the model is installed in the model folder at
startup.
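
A sketch under the assumption that the gallery exposes a model under this identifier (check https://models.localai.io for the actual names):

    # install a gallery model without starting the API
    local-ai models install gemma-2b
    # or install it (if missing) and start serving it right away
    local-ai run gemma-2b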

##### 🐧 Linux single binary now supports ROCm, NVIDIA, and Intel

Single binaries for Linux now contain Intel, AMD GPU, and NVIDIA
support. Note that you need to install the dependencies separately in
the system to leverage these features. In upcoming releases, this
requirement will be handled by the installer script.
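
A rough sketch of trying the single binary (the release asset name below is an assumption, so check the v2.17.0 release page for the exact file name; GPU drivers and runtimes still need to be installed on the host):

    # asset name is an assumption; see the GitHub release assets for the real one
    curl -L -o local-ai https://github.com/mudler/LocalAI/releases/download/v2.17.0/local-ai-Linux-x86_64
    chmod +x local-ai
    ./local-ai run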

##### 📣 Let's Make Some Noise!

A gigantic THANK YOU to everyone who’s contributed—your feedback, bug
squashing, and feature suggestions are what make LocalAI shine. To all
our heroes out there supporting other users and sharing their expertise,
you’re the real MVPs!

Remember, LocalAI thrives on community support—not big corporate bucks.
If you love what we're building, show some love! A shoutout on social
(@&#8203;LocalAI_OSS and @&#8203;mudler_it on twitter/X), joining our
sponsors, or simply starring us on GitHub makes all the difference.

Also, if you haven't yet joined our Discord, come on over! Here's the
link: https://discord.gg/uJAeKSAGDy

Thanks a ton, and... enjoy this release!

##### What's Changed

##### Bug fixes 🐛

- fix: gpu fetch device info by
[@&#8203;sozercan](https://togithub.com/sozercan) in
[https://github.com/mudler/LocalAI/pull/2403](https://togithub.com/mudler/LocalAI/pull/2403)
- fix(watcher): do not emit fatal errors by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2410](https://togithub.com/mudler/LocalAI/pull/2410)
- fix: install pytorch from proper index for hipblas builds by
[@&#8203;cryptk](https://togithub.com/cryptk) in
[https://github.com/mudler/LocalAI/pull/2413](https://togithub.com/mudler/LocalAI/pull/2413)
- fix: pin version of setuptools for intel builds to work around
[#&#8203;2406](https://togithub.com/mudler/LocalAI/issues/2406) by
[@&#8203;cryptk](https://togithub.com/cryptk) in
[https://github.com/mudler/LocalAI/pull/2414](https://togithub.com/mudler/LocalAI/pull/2414)
- bugfix: CUDA acceleration not working by
[@&#8203;fakezeta](https://togithub.com/fakezeta) in
[https://github.com/mudler/LocalAI/pull/2475](https://togithub.com/mudler/LocalAI/pull/2475)
- fix: `pkg/downloader` should respect basePath for `file://` urls by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2481](https://togithub.com/mudler/LocalAI/pull/2481)
- fix: chat webui response parsing by
[@&#8203;sozercan](https://togithub.com/sozercan) in
[https://github.com/mudler/LocalAI/pull/2515](https://togithub.com/mudler/LocalAI/pull/2515)
- fix(stream): do not break channel consumption by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2517](https://togithub.com/mudler/LocalAI/pull/2517)
- fix(Makefile): enable STATIC on dist by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2569](https://togithub.com/mudler/LocalAI/pull/2569)

##### Exciting New Features 🎉

- feat(images): do not install python deps in the core image by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2425](https://togithub.com/mudler/LocalAI/pull/2425)
- feat(hipblas): extend default hipblas GPU_TARGETS by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2426](https://togithub.com/mudler/LocalAI/pull/2426)
- feat(build): add arm64 core containers by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2421](https://togithub.com/mudler/LocalAI/pull/2421)
- feat(functions): allow parallel calls with mixed/no grammars by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2432](https://togithub.com/mudler/LocalAI/pull/2432)
- feat(image): support `response_type` in the OpenAI API request by
[@&#8203;prajwalnayak7](https://togithub.com/prajwalnayak7) in
[https://github.com/mudler/LocalAI/pull/2347](https://togithub.com/mudler/LocalAI/pull/2347)
- feat(swagger): update swagger by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2436](https://togithub.com/mudler/LocalAI/pull/2436)
- feat(functions): better free string matching, allow to expect strings
after JSON by [@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2445](https://togithub.com/mudler/LocalAI/pull/2445)
- build(Makefile): add back single target to build native llama-cpp by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2448](https://togithub.com/mudler/LocalAI/pull/2448)
- feat(functions): allow `response_regex` to be a list by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2447](https://togithub.com/mudler/LocalAI/pull/2447)
- TTS API improvements by [@&#8203;blob42](https://togithub.com/blob42)
in
[https://github.com/mudler/LocalAI/pull/2308](https://togithub.com/mudler/LocalAI/pull/2308)
- feat(transformers): various enhancements to the transformers backend
by [@&#8203;fakezeta](https://togithub.com/fakezeta) in
[https://github.com/mudler/LocalAI/pull/2468](https://togithub.com/mudler/LocalAI/pull/2468)
- feat(webui): enhance card visibility by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2473](https://togithub.com/mudler/LocalAI/pull/2473)
- feat(default): use number of physical cores as default by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2483](https://togithub.com/mudler/LocalAI/pull/2483)
- feat: fiber CSRF by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2482](https://togithub.com/mudler/LocalAI/pull/2482)
- feat(amdgpu): try to build in single binary by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2485](https://togithub.com/mudler/LocalAI/pull/2485)
- feat:`OpaqueErrors` to hide error information by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2486](https://togithub.com/mudler/LocalAI/pull/2486)
- build(intel): bundle intel variants in single-binary by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2494](https://togithub.com/mudler/LocalAI/pull/2494)
- feat(install): add install.sh for quick installs by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2489](https://togithub.com/mudler/LocalAI/pull/2489)
- feat(llama.cpp): guess model defaults from file by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2522](https://togithub.com/mudler/LocalAI/pull/2522)
- feat(ui): add page to talk with voice, transcription, and tts by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2520](https://togithub.com/mudler/LocalAI/pull/2520)
- feat(arm64): enable single-binary builds by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2490](https://togithub.com/mudler/LocalAI/pull/2490)
- feat(util): add util command to print GGUF informations by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2528](https://togithub.com/mudler/LocalAI/pull/2528)
- feat(defaults): add defaults for Command-R models by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2529](https://togithub.com/mudler/LocalAI/pull/2529)
- feat(detection): detect by template in gguf file, add qwen2, phi,
mistral and chatml by [@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2536](https://togithub.com/mudler/LocalAI/pull/2536)
- feat(gallery): show available models in website, allow `local-ai
models install` to install from galleries by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2555](https://togithub.com/mudler/LocalAI/pull/2555)
- feat(gallery): uniform download from CLI by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2559](https://togithub.com/mudler/LocalAI/pull/2559)
- feat(guesser): identify gemma models by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2561](https://togithub.com/mudler/LocalAI/pull/2561)
- feat(binary): support extracted bundled libs on darwin by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2563](https://togithub.com/mudler/LocalAI/pull/2563)
- feat(darwin): embed grpc libs by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2567](https://togithub.com/mudler/LocalAI/pull/2567)
- feat(build): bundle libs for arm64 and x86 linux binaries by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2572](https://togithub.com/mudler/LocalAI/pull/2572)
- feat(libpath): refactor and expose functions for external library
paths by [@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2578](https://togithub.com/mudler/LocalAI/pull/2578)

##### 🧠 Models

- models(gallery): add Mirai Nova by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2405](https://togithub.com/mudler/LocalAI/pull/2405)
- models(gallery): add Mahou by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2411](https://togithub.com/mudler/LocalAI/pull/2411)
- models(gallery): add minicpm by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2412](https://togithub.com/mudler/LocalAI/pull/2412)
- models(gallery): add poppy porpoise 0.85 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2415](https://togithub.com/mudler/LocalAI/pull/2415)
- models(gallery): add alpha centauri by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2416](https://togithub.com/mudler/LocalAI/pull/2416)
- models(gallery): add cream-phi-13b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2417](https://togithub.com/mudler/LocalAI/pull/2417)
- models(gallery): add stheno-mahou by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2418](https://togithub.com/mudler/LocalAI/pull/2418)
- models(gallery): add iterative-dpo, fix minicpm by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2422](https://togithub.com/mudler/LocalAI/pull/2422)
- models(gallery): add una-thepitbull by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2435](https://togithub.com/mudler/LocalAI/pull/2435)
- models(gallery): add halu by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2434](https://togithub.com/mudler/LocalAI/pull/2434)
- models(gallery): add neuraldaredevil by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2439](https://togithub.com/mudler/LocalAI/pull/2439)
- models(gallery): add Codestral by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2442](https://togithub.com/mudler/LocalAI/pull/2442)
- models(gallery): add mopeymule by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2449](https://togithub.com/mudler/LocalAI/pull/2449)
- models(gallery): ⬆️ update checksum by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2451](https://togithub.com/mudler/LocalAI/pull/2451)
- models(gallery): add anjir by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2454](https://togithub.com/mudler/LocalAI/pull/2454)
- models(gallery): add llama3-11b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2455](https://togithub.com/mudler/LocalAI/pull/2455)
- models(gallery): add ultron by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2456](https://togithub.com/mudler/LocalAI/pull/2456)
- models(gallery): add poppy porpoise 1.0 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2459](https://togithub.com/mudler/LocalAI/pull/2459)
- models(gallery): add Neural SOVLish Devil by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2460](https://togithub.com/mudler/LocalAI/pull/2460)
- models(gallery): add all whisper variants by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2462](https://togithub.com/mudler/LocalAI/pull/2462)
- models(gallery): ⬆️ update checksum by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2463](https://togithub.com/mudler/LocalAI/pull/2463)
- models(gallery): add gemma-2b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2466](https://togithub.com/mudler/LocalAI/pull/2466)
- models(gallery): add fimbulvetr iqmatrix version by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2470](https://togithub.com/mudler/LocalAI/pull/2470)
- models(gallery): add new poppy porpoise versions by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2471](https://togithub.com/mudler/LocalAI/pull/2471)
- models(gallery): add dolphin-2.9.2-Phi-3-Medium by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2492](https://togithub.com/mudler/LocalAI/pull/2492)
- models(gallery): add dolphin-2.9.2-phi-3-Medium-abliterated by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2495](https://togithub.com/mudler/LocalAI/pull/2495)
- models(gallery): add nyun by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2496](https://togithub.com/mudler/LocalAI/pull/2496)
- models(gallery): add phi-3-4x4b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2497](https://togithub.com/mudler/LocalAI/pull/2497)
- models(gallery): add llama-3-instruct-8b-SimPO-ExPO by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2498](https://togithub.com/mudler/LocalAI/pull/2498)
- models(gallery): add Llama-3-Yggdrasil-2.0-8B by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2499](https://togithub.com/mudler/LocalAI/pull/2499)
- models(gallery): add l3-8b-stheno-v3.2-iq-imatrix by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2500](https://togithub.com/mudler/LocalAI/pull/2500)
- models(gallery): add llama3-8B-aifeifei-1.0-iq-imatrix by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2509](https://togithub.com/mudler/LocalAI/pull/2509)
- models(gallery): add rawr_llama3\_8b-iq-imatrix by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2510](https://togithub.com/mudler/LocalAI/pull/2510)
- models(gallery): add llama3-8b-feifei-1.0-iq-imatrix by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2511](https://togithub.com/mudler/LocalAI/pull/2511)
- models(gallery): ⬆️ update checksum by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2519](https://togithub.com/mudler/LocalAI/pull/2519)
- models(gallery): add llama3-8B-aifeifei-1.2-iq-imatrix by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2544](https://togithub.com/mudler/LocalAI/pull/2544)
- models(gallery): add hathor-l3-8b-v.01-iq-imatrix by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2545](https://togithub.com/mudler/LocalAI/pull/2545)
- models(gallery): add l3-aethora-15b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2546](https://togithub.com/mudler/LocalAI/pull/2546)
- models(gallery): add llama-salad-8x8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2547](https://togithub.com/mudler/LocalAI/pull/2547)
- models(gallery): add average_normie_v3.69\_8b-iq-imatrix by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2548](https://togithub.com/mudler/LocalAI/pull/2548)
- models(gallery): add duloxetine by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2549](https://togithub.com/mudler/LocalAI/pull/2549)
- models(gallery): add badger-lambda-llama-3-8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2550](https://togithub.com/mudler/LocalAI/pull/2550)
- models(gallery): add firefly-gemma-7b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2576](https://togithub.com/mudler/LocalAI/pull/2576)
- models(gallery): add dolphin-qwen by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2580](https://togithub.com/mudler/LocalAI/pull/2580)
- models(gallery): add tess-v2.5-phi-3-medium-128k-14b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2581](https://togithub.com/mudler/LocalAI/pull/2581)
- models(gallery): add hathor_stable-v0.2-l3-8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2582](https://togithub.com/mudler/LocalAI/pull/2582)
- models(gallery): add samantha-qwen2 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2586](https://togithub.com/mudler/LocalAI/pull/2586)
- models(gallery): add gemma-1.1-7b-it by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2588](https://togithub.com/mudler/LocalAI/pull/2588)

##### 📖 Documentation and examples

- Update quickstart.md by [@&#8203;mudler](https://togithub.com/mudler)
in
[https://github.com/mudler/LocalAI/pull/2404](https://togithub.com/mudler/LocalAI/pull/2404)
- docs: fix p2p commands by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2472](https://togithub.com/mudler/LocalAI/pull/2472)
- README: update sponsors list by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2476](https://togithub.com/mudler/LocalAI/pull/2476)
- Add integrations by [@&#8203;reid41](https://togithub.com/reid41) in
[https://github.com/mudler/LocalAI/pull/2535](https://togithub.com/mudler/LocalAI/pull/2535)
- docs(gallery): lazy-load images by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2557](https://togithub.com/mudler/LocalAI/pull/2557)
- Fix standard image latest Docker tags by
[@&#8203;nwithan8](https://togithub.com/nwithan8) in
[https://github.com/mudler/LocalAI/pull/2574](https://togithub.com/mudler/LocalAI/pull/2574)

##### 👒 Dependencies

- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2399](https://togithub.com/mudler/LocalAI/pull/2399)
- ⬆️ Update docs version mudler/LocalAI by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2398](https://togithub.com/mudler/LocalAI/pull/2398)
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2408](https://togithub.com/mudler/LocalAI/pull/2408)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2409](https://togithub.com/mudler/LocalAI/pull/2409)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2419](https://togithub.com/mudler/LocalAI/pull/2419)
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2427](https://togithub.com/mudler/LocalAI/pull/2427)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2428](https://togithub.com/mudler/LocalAI/pull/2428)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2433](https://togithub.com/mudler/LocalAI/pull/2433)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2437](https://togithub.com/mudler/LocalAI/pull/2437)
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2438](https://togithub.com/mudler/LocalAI/pull/2438)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2444](https://togithub.com/mudler/LocalAI/pull/2444)
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2443](https://togithub.com/mudler/LocalAI/pull/2443)
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2452](https://togithub.com/mudler/LocalAI/pull/2452)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2453](https://togithub.com/mudler/LocalAI/pull/2453)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2465](https://togithub.com/mudler/LocalAI/pull/2465)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2467](https://togithub.com/mudler/LocalAI/pull/2467)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2477](https://togithub.com/mudler/LocalAI/pull/2477)
- toil: bump grpc version by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2480](https://togithub.com/mudler/LocalAI/pull/2480)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2487](https://togithub.com/mudler/LocalAI/pull/2487)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2493](https://togithub.com/mudler/LocalAI/pull/2493)
- deps(whisper): update, add libcufft-dev by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2501](https://togithub.com/mudler/LocalAI/pull/2501)
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2507](https://togithub.com/mudler/LocalAI/pull/2507)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2508](https://togithub.com/mudler/LocalAI/pull/2508)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2518](https://togithub.com/mudler/LocalAI/pull/2518)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2524](https://togithub.com/mudler/LocalAI/pull/2524)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2531](https://togithub.com/mudler/LocalAI/pull/2531)
- chore(deps): Update Dockerfile by
[@&#8203;reneleonhardt](https://togithub.com/reneleonhardt) in
[https://github.com/mudler/LocalAI/pull/2532](https://togithub.com/mudler/LocalAI/pull/2532)
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2539](https://togithub.com/mudler/LocalAI/pull/2539)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2540](https://togithub.com/mudler/LocalAI/pull/2540)
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2552](https://togithub.com/mudler/LocalAI/pull/2552)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2551](https://togithub.com/mudler/LocalAI/pull/2551)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2554](https://togithub.com/mudler/LocalAI/pull/2554)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2564](https://togithub.com/mudler/LocalAI/pull/2564)
- ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2565](https://togithub.com/mudler/LocalAI/pull/2565)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2570](https://togithub.com/mudler/LocalAI/pull/2570)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2575](https://togithub.com/mudler/LocalAI/pull/2575)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2584](https://togithub.com/mudler/LocalAI/pull/2584)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2587](https://togithub.com/mudler/LocalAI/pull/2587)

##### Other Changes

- ci: fix sd release by
[@&#8203;sozercan](https://togithub.com/sozercan) in
[https://github.com/mudler/LocalAI/pull/2400](https://togithub.com/mudler/LocalAI/pull/2400)
- ci(grpc-cache): also arm64 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2423](https://togithub.com/mudler/LocalAI/pull/2423)
- ci: push test images when building PRs by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2424](https://togithub.com/mudler/LocalAI/pull/2424)
- ci: pin build-time protoc by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2461](https://togithub.com/mudler/LocalAI/pull/2461)
- feat(swagger): update swagger by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2464](https://togithub.com/mudler/LocalAI/pull/2464)
- ci: run release build on self-hosted runners by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2505](https://togithub.com/mudler/LocalAI/pull/2505)
- experiment: `-j4` for `build-linux:` by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2514](https://togithub.com/mudler/LocalAI/pull/2514)
- test: e2e /reranker endpoint by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2211](https://togithub.com/mudler/LocalAI/pull/2211)
- ci: pack less libs inside the binary by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2579](https://togithub.com/mudler/LocalAI/pull/2579)

##### New Contributors

- [@&#8203;prajwalnayak7](https://togithub.com/prajwalnayak7) made their
first contribution in
[https://github.com/mudler/LocalAI/pull/2347](https://togithub.com/mudler/LocalAI/pull/2347)
- [@&#8203;reneleonhardt](https://togithub.com/reneleonhardt) made their
first contribution in
[https://github.com/mudler/LocalAI/pull/2532](https://togithub.com/mudler/LocalAI/pull/2532)
- [@&#8203;reid41](https://togithub.com/reid41) made their first
contribution in
[https://github.com/mudler/LocalAI/pull/2535](https://togithub.com/mudler/LocalAI/pull/2535)
- [@&#8203;nwithan8](https://togithub.com/nwithan8) made their first
contribution in
[https://github.com/mudler/LocalAI/pull/2574](https://togithub.com/mudler/LocalAI/pull/2574)

**Full Changelog**: mudler/LocalAI@v2.16.0...v2.17.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzNy40MTAuMiIsInVwZGF0ZWRJblZlciI6IjM3LjQxMC4yIiwidGFyZ2V0QnJhbmNoIjoibWFzdGVyIiwibGFiZWxzIjpbImF1dG9tZXJnZSIsInVwZGF0ZS9kb2NrZXIvZ2VuZXJhbC9ub24tbWFqb3IiXX0=-->
truecharts-admin authored Jun 18, 2024
1 parent 150de84 commit f4098ac
Showing 2 changed files with 9 additions and 9 deletions.
charts/stable/local-ai/Chart.yaml (2 additions, 2 deletions):

    @@ -6,7 +6,7 @@ annotations:
       truecharts.org/min_helm_version: "3.11"
       truecharts.org/train: stable
     apiVersion: v2
    -appVersion: 2.16.0
    +appVersion: 2.17.0
     dependencies:
       - name: common
         version: 24.1.0
    @@ -33,4 +33,4 @@ sources:
       - https://github.com/truecharts/charts/tree/master/charts/stable/local-ai
       - https://hub.docker.com/r/localai/localai
     type: application
    -version: 11.1.0
    +version: 11.8.0
charts/stable/local-ai/values.yaml (7 additions, 7 deletions):

    @@ -1,27 +1,27 @@
     image:
       repository: docker.io/localai/localai
       pullPolicy: IfNotPresent
    -  tag: v2.16.0@sha256:3473f9694c3899d9e8d3c3c887d52f4e7d46cf643a03d7903f108d097a70611f
    +  tag: v2.17.0@sha256:1c6816d924fb9ead896972668c54fa4745fc62b0b6a9bd88fb9da99cb4790aca
     ffmpegImage:
       repository: docker.io/localai/localai
       pullPolicy: IfNotPresent
    -  tag: v2.16.0-ffmpeg-core@sha256:0a3c62f2a28d7a3ca233ad6c77af750540c62ab8a0bb3d04f15943845a4c2b50
    +  tag: v2.17.0-ffmpeg-core@sha256:178d1f2cb53fba46c025584760616872fe92c3b338bb18e5099f1db859652a5c
     cublasCuda12Image:
       repository: docker.io/localai/localai
       pullPolicy: IfNotPresent
    -  tag: v2.16.0-cublas-cuda12-core@sha256:16753f0714a7b81530263a9990dce03767f22d0a3fe08961aca95c1d0ac77258
    +  tag: v2.17.0-cublas-cuda12-core@sha256:8b7d0a67a50a417bb15f89981a781fb9ae1f61608c8c8ee75c3a3ff363b22c3d
     cublasCuda12FfmpegImage:
       repository: docker.io/localai/localai
       pullPolicy: IfNotPresent
    -  tag: v2.16.0-cublas-cuda12-ffmpeg-core@sha256:ccf9647b91f4c4e20cdffe1de27fc4c8fe587a4554663dedf469bd49dea7d3a5
    +  tag: v2.17.0-cublas-cuda12-ffmpeg-core@sha256:0cee2d9b5e515e9ffdfd67d435e620438d2aa620f4f416d0644e20554fd7ff5a
     cublasCuda11Image:
       repository: docker.io/localai/localai
       pullPolicy: IfNotPresent
    -  tag: v2.16.0-cublas-cuda11-core@sha256:7bcd70e4c7164815ac1bafaf69c8493514e13c4f84d611590cc4001fb44829d8
    +  tag: v2.17.0-cublas-cuda11-core@sha256:a4fa1281d14ecd0b8c4ab23911b1b96e1423a4d082754d778205f6ca944abc9f
     cublasCuda11FfmpegImage:
       repository: docker.io/localai/localai
       pullPolicy: IfNotPresent
    -  tag: v2.16.0-cublas-cuda11-ffmpeg-core@sha256:8e90cd63d3a904d980a2a3ba5c1a78379cf9f4d37afff02d1c00a7aec279c146
    +  tag: v2.17.0-cublas-cuda11-ffmpeg-core@sha256:1337ab3d1b5897287df4c25360d82ff1bbd98e4988a7061f57b713cd7d429cec
     allInOneCuda12Image:
       repository: docker.io/localai/localai
       pullPolicy: IfNotPresent
    @@ -33,7 +33,7 @@ allInOneCuda11Image:
     allInOneCpuImage:
       repository: docker.io/localai/localai
       pullPolicy: IfNotPresent
    -  tag: v2.16.0-aio-cpu@sha256:e3c8e59b16e12f863a2590c15a21fbe1d45bda92c6a300604a833e9bc46b08ca
    +  tag: v2.17.0-aio-cpu@sha256:e06e1f8af3f7a64fcdef8e34bcb5dbb3447ab4e800a99fc99a9a925057521988
     securityContext:
       container:
         runAsNonRoot: false
