v2.9.0
This release brings many enhancements and fixes, along with a special thanks to the community for the amazing work and contributions!
We now have SYCL images for Intel GPUs, ROCm images for AMD GPUs, and much more:
- You can find the AMD GPU image tags among the available container images - look for `hipblas`. For example, master-hipblas-ffmpeg-core. Thanks to @fenfir for this nice contribution! A sketch of starting one of these containers follows this list.
- Intel GPU images are tagged with `sycl` and come in two flavors, sycl-f16 and sycl-f32. For example, master-sycl-f16. Work is in progress to also support diffusers and transformers on Intel GPUs.
- Thanks to @christ66, the first efforts in supporting the Assistant API were made, and we are planning to support it fully! Stay tuned for more!
- LocalAI now supports the Tools API endpoint - it still accepts the (now deprecated) functions API calls as usual. We also now have support for SSE with function calling. See #1726 for more. A minimal request sketch follows this list.
- Support for Gemma models - did you hear? Google released OSS models, and LocalAI already supports them!
- Thanks to @dave-gray101 for the refactoring efforts in #1728 - we are soon going to support more ways to interface with LocalAI, and not only the REST API!
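For illustration, here is a minimal sketch of starting the new ROCm (hipblas) flavor with the Docker SDK for Python. The registry path, port mapping, and device list are assumptions about a typical setup - check the documentation for the exact image location and GPU pass-through requirements.

```python
import docker  # pip install docker

client = docker.from_env()

# Start the AMD GPU (hipblas) flavor of LocalAI; the quay.io path below is an
# assumption - verify the registry the images are published to.
container = client.containers.run(
    "quay.io/go-skynet/local-ai:master-hipblas-ffmpeg-core",
    ports={"8080/tcp": 8080},      # LocalAI listens on port 8080 by default
    devices=[
        "/dev/kfd:/dev/kfd:rwm",   # ROCm kernel driver interface
        "/dev/dri:/dev/dri:rwm",   # GPU render nodes
    ],
    detach=True,
)
print(container.status)
```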
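And here is a minimal sketch of a Tools API request against a running LocalAI instance, using plain HTTP. The model name and the tool definition are placeholders for illustration; add "stream": true to the payload to exercise the new SSE support with function calling.

```python
import json
import requests

# Ask the model to call a (hypothetical) weather tool through the
# OpenAI-compatible chat completions endpoint.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "my-model",  # placeholder: the name of a model you configured
        "messages": [{"role": "user", "content": "What is the weather in Rome?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool, for illustration only
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        "tool_choice": "auto",
    },
    timeout=120,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]

# When the model opts to call the tool, the reply carries tool_calls
# instead of plain text content.
for call in message.get("tool_calls", []):
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```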
Support the project
First off, a massive thank you to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsorship program can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!
Thanks a ton, and here's to more exciting times ahead with LocalAI! 🚀
What's Changed
Bug fixes 🐛
Exciting New Features 🎉
- Build docker container for ROCm by @fenfir in #1595
- feat(tools): support Tool calls in the API by @mudler in #1715
- Initial implementation of upload files api. by @christ66 in #1703
- feat(tools): Parallel function calling by @mudler in #1726
- refactor: move part of api packages to core by @dave-gray101 in #1728
- deps(llama.cpp): update, support Gemma models by @mudler in #1734
👒 Dependencies
- deps(llama.cpp): update by @mudler in #1714
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1740
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1718
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1705
- Update README.md by @lunamidori5 in #1739
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1750
New Contributors
- @fenfir made their first contribution in #1595
- @christ66 made their first contribution in #1703
- @blob42 made their first contribution in #1730
Full Changelog: v2.8.2...v2.9.0