v2.8.0
This release adds support for Intel GPUs and deprecates the old ggml-based backends, which are now superseded by llama.cpp (which supports more architectures out of the box). See also #1651.
Images are now based on Ubuntu 22.04 LTS instead of Debian bullseye.
Intel GPUs
Container images tagged with "sycl" are now available. There are sycl-f16 and sycl-f32 variants, providing f16 and f32 precision support respectively.
For example, to start phi-2 on an Intel GPU, run the container image with /dev/dri mounted so the GPU device is visible inside the container:
docker run -e DEBUG=true -ti -v $PWD/models:/build/models -p 8080:8080 -v /dev/dri:/dev/dri --rm quay.io/go-skynet/local-ai:master-sycl-f32-ffmpeg-core phi-2
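If your hardware works better with f16 precision, the sycl-f16 variant can be used the same way; the tag below assumes the f16 image follows the same naming scheme as the f32 image above:
docker run -e DEBUG=true -ti -v $PWD/models:/build/models -p 8080:8080 -v /dev/dri:/dev/dri --rm quay.io/go-skynet/local-ai:master-sycl-f16-ffmpeg-core phi-2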
Note
First off, a massive thank you to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsorship program can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome, together.
Thanks a ton, and here's to more exciting times ahead with LocalAI! 🚀
What's Changed
Exciting New Features 🎉
- feat(sycl): Add support for Intel GPUs with sycl (#1647) by @mudler in #1660
- Drop old falcon backend (deprecated) by @mudler in #1675
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1678
- Drop ggml-based gpt2 and starcoder (supported by llama.cpp) by @mudler in #1679
- fix(Dockerfile): sycl dependencies by @mudler in #1686
- feat: Use ubuntu as base for container images, drop deprecated ggml-transformers backends by @mudler in #1689
👒 Dependencies
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1656
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1665
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1669
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1673
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1683
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1688
- ⬆️ Update mudler/go-stable-diffusion by @localai-bot in #1674
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1661
- feat(mamba): Add bagel-dpo-2.8b by @richiejp in #1671
- fix (docs): fixed broken links github/ -> github.com/ by @Wansmer in #1672
- Fix HTTP links in README.md by @vfiftyfive in #1677
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1681
- ci: cleanup worker before run by @mudler in #1685
- Revert "fix(Dockerfile): sycl dependencies" by @mudler in #1687
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1691
New Contributors
- @richiejp made their first contribution in #1671
- @Wansmer made their first contribution in #1672
- @vfiftyfive made their first contribution in #1677
Full Changelog: v2.7.0...v2.8.0