Build docker container for ROCm #1595
Conversation
(branch force-pushed from 94326aa to 3797117)
I also updated the Go builder to a newer version.
Does this actually build?
It works for me on llama.cpp at least; I haven't tested the rest yet. It doesn't include stablediffusion, IIRC. I'm still testing, which is why I still have it marked WIP.
Great work, I love to see this one.
It does build. However, I still seem to have the same issue I had when following the Arch build instructions in the LocalAI wiki: the model loads without a problem, but then the API query is never executed and the model just sits there in VRAM. This issue also persists when running this method in CPU mode. :|
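(Not from the thread, just a debugging sketch.) One way to narrow down where it hangs is to confirm the GPU is actually visible inside the container and to watch the server logs while issuing a request. The container name `local-ai` is a placeholder; `rocm-smi` ships with the ROCm userspace that the rocm base images bundle, and `DEBUG=true` is LocalAI's switch for verbose logging:

```bash
# Container name "local-ai" is a placeholder; adjust to your setup.

# 1. Confirm the AMD GPU is visible from inside the container.
docker exec local-ai rocm-smi

# 2. Tail the server logs while issuing a request; if the container was
#    started with -e DEBUG=true, each incoming request is logged, which
#    shows whether the query ever reaches the backend.
docker logs -f local-ai
```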
This works with
@mudler I removed the WIP header. Is there a good way to test this? I did some manual testing and it seems to work.
For acceleration tests we have a specific workflow, but for now it is disabled while it waits for hardware (#1252). However, that doesn't cover container e2e tests (which are still manual) for all the backends.
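(For illustration, a manual smoke test along these lines is what one would expect; a sketch, not a documented procedure: the quay tag is the one named in the release notes below, `/dev/kfd` and `/dev/dri` are the standard ROCm device nodes, and the model name and request are placeholders.)

```bash
# Run the ROCm image with the AMD GPU device nodes exposed
# (standard ROCm docker setup; some hosts also need --group-add video).
docker run -d --name local-ai \
  --device=/dev/kfd --device=/dev/dri \
  -p 8080:8080 \
  quay.io/go-skynet/local-ai:master-hipblas-ffmpeg-core

# Verify the API is up and lists the available models.
curl http://localhost:8080/v1/models

# Send a chat completion; "my-model" is a placeholder for whatever model is installed.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "my-model", "messages": [{"role": "user", "content": "Hello"}]}'
```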
(branch force-pushed from eff6fb2 to e70b386)
@fenfir I've updated the PR and based the Dockerfile directly on the rocm images. Can you try it out?
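(An illustrative check, not from the thread.) Since the `-complete` variants of the rocm images are supposed to bundle the full ROCm library set, whether the base image discussed below actually carries the HipBLAS libs can be verified directly:

```bash
# List the HipBLAS/rocBLAS shared libraries bundled in the base image;
# /opt/rocm/lib is the standard ROCm install path in these images.
docker run --rm rocm/dev-ubuntu-22.04:6.0-complete \
  sh -c 'ls /opt/rocm/lib | grep -i -E "hipblas|rocblas"'
```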
OK, let's give the images on master a shot.
Referenced in: Update docker.io/localai/localai to v2.9.0 by renovate (#18546). The v2.9.0 release notes credit this PR ("Build docker container for ROCm by @fenfir in https://github.com/mudler/LocalAI/pull/1595") and note that the AMD GPU images are tagged with `hipblas`, e.g. master-hipblas-ffmpeg-core.
Description
This PR changes the Dockerfile to build for HipBLAS.
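(For reviewers who want to reproduce locally, a build along these lines should exercise the change; a sketch: `BUILD_TYPE=hipblas` follows the same `BUILD_TYPE` build-arg convention LocalAI uses for its other accelerated images, and the tag is arbitrary.)

```bash
# Build the image with the HipBLAS (ROCm) backend selected.
# BUILD_TYPE is the same build argument the cublas/sycl images use.
docker build \
  --build-arg BUILD_TYPE=hipblas \
  -t local-ai:hipblas-dev .
```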
Notes for Reviewers
I'm not sure about referencing the `rocm/dev-ubuntu-22.04:6.0-complete` container; I couldn't find anything publicly available that included the libs. I have one locally that I will look at publishing.

Signed commits