
Build with >799a1cb13b0b1b560ab0ceff485caed68faa8f1f to support Mixtral #314

Open
sfxworks opened this issue Dec 15, 2023 · 3 comments
@sfxworks
I believe that, in order to resolve mudler/LocalAI#1446, go-llama.cpp needs to be built against at least commit 799a1cb13b0b1b560ab0ceff485caed68faa8f1f of llama.cpp to enable Mixtral support.
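For anyone who wants to try this locally in the meantime, here is a rough sketch. It assumes go-llama.cpp vendors llama.cpp as a git submodule in a `llama.cpp/` directory and builds the bindings via `make libbinding.a`; if the repo layout or make target differs, adjust accordingly:

```shell
# Sketch: pin the vendored llama.cpp to the Mixtral-capable commit and rebuild.
# Assumes the standard go-llama.cpp layout (llama.cpp submodule, libbinding.a target).
git clone --recurse-submodules https://github.com/go-skynet/go-llama.cpp
cd go-llama.cpp/llama.cpp

# Advance the submodule past the pinned revision to the commit that adds Mixtral (MoE) support.
git fetch origin
git checkout 799a1cb13b0b1b560ab0ceff485caed68faa8f1f

# Rebuild the bindings against the updated llama.cpp sources.
cd ..
make libbinding.a
```

Note that jumping the submodule forward like this may surface API breakage in the binding code (as the Apple Metal error reported below suggests), so it is a starting point for testing rather than a guaranteed fix.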

@sfxworks sfxworks changed the title Build with >799a1cb13b0b1b560ab0ceff485caed68faa8f1f Build with >799a1cb13b0b1b560ab0ceff485caed68faa8f1f to support Mixtral Dec 15, 2023
@sfxworks
Author

sfxworks commented Dec 15, 2023

I believe the automated PRs are related: #313

@msuiche

msuiche commented Jan 21, 2024

Any update on this?

@allnash

allnash commented Mar 24, 2024

@mudler how can we help get this in? I get an error when building on Apple Metal with the current master and llama.cpp commit 799a1cb13b0b1b560ab0ceff485caed68faa8f1f.

Any advice?

3 participants