GGUF trouble #1080
thebrahman
started this conversation in
General
Replies: 1 comment
-
please share full logs - also try to drop some of the settings I see in there (`f16`, `gpu_layers`, `batch`) and see if it still fails.
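For reference, a minimal sketch of what a trimmed-down model config might look like once those optional settings are dropped. The field names and layout here are assumptions based on the settings mentioned in the reply and common LocalAI-style YAML; check the schema for your version:

```yaml
# Hypothetical minimal model config for troubleshooting:
# only the model file and backend, with f16 / gpu_layers / batch removed.
name: llama2-chat
backend: llama
parameters:
  model: llama-2-7b-chat.Q5_K_M.gguf
```

If the bare config loads, re-add `f16`, `gpu_layers`, and `batch` one at a time to find which setting triggers the failure.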
-
Why won't this model work:
https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_K_M.gguf
with this config:
I can run the model from the example on the website, `luna-ai-llama2-uncensored.ggmlv3.q5_K_M.bin`, with the same config as above but with `llama-stable`.
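One thing worth checking before touching the config: the two files use different container formats. `llama-2-7b-chat.Q5_K_M.gguf` is GGUF, while `luna-ai-llama2-uncensored.ggmlv3.q5_K_M.bin` is the older GGML v3 (GGJT) format, which would explain why the latter works with the `llama-stable` backend. A quick sketch (the helper name is mine) for verifying what a downloaded file really is by its leading 4-byte magic:

```python
def sniff_model_format(path: str) -> str:
    """Identify a llama.cpp-family model file by its leading magic bytes."""
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"GGUF":
        # GGUF files start with the ASCII magic "GGUF".
        return "gguf"
    if magic == b"tjgg":
        # GGJT magic 0x67676A74 is stored little-endian, so it reads "tjgg" on disk.
        return "ggml (ggjt)"
    return f"unknown ({magic!r})"
```

If the `.gguf` file reports `unknown`, the download itself may be the problem: the URL above is a Hugging Face `blob/` page, which serves HTML, not the raw model; the `resolve/` link is the one that downloads the actual file.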