Replies: 1 comment
Hi, I have wrappers to use guidance with various model hosts, which I'm experimenting with. Note, these are pretty hacky and you will probably need to make some changes to make them work for you :)

guidance with auto-gptq: https://github.com/andysalerno/guider/blob/master/llama_autogptq.py

The ones I use the most are autogptq and llama-cpp. The transformers one is just regular Hugging Face transformers, but configured for 4-bit loading. Note, I have nothing to do with the guidance project at all, just a very happy user of it :)
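For the transformers route, the setup looks roughly like this (a sketch, not tested here: the model name is just an example, and the wrapper call assumes the guidance 0.0.x API, which accepts a preloaded model/tokenizer pair as well as a repo id string):

```python
import guidance
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TheBloke/guanaco-7B-HF"  # example repo id, swap in your own

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_4bit=True,   # 4-bit loading, requires the bitsandbytes package
    torch_dtype=torch.float16,
)

# Hand the preloaded model/tokenizer pair to guidance instead of a repo id
guidance.llm = guidance.llms.Transformers(model=model, tokenizer=tokenizer)

program = guidance("The capital of France is {{gen 'capital' max_tokens=8}}")
print(program())
```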
Hi there,
I've tried using Guidance w/ llama.cpp local models but have been unable to successfully run the sample I put together (taken from here). My local model lives at
`<home_dir>/work/llama-cpp/models/guanaco-7B.ggmlv3.q4_1.bin`
(I've tested it w/ the command-line chat interface), and every time I try to run the Guidance sample shown below, I get this error:

```
  File "<home_dir>/.local/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '<home_dir>/work/llama-cpp/models/guanaco-7B.ggmlv3.q4_1'. Use `repo_type` argument if needed.
```
Has anyone been able to use Guidance successfully w/ local models, and if so, how did you specify the local model?
Here is the sample code:
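The prompt here is just a placeholder; the path string is exactly the one the traceback complains about, which suggests the Transformers wrapper treats it as a Hugging Face repo id rather than a local GGML file:

```python
import guidance

# Passing a local GGML path where a Hugging Face repo id is expected --
# this is the call that raises the HFValidationError above
guidance.llm = guidance.llms.Transformers(
    "<home_dir>/work/llama-cpp/models/guanaco-7B.ggmlv3.q4_1"
)

program = guidance("Tell me a joke. {{gen 'joke' max_tokens=32}}")
print(program())
```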