Open
Labels: bug, documentation
Description
Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the README.md.
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.
Expected Behavior
I expect CUDA support to work, or not to be claimed to work.
Current Behavior
CUDA does not work.
Environment and Context
Windows 11 x64; I followed all the instructions. Inference still runs 100% on the CPU.
- Physical (or virtual) hardware you are using, e.g. for Linux: `lscpu`
- Operating System, e.g. for Linux: `uname -a`
- SDK version, e.g. for Linux: `python3 --version`, `make --version`, `g++ --version`
Failure Information (for bugs)
The installation instructions in the README do not match the repository contents.
Steps to Reproduce
There is no setup.py.
Try the following:
```shell
git clone https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python
rm -rf _skbuild/  # delete any old builds
python setup.py develop
cd ./vendor/llama.cpp
```

- Follow llama.cpp's instructions to `cmake` llama.cpp.
- Run llama.cpp's `./main` with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp.
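For reference, the install path I would have expected to enable CUDA is the pip-based one rather than `setup.py`. A minimal sketch, assuming the cuBLAS CMake flag documented by llama-cpp-python around this time (flag names have changed across versions, so this needs checking against the current README):

```shell
# Sketch: reinstall llama-cpp-python with CUDA (cuBLAS) support enabled.
# LLAMA_CUBLAS and FORCE_CMAKE are assumptions based on the project's
# documentation of this era; verify against the current README.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 \
  pip install --force-reinstall --no-cache-dir llama-cpp-python
```

If this build completes but inference still pins the CPU, that would point at a runtime problem rather than a documentation one.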
Failure Logs
There is no setup.py, so no logs can be produced: the documented reproduction steps cannot be run because they reference files that do not exist in the repository.