
Running the example fails after following the build steps in README.md #295

pnsvk opened this issue Nov 21, 2023 · 1 comment


pnsvk commented Nov 21, 2023

```
MAC-CBBH4ACVpp:go-llama.cpp pnsvk$ LIBRARY_PATH=$PWD C_INCLUDE_PATH=$PWD go run ./examples -m /Users/pnsvk/Downloads/mistral-7b-v0.1.Q4_K_M.gguf -t 14
# github.com/go-skynet/go-llama.cpp
binding.cpp:333:67: warning: format specifies type 'size_t' (aka 'unsigned long') but the argument has type 'int' [-Wformat]
binding.cpp:809:5: warning: deleting pointer to incomplete type 'llama_model' may cause undefined behavior [-Wdelete-incomplete]
./llama.cpp/llama.h:60:12: note: forward declaration of 'llama_model'
# github.com/go-skynet/go-llama.cpp/examples
/usr/local/go/pkg/tool/darwin_amd64/link: running clang++ failed: exit status 1
ld: warning: -no_pie is deprecated when targeting new OS versions
Undefined symbols for architecture x86_64:
  "_ggml_metal_add_buffer", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_free", referenced from:
      llama_context::~llama_context() in libbinding.a(llama.o)
  "_ggml_metal_get_concur_list", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_graph_compute", referenced from:
      llama_eval_internal(llama_context&, int const*, float const*, int, int, int, char const*) in libbinding.a(llama.o)
  "_ggml_metal_graph_find_concurrency", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_host_free", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
      llm_load_tensors(llama_model_loader&, llama_model&, int, int, int, float const*, bool, bool, ggml_type, bool, void (*)(float, void*), void*) in libbinding.a(llama.o)
      llama_model::~llama_model() in libbinding.a(llama.o)
      llama_context::~llama_context() in libbinding.a(llama.o)
  "_ggml_metal_host_malloc", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
      llm_load_tensors(llama_model_loader&, llama_model&, int, int, int, float const*, bool, bool, ggml_type, bool, void (*)(float, void*), void*) in libbinding.a(llama.o)
  "_ggml_metal_if_optimized", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_init", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_log_set_callback", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_set_n_cb", referenced from:
      llama_eval_internal(llama_context&, int const*, float const*, int, int, int, char const*) in libbinding.a(llama.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```

Could you please help me look into this error?

These are the steps I followed to build and run it, as described in the README:

```
MAC-CBBH4ACVpp:code-checkouts pnsvk$ git clone --recurse-submodules https://github.com/go-skynet/go-llama.cpp
MAC-CBBH4ACVpp:code-checkouts pnsvk$ cd go-llama.cpp
MAC-CBBH4ACVpp:go-llama.cpp pnsvk$ make libbinding.a
MAC-CBBH4ACVpp:go-llama.cpp pnsvk$ LIBRARY_PATH=$PWD C_INCLUDE_PATH=$PWD go run ./examples -m "/model/path/here" -t 14
```
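The undefined `_ggml_metal_*` symbols suggest that `llama.o` was compiled with Metal code paths enabled, while the matching `ggml-metal` object never made it into `libbinding.a`. One thing worth trying (a sketch, not a confirmed fix, and assuming your checkout's Makefile honors the `BUILD_TYPE` variable and the Apple framework flags mentioned in the README) is a clean rebuild with Metal explicitly enabled so both sides agree:

```shell
# Rebuild the binding from scratch with Metal explicitly enabled,
# so the ggml-metal object is compiled and archived alongside llama.o.
make clean
BUILD_TYPE=metal make libbinding.a

# A Metal build typically needs the shader source next to the binary
# and the Apple frameworks available at link time.
cp llama.cpp/ggml-metal.metal .
LIBRARY_PATH=$PWD C_INCLUDE_PATH=$PWD \
  CGO_LDFLAGS="-framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders" \
  go run ./examples -m "/model/path/here" -t 14
```

If Metal is not wanted at all on this Intel Mac, the opposite direction (a `make clean` followed by a plain build, making sure no stale object files survive) would also make the compiled flags and the archived objects consistent.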


macie commented Nov 30, 2023

@pnsvk I see that you are using a quantized Mistral model, which is quite new. I also cannot run the example with a Mistral model; I get similarly cryptic errors.

llama.cpp is released frequently, but go-llama.cpp cannot keep up. Maybe this version mismatch is the source of the problem?
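One way to test this hypothesis is to make sure the `llama.cpp` submodule sits at exactly the commit go-llama.cpp pins, rather than a newer revision pulled in later. A sketch using plain git (no project-specific tooling assumed):

```shell
# Reset the llama.cpp submodule to the commit recorded by go-llama.cpp,
# in case it has drifted to a newer, incompatible revision.
cd go-llama.cpp
git submodule update --init --recursive

# Show which llama.cpp commit is actually checked out now.
git -C llama.cpp log -1 --oneline
```

If the pinned commit predates GGUF-format Mistral support in llama.cpp, the binding would need to be updated upstream before these models can work at all.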
