Commit 1c641e6
Authored by ochafik and HanClinto
build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (ggml-org#7809)
* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server gitignore llama-server
* server: simplify nix package
* main: update refs -> llama fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names. Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc"
  This reverts commit e474ef1.
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh

---------

Co-authored-by: HanClinto <[email protected]>
1 parent 9635529 commit 1c641e6
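
The renames in this commit follow a single convention: every example binary gains a `llama-` prefix, with `main` becoming `llama-cli`. A minimal shell sketch of the old → new mapping (`new_name` is a hypothetical helper for illustration only; the special cases are taken from the commit message above):

```shell
#!/bin/sh
# Hypothetical helper: map a pre-rename binary name to its new name.
# Most binaries simply gain the llama- prefix; `main` is the exception.
new_name() {
    case "$1" in
        main)   echo "llama-cli" ;;     # main    -> llama-cli
        server) echo "llama-server" ;;  # server  -> llama-server
        *)      echo "llama-$1" ;;      # e.g. quantize -> llama-quantize
    esac
}

new_name main       # prints llama-cli
new_name quantize   # prints llama-quantize
new_name llava-cli  # prints llama-llava-cli
```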

File tree: 128 files changed, +584 −584 lines


.devops/cloud-v-pipeline (+1 −1)

@@ -15,7 +15,7 @@ node('x86_runner1'){ // Running on x86 runner containing latest vecto
 stage('Running llama.cpp'){
 sh'''#!/bin/bash
 module load gnu-bin2/0.1 # loading latest versions of vector qemu and vector gcc
-qemu-riscv64 -L /softwares/gnu-bin2/sysroot -cpu rv64,v=true,vlen=256,elen=64,vext_spec=v1.0 ./main -m /home/alitariq/codellama-7b.Q4_K_M.gguf -p "Anything" -n 9 > llama_log.txt # Running llama.cpp on vector qemu-riscv64
+qemu-riscv64 -L /softwares/gnu-bin2/sysroot -cpu rv64,v=true,vlen=256,elen=64,vext_spec=v1.0 ./llama-cli -m /home/alitariq/codellama-7b.Q4_K_M.gguf -p "Anything" -n 9 > llama_log.txt # Running llama.cpp on vector qemu-riscv64
 cat llama_log.txt # Printing results
 '''
 }

.devops/main-cuda.Dockerfile → .devops/llama-cli-cuda.Dockerfile (+3 −3)

@@ -23,13 +23,13 @@ ENV CUDA_DOCKER_ARCH=${CUDA_DOCKER_ARCH}
 # Enable CUDA
 ENV LLAMA_CUDA=1

-RUN make -j$(nproc) main
+RUN make -j$(nproc) llama-cli

 FROM ${BASE_CUDA_RUN_CONTAINER} as runtime

 RUN apt-get update && \
 apt-get install -y libgomp1

-COPY --from=build /app/main /main
+COPY --from=build /app/llama-cli /llama-cli

-ENTRYPOINT [ "/main" ]
+ENTRYPOINT [ "/llama-cli" ]

.devops/main-intel.Dockerfile → .devops/llama-cli-intel.Dockerfile (+3 −3)

@@ -15,12 +15,12 @@ RUN if [ "${LLAMA_SYCL_F16}" = "ON" ]; then \
 export OPT_SYCL_F16="-DLLAMA_SYCL_F16=ON"; \
 fi && \
 cmake -B build -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx ${OPT_SYCL_F16} && \
-cmake --build build --config Release --target main
+cmake --build build --config Release --target llama-cli

 FROM intel/oneapi-basekit:$ONEAPI_VERSION as runtime

-COPY --from=build /app/build/bin/main /main
+COPY --from=build /app/build/bin/llama-cli /llama-cli

 ENV LC_ALL=C.utf8

-ENTRYPOINT [ "/main" ]
+ENTRYPOINT [ "/llama-cli" ]

.devops/main-rocm.Dockerfile → .devops/llama-cli-rocm.Dockerfile (+2 −2)

@@ -40,6 +40,6 @@ ENV LLAMA_HIPBLAS=1
 ENV CC=/opt/rocm/llvm/bin/clang
 ENV CXX=/opt/rocm/llvm/bin/clang++

-RUN make -j$(nproc) main
+RUN make -j$(nproc) llama-cli

-ENTRYPOINT [ "/app/main" ]
+ENTRYPOINT [ "/app/llama-cli" ]

.devops/main-vulkan.Dockerfile → .devops/llama-cli-vulkan.Dockerfile (+3 −3)

@@ -15,13 +15,13 @@ RUN wget -qO - https://packages.lunarg.com/lunarg-signing-key-pub.asc | apt-key
 WORKDIR /app
 COPY . .
 RUN cmake -B build -DLLAMA_VULKAN=1 && \
-cmake --build build --config Release --target main
+cmake --build build --config Release --target llama-cli

 # Clean up
 WORKDIR /
-RUN cp /app/build/bin/main /main && \
+RUN cp /app/build/bin/llama-cli /llama-cli && \
 rm -rf /app

 ENV LC_ALL=C.utf8

-ENTRYPOINT [ "/main" ]
+ENTRYPOINT [ "/llama-cli" ]

.devops/main.Dockerfile → .devops/llama-cli.Dockerfile (+3 −3)

@@ -9,15 +9,15 @@ WORKDIR /app

 COPY . .

-RUN make -j$(nproc) main
+RUN make -j$(nproc) llama-cli

 FROM ubuntu:$UBUNTU_VERSION as runtime

 RUN apt-get update && \
 apt-get install -y libgomp1

-COPY --from=build /app/main /main
+COPY --from=build /app/llama-cli /llama-cli

 ENV LC_ALL=C.utf8

-ENTRYPOINT [ "/main" ]
+ENTRYPOINT [ "/llama-cli" ]

.devops/llama-cpp-clblast.srpm.spec (+7 −7)

@@ -36,9 +36,9 @@ make -j LLAMA_CLBLAST=1

 %install
 mkdir -p %{buildroot}%{_bindir}/
-cp -p main %{buildroot}%{_bindir}/llamaclblast
-cp -p server %{buildroot}%{_bindir}/llamaclblastserver
-cp -p simple %{buildroot}%{_bindir}/llamaclblastsimple
+cp -p llama-cli %{buildroot}%{_bindir}/llama-clblast-cli
+cp -p llama-server %{buildroot}%{_bindir}/llama-clblast-server
+cp -p llama-simple %{buildroot}%{_bindir}/llama-clblast-simple

 mkdir -p %{buildroot}/usr/lib/systemd/system
 %{__cat} <<EOF > %{buildroot}/usr/lib/systemd/system/llamaclblast.service

@@ -49,7 +49,7 @@ After=syslog.target network.target local-fs.target remote-fs.target nss-lookup.t
 [Service]
 Type=simple
 EnvironmentFile=/etc/sysconfig/llama
-ExecStart=/usr/bin/llamaclblastserver $LLAMA_ARGS
+ExecStart=/usr/bin/llama-clblast-server $LLAMA_ARGS
 ExecReload=/bin/kill -s HUP $MAINPID
 Restart=never

@@ -67,9 +67,9 @@ rm -rf %{buildroot}
 rm -rf %{_builddir}/*

 %files
-%{_bindir}/llamaclblast
-%{_bindir}/llamaclblastserver
-%{_bindir}/llamaclblastsimple
+%{_bindir}/llama-clblast-cli
+%{_bindir}/llama-clblast-server
+%{_bindir}/llama-clblast-simple
 /usr/lib/systemd/system/llamaclblast.service
 %config /etc/sysconfig/llama

.devops/llama-cpp-cuda.srpm.spec (+7 −7)

@@ -36,9 +36,9 @@ make -j LLAMA_CUDA=1

 %install
 mkdir -p %{buildroot}%{_bindir}/
-cp -p main %{buildroot}%{_bindir}/llamacppcuda
-cp -p server %{buildroot}%{_bindir}/llamacppcudaserver
-cp -p simple %{buildroot}%{_bindir}/llamacppcudasimple
+cp -p llama-cli %{buildroot}%{_bindir}/llama-cuda-cli
+cp -p llama-server %{buildroot}%{_bindir}/llama-cuda-server
+cp -p llama-simple %{buildroot}%{_bindir}/llama-cuda-simple

 mkdir -p %{buildroot}/usr/lib/systemd/system
 %{__cat} <<EOF > %{buildroot}/usr/lib/systemd/system/llamacuda.service

@@ -49,7 +49,7 @@ After=syslog.target network.target local-fs.target remote-fs.target nss-lookup.t
 [Service]
 Type=simple
 EnvironmentFile=/etc/sysconfig/llama
-ExecStart=/usr/bin/llamacppcudaserver $LLAMA_ARGS
+ExecStart=/usr/bin/llama-cuda-server $LLAMA_ARGS
 ExecReload=/bin/kill -s HUP $MAINPID
 Restart=never

@@ -67,9 +67,9 @@ rm -rf %{buildroot}
 rm -rf %{_builddir}/*

 %files
-%{_bindir}/llamacppcuda
-%{_bindir}/llamacppcudaserver
-%{_bindir}/llamacppcudasimple
+%{_bindir}/llama-cuda-cli
+%{_bindir}/llama-cuda-server
+%{_bindir}/llama-cuda-simple
 /usr/lib/systemd/system/llamacuda.service
 %config /etc/sysconfig/llama

.devops/llama-cpp.srpm.spec (+7 −7)

@@ -38,9 +38,9 @@ make -j

 %install
 mkdir -p %{buildroot}%{_bindir}/
-cp -p main %{buildroot}%{_bindir}/llama
-cp -p server %{buildroot}%{_bindir}/llamaserver
-cp -p simple %{buildroot}%{_bindir}/llamasimple
+cp -p llama-cli %{buildroot}%{_bindir}/llama-cli
+cp -p llama-server %{buildroot}%{_bindir}/llama-server
+cp -p llama-simple %{buildroot}%{_bindir}/llama-simple

 mkdir -p %{buildroot}/usr/lib/systemd/system
 %{__cat} <<EOF > %{buildroot}/usr/lib/systemd/system/llama.service

@@ -51,7 +51,7 @@ After=syslog.target network.target local-fs.target remote-fs.target nss-lookup.t
 [Service]
 Type=simple
 EnvironmentFile=/etc/sysconfig/llama
-ExecStart=/usr/bin/llamaserver $LLAMA_ARGS
+ExecStart=/usr/bin/llama-server $LLAMA_ARGS
 ExecReload=/bin/kill -s HUP $MAINPID
 Restart=never

@@ -69,9 +69,9 @@ rm -rf %{buildroot}
 rm -rf %{_builddir}/*

 %files
-%{_bindir}/llama
-%{_bindir}/llamaserver
-%{_bindir}/llamasimple
+%{_bindir}/llama-cli
+%{_bindir}/llama-server
+%{_bindir}/llama-simple
 /usr/lib/systemd/system/llama.service
 %config /etc/sysconfig/llama

.devops/server-cuda.Dockerfile → .devops/llama-server-cuda.Dockerfile (+3 −3)

@@ -25,13 +25,13 @@ ENV LLAMA_CUDA=1
 # Enable cURL
 ENV LLAMA_CURL=1

-RUN make -j$(nproc) server
+RUN make -j$(nproc) llama-server

 FROM ${BASE_CUDA_RUN_CONTAINER} as runtime

 RUN apt-get update && \
 apt-get install -y libcurl4-openssl-dev libgomp1

-COPY --from=build /app/server /server
+COPY --from=build /app/llama-server /llama-server

-ENTRYPOINT [ "/server" ]
+ENTRYPOINT [ "/llama-server" ]

.devops/server-intel.Dockerfile → .devops/llama-server-intel.Dockerfile (+3 −3)

@@ -15,15 +15,15 @@ RUN if [ "${LLAMA_SYCL_F16}" = "ON" ]; then \
 export OPT_SYCL_F16="-DLLAMA_SYCL_F16=ON"; \
 fi && \
 cmake -B build -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_CURL=ON ${OPT_SYCL_F16} && \
-cmake --build build --config Release --target server
+cmake --build build --config Release --target llama-server

 FROM intel/oneapi-basekit:$ONEAPI_VERSION as runtime

 RUN apt-get update && \
 apt-get install -y libcurl4-openssl-dev

-COPY --from=build /app/build/bin/server /server
+COPY --from=build /app/build/bin/llama-server /llama-server

 ENV LC_ALL=C.utf8

-ENTRYPOINT [ "/server" ]
+ENTRYPOINT [ "/llama-server" ]

.devops/server-rocm.Dockerfile → .devops/llama-server-rocm.Dockerfile (+2 −2)

@@ -45,6 +45,6 @@ ENV LLAMA_CURL=1
 RUN apt-get update && \
 apt-get install -y libcurl4-openssl-dev

-RUN make -j$(nproc)
+RUN make -j$(nproc) llama-server

-ENTRYPOINT [ "/app/server" ]
+ENTRYPOINT [ "/app/llama-server" ]

.devops/server-vulkan.Dockerfile → .devops/llama-server-vulkan.Dockerfile (+3 −3)

@@ -19,13 +19,13 @@ RUN apt-get update && \
 WORKDIR /app
 COPY . .
 RUN cmake -B build -DLLAMA_VULKAN=1 -DLLAMA_CURL=1 && \
-cmake --build build --config Release --target server
+cmake --build build --config Release --target llama-server

 # Clean up
 WORKDIR /
-RUN cp /app/build/bin/server /server && \
+RUN cp /app/build/bin/llama-server /llama-server && \
 rm -rf /app

 ENV LC_ALL=C.utf8

-ENTRYPOINT [ "/server" ]
+ENTRYPOINT [ "/llama-server" ]

.devops/server.Dockerfile → .devops/llama-server.Dockerfile (+3 −3)

@@ -11,15 +11,15 @@ COPY . .

 ENV LLAMA_CURL=1

-RUN make -j$(nproc) server
+RUN make -j$(nproc) llama-server

 FROM ubuntu:$UBUNTU_VERSION as runtime

 RUN apt-get update && \
 apt-get install -y libcurl4-openssl-dev libgomp1

-COPY --from=build /app/server /server
+COPY --from=build /app/llama-server /llama-server

 ENV LC_ALL=C.utf8

-ENTRYPOINT [ "/server" ]
+ENTRYPOINT [ "/llama-server" ]

.devops/nix/apps.nix (+3 −3)

@@ -6,11 +6,11 @@
 let
 inherit (config.packages) default;
 binaries = [
-"llama"
+"llama-cli"
 "llama-embedding"
 "llama-server"
-"quantize"
-"train-text-from-scratch"
+"llama-quantize"
+"llama-train-text-from-scratch"
 ];
 mkApp = name: {
 type = "app";
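
A quick sanity check on a binary list like the one in apps.nix can be sketched in shell — assert that every exposed name now carries the `llama-` prefix (`all_prefixed` is a hypothetical helper, not part of the repo):

```shell
#!/bin/sh
# Hypothetical check: succeed only if every argument starts with "llama-".
all_prefixed() {
    for b in "$@"; do
        case "$b" in
            llama-*) ;;        # prefixed: ok
            *) return 1 ;;     # any unprefixed name fails the check
        esac
    done
    return 0
}

all_prefixed llama-cli llama-embedding llama-server \
             llama-quantize llama-train-text-from-scratch && echo "all prefixed"
```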

.devops/nix/package.nix (+1 −3)

@@ -243,8 +243,6 @@ effectiveStdenv.mkDerivation (
 # TODO(SomeoneSerge): It's better to add proper install targets at the CMake level,
 # if they haven't been added yet.
 postInstall = ''
-mv $out/bin/main${executableSuffix} $out/bin/llama${executableSuffix}
-mv $out/bin/server${executableSuffix} $out/bin/llama-server${executableSuffix}
 mkdir -p $out/include
 cp $src/llama.h $out/include/
 '';

@@ -294,7 +292,7 @@ effectiveStdenv.mkDerivation (
 license = lib.licenses.mit;

 # Accommodates `nix run` and `lib.getExe`
-mainProgram = "llama";
+mainProgram = "llama-cli";

 # These people might respond, on the best effort basis, if you ping them
 # in case of Nix-specific regressions or for reviewing Nix-specific PRs.

.devops/tools.sh (+5 −5)

@@ -10,23 +10,23 @@ shift
 if [[ "$arg1" == '--convert' || "$arg1" == '-c' ]]; then
 python3 ./convert-hf-to-gguf.py "$@"
 elif [[ "$arg1" == '--quantize' || "$arg1" == '-q' ]]; then
-./quantize "$@"
+./llama-quantize "$@"
 elif [[ "$arg1" == '--run' || "$arg1" == '-r' ]]; then
-./main "$@"
+./llama-cli "$@"
 elif [[ "$arg1" == '--finetune' || "$arg1" == '-f' ]]; then
-./finetune "$@"
+./llama-finetune "$@"
 elif [[ "$arg1" == '--all-in-one' || "$arg1" == '-a' ]]; then
 echo "Converting PTH to GGML..."
 for i in `ls $1/$2/ggml-model-f16.bin*`; do
 if [ -f "${i/f16/q4_0}" ]; then
 echo "Skip model quantization, it already exists: ${i/f16/q4_0}"
 else
 echo "Converting PTH to GGML: $i into ${i/f16/q4_0}..."
-./quantize "$i" "${i/f16/q4_0}" q4_0
+./llama-quantize "$i" "${i/f16/q4_0}" q4_0
 fi
 done
 elif [[ "$arg1" == '--server' || "$arg1" == '-s' ]]; then
-./server "$@"
+./llama-server "$@"
 else
 echo "Unknown command: $arg1"
 echo "Available commands: "
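
The flag → binary dispatch that tools.sh implements can be condensed into a small sketch (`dispatch_target` is a hypothetical function for illustration; it only echoes which renamed binary each wrapper flag now resolves to):

```shell
#!/bin/sh
# Hypothetical summary of the tools.sh dispatch after the rename:
# each wrapper flag maps to a llama- prefixed binary.
dispatch_target() {
    case "$1" in
        --quantize|-q) echo "./llama-quantize" ;;
        --run|-r)      echo "./llama-cli" ;;
        --finetune|-f) echo "./llama-finetune" ;;
        --server|-s)   echo "./llama-server" ;;
        *)             echo "unknown" ;;
    esac
}

dispatch_target --run   # prints ./llama-cli
dispatch_target -s      # prints ./llama-server
```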

.dockerignore (+2 −2)

@@ -12,8 +12,8 @@ build*/

 models/*

-/main
-/quantize
+/llama-cli
+/llama-quantize

 arm_neon.h
 compile_commands.json

.github/ISSUE_TEMPLATE/01-bug-low.yml (+1 −1)

@@ -24,7 +24,7 @@ body:
 label: Name and Version
 description: Which executable and which version of our software are you running? (use `--version` to get a version string)
 placeholder: |
-$./main --version
+$./llama-cli --version
 version: 2999 (42b4109e)
 built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
 validations:

.github/ISSUE_TEMPLATE/02-bug-medium.yml (+1 −1)

@@ -24,7 +24,7 @@ body:
 label: Name and Version
 description: Which executable and which version of our software are you running? (use `--version` to get a version string)
 placeholder: |
-$./main --version
+$./llama-cli --version
 version: 2999 (42b4109e)
 built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
 validations:

.github/ISSUE_TEMPLATE/03-bug-high.yml (+1 −1)

@@ -24,7 +24,7 @@ body:
 label: Name and Version
 description: Which executable and which version of our software are you running? (use `--version` to get a version string)
 placeholder: |
-$./main --version
+$./llama-cli --version
 version: 2999 (42b4109e)
 built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
 validations:

.github/ISSUE_TEMPLATE/04-bug-critical.yml (+1 −1)

@@ -24,7 +24,7 @@ body:
 label: Name and Version
 description: Which executable and which version of our software are you running? (use `--version` to get a version string)
 placeholder: |
-$./main --version
+$./llama-cli --version
 version: 2999 (42b4109e)
 built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
 validations:

.github/workflows/bench.yml (+1 −1)

@@ -119,7 +119,7 @@ jobs:
 -DLLAMA_FATAL_WARNINGS=OFF \
 -DLLAMA_ALL_WARNINGS=OFF \
 -DCMAKE_BUILD_TYPE=Release;
-cmake --build build --config Release -j $(nproc) --target server
+cmake --build build --config Release -j $(nproc) --target llama-server

 - name: Download the dataset
 id: download_dataset
