Try to start the Docker container with the above configuration.
The Docker container goes through a lengthy build process lasting several hours.
It looks like the rebuild completes (or at least gets a significant way through), then the container quits with exit code 132.
Exit code 132 is SIGILL (132 - 128 = 4), i.e. an illegal instruction. I suspect AVX is still being used by something. My CPU doesn't support AVX at all, but since I expect to run all inference on the GPU, CPU acceleration should be unimportant(?)
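For reference, the exit-code arithmetic and the CPU feature check can be verified on Linux like this (small sketch; `/proc/cpuinfo` is Linux-specific):

```shell
# An exit status above 128 means the process died from signal (status - 128);
# for 132 that is signal 4:
kill -l 4                              # prints "ILL" (SIGILL, illegal instruction)

# Check whether this CPU advertises AVX at all (no output = no AVX support):
grep -o -m1 'avx[^ ]*' /proc/cpuinfo
```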
Expected behavior
LocalAI starts and serves an interface on the exposed host port 8082, ready to run inference on the AMD card.
Logs
The last lines of the startup/rebuild log, before the exit, are:
cp llama.cpp/build/bin/grpc-server .
make[2]: Leaving directory '/build/backend/cpp/llama-fallback'
make[1]: Entering directory '/build'
/usr/bin/upx backend/cpp/llama-fallback/grpc-server
Ultimate Packer for eXecutables
Copyright (C) 1996 - 2020
UPX 3.96 Markus Oberhumer, Laszlo Molnar & John Reiser Jan 23rd 2020
File size Ratio Format Name
-------------------- ------ ----------- -----------
85962576 -> 17753404 20.65% linux/amd64 grpc-server
Packed 1 file.
make[1]: Leaving directory '/build'
cp -rfv backend/cpp/llama-fallback/grpc-server backend-assets/grpc/llama-cpp-fallback
'backend/cpp/llama-fallback/grpc-server' -> 'backend-assets/grpc/llama-cpp-fallback'
I local-ai build info:
I BUILD_TYPE: hipblas
I GO_TAGS:
I LD_FLAGS: -s -w -X "github.com/mudler/LocalAI/internal.Version=v2.19.3" -X "github.com/mudler/LocalAI/internal.Commit=86f8d5b50acd8fe88af4f537be0d42472772b928"
I UPX: /usr/bin/upx
CGO_LDFLAGS="-O3 --rtlib=compiler-rt -unwindlib=libgcc -lhipblas -lrocblas --hip-link -L/opt/rocm/lib/llvm/lib" go build -ldflags "-s -w -X "github.com/mudler/LocalAI/internal.Version=v2.19.3" -X "github.com/mudler/LocalAI/internal.Commit=86f8d5b50acd8fe88af4f537be0d42472772b928"" -tags "" -o local-ai ./
@chris-hatton Looks like something behind the scenes is happening - care to share dmesg output from when that happens?
Also, why use REBUILD? The image should already ship binaries for the CPU flagset variants, including non-AVX - did you try disabling it? Your GPU also appears to be covered by the default GPU_TARGETS.
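For illustration, a minimal sketch of running the prebuilt image without triggering a source rebuild. The device flags follow the usual ROCm container setup; the internal port, host port mapping, and the gfx override are assumptions to verify against your own compose file:

```shell
# Sketch only: run the prebuilt hipblas image without a source rebuild.
# --device flags are the standard ROCm GPU passthrough; 8082 matches the host
# port mentioned above. HSA_OVERRIDE_GFX_VERSION is a common workaround when a
# gfx1031 card should be treated as gfx1030 (assumption, verify for your setup).
docker run -p 8082:8080 \
  --device=/dev/kfd --device=/dev/dri \
  -e REBUILD=false \
  -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
  localai/localai:latest-aio-gpu-hipblas
```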
LocalAI version:
Using Docker image: localai/localai:latest-aio-gpu-hipblas

Environment, CPU architecture, OS, and Version:
GPU: gfx1031 (but I've seen this to be gfx1030-compatible on a previous ROCm test)
ROCm: 6.2.60200