Insights: mudler/LocalAI
Overview
- 39 Merged pull requests
- 0 Open pull requests
- 3 Closed issues
- 7 New issues
39 Pull requests merged by 3 people
- chore: ⬆️ Update ggml-org/llama.cpp to `814f795e063c257f33b921eab4073484238a151a` (#5331, merged May 8, 2025)
- chore(model gallery): add qwen3-14b-griffon-i1 (#5330, merged May 7, 2025)
- chore: ⬆️ Update ggml-org/llama.cpp to `91a86a6f354aa73a7aab7bc3d283be410fdc93a5` (#5329, merged May 6, 2025)
- chore(deps): bump llama.cpp to `b34c859146630dff136943abc9852ca173a7c9d6` (#5323, merged May 6, 2025)
- chore(model gallery): add claria-14b (#5326, merged May 6, 2025)
- chore(model gallery): add goekdeniz-guelmez_josiefied-qwen3-8b-abliterated-v1 (#5325, merged May 6, 2025)
- chore(model gallery): add huihui-ai_qwen3-14b-abliterated (#5324, merged May 6, 2025)
- chore(model-gallery): ⬆️ update checksum (#5321, merged May 6, 2025)
- fix(hipblas): do not build all cpu-specific flags (#5322, merged May 6, 2025)
- chore(deps): bump mxschmitt/action-tmate from 3.21 to 3.22 (#5319, merged May 5, 2025)
- fix(arm64): do not build instructions which are not available (#5318, merged May 5, 2025)
- chore(model gallery): add allura-org_remnant-qwen3-8b (#5317, merged May 5, 2025)
- chore: ⬆️ Update ggml-org/llama.cpp to `9fdfcdaeddd1ef57c6d041b89cd8fb7048a0f028` (#5316, merged May 4, 2025)
- fix: use rice when embedding large binaries (#5309, merged May 4, 2025)
- chore(model gallery): add rei-v3-kto-12b (#5313, merged May 4, 2025)
- chore(model gallery): add kalomaze_qwen3-16b-a3b (#5312, merged May 4, 2025)
- chore(model gallery): add qwen3-30b-a1.5b-high-speed (#5311, merged May 4, 2025)
- chore: ⬆️ Update ggml-org/llama.cpp to `36667c8edcded08063ed51c7d57e9e086bbfc903` (#5300, merged May 4, 2025)
- fix(gpu): do not assume gpu being returned has node and mem (#5310, merged May 3, 2025)
- chore(defaults): enlarge defaults, drop gpu layers which is infered (#5308, merged May 3, 2025)
- chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194 (#5307, merged May 3, 2025)
- chore(model gallery): add smoothie-qwen3-8b (#5306, merged May 3, 2025)
- chore(model gallery): add qwen-3-32b-medical-reasoning-i1 (#5305, merged May 3, 2025)
- chore(model gallery): add amoral-qwen3-14b (#5304, merged May 3, 2025)
- chore(model gallery): add comet_12b_v.5-i1 (#5303, merged May 3, 2025)
- chore(model gallery): add genericrpv3-4b (#5302, merged May 3, 2025)
- chore(model gallery): add planetoid_27b_v.2 (#5301, merged May 3, 2025)
- feat(llama.cpp): estimate vram usage (#5299, merged May 2, 2025)
- chore(model gallery): add webthinker-qwq-32b-i1 (#5298, merged May 2, 2025)
- chore(model gallery): add shuttleai_shuttle-3.5 (#5297, merged May 2, 2025)
- chore(model gallery): add microsoft_phi-4-reasoning (#5296, merged May 2, 2025)
- chore(model gallery): add microsoft_phi-4-reasoning-plus (#5295, merged May 2, 2025)
- chore(model gallery): add furina-8b (#5294, merged May 2, 2025)
- chore(model gallery): add josiefied-qwen3-8b-abliterated-v1 (#5293, merged May 2, 2025)
- chore: ⬆️ Update ggml-org/llama.cpp to `d7a14c42a1883a34a6553cbfe30da1e1b84dfd6a` (#5292, merged May 2, 2025)
- chore(model gallery): add microsoft_phi-4-mini-reasoning (#5288, merged May 1, 2025)
- chore(model gallery): add fast-math-qwen3-14b (#5287, merged May 1, 2025)
- chore(model gallery): add qwen3-8b-jailbroken (#5286, merged May 1, 2025)
- chore(model gallery): add qwen3-30b-a3b-abliterated (#5285, merged May 1, 2025)
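
Among the merged changes above, "feat(llama.cpp): estimate vram usage" (#5299) adds a VRAM estimate for the llama.cpp backend. The PR's actual implementation is not shown here; the following is only a minimal, hypothetical sketch of how such an estimate is commonly computed for a transformer checkpoint (weights file size plus an fp16 KV cache plus a safety margin). The function name, parameters, and the 10% overhead figure are all assumptions for illustration, not LocalAI's API.

```python
def estimate_vram_bytes(model_file_bytes: int, n_layers: int, n_ctx: int,
                        n_embd: int, kv_bytes_per_value: int = 2,
                        overhead_fraction: float = 0.10) -> int:
    """Hypothetical sketch: rough VRAM need = weights + KV cache + margin.

    KV cache holds a K and a V value (hence the factor 2) for every layer,
    context position, and embedding dimension, at kv_bytes_per_value bytes
    each (2 for fp16). overhead_fraction pads for scratch buffers.
    """
    kv_cache = 2 * n_layers * n_ctx * n_embd * kv_bytes_per_value
    return int((model_file_bytes + kv_cache) * (1 + overhead_fraction))
```

For example, an 8 GB model file with 32 layers, a 4096-token context, and a 4096-wide embedding yields roughly an extra 2 GiB of KV cache before the margin is applied. A scheduler could compare this estimate against reported free GPU memory before deciding how many layers to offload.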
3 Issues closed by 2 people
- feat: automatically adjust default gpu_layers by available GPU memory (#3541, closed May 2, 2025)
- feat: Improve automatic VRAM allocation from llama-cpp (#5257, closed May 2, 2025)
- Issues running a few Models. (#4231, closed May 1, 2025)
7 Issues opened by 6 people
- Possible Deadlock on loading model with LOCALAI_SINGLE_ACTIVE_BACKEND=true (#5328, opened May 6, 2025)
- Whisper API not compatible with OpenAI (#5327, opened May 6, 2025)
- Backend GGML outdated builds causing it to fail on newer nvidia GPUs (#5315, opened May 4, 2025)
- Model fails to load with RCP error after clean install (#5314, opened May 4, 2025)
- error with txt2vid example in docs (#5291, opened May 1, 2025)
- model used in img2vid example in docs is no longer available/valid (#5290, opened May 1, 2025)
8 Unresolved conversations
Conversations sometimes continue on older items that are not yet closed. Below are the Issues and Pull Requests with unresolved conversations.
- Completion endpoint does not count tokens when using vLLM backend (#3436, commented on May 1, 2025 • 0 new comments)
- Watchdog Time Out detection error. (#5221, commented on May 2, 2025 • 0 new comments)
- LocalAI v2.28 does not use the ENV file in docker. (#5280, commented on May 4, 2025 • 0 new comments)
- Clean install fails to run any model (#5225, commented on May 5, 2025 • 0 new comments)
- Installation problems on MacOS (#4300, commented on May 5, 2025 • 0 new comments)
- Can't build LocalAI with llama.cpp with CUDA (#3418, commented on May 6, 2025 • 0 new comments)
- chore: :arrow_up: Update PABannier/bark.cpp to `5d5be84f089ab9ea53b7a793f088d3fbf7247495` (#4786, commented on May 7, 2025 • 0 new comments)
- chore: :arrow_up: Update leejet/stable-diffusion.cpp to `10c6501bd05a697e014f1bee3a84e5664290c489` (#4925, commented on May 7, 2025 • 0 new comments)