v2.16.0
Welcome to LocalAI's latest update!
Woot woot! So excited to share this release: a lot of new features are landing in LocalAI!
Introducing Distributed Llama.cpp Inferencing
It is now possible to distribute the inferencing workload for llama.cpp models across different workers!
This feature has landed with #2324 and is based on the upstream work of @rgerganov in ggerganov/llama.cpp#6829.
How it works: a front-end server (LocalAI) handles the OpenAI-compatible API requests, while llama.cpp workers share the computation. This makes it possible to run larger models split across different nodes!
How to use it
To start a worker that offloads computation, run:

```bash
local-ai llamacpp-worker <listening_address> <listening_port>
```
Alternatively, you can follow the llama.cpp README and build the rpc-server (https://github.com/ggerganov/llama.cpp/blob/master/examples/rpc/README.md), which is also compatible with LocalAI.
When starting the LocalAI server, which accepts the API requests, specify the list of worker addresses with the LLAMACPP_GRPC_SERVERS environment variable:

```bash
LLAMACPP_GRPC_SERVERS="address1:port,address2:port" local-ai run
```
At this point, requests hitting the LocalAI server will have their workload distributed across the nodes!
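Once the server is up, requests go through the usual OpenAI-compatible endpoint and the computation is split across the configured workers behind the scenes. A minimal sketch (the port and model name are assumptions, adjust them to your setup):

```bash
# Standard chat completion request against the LocalAI front-end; the llama.cpp
# workload is transparently distributed across the workers listed in
# LLAMACPP_GRPC_SERVERS.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hermes-2-theta-llama-3-8b",
    "messages": [{"role": "user", "content": "Hello from a distributed setup!"}]
  }'
```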
Peer2Peer llama.cpp
LocalAI is the first free, open source AI project to offer complete, decentralized, private, peer-to-peer LLM inferencing on top of the libp2p protocol. There is no "public swarm" to offload the computation to; instead, you build your own cluster of local and remote machines and distribute LLM computation across it.
This feature combines the distributed llama.cpp inferencing described above with features from one of my other projects, https://github.com/mudler/edgevpn.
LocalAI builds on top of the two and lets you create a private peer-to-peer network between nodes, without centralizing connections or manually configuring IP addresses: it unlocks totally decentralized, private, peer-to-peer inferencing. It also works across different NAT-ed networks (DHT and mDNS are used as discovery mechanisms).
How it works: A pre-shared token can be generated and shared between workers and the server to form a private, decentralized, p2p network.
How to use it
- Start the server with --p2p:

```bash
./local-ai run --p2p
# 1:02AM INF loading environment variables from file envFile=.env
# 1:02AM INF Setting logging to info
# 1:02AM INF P2P mode enabled
# 1:02AM INF No token provided, generating one
# 1:02AM INF Generated Token:
# XXXXXXXXXXX
# 1:02AM INF Press a button to proceed
```

A token is displayed; copy it and press Enter.
You can re-use the same token later when restarting the server by passing --p2ptoken (or setting the P2P_TOKEN environment variable).
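For instance, a later restart can reuse the token via the environment variable (a minimal sketch; the token value is a placeholder):

```bash
# Reuse the previously generated token instead of generating a new one
P2P_TOKEN="XXXXXXXXXXX" ./local-ai run --p2p
```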
- Start the workers. Copy the local-ai binary to other hosts and run as many workers as you like with that token:
```bash
TOKEN=XXX ./local-ai p2p-llama-cpp-rpc
# 1:06AM INF loading environment variables from file envFile=.env
# 1:06AM INF Setting logging to info
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:288","message":"connmanager disabled\n"}
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:295","message":" go-libp2p resource manager protection enabled"}
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:409","message":"max connections: 100\n"}
# 1:06AM INF Starting llama-cpp-rpc-server on '127.0.0.1:34371'
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"node/node.go:118","message":" Starting EdgeVPN network"}
# create_backend: using CPU backend
# Starting RPC server on 127.0.0.1:34371, backend memory: 31913 MB
# 2024/05/19 01:06:01 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). # See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.
# {"level":"INFO","time":"2024-05-19T01:06:01.805+0200","caller":"node/node.go:172","message":" Node ID: 12D3KooWJ7WQAbCWKfJgjw2oMMGGss9diw3Sov5hVWi8t4DMgx92"}
# {"level":"INFO","time":"2024-05-19T01:06:01.806+0200","caller":"node/node.go:173","message":" Node Addresses: [/ip4/127.0.0.1/tcp/44931 /ip4/127.0.0.1/udp/33251/quic-v1/webtransport/certhash/uEiAWAhZ-W9yx2ZHnKQm3BE_ft5jjoc468z5-Rgr9XdfjeQ/certhash/uEiB8Uwn0M2TQBELaV2m4lqypIAY2S-2ZMf7lt_N5LS6ojw /ip4/127.0.0.1/udp/35660/quic-v1 /ip4/192.168.68.110/tcp/44931 /ip4/192.168.68.110/udp/33251/quic-v1/webtransport/certhash/uEiAWAhZ-W9yx2ZHnKQm3BE_ft5jjoc468z5-Rgr9XdfjeQ/certhash/uEiB8Uwn0M2TQBELaV2m4lqypIAY2S-2ZMf7lt_N5LS6ojw /ip4/192.168.68.110/udp/35660/quic-v1 /ip6/::1/tcp/41289 /ip6/::1/udp/33160/quic-v1/webtransport/certhash/uEiAWAhZ-W9yx2ZHnKQm3BE_ft5jjoc468z5-Rgr9XdfjeQ/certhash/uEiB8Uwn0M2TQBELaV2m4lqypIAY2S-2ZMf7lt_N5LS6ojw /ip6/::1/udp/35701/quic-v1]"}
# {"level":"INFO","time":"2024-05-19T01:06:01.806+0200","caller":"discovery/dht.go:104","message":" Bootstrapping DHT"}
```

(Note: you can also supply the token via command-line arguments.)
At this point, you should see messages in the server logs indicating that new workers have been discovered.
- Now you can run inference as usual against the server (the node started in step 1), for example with the same OpenAI-compatible request shown earlier.
Interested in trying it out? While we are still updating the documentation, you can read the full instructions in #2343.
Advanced Function Calling Support with Mixed JSON Grammars
LocalAI gets better at function calling with mixed grammars!
With this release, LocalAI introduces a powerful new capability: support for mixed JSON BNF grammars. You can specify a grammar that lets the LLM output both structured JSON and free text.
How to use it:
To enable mixed grammars, set function.grammar.mixed_mode = true in the YAML configuration file, for example:

```yaml
function:
  # disable injecting the "answer" tool
  disable_no_action: true
  grammar:
    # This allows the grammar to also return messages
    mixed_mode: true
```
This gives LocalAI a more flexible and powerful grammar system for interpreting JSON coming from the LLM: with mixed mode, the model can reply either with a structured JSON tool call or with a plain-text message, and LocalAI handles both.
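To illustrate, with mixed mode enabled a standard OpenAI-style tools request works unchanged, and the model is free to answer either with a tool call or with a regular text message. A minimal sketch (the port, model name, and tool definition are assumptions):

```bash
# The response will contain either a regular "content" message or a
# "tool_calls" entry, depending on what the model decides to emit.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hermes-2-pro-mistral",
    "messages": [{"role": "user", "content": "What is the weather in Rome?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```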
Grammars can also be turned off entirely, leaving it to the user to define how the LLM output is parsed so that LocalAI can still return responses compliant with the OpenAI REST spec.
For example, to interpret Hermes results, you can declare regexes in function.json_regex_match to extract the tool call from the LLM response:
```yaml
function:
  grammar:
    disable: true
  # disable injecting the "answer" tool
  disable_no_action: true
  return_name_in_function_response: true
  json_regex_match:
    - "(?s)<tool_call>(.*?)</tool_call>"
    - "(?s)<tool_call>(.*?)"
  replace_llm_results:
    # Drop the scratchpad content from responses
    - key: "(?s)<scratchpad>.*</scratchpad>"
      value: ""
  replace_function_results:
    # Replace everything that is not JSON array or object, just in case.
    - key: '(?s)^[^{\[]*'
      value: ""
    - key: '(?s)[^}\]]*$'
      value: ""
    # Drop the scratchpad content from responses
    - key: "(?s)<scratchpad>.*</scratchpad>"
      value: ""
```
Note that regexes can still be used when mixed grammars are enabled.
This is especially important for backends that do not support grammars, such as transformers or OpenVINO models, which can now also do function calling. While we update the docs, further details can be found in the PRs listed in the changelog below.
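To make the json_regex_match rules above concrete, here is a hypothetical Hermes-style raw output and the payload the <tool_call> matcher would capture (illustration only; the tool name and arguments are made up):

```bash
# Hypothetical raw LLM output wrapping the tool call in <tool_call> tags
RAW='Sure, let me check that. <tool_call>{"name": "get_weather", "arguments": {"city": "Rome"}}</tool_call>'

# Roughly what the "(?s)<tool_call>(.*?)</tool_call>" matcher extracts,
# which LocalAI then returns as an OpenAI-style tool call
echo "$RAW" | sed -n 's/.*<tool_call>\(.*\)<\/tool_call>.*/\1/p'
# -> {"name": "get_weather", "arguments": {"city": "Rome"}}
```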
New Model Additions and Updates
Our model gallery continues to grow with exciting new additions like Aya-35b, Mistral-0.3, and Hermes-2-Theta, plus updates to existing models to keep them at the cutting edge.
This release brings major enhancements to tool calling support. Besides making our default models in the AIO images more performant, you can now try an enhanced out-of-the-box function calling experience with the Hermes model family (Hermes-2-Pro-Mistral and Hermes-2-Theta-Llama-3).
Our LocalAI function model!
I have fine-tuned a function-calling model specifically designed to fully leverage LocalAI's grammar support; you can already find it in the model gallery and on Hugging Face.
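A minimal sketch of starting it from the gallery (the model name is an assumption based on the entry added in #2355; check the gallery in the WebUI for the exact id):

```bash
# Assumed gallery id; verify the exact name in the model gallery before running
local-ai run localai-llama3-8b-function-call-v0.2
```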
Single Binary Release: Simplified Deployment and Management
In our continuous effort to streamline the user experience and deployment process, LocalAI v2.16.0 proudly introduces a single binary release. This enhancement, thanks to @sozercan's contributions, consolidates all variants (CUDA and non-CUDA releases) and dependencies into one compact executable file.
This change simplifies the installation and update processes, reduces compatibility issues, and speeds up setup for new users and existing deployments: binary releases are now more portable than ever!
Bug Fixes and Improvements
A host of bug fixes have been implemented to ensure smoother operation and integration. Key fixes include enhancements to the Intel build process, stability adjustments for setuptools in Python backends, and critical updates ensuring the successful build of p2p configurations.
Migrating Python Backends: From Conda to UV
LocalAI has migrated its Python backends from Conda to uv. This transition, thanks to @cryptk's contributions, enhances the efficiency and scalability of our backend operations. Users will experience faster setup times and reduced complexity, streamlining the development process and making it easier to manage dependencies across different environments.
Let's Make Some Noise!
A gigantic THANK YOU to everyone who's contributed: your feedback, bug squashing, and feature suggestions are what make LocalAI shine. To all our heroes out there supporting other users and sharing their expertise: you're the real MVPs!
Remember, LocalAI thrives on community support, not big corporate bucks. If you love what we're building, show some love! A shoutout on social media (@LocalAI_OSS and @mudler_it on Twitter/X), joining our sponsors, or simply starring us on GitHub makes all the difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Thanks a ton, and.. enjoy this release!
What's Changed
Bug fixes
- build: do not specify a BUILD_ID by default by @mudler in #2284
- fix: add missing openvino/optimum/etc libraries for Intel, fixes #2289 by @cryptk in #2292
- add setuptools for openvino by @fakezeta in #2301
- fix: add setuptools to all requirements-intel.txt files for python backends by @cryptk in #2333
- ci: correctly build p2p in GO_TAGS by @mudler in #2369
- ci: generate specific image for intel builds by @mudler in #2374
- fix: stablediffusion binary by @sozercan in #2385
Exciting New Features
- feat: migrate python backends from conda to uv by @cryptk in #2215
- feat: create bash library to handle install/run/test of python backends by @cryptk in #2286
- feat(grammar): support models with specific construct by @mudler in #2291
- feat(ui): display number of available models for installation by @mudler in #2298
- feat: auto select llama-cpp cpu variant by @sozercan in #2305
- feat(llama.cpp): add flash_attention and no_kv_offloading by @mudler in #2310
- feat(functions): support models with no grammar and no regex by @mudler in #2315
- feat(functions): allow to set JSON matcher by @mudler in #2319
- feat: auto select llama-cpp cuda runtime by @sozercan in #2306
- feat(llama.cpp): add distributed llama.cpp inferencing by @mudler in #2324
- feat(functions): mixed JSON BNF grammars by @mudler in #2328
- feat(functions): simplify parsing, read functions as list by @mudler in #2340
- feat(functions): Enable true regex replacement for the regexReplacement option by @lenaxia in #2341
- feat(backends): add openvoice backend by @mudler in #2334
- feat(webui): statically embed js/css assets by @mudler in #2348
- feat(functions): allow to use JSONRegexMatch unconditionally by @mudler in #2349
- feat(functions): don't use yaml.MapSlice by @mudler in #2354
- build: add sha by @mudler in #2356
- feat(llama.cpp): Totally decentralized, private, distributed, p2p inference by @mudler in #2343
- feat(functions): relax mixedgrammars by @mudler in #2365
- models(gallery): add mistral-0.3 and command-r, update functions by @mudler in #2388
Models
- models(gallery): add aloe by @mudler in #2283
- models(gallery): add Llama-3-8B-Instruct-abliterated by @mudler in #2288
- models(gallery): add l3-chaoticsoliloquy-v1.5-4x8b by @mudler in #2295
- models(gallery): add jsl-medllama-3-8b-v2.0 by @mudler in #2296
- models(gallery): add llama-3-refueled by @mudler in #2297
- models(gallery): add aura-llama-Abliterated by @mudler in #2309
- models(gallery): add Bunny-llama by @mudler in #2311
- models(gallery): add lumimaidv2 by @mudler in #2312
- models(gallery): add orthocopter by @mudler in #2313
- fix(gallery) Correct llama3-8b-instruct model file by @tannisroot in #2330
- models(gallery): add hermes-2-theta-llama-3-8b by @mudler in #2331
- models(gallery): add yi 6/9b, sqlcoder, sfr-iterative-dpo by @mudler in #2335
- models(gallery): add anita by @mudler in #2344
- models(gallery): add master-yi by @mudler in #2345
- models(gallery): update poppy porpoise mmproj by @mudler in #2346
- models(gallery): add LocalAI-Llama3-8b-Function-Call-v0.2-GGUF by @mudler in #2355
- models(gallery): add stheno by @mudler in #2358
- fix(gallery): checksum Meta-Llama-3-70B-Instruct.Q4_K_M.gguf - #2364 by @Nold360 in #2366
- models(gallery): add phi-3-medium-4k-instruct by @mudler in #2367
- models(gallery): add hercules and helpingAI by @mudler in #2376
- ci(checksum_checker): do get sha from hf API when available by @mudler in #2380
- models(gallery): ⬆️ update checksum by @localai-bot in #2383
- models(gallery): ⬆️ update checksum by @localai-bot in #2386
- models(gallery): add aya-35b by @mudler in #2391
Documentation and examples
- docs: Update semantic-todo/README.md by @eltociear in #2294
- Add Home Assistant Integration by @valentinfrlch in #2387
- Add warning for running the binary on MacOS by @mauromorales in #2389
Dependencies
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2281
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #2280
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2285
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2290
- feat(swagger): update swagger by @localai-bot in #2302
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2303
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #2317
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #2326
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2316
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #2329
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2337
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2339
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2342
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2351
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #2352
- dependencies(grpcio): bump to fix CI issues by @mudler in #2362
- deps(llama.cpp): update and adapt API changes by @mudler in #2381
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #2361
- ⬆️ Update go-skynet/go-bert.cpp by @localai-bot in #1225
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2360
Other Changes
- refactor: Minor improvements to BackendConfigLoader by @dave-gray101 in #2353
New Contributors
- @tannisroot made their first contribution in #2330
- @lenaxia made their first contribution in #2341
- @valentinfrlch made their first contribution in #2387
Full Changelog: v2.15.0...v2.16.0