* feat: extract output with regexes from LLMs
This changeset adds `extract_regex` to the LLM config. It is a list of
regexes that are matched against the LLM output and used to extract the
final text from it. This is particularly useful for LLMs which output
their final results inside tags.
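A minimal sketch of how such extraction could behave (the helper name and config shape here are illustrative, not the actual LocalAI implementation):
```go
package main

import (
	"fmt"
	"regexp"
)

// extractWithRegexes applies each pattern in order and returns the first
// capture group of the first pattern that matches, falling back to the
// raw output when nothing matches. Hypothetical helper for illustration.
func extractWithRegexes(patterns []string, output string) (string, error) {
	for _, p := range patterns {
		re, err := regexp.Compile(p)
		if err != nil {
			return "", fmt.Errorf("invalid extract_regex %q: %w", p, err)
		}
		if m := re.FindStringSubmatch(output); len(m) > 1 {
			return m[1], nil
		}
	}
	return output, nil
}

func main() {
	out, _ := extractWithRegexes(
		[]string{`(?s)<output>(.*)</output>`},
		"thinking...<output>42</output>",
	)
	fmt.Println(out) // prints: 42
}
```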
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add tests, enhance output in case of configuration error
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: add endpoint to list system information
For now, it lists the available backends, but it can be expanded later on
to include more system information (such as detected GPU devices, RAM,
configured threads, and so on).
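As a rough sketch of the idea (route and response shape are assumptions, and the real server does not use plain net/http), the endpoint boils down to serializing the detected backends:
```go
package main

import (
	"encoding/json"
	"net/http"
)

// SystemInformation is a hypothetical response shape: today it only
// carries the available backends, but more fields (GPUs, RAM, threads)
// can be added later without breaking clients.
type SystemInformation struct {
	Backends []string `json:"backends"`
}

func systemInfoHandler(listBackends func() []string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(SystemInformation{Backends: listBackends()})
	}
}

func main() {
	http.HandleFunc("/system", systemInfoHandler(func() []string {
		return []string{"llama-cpp", "whisper"}
	}))
	http.ListenAndServe(":8080", nil)
}
```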
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* show also external backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add test
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(cli): be consistent between workers and expose ExtraLLamaCPPArgs to both
Fixes: https://github.com/mudler/LocalAI/issues/3427
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* bump grpcio
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Due to a previous refactor, the client constructor was tied to the model
address; however, that was just a string, and we would rebuild the client
from it each time.
With this change the loader returns a *Model which carries a constructor
for the client and caches the client after the first connection.
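A minimal sketch of the pattern, with illustrative names: the Model keeps the address and builds the client lazily, caching it after the first connection:
```go
package model

import "sync"

// Client is a stand-in for the real gRPC backend client.
type Client struct{ Address string }

// Model carries the address plus a cached client, so the client is
// constructed once on first connection instead of on every call.
type Model struct {
	address string
	once    sync.Once
	client  *Client
}

func NewModel(address string) *Model { return &Model{address: address} }

// GRPC returns the cached client, building it on the first call.
func (m *Model) GRPC() *Client {
	m.once.Do(func() {
		m.client = &Client{Address: m.address}
	})
	return m.client
}
```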
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* initial version of elevenlabs compatible soundgeneration api and cli command
Signed-off-by: Dave Lee <dave@gray101.com>
* minor cleanup
Signed-off-by: Dave Lee <dave@gray101.com>
* restore TTS, add test
Signed-off-by: Dave Lee <dave@gray101.com>
* remove stray s
Signed-off-by: Dave Lee <dave@gray101.com>
* fix
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Dave Lee <dave@gray101.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
This was noticed with models returning content besides function calls.
Sadly we can't easily test that in CI, so it went unnoticed.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(p2p): single-node when sharing federated instance
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: refactor out and extract into functions
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(anime.js): correctly set the static path
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop anime.js (unused)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(p2p): avoid starting the node twice
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(p2p): keep exposing the service if we don't start the llama.cpp runner
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor: extract proxy into functions
* feat(federation): do not allocate services, directly connect with libp2p
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add more debug messages
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: allow to disable gallery endpoints
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* improve p2p messaging
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* improve error handling
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Make sure to close the listening socket when the context is done
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(explorer): allow to specify a worker target
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(explorer): correctly load balance requests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(explorer): mark load balanced by default
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: make sure to delete tunnels that might not exist anymore
If a worker goes off and on, it might change its tunnel address, and we
want to load balance only across the active tunnels.
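A sketch of the idea, with hypothetical data structures: before balancing, drop tunnels whose worker is no longer in the active set:
```go
package p2p

// pruneTunnels removes tunnels whose worker is no longer active, so the
// load balancer only ever picks an address that is still reachable.
// Types and shapes are illustrative only.
func pruneTunnels(tunnels map[string]string, active map[string]bool) []string {
	addresses := []string{}
	for worker, addr := range tunnels {
		if !active[worker] {
			delete(tunnels, worker) // worker went away or changed address
			continue
		}
		addresses = append(addresses, addr)
	}
	return addresses
}
```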
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Wire up a simple explorer DB
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wip
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor: group service IDs so they can be identified easily in the ledger table
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(discovery): the discovery service now gathers worker information correctly
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(explorer): display network token
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(explorer): display form to add new networks
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(explorer): prevent overwriting existing networks
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(explorer): display only networks with active workers
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(explorer): list only clusters in a network if it has online workers
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* remove invalid and inactive networks
If networks have no workers, delete them from the database; similarly if
they are invalid.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: add workflow to deploy new explorer versions automatically
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* build-api: build with p2p tag
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Allow to specify a connection timeout
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* logging
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Better p2p defaults
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Set loglevel
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fix dht enable
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Default to info for loglevel
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add navbar
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Slightly improve rendering
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Allow to copy the token easily
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
chore: drop gpt4all
gpt4all is already supported in llama.cpp - the backend was kept to
maintain compatibility with old gpt4all models (prior to the gguf format).
Now is a good time to clean up and remove it to slim down the compilation
process.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
feat(p2p): allow to run multiple clusters in the same network
Allow to specify a network ID via the CLI, which makes it possible to run
multiple clusters, logically separated, within the same network (by using
the same shared token).
Note: this segregation is not "secure" by any means; anyone holding the
network token can see the services available across the whole network.
However, this provides a way to separate the inference endpoints.
This allows, for instance, having a node which is both federated and has
a set of llama.cpp workers attached.
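Conceptually, the network ID just namespaces the service identifiers advertised in the shared ledger, so nodes only pick up services from their own logical cluster; a sketch with made-up helper names:
```go
package p2p

import "strings"

// serviceID namespaces a service name with the logical network ID, so
// multiple clusters can share one p2p network. Illustrative only.
func serviceID(networkID, service string) string {
	if networkID == "" {
		return service
	}
	return networkID + "_" + service
}

// sameCluster reports whether an advertised service belongs to our
// logical cluster.
func sameCluster(networkID, advertised string) bool {
	return networkID == "" || strings.HasPrefix(advertised, networkID+"_")
}
```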
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(openai): add json_schema and strict mode
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* handle err vs _
security scanners prefer that we put these branches in, and I tend to agree.
Signed-off-by: Dave <dave@gray101.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Dave <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
* fix(downloader): be consistent in downloading files
This PR puts some order in the downloader so that functions are re-used
across several places.
This fixes an issue with URIs inside the model YAML file: they would
resolve to an MD5 name rather than using the filename.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(scanner): raise an error only if unsafeFiles are found
Fixes: https://github.com/mudler/LocalAI/issues/3114
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wip
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* get rid of panics
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* expose it properly from the config
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Simplify
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* forgot to commit
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Remove focus on test
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Small fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor: break down json grammar parser in different files
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: patch to `refactor_grammars` - propagate errors (#3006)
propagate errors around
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Dave Lee <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
* feat(functions): enhance parsing with broken JSON when we parse the raw results
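One common way to tolerate slightly broken raw results is to scan for the first balanced JSON object and unmarshal just that; a simplified sketch (the real parser handles more failure modes):
```go
package functions

import "encoding/json"

// firstJSONObject extracts the first balanced {...} span from raw model
// output and tries to unmarshal it, tolerating any surrounding text.
// Sketch only; e.g. braces inside JSON strings are not handled here.
func firstJSONObject(raw string) (map[string]any, bool) {
	start := -1
	depth := 0
	for i, c := range raw {
		switch c {
		case '{':
			if depth == 0 {
				start = i
			}
			depth++
		case '}':
			if depth > 0 {
				depth--
				if depth == 0 {
					var out map[string]any
					if json.Unmarshal([]byte(raw[start:i+1]), &out) == nil {
						return out, true
					}
					start = -1 // not valid JSON, keep scanning
				}
			}
		}
	}
	return nil, false
}
```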
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* breaking: make function name by default
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(grammar): dynamically generate grammars with mutating keys
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor: simplify condition
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Update docs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Fixes:
```
panic: invalid argument to IntN
goroutine 401 [running]:
math/rand/v2.(*Rand).IntN(...)
/home/mudler/_git/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.4.linux-amd64/src/math/rand/v2/rand.go:190
math/rand/v2.IntN(...)
/home/mudler/_git/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.4.linux-amd64/src/math/rand/v2/rand.go:307
github.com/mudler/LocalAI/core/cli.Proxy.func2()
/home/mudler/_git/LocalAI/core/cli/federated.go:104 +0x76e
created by github.com/mudler/LocalAI/core/cli.Proxy in goroutine 1
/home/mudler/_git/LocalAI/core/cli/federated.go:91 +0x3c5
```
This happens when no nodes are found and something tries to hit the
federated endpoint (while no tunnels are ready yet).
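The crash comes from calling rand.IntN with a non-positive bound, so the fix boils down to guarding the selection; a simplified sketch with hypothetical names:
```go
package cli

import (
	"errors"
	"math/rand/v2"
)

// pickTunnel selects a random active tunnel, returning an error instead
// of panicking when none are ready yet.
func pickTunnel(tunnels []string) (string, error) {
	if len(tunnels) == 0 {
		// rand.IntN(0) panics with "invalid argument to IntN"
		return "", errors.New("no tunnels available yet")
	}
	return tunnels[rand.IntN(len(tunnels))], nil
}
```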
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs(swagger): finish covering the gallery section
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs: add section to explain how to install models with local-ai run
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Minor docs adjustments
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs(swagger): cover more localai/openai endpoints
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix swagger descriptions for backend_monitor.go
Signed-off-by: Dave <dave@gray101.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Dave <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
* feat(llama.cpp): add embeddings
Also enable embeddings by default for llama.cpp models
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(Makefile): prepare llama.cpp sources only once
Otherwise we keep cloning llama.cpp for each of the variants
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* do not set embeddings to false
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs: add embeddings to the YAML config reference
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* models(gallery): add deepseek-v2-lite
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Update deepseek.yaml
The trailing space here is presumably part of the template string - try using a chomp keep to get the yaml lint to accept it?
Signed-off-by: Dave <dave@gray101.com>
* Update deepseek.yaml
chomp didn't fix it; erase the space and see what happens.
Signed-off-by: Dave <dave@gray101.com>
* Update deepseek.yaml
Signed-off-by: Dave <dave@gray101.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Dave <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
fix(model-list): be consistent, skip known files from listing
This changeset does the following:
- Removes the dependency of model listing on the OpenAI schema, and skips
  known non-model files when listing (a sketch of the idea follows below).
- Tries to reduce confusion between ListModels() in the model loader and
  in the service - now there is only one ListModels, which lives in
  services and no longer depends on the OpenAI schema.
- The OpenAI-schema functions were moved near the OpenAI-specific
  endpoints that need the schema.
- Drops the ListModel service structure, as there was no real need for
  it.
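A sketch of the skip-known-files idea (the suffix list is illustrative, not the exact filter):
```go
package services

import (
	"os"
	"strings"
)

// knownSuffixes marks non-model files that should never show up in a
// model listing; illustrative, not the exact LocalAI filter.
var knownSuffixes = []string{".json", ".md", ".keep"}

func listModels(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	models := []string{}
entry:
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		for _, s := range knownSuffixes {
			if strings.HasSuffix(e.Name(), s) {
				continue entry // known non-model file, skip it
			}
		}
		models = append(models, e.Name())
	}
	return models, nil
}
```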
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* start by checking /scan during the checksum update
Signed-off-by: Dave Lee <dave@gray101.com>
* add back in Golang-side features: downloader/uri gets a struct and scan function, gallery uses it, and secscan/models calls it.
Signed-off-by: Dave Lee <dave@gray101.com>
* add a param to scan specific urls - useful for debugging
Signed-off-by: Dave Lee <dave@gray101.com>
* helpful printouts
Signed-off-by: Dave Lee <dave@gray101.com>
* fix offsets
Signed-off-by: Dave Lee <dave@gray101.com>
* fix error and naming
Signed-off-by: Dave Lee <dave@gray101.com>
* expose error
Signed-off-by: Dave Lee <dave@gray101.com>
* fix json tags
Signed-off-by: Dave Lee <dave@gray101.com>
* slight wording change
Signed-off-by: Dave Lee <dave@gray101.com>
* go mod tidy - getting warnings
Signed-off-by: Dave Lee <dave@gray101.com>
* split out python to make editing easier, add some simple code to delete contaminated entries from gallery
Signed-off-by: Dave Lee <dave@gray101.com>
* o7 to my favorite part of our old name, go-skynet
Signed-off-by: Dave Lee <dave@gray101.com>
* merge fix
Signed-off-by: Dave Lee <dave@gray101.com>
* merge fix
Signed-off-by: Dave Lee <dave@gray101.com>
* merge fix
Signed-off-by: Dave Lee <dave@gray101.com>
* address review comments
Signed-off-by: Dave Lee <dave@gray101.com>
* forgot secscan could accept multiple URLs at once
Signed-off-by: Dave Lee <dave@gray101.com>
* invert naming and actually use it
Signed-off-by: Dave Lee <dave@gray101.com>
* missed cli/models.go
Signed-off-by: Dave Lee <dave@gray101.com>
* Update .github/check_and_update.py
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Dave <dave@gray101.com>
---------
Signed-off-by: Dave Lee <dave@gray101.com>
Signed-off-by: Dave <dave@gray101.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
This allows LocalAI to be less noisy by avoiding connections to the outside.
Needed if, e.g., there is no plan to use p2p across separate networks.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Wip p2p enhancements
* get online state
* Pass-by token to show in the dashboard
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Style
* Minor fixups
* parametrize SearchID
* Refactoring
* Allow to expose/bind more services
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add federation
* Display federated mode in the WebUI
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Small fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make federated nodes visible from the WebUI
* Fix version display
* improve web page
* live page update
* visual enhancements
* enhancements
* visual enhancements
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
This fixes a breakage in rendering the template. Now the models passed
to the renderer have the ID field rather than Name.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor(gallery): move under core/
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(unarchive): do not allow symlinks
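The usual shape of this check when extracting tar archives, as a sketch (the actual code may differ):
```go
package unarchive

import (
	"archive/tar"
	"fmt"
	"io"
)

// checkEntries rejects archives containing symlinks (or hard links),
// which could otherwise be used to escape the extraction directory.
// Sketch of the check only; extraction itself is omitted.
func checkEntries(r *tar.Reader) error {
	for {
		hdr, err := r.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		switch hdr.Typeflag {
		case tar.TypeSymlink, tar.TypeLink:
			return fmt.Errorf("refusing to unarchive link entry: %s", hdr.Name)
		}
	}
}
```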
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Contains simple fixes for warnings and errors, removes a broken/outdated test, runs go mod tidy, and, as the actual change, centralizes base64 image handling.
Signed-off-by: Dave Lee <dave@gray101.com>
* Support specifying oci:// and ollama:// for model URLs
Fixes: https://github.com/mudler/LocalAI/issues/2527
Fixes: https://github.com/mudler/LocalAI/issues/1028
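Dispatching on the URL scheme is the core of it; a sketch with hypothetical fetcher hooks standing in for the real implementations:
```go
package downloader

import "strings"

// Hypothetical placeholders for the real fetchers.
var (
	pullOCIImage    = func(ref, dest string) error { return nil }
	pullOllamaModel = func(ref, dest string) error { return nil }
	downloadHTTP    = func(url, dest string) error { return nil }
)

// download routes a model URL to the right fetcher based on its scheme.
func download(url, dest string) error {
	switch {
	case strings.HasPrefix(url, "oci://"):
		return pullOCIImage(strings.TrimPrefix(url, "oci://"), dest)
	case strings.HasPrefix(url, "ollama://"):
		return pullOllamaModel(strings.TrimPrefix(url, "ollama://"), dest)
	default:
		return downloadHTTP(url, dest)
	}
}
```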
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Lower watcher warnings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Allow to install ollama models from CLI
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Do not keep file ownership
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Skip test on darwin
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* gen a static page instead (we force DNS redirects to it)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(gallery): install models from CLI, unify install
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Uniform graphic of model page
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Makefile: update targets
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Slightly enhance gallery view
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(ui): add page to talk with voice, transcription, and tts
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Enhance graphics and status reporting
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Better UX by blocking invalid actions
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wip: guess information from gguf file
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* update go mod
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Small fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Identify llama3
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Do not try to guess the name, as reading gguf files can be expensive
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Allow to disable guessing
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* pass basePath down to pkg/downloader
Signed-off-by: Dave Lee <dave@gray101.com>
* enforce
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Dave Lee <dave@gray101.com>
some minor renames and refactorings within BackendConfigLoader - make things more consistent, remove underused code, rename things for clarity
Signed-off-by: Dave Lee <dave@gray101.com>
* feat(functions): relax mixedgrammars
Extend the functionality even further: when mixed mode is enabled,
tolerate both strings and JSON in the result - in this case we make
sure that the JSON can be correctly parsed.
This also updates the examples and the gallery model to configure the
grammar.
The changeset also breaks the current function/grammar configuration, as
it now reserves a stanza in the YAML config.
For example:
```yaml
function:
  grammar:
    # This allows the grammar to also return messages
    mixed_mode: true
    # Prefix to add to the grammar
    # prefix: '<tool_call>\n'
    # Force parallel calls in the grammar
    # parallel_calls: true
```
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor, add a way to disable mixed json and freestring
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fix linting issues
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(llama.cpp): Enable decentralized, distributed inference
As https://github.com/mudler/LocalAI/pull/2324 introduced distributed inferencing thanks to
@rgerganov's implementation in https://github.com/ggerganov/llama.cpp/pull/6829 in upstream llama.cpp,
it is now possible to distribute the workload to remote llama.cpp gRPC servers.
This changeset now uses mudler/edgevpn to establish a secure, distributed network between the nodes using a shared token.
The token is generated automatically when starting the server with the `--p2p` flag, and can be used by starting the workers
with `local-ai worker p2p-llama-cpp-rpc`, passing the token via an environment variable (TOKEN) or via args (--token).
As per how mudler/edgevpn works, a network is established between the server and the workers with the dht and mdns discovery protocols;
the llama.cpp rpc server is automatically started and exposed to the underlying p2p network so the API server can connect to it.
When the HTTP server is started, it will discover the workers in the network and automatically create the local port-forwards to the services.
Then llama.cpp is configured to use those services.
This feature is behind the "p2p" GO_FLAGS.
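For example, a worker could resolve the shared token from either source like this (a sketch; the real CLI uses its own flag handling):
```go
package main

import (
	"errors"
	"flag"
	"os"
)

// resolveToken reads the shared network token from --token, falling
// back to the TOKEN environment variable, as described above.
func resolveToken() (string, error) {
	token := flag.String("token", "", "p2p network token")
	flag.Parse()
	if *token != "" {
		return *token, nil
	}
	if t := os.Getenv("TOKEN"); t != "" {
		return t, nil
	}
	return "", errors.New("no token provided: set --token or TOKEN")
}
```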
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* go mod tidy
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: add p2p tag
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* better message
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>