* refactor(gallery): move under core/
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(unarchive): do not allow symlinks
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Contains simple fixes to warnings and errors, removes a broken/outdated test, runs go mod tidy, and, as the actual change, centralizes base64 image handling
Signed-off-by: Dave Lee <dave@gray101.com>
* Support specifying oci:// and ollama:// for model URLs
Fixes: https://github.com/mudler/LocalAI/issues/2527
Fixes: https://github.com/mudler/LocalAI/issues/1028
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
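For illustration, the new URL schemes might be used like this (a usage sketch; the model references are placeholders, not verified gallery entries):
```bash
# Pull and run a model from an Ollama registry
local-ai run ollama://gemma:2b

# Pull and run a model packaged as an OCI image
local-ai run oci://localai/phi-2:latest
```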
* Lower watcher warnings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Allow to install ollama models from CLI
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Do not keep file ownership
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Skip test on darwin
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
When offering fallback libs, use the proper env var for darwin
Note: this does not include the libraries themselves; it only sets the
proper env var so the libs are picked up on darwin.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
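A minimal Go sketch of the idea, not LocalAI's actual code: select the dynamic-loader search-path env var per OS. Using DYLD_FALLBACK_LIBRARY_PATH on darwin is an assumption of this example.
```go
// Hypothetical sketch: pick the dynamic-loader search-path env var per OS.
package main

import (
	"fmt"
	"runtime"
)

// libPathEnvVar returns the env var the dynamic loader consults for
// additional library search paths on the current OS.
func libPathEnvVar() string {
	if runtime.GOOS == "darwin" {
		return "DYLD_FALLBACK_LIBRARY_PATH" // assumed darwin variant
	}
	return "LD_LIBRARY_PATH"
}

func main() {
	fmt.Println("fallback libs env var:", libPathEnvVar())
}
```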
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* gen a static page instead (we force DNS redirects to it)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(gallery): install models from CLI, unify install
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
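A hypothetical invocation of the unified install path (the subcommand shape and model name are illustrative of the intent, not a verified CLI surface):
```bash
# Install a model from the configured galleries via the CLI
local-ai models install hermes-2-pro-mistral
```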
* Make the model page graphics uniform
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Makefile: update targets
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Slightly enhance gallery view
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: try to build for arm64
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Allow to skip hipblas on make dist
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* use arm64 cross compiler
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* correctly target go arm64
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* create a separate target
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cross-compile grpc
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add Protobuf include dirs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* temp disable CUDA build
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* aarch64 builds: Reduce backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Even fewer backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Even fewer backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(startup): allow to load libs from extracted assets
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* makefile: set arch
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(amdgpu): try to build in single binary
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Release space from worker
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* pass basePath down to pkg/downloader
Signed-off-by: Dave Lee <dave@gray101.com>
* enforce
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Dave Lee <dave@gray101.com>
* models(gallery): add mistral-0.3 and command-r, update functions
Also add disable_parallel_new_lines to disable newlines in the JSON
output when forcing parallel tools. Some models (like Mistral) might be
very sensitive to that when used for function calling.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
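A minimal sketch of how this could look in the model YAML, assuming the option sits alongside the parallel-calls setting under the `function` stanza (the exact nesting may differ):
```yaml
function:
  # Force parallel calls in the grammar
  parallel_calls: true
  # Do not separate the JSON objects of parallel calls with newlines
  disable_parallel_new_lines: true
```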
* models(gallery): add aya-23-8b
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(functions): relax mixedgrammars
Extend the functionality even further: when mixed mode is enabled,
tolerate both strings and JSON in the result - in that case we make
sure that the JSON can be correctly parsed.
This also updates the examples and the gallery model to configure the
grammar.
The changeset also breaks the current function/grammar configuration,
as it now reserves a dedicated stanza in the YAML config.
For example:
```yaml
function:
  grammar:
    # This allows the grammar to also return messages
    mixed_mode: true
    # Prefix to add to the grammar
    # prefix: '<tool_call>\n'
    # Force parallel calls in the grammar
    # parallel_calls: true
```
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor, add a way to disable mixed json and freestring
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fix linting issues
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(llama.cpp): Enable decentralized, distributed inference
As https://github.com/mudler/LocalAI/pull/2324 introduced distributed inferencing thanks to
@rgerganov's implementation in https://github.com/ggerganov/llama.cpp/pull/6829 in upstream llama.cpp,
it is now possible to distribute the workload to remote llama.cpp gRPC servers.
This changeset uses mudler/edgevpn to establish a secure, distributed network between the nodes using a shared token.
The token is generated automatically when starting the server with the `--p2p` flag, and can be used by starting the workers
with `local-ai worker p2p-llama-cpp-rpc`, passing the token via environment variable (TOKEN) or with args (--token).
As per how mudler/edgevpn works, a network is established between the server and the workers with DHT and mDNS discovery protocols;
the llama.cpp RPC server is automatically started and exposed to the underlying p2p network so the API server can connect to it.
When the HTTP server is started, it discovers the workers in the network and automatically creates the port-forwards to the service locally.
llama.cpp is then configured to use those services.
This feature is gated behind the "p2p" GO_FLAGS.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
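Based on the flags and commands named above, a usage sketch (the token value is a placeholder printed by the server):
```bash
# Start the API server with p2p enabled; it generates and prints a shared token
local-ai run --p2p

# On each worker node, join the network with that token
TOKEN=<token-from-server> local-ai worker p2p-llama-cpp-rpc
# or equivalently:
local-ai worker p2p-llama-cpp-rpc --token <token-from-server>
```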
* go mod tidy
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: add p2p tag
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* better message
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(functions): allow to use JSONRegexMatch unconditionally
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(functions): make json_regex_match a list
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
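With this change, a config that previously held a single pattern can list several; a sketch (the first pattern is taken from the Hermes example below, the second is illustrative):
```yaml
function:
  json_regex_match:
    - "(?s)<tool_call>(.*?)</tool_call>"
    - "(?s)<tool_call>(.*)"
```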
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
feat(functions): support mixed JSON BNF grammar
This PR provides new options to control how functions are extracted from
the LLM, and also provides more control over how JSON grammars can be used
(also in conjunction).
New YAML settings introduced:
- `grammar_message`: when enabled, the generated grammar can also decide
to push strings and not only JSON objects. This allows the LLM to either
respond freely or with JSON.
- `grammar_prefix`: allows prefixing a string to the JSON grammar
definition.
- `replace_results`: a map that allows replacing strings in the LLM
result.
As an example, consider the following settings for Hermes-2-Pro-Mistral,
which allow extracting both the JSON results coming from the model and the
ones coming from the grammar:
```yaml
function:
  # disable injecting the "answer" tool
  disable_no_action: true
  # This allows the grammar to also return messages
  grammar_message: true
  # Prefix to add to the grammar
  grammar_prefix: '<tool_call>\n'
  return_name_in_function_response: true
  # Without grammar uncomment the lines below
  # Warning: this is relying only on the capability of the
  # LLM model to generate the correct function call.
  # no_grammar: true
  # json_regex_match: "(?s)<tool_call>(.*?)</tool_call>"
  replace_results:
    "<tool_call>": ""
    "\'": "\""
```
Note: to disable grammar usage entirely in the example above, uncomment
`no_grammar` and `json_regex_match`.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* auto select cpu variant
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* remove cuda target for now
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* fix metal
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* fix path
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* cuda
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* auto select cuda
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* update test
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* select CUDA backend only if present
Signed-off-by: mudler <mudler@localai.io>
* ci: keep cuda bin in path
Signed-off-by: mudler <mudler@localai.io>
* Makefile: make dist now builds also cuda
Signed-off-by: mudler <mudler@localai.io>
* Keep pushing fallback in case auto-flagset/nvidia fails
There could be other reasons for which the default binary may fail. For example, we might have detected an Nvidia GPU,
but the user might not have the drivers/CUDA libraries installed on the system, so it would fail to start.
We keep the llama.cpp fallback at the end of the llama.cpp backend list so we can try falling back to it in case things go wrong.
Signed-off-by: mudler <mudler@localai.io>
* Do not build cuda on MacOS
Signed-off-by: mudler <mudler@localai.io>
* cleanup
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* Apply suggestions from code review
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
---------
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: mudler <mudler@localai.io>
When enabling grammar with functions, it might be useful to
allow more flexibility to support models that are fine-tuned to return
function calls of the form { "name": "function_name", "arguments": {...} }
rather than { "function": "function_name", "arguments": {...} }.
This might call for a more generic approach later on, but for the time being we can easily support both,
as we just have to specify different types.
If needed we can expand on this later on.
Signed-off-by: mudler <mudler@localai.io>
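A minimal Go sketch of supporting both shapes via different types (hypothetical helper, not LocalAI's actual implementation; field handling is simplified):
```go
package main

import (
	"encoding/json"
	"fmt"
)

// Two struct shapes covering both fine-tune formats.
type nameCall struct {
	Name      string          `json:"name"`
	Arguments json.RawMessage `json:"arguments"`
}

type functionCall struct {
	Function  string          `json:"function"`
	Arguments json.RawMessage `json:"arguments"`
}

// parseCall tries the "name" shape first, then the "function" shape.
func parseCall(raw []byte) (string, json.RawMessage, error) {
	var n nameCall
	if err := json.Unmarshal(raw, &n); err == nil && n.Name != "" {
		return n.Name, n.Arguments, nil
	}
	var f functionCall
	if err := json.Unmarshal(raw, &f); err == nil && f.Function != "" {
		return f.Function, f.Arguments, nil
	}
	return "", nil, fmt.Errorf("unrecognized function call shape")
}

func main() {
	name, args, err := parseCall([]byte(`{"name":"get_weather","arguments":{"city":"Rome"}}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(name, string(args))
}
```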
* feat(ui): allow to set system prompt for chat
Also make the models in the index clickable, and display them as a table.
Fixes #2257
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(vision): support also png with base64 input
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(ui): support vision and upload of files
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* display the processed image
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make trust remote code stand out
Signed-off-by: mudler <mudler@localai.io>
* feat(ui): track in progress job across index/model gallery
Signed-off-by: mudler <mudler@localai.io>
* minor fixups
Signed-off-by: mudler <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: mudler <mudler@localai.io>
* ux: change welcome when there are no models installed
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ux: filter
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ux: show tags in filter
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wip
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make tags clickable
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* allow to delete models from the list
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ui: display icon of installed models
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* gallery: remove gallery file when removing model
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(gallery): show a re-install button
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make filter buttons, rename Gallery field
Signed-off-by: mudler <mudler@localai.io>
* show again buttons at end of operations
Signed-off-by: mudler <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: mudler <mudler@localai.io>
* feat(initializer): do not specify backends to autoload
We can simply try to autoload the backends extracted in the asset dir.
This will allow building variants of the same backend (e.g. with different instruction sets),
so we can ship a single binary for all the variants.
Signed-off-by: mudler <mudler@localai.io>
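A minimal Go sketch of the approach, not LocalAI's actual code: instead of a hardcoded backend list, enumerate whatever backend binaries were extracted into the asset dir. The `backend-assets/grpc` layout is an assumption of this example.
```go
// Hypothetical sketch: discover backend binaries in the extracted asset dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func discoverBackends(assetDir string) ([]string, error) {
	entries, err := os.ReadDir(filepath.Join(assetDir, "backend-assets", "grpc"))
	if err != nil {
		return nil, err
	}
	var backends []string
	for _, e := range entries {
		if !e.IsDir() {
			backends = append(backends, e.Name())
		}
	}
	return backends, nil
}

func main() {
	backends, err := discoverBackends(os.TempDir())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, b := range backends {
		fmt.Println("candidate backend:", b)
	}
}
```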
* refactor(prepare): refactor out llama.cpp prepare steps
Make the steps idempotent so that we can re-build
Signed-off-by: mudler <mudler@localai.io>
* [TEST] feat(build): build noavx version along
Signed-off-by: mudler <mudler@localai.io>
* build: make build parallel
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* build: do not override CMAKE_ARGS
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* build: add fallback variant
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(huggingface-langchain): fail if no token is set
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(huggingface-langchain): rename
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: do not autoload local-store
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: give priority between the listed backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: mudler <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* start breaking up the giant channel refactor now that it's better understood - easier to merge in smaller bites
Signed-off-by: Dave Lee <dave@gray101.com>
* add concurrency and base64 back in, along with new base64 tests.
Signed-off-by: Dave Lee <dave@gray101.com>
* Automatic rename of whisper.go's Result to TranscriptResult
Signed-off-by: Dave Lee <dave@gray101.com>
* remove pkg/concurrency - significant changes coming in split 2
Signed-off-by: Dave Lee <dave@gray101.com>
* fix comments
Signed-off-by: Dave Lee <dave@gray101.com>
* add list_model service as another low-risk service to get it out of the way
Signed-off-by: Dave Lee <dave@gray101.com>
* split backend config loader into a separate file from the actual config struct. No changes yet, just reducing cognitive load with smaller files of logical blocks
Signed-off-by: Dave Lee <dave@gray101.com>
* rename state.go ==> application.go
Signed-off-by: Dave Lee <dave@gray101.com>
* fix lost import?
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Dave Lee <dave@gray101.com>
* feat(gallery): op now supports deletion of models
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Wire things with WebUI(WIP)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* minor improvements
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>