* chore(refactor): track internally started models by ID
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Just extend options, no need to copy
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Improve debugging for reranker failures
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Simplify model loading with rerankers
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Be more consistent when generating model options
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Uncommitted code
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Make deleteProcess more idiomatic
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Adapt CLI for sound generation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixup threads definition
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Handle corner case where c.Seed is nil
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Consistently use ModelOptions
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Adapt new code to refactoring
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Dave <dave@gray101.com>
* chore(refactor): track grpcProcess in the model structure
This avoids having to handle the data relative to the same model in two
places. It makes it easier to track and to protect with a mutex.
This also fixes race conditions while accessing the model.
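A rough sketch of the idea (simplified, not the actual LocalAI types): the process handle lives inside the model struct and is guarded by the model's own mutex, so the data for one model is no longer split across two places.

```go
package model

import "sync"

// Process is a placeholder for the backend gRPC process handle.
type Process struct {
	PID int
}

// Model keeps everything related to one loaded model in a single place.
type Model struct {
	ID string

	mu          sync.Mutex
	grpcProcess *Process
}

// SetProcess stores the process handle for this model.
func (m *Model) SetProcess(p *Process) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.grpcProcess = p
}

// GetProcess returns the tracked process handle, if any.
func (m *Model) GetProcess() *Process {
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.grpcProcess
}
```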
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(tests): run protogen-go before starting aio tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(tests): install protoc in aio tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(refactor): drop duplicated shutdown logics
- Handle locking in Shutdown and CheckModelIsLoaded in a more Go-idiomatic way (see the sketch after this list)
- Drop duplicated code and re-organize shutdown code
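A hypothetical sketch of the pattern, with stand-in names rather than the real API: take the lock with defer so every return path unlocks, and avoid holding it while stopping the backend process.

```go
package model

import "sync"

// Model is a stand-in for a loaded model.
type Model struct{ ID string }

// stop is a placeholder for terminating the backend process.
func (m *Model) stop() error { return nil }

// ModelLoader guards the set of loaded models with a single mutex.
type ModelLoader struct {
	mu     sync.Mutex
	models map[string]*Model
}

// CheckModelIsLoaded reports whether a model is currently tracked.
func (ml *ModelLoader) CheckModelIsLoaded(id string) bool {
	ml.mu.Lock()
	defer ml.mu.Unlock() // unlocks on every return path
	_, ok := ml.models[id]
	return ok
}

// ShutdownModel removes the model from the map under the lock, then stops
// the process outside of it.
func (ml *ModelLoader) ShutdownModel(id string) error {
	ml.mu.Lock()
	m, ok := ml.models[id]
	if ok {
		delete(ml.models, id)
	}
	ml.mu.Unlock()

	if !ok {
		return nil
	}
	return m.stop()
}
```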
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: drop leftover
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: improve logging and add missing locks
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(shutdown): do not immediately shut down busy backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(refactor): avoid duplicate functions
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: multiplicative backoff for shutdown (#3547)
* multiplicative backoff for shutdown
Rather than always retrying every two seconds, back off the shutdown attempt rate.
Signed-off-by: Dave <dave@gray101.com>
* Update loader.go
Signed-off-by: Dave <dave@gray101.com>
* add clamp of 2 minutes
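A minimal sketch of the resulting retry loop (a hypothetical helper, not the actual loader code): the wait starts at two seconds, doubles on every attempt, and is clamped at two minutes.

```go
package main

import (
	"fmt"
	"time"
)

const (
	initialRetry = 2 * time.Second
	maxRetry     = 2 * time.Minute
)

// shutdownWithBackoff keeps trying to stop a busy backend, doubling the
// wait between attempts instead of retrying every two seconds.
func shutdownWithBackoff(tryShutdown func() bool) {
	wait := initialRetry
	for !tryShutdown() {
		time.Sleep(wait)
		wait *= 2
		if wait > maxRetry {
			wait = maxRetry // clamp the backoff at two minutes
		}
	}
}

func main() {
	attempts := 0
	shutdownWithBackoff(func() bool {
		attempts++
		return attempts >= 2 // pretend the backend frees up on the second try
	})
	fmt.Println("backend stopped after", attempts, "attempts")
}
```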
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Dave <dave@gray101.com>
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Dave <dave@gray101.com>
Signed-off-by: Dave Lee <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
* feat: add endpoint to list system information
For now, it lists the available backends, but it can be expanded later on
to include more system information (such as detected GPU devices, RAM,
configured threads, and so on).
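A standalone sketch of what such an endpoint could look like (plain net/http and made-up field names for illustration; the real server uses its own routing):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// SystemInformationResponse is the reply shape: only the backends for now,
// with room for GPU devices, RAM, configured threads, and so on later.
type SystemInformationResponse struct {
	Backends []string `json:"backends"`
}

func main() {
	http.HandleFunc("/system", func(w http.ResponseWriter, r *http.Request) {
		resp := SystemInformationResponse{
			// in the real server this list is discovered at runtime
			Backends: []string{"llama-cpp", "whisper", "piper"},
		}
		w.Header().Set("Content-Type", "application/json")
		if err := json.NewEncoder(w).Encode(resp); err != nil {
			log.Println("encoding response:", err)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```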
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* also show external backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add test
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Due to a previous refactor we tied the client constructor to the
model address; however, that was just a string which we would use to
build the client each time.
With this change we make the loader return a *Model which carries a
constructor for the client and stores the client on the first
connection.
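A simplified sketch of the pattern (hypothetical types, not the real gRPC client): the model memoizes the client built from its address on first use.

```go
package model

import "sync"

// Client is a stand-in for the backend gRPC client.
type Client struct {
	Address string
}

// Model carries the address and lazily builds the client from it.
type Model struct {
	address string

	once   sync.Once
	client *Client
}

func NewModel(address string) *Model {
	return &Model{address: address}
}

// GRPC returns the client, constructing it on the first call only.
func (m *Model) GRPC() *Client {
	m.once.Do(func() {
		m.client = &Client{Address: m.address}
	})
	return m.client
}
```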
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* specify workdir when launching external backend for safety / relative paths, bump version, logs
Signed-off-by: Dave Lee <dave@gray101.com>
* sneak in a devcontainer fix
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Dave Lee <dave@gray101.com>
chore: drop gpt4all
gpt4all is already supported in llama.cpp - the backend was kept only for
compatibility with old gpt4all models (prior to the gguf format).
It is a good time now to clean it up and remove it to slim down the
compilation process.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(cuda): downgrade to 12.0 to increase compatibility range
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* improve messaging
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
fix(model-list): be consistent, skip known files from listing
This changeset does the following:
- Removes the dependency of listing models from the OpenAI schema.
- Tries to reduce confusion between ListModels() in the model loader and in
  the service - now there is only one ListModels, which lives in services
  and no longer depends on the OpenAI schema
- The OpenAI-schema functions were moved next to the OpenAI-specific
  endpoints that need the schema
- Drops the ListModel Service structure as there was no real need for
  it.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor(gallery): move under core/
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(unarchive): do not allow symlinks
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Contains simple fixes to warnings and errors, removes a broken/outdated test, runs go mod tidy, and, as the actual change, centralizes base64 image handling.
Signed-off-by: Dave Lee <dave@gray101.com>
* feat(amdgpu): try to build in single binary
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Release space from worker
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* auto select cpu variant
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* remove cuda target for now
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* fix metal
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* fix path
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* cuda
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* auto select cuda
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* update test
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* select CUDA backend only if present
Signed-off-by: mudler <mudler@localai.io>
* ci: keep cuda bin in path
Signed-off-by: mudler <mudler@localai.io>
* Makefile: make dist now builds also cuda
Signed-off-by: mudler <mudler@localai.io>
* Keep pushing fallback in case auto-flagset/nvidia fails
There could be other reasons why the default binary may fail. For example, we might have detected an NVIDIA GPU,
but the user might not have the drivers/CUDA libraries installed on the system, so it would fail to start.
We keep the llama.cpp fallback at the end of the llama.cpp backends so loading can fall back to it in case things go wrong.
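A rough sketch of the ordering idea, with made-up backend names: GPU- and CPU-optimized variants come first, and the plain fallback build always stays last.

```go
package main

import "fmt"

// orderBackends builds the list of llama.cpp variants to try, keeping the
// plain build as the last resort.
func orderBackends(detectedGPU bool) []string {
	var backends []string
	if detectedGPU {
		backends = append(backends, "llama-cpp-cuda")
	}
	backends = append(backends, "llama-cpp-avx2")
	// even if a GPU was detected, drivers or libraries may be missing,
	// so the fallback build is always appended at the end of the list
	backends = append(backends, "llama-cpp-fallback")
	return backends
}

func main() {
	fmt.Println(orderBackends(true))
}
```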
Signed-off-by: mudler <mudler@localai.io>
* Do not build cuda on MacOS
Signed-off-by: mudler <mudler@localai.io>
* cleanup
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* Apply suggestions from code review
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
---------
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: mudler <mudler@localai.io>
* feat(initializer): do not specify backends to autoload
We can simply try to autoload the backends extracted in the asset dir.
This will allow building variants of the same backend (e.g. with different instruction sets),
so as to have a single binary for all the variants.
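A minimal sketch of the idea, assuming a hypothetical asset layout: every binary found under the extracted asset dir is treated as a loadable backend variant, instead of keeping a hard-coded list.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// discoverBackends returns the names of the backend binaries extracted
// under assetDir, instead of relying on a hard-coded list.
func discoverBackends(assetDir string) ([]string, error) {
	entries, err := os.ReadDir(filepath.Join(assetDir, "backend-assets", "grpc"))
	if err != nil {
		return nil, err
	}
	var backends []string
	for _, e := range entries {
		if !e.IsDir() {
			backends = append(backends, e.Name())
		}
	}
	return backends, nil
}

func main() {
	backends, err := discoverBackends("/tmp/localai")
	if err != nil {
		fmt.Println("no backends found:", err)
		return
	}
	fmt.Println("autoloading:", backends)
}
```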
Signed-off-by: mudler <mudler@localai.io>
* refactor(prepare): refactor out llama.cpp prepare steps
Make them idempotent so that we can re-build
Signed-off-by: mudler <mudler@localai.io>
* [TEST] feat(build): build noavx version along
Signed-off-by: mudler <mudler@localai.io>
* build: make build parallel
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* build: do not override CMAKE_ARGS
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* build: add fallback variant
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(huggingface-langchain): fail if no token is set
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(huggingface-langchain): rename
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: do not autoload local-store
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: give priority between the listed backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: mudler <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(gallery): op now supports deletion of models
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Wire things with WebUI(WIP)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* minor improvements
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor(template): isolate and add tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Dave <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
* fix(go-llama): use llama-cpp as default
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* fix(backends): drop obsoleted lines
---------
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* feat(intel): add diffusers support
* try to consume upstream container image
* Debug
* Manually install deps
* Map transformers/hf cache dir to modelpath if not specified
* fix(compel): update initialization, pass by all gRPC options
* fix: add dependencies, implement transformers for xpu
* base it from the oneapi image
* Add pillow
* set threads if specified when launching the API
* Skip conda install if intel
* defaults to non-intel
* ci: add to pipelines
* prepare compel only if enabled
* Skip conda install if intel
* fix cleanup
* Disable compel by default
* Install torch 2.1.0 with Intel
* Skip conda on some setups
* Detect python
* Quiet output
* Do not override system python with conda
* Prefer python3
* Fixups
* exllama2: do not install without conda (overrides pytorch version)
* exllama/exllama2: do not install if not using cuda
* Add missing dataset dependency
* Small fixups, symlink to python, add requirements
* Add neural_speed to the deps
* correctly handle model offloading
* fix: device_map == xpu
* go back at calling python, fixed at dockerfile level
* Exllama2 restricted to only nvidia gpus
* Tokenizer to xpu
* feat(tools): support Tools in the API
Co-authored-by: Stephan Aßmus <stephan.assmus@sap.com>
* feat(tools): support function streaming
* Adhere to new return types when using tools instead of functions
* Keep backward compatibility with function calling
* Evaluate function names in chat templates
* Disable recovery with --debug
* Correctly stream out the entire result
* Detect when the LLM chooses to reply and not to perform any action in SSE
* Feedback from code review
---------
Co-authored-by: Stephan Aßmus <stephan.assmus@sap.com>
* cleanup backends
* switch image to ubuntu 22.04
* adapt commands for ubuntu
* transformers cleanup
* no contrib on ubuntu
* Change test model to gguf
* ci: disable bark tests (too cpu-intensive)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cleanup
* refinements
* use intel base image
* Makefile: Add docker targets
* Change test model
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>