* fix(diffusers): allow specifying width and height without enable-parameters
Simplify usage by not gating width and height behind the enable-parameters option.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: use sane defaults
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
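As a minimal sketch of the resulting usage (default LocalAI address; model name and size are illustrative), width and height are now taken from the request's `size` without having to whitelist them via enable-parameters:

```python
# hedged sketch: width/height are derived from "size" and no longer need to be
# whitelisted through the diffusers enable-parameters option
import requests

resp = requests.post(
    "http://localhost:8080/v1/images/generations",  # default LocalAI address (adjust as needed)
    json={"model": "stablediffusion", "prompt": "a cat", "size": "512x512"},
)
print(resp.json())
```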
Instead of trying to derive it from the model file: in backends that specify an HF URL, that logic is fragile.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Some of the dependencies in `requirements.txt`, even if generic, pull in CUDA libraries down the line.
This change moves nearly all GPU-specific libs to the build-type requirements and takes a safer approach: `requirements.txt` now lists only "first-level" dependencies (for instance, grpc), while library dependencies are moved down to the respective build-type `requirements.txt` to avoid any mixing.
This should fix #2737 and #1592.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
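A sketch of the intended layout (paths and file names are illustrative of the per-build-type split, not an exact listing):

```
backend/python/<backend>/requirements.txt          # first-level deps only (e.g. grpcio, protobuf)
backend/python/<backend>/requirements-cublas.txt   # CUDA-specific libs (e.g. torch built for CUDA)
backend/python/<backend>/requirements-intel.txt    # Intel/XPU-specific libs
backend/python/<backend>/requirements-hipblas.txt  # ROCm-specific libs
```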
fix: specify the grpc version in more places where it is installed
fix: attempt to fix metal tests
fix: metal/brew forces an update; 1.58 is no longer available there
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat(elevenlabs): map elevenlabs API support to TTS
This allows elevenlabs clients to work with LocalAI out of the box by supporting the elevenlabs API.
The elevenlabs server endpoint is implemented by wiring it to the existing TTS endpoints.
Fixes: https://github.com/mudler/LocalAI/issues/1809
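A minimal sketch of a client call through the elevenlabs-style route (path and request fields follow the elevenlabs API; the voice id, its mapping onto a LocalAI TTS model, and the address are assumptions):

```python
# hedged sketch: elevenlabs-compatible TTS request against LocalAI
import requests

resp = requests.post(
    "http://localhost:8080/v1/text-to-speech/voice-en-us",  # voice id is illustrative
    json={"text": "Hello from LocalAI"},
)
with open("out.wav", "wb") as f:  # the endpoint returns raw audio bytes
    f.write(resp.content)
```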
* feat(openai/tts): compat layer with openai tts
Fixes: #1276
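Similarly, a sketch of the OpenAI-style TTS call (request fields follow the OpenAI audio/speech spec; the model and voice names are illustrative):

```python
# hedged sketch: OpenAI-compatible TTS request against LocalAI
import requests

resp = requests.post(
    "http://localhost:8080/v1/audio/speech",
    json={"model": "tts-1", "input": "Hello from LocalAI", "voice": "alloy"},
)
with open("speech.wav", "wb") as f:
    f.write(resp.content)
```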
* fix: adapt tts CLI
* feat(intel): add diffusers support
* try to consume upstream container image
* Debug
* Manually install deps
* Map transformers/hf cache dir to modelpath if not specified (see the sketch after this list)
* fix(compel): update initialization, pass by all gRPC options
* fix: add dependencies, implement transformers for xpu
* base it from the oneapi image
* Add pillow
* set threads if specified when launching the API
* Skip conda install if intel
* defaults to non-intel
* ci: add to pipelines
* prepare compel only if enabled
* Skip conda install if intel
* fix cleanup
* Disable compel by default
* Install torch 2.1.0 with Intel
* Skip conda on some setups
* Detect python
* Quiet output
* Do not override system python with conda
* Prefer python3
* Fixups
* exllama2: do not install without conda (overrides pytorch version)
* exllama/exllama2: do not install if not using cuda
* Add missing dataset dependency
* Small fixups, symlink to python, add requirements
* Add neural_speed to the deps
* correctly handle model offloading
* fix: device_map == xpu (see the xpu sketch after this list)
* go back to calling python, fixed at the Dockerfile level
* Restrict exllama2 to NVIDIA GPUs only
* Tokenizer to xpu
* Dockerfile changes to build for ROCm
* Adjust linker flags for ROCm
* Update conda env for diffusers and transformers to use ROCm pytorch
* Update transformers conda env for ROCm
* ci: build hipblas images
* fixup rebase
* use self-hosted
Signed-off-by: mudler <mudler@localai.io>
* specify LD_LIBRARY_PATH only when BUILD_TYPE=hipblas
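For the cache-dir mapping mentioned above, a minimal sketch of the fallback behaviour (env var names are the standard huggingface ones; how the model path is obtained is an assumption here):

```python
# hedged sketch: point the HF/transformers cache at the model path unless already set
import os

model_path = os.environ.get("MODEL_PATH", "/models")  # illustrative source of the model path
os.environ.setdefault("HF_HOME", model_path)
os.environ.setdefault("TRANSFORMERS_CACHE", model_path)
```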
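And for the xpu handling, a hedged sketch of moving the pipeline onto Intel's XPU device (assumes intel_extension_for_pytorch is installed so torch exposes the `xpu` backend; the model id is illustrative):

```python
# hedged sketch: run the diffusers pipeline on Intel XPU when available
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the xpu device)
from diffusers import DiffusionPipeline

device = "xpu" if torch.xpu.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.to(device)
```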
---------
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: mudler <mudler@localai.io>