* feat(vllm): add support for image-to-text
Related to https://github.com/mudler/LocalAI/issues/3670
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(vllm): add support for video-to-text
Closes: https://github.com/mudler/LocalAI/issues/2318
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(vllm): support CPU installations
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(vllm): add bnb
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: add docs reference
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Apply suggestions from code review
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Some of the dependencies in `requirements.txt`, even if generic, pull in CUDA libraries down the line.
This change moves almost all GPU-specific libraries to the build-type requirements and takes a safer approach: `requirements.txt` now lists only
"first-level" dependencies (for instance, grpc), while their library
dependencies are moved down to the respective build-type `requirements.txt` to avoid
any mixing.
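A hedged sketch of the resulting layout (file names and packages below are illustrative, not the exact contents of this change): the generic file keeps only hardware-agnostic, first-level dependencies, while the build-type file pulls in the accelerator-specific stack.

```text
# requirements.txt (illustrative) - only first-level, hardware-agnostic deps
grpcio
protobuf
setuptools

# requirements-cublas12.txt (illustrative) - CUDA build-type pulls the GPU stack
torch
vllm
bitsandbytes
```

With this split, only the build-type file matching the target install is applied on top of the generic one, so a CPU-only setup should no longer resolve CUDA wheels.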
This should fix #2737 and #1592.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>