# LocalAI/backend/python/vllm
Latest commit: d19bea4af2, chore(vllm): do not install from source by default (#3745), by Ettore Di Giacinto <mudler@localai.io>, 2024-10-07 12:27:37 +02:00
| File | Last commit | Date |
|---|---|---|
| backend.py | feat(vllm): add support for image-to-text and video-to-text (#3729) | 2024-10-04 23:42:05 +02:00 |
| install.sh | chore(vllm): do not install from source (#3745) | 2024-10-07 12:27:37 +02:00 |
| Makefile | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 18:32:46 +02:00 |
| README.md | refactor: move backends into the backends directory (#1279) | 2023-11-13 22:40:16 +01:00 |
| requirements-after.txt | fix(python): move vllm to after deps, drop diffusers main deps | 2024-08-07 23:34:37 +02:00 |
| requirements-cpu.txt | fix(python): move vllm to after deps, drop diffusers main deps | 2024-08-07 23:34:37 +02:00 |
| requirements-cublas11-after.txt | feat(venv): shared env (#3195) | 2024-08-07 19:45:14 +02:00 |
| requirements-cublas11.txt | feat(vllm): add support for image-to-text and video-to-text (#3729) | 2024-10-04 23:42:05 +02:00 |
| requirements-cublas12-after.txt | feat(venv): shared env (#3195) | 2024-08-07 19:45:14 +02:00 |
| requirements-cublas12.txt | feat(vllm): add support for image-to-text and video-to-text (#3729) | 2024-10-04 23:42:05 +02:00 |
| requirements-hipblas.txt | feat(vllm): add support for image-to-text and video-to-text (#3729) | 2024-10-04 23:42:05 +02:00 |
| requirements-install.txt | feat: migrate python backends from conda to uv (#2215) | 2024-05-10 15:08:08 +02:00 |
| requirements-intel.txt | feat(vllm): add support for image-to-text and video-to-text (#3729) | 2024-10-04 23:42:05 +02:00 |
| requirements.txt | chore(deps): bump grpcio to 1.66.2 (#3690) | 2024-09-30 09:09:51 +02:00 |
| run.sh | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 18:32:46 +02:00 |
| test.py | feat(vllm): add support for embeddings (#3440) | 2024-09-02 21:44:32 +02:00 |
| test.sh | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 18:32:46 +02:00 |
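
As the listing shows, backend.py implements the backend's gRPC server and test.py exercises it over a local channel. The sketch below is a minimal client in that spirit, not the project's own test code: it assumes the stubs generated from LocalAI's backend.proto (backend_pb2, backend_pb2_grpc) are importable, that a backend instance is already listening on localhost:50051, and that the model name is just a placeholder.

```python
import grpc

# Stubs generated from LocalAI's backend.proto; assumed to already be
# on the import path (the Makefile's protogen step produces them).
import backend_pb2
import backend_pb2_grpc

# Address of an already-running backend instance (assumption).
ADDRESS = "localhost:50051"

with grpc.insecure_channel(ADDRESS) as channel:
    stub = backend_pb2_grpc.BackendStub(channel)

    # Verify the server is up before doing anything else.
    health = stub.Health(backend_pb2.HealthMessage())
    print("health:", health.message)

    # Ask the backend to load a model; the name is a placeholder.
    result = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
    if not result.success:
        raise RuntimeError(result.message)

    # Run a plain text completion through vLLM.
    reply = stub.Predict(backend_pb2.PredictOptions(Prompt="A long time ago,"))
    print("reply:", reply.message)
```

test.py follows roughly this pattern, starting the server locally and then issuing the same kind of calls over an insecure channel.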

## Creating a separate environment for the vllm project

```bash
make vllm
```
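
Per the commit history above, the environment is managed with uv rather than conda (#2215), uses the shared venv layout (#3195), and installs vLLM from prebuilt wheels rather than from source by default (#3745); run.sh and test.sh then start and exercise the installed backend through the shared bash library (#2286). As a sketch of one feature the backend exposes once running, the embeddings support added in #3440 can be reached over the same gRPC channel; the Embeddings request field and the embeddings list on the reply are assumptions based on LocalAI's backend.proto, and the address and text are placeholders.

```python
import grpc

import backend_pb2
import backend_pb2_grpc

# Assumes a backend is already running and an embeddings-capable
# model has been loaded (see the LoadModel sketch above).
with grpc.insecure_channel("localhost:50051") as channel:
    stub = backend_pb2_grpc.BackendStub(channel)
    # Field names here are assumptions taken from backend.proto.
    reply = stub.Embedding(backend_pb2.PredictOptions(Embeddings="A sentence to embed"))
    print(len(reply.embeddings), "dimensions")
```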