LocalAI/backend/python/vllm

To create a separate environment for the vllm project, run:

make vllm
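
The directory also contains install.sh, run.sh, test.sh, and test.py next to the Makefile. Assuming the Makefile exposes the usual install/run/test targets that wrap those scripts (as in LocalAI's other python backends), a typical workflow might look like the following sketch; the run and test targets are an assumption, not confirmed by this README:

make vllm    # create the environment and install dependencies (install.sh)
make run     # assumed target: start the backend via run.sh
make test    # assumed target: run the backend tests via test.sh / test.py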