LocalAI/backend/python/vllm
Latest commit: b325807c60 by Ettore Di Giacinto, 2024-12-19 15:39:32 +01:00

fix(intel): pin torch and intel-extensions (#4435)

* fix(intel): pin torch version
* fix(intel): pin intel packages version

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
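Pinning here means fixing exact versions for torch and the Intel extension packages in requirements-intel.txt, so Intel/XPU installs stay reproducible when upstream releases move. A minimal sketch of what such pins look like; the version numbers below are illustrative assumptions, not the values this commit set:

```
# requirements-intel.txt -- illustrative pins only, not the real values
torch==2.3.1
intel-extension-for-pytorch==2.3.110
```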
| File | Last commit | Date |
|------|-------------|------|
| backend.py | feat(vllm): expose 'load_format' (#3943) | 2024-10-23 15:34:57 +02:00 |
| install.sh | chore(deps): bump grpcio to 1.68.1 (#4301) | 2024-12-02 19:13:26 +01:00 |
| Makefile | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 18:32:46 +02:00 |
| README.md | refactor: move backends into the backends directory (#1279) | 2023-11-13 22:40:16 +01:00 |
| requirements-after.txt | fix(python): move vllm to after deps, drop diffusers main deps | 2024-08-07 23:34:37 +02:00 |
| requirements-cpu.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 09:11:59 +02:00 |
| requirements-cublas11-after.txt | feat(venv): shared env (#3195) | 2024-08-07 19:45:14 +02:00 |
| requirements-cublas11.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 09:11:59 +02:00 |
| requirements-cublas12-after.txt | feat(venv): shared env (#3195) | 2024-08-07 19:45:14 +02:00 |
| requirements-cublas12.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 09:11:59 +02:00 |
| requirements-hipblas.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 09:11:59 +02:00 |
| requirements-install.txt | feat: migrate python backends from conda to uv (#2215) | 2024-05-10 15:08:08 +02:00 |
| requirements-intel.txt | fix(intel): pin torch and intel-extensions (#4435) | 2024-12-19 15:39:32 +01:00 |
| requirements.txt | chore(deps): bump grpcio to 1.68.1 (#4301) | 2024-12-02 19:13:26 +01:00 |
| run.sh | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 18:32:46 +02:00 |
| test.py | feat(vllm): add support for embeddings (#3440) | 2024-09-02 21:44:32 +02:00 |
| test.sh | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 18:32:46 +02:00 |
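The requirements files above are layered: the installer picks a hardware-specific file (cpu, cublas11/12, hipblas, or intel) alongside the shared requirements.txt, then an "-after" file for packages such as vllm that must install last (see "move vllm to after deps" above). The following is a hedged sketch of that selection logic, assuming a BUILD_TYPE-style switch; it is an illustration, not the literal contents of install.sh:

```
#!/bin/bash
# Illustrative only: how a backend install script can layer the
# requirements files listed above. Not the literal install.sh.
BUILD_TYPE=${BUILD_TYPE:-cpu}   # assumed variable name
case "$BUILD_TYPE" in
  cublas11) REQ=requirements-cublas11.txt ;;
  cublas12) REQ=requirements-cublas12.txt ;;
  hipblas)  REQ=requirements-hipblas.txt ;;
  intel)    REQ=requirements-intel.txt ;;
  *)        REQ=requirements-cpu.txt ;;
esac
# base + hardware-specific deps first (the backends migrated to uv in #2215)
uv pip install -r requirements.txt -r "$REQ"
# then anything that must come after the main set, e.g. vllm itself
[ -f requirements-after.txt ] && uv pip install -r requirements-after.txt
```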

# Creating a separate environment for the vllm project

```
make vllm
```
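The make target drives the shared bash helpers (install.sh, run.sh, test.sh) listed above. Assuming each script is invoked directly from this directory, the manual equivalent looks roughly like the sketch below; the authoritative steps live in the Makefile:

```
# Assumed manual equivalent of `make vllm`; illustrative, not the real recipe.
cd backend/python/vllm
bash install.sh   # create the venv and install the pinned requirements
bash run.sh       # start the gRPC server defined in backend.py
bash test.sh      # run test.py against the backend
```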