# LocalAI/backend/python/transformers
Files in this backend directory, with the latest commit touching each:

| File | Latest commit | Date |
| --- | --- | --- |
| backend.py | feat(transformers): Use downloaded model for Transformers backend if it already exists. (#3777) | 2024-10-10 |
| install.sh | fix: add missing openvino/optimum/etc libraries for Intel, fixes #2289 (#2292) | 2024-05-12 |
| Makefile | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 |
| README.md | feat(transformers): add embeddings with Automodel (#1308) | 2023-11-20 |
| requirements-cpu.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 |
| requirements-cublas11.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 |
| requirements-cublas12.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 |
| requirements-hipblas.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 |
| requirements-intel.txt | fix(python): move accelerate and GPU-specific libs to build-type (#3194) | 2024-08-07 |
| requirements.txt | chore(deps): bump grpcio to 1.68.1 (#4301) | 2024-12-02 |
| run.sh | bugfix: CUDA acceleration not working (#2475) | 2024-06-03 |
| test.py | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 |
| test.sh | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 |

## Creating a separate environment for the transformers project

```bash
make transformers
```
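
The Makefile target above wraps the helper scripts that ship in this directory. The sketch below is a hypothetical walk-through of that flow, assuming the Makefile delegates to install.sh, run.sh, and test.sh via the shared bash library for python backends introduced in #2286; the comments describe assumed behavior, not a confirmed implementation.

```bash
# Hypothetical sequence, run from backend/python/transformers.
# Exact targets and script behavior depend on the shared bash helpers (#2286).

make transformers   # create the dedicated environment for this backend

bash install.sh     # assumed: installs the matching requirements*.txt
                    # (requirements-cpu.txt, requirements-cublas12.txt, ...)
bash run.sh         # assumed: starts the gRPC backend implemented in backend.py
bash test.sh        # assumed: runs test.py against the running backend
```

Which requirements file applies depends on the build type, matching the requirements-cpu, cublas11, cublas12, hipblas, and intel variants listed in the table above.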