ExternalVendorCode / LocalAI
Mirror of https://github.com/mudler/LocalAI.git, synced 2024-12-21 13:37:51 +00:00
LocalAI/backend/python/vllm/requirements-cpu.txt (commit 59cbf38b4b)
3 lines, 36 B, Plaintext
History
fix(python): move accelerate and GPU-specific libs to build-type (#3194)
Some of the dependencies in `requirements.txt`, even when generic, pull in CUDA libraries down the line. This change moves most GPU-specific libs to the build-type requirements and takes a safer approach: `requirements.txt` now lists only "first-level" dependencies (for instance, grpc), while their library dependencies are moved down to the respective build-type `requirements.txt` to avoid any mixing. This should fix #2737 and #1592. Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-08-07 15:02:32 +00:00
accelerate
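To illustrate the split described in the commit above, here is a minimal sketch of how the dependency files might be laid out. The base `requirements.txt` contents and the `requirements-cublas.txt` name are assumptions for illustration, not the repository's actual files:

    # requirements.txt (sketch): only "first-level" dependencies, e.g. grpc
    grpcio
    protobuf

    # requirements-cpu.txt: the CPU build-type file shown on this page
    accelerate
    torch==2.4.1
    transformers

    # requirements-cublas.txt (hypothetical name): GPU build-type dependencies
    # that would otherwise pull CUDA libraries into every install
    torch==2.4.1

At build time the generic and build-type files can be combined in one pip invocation, for example `pip install -r requirements.txt -r requirements-cpu.txt`.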
fix(dependencies): pin pytorch version (#3872) Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-18 07:11:59 +00:00
torch==2.4.1
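As a quick sanity check that the pin above took effect in an environment built from this file, a small Python snippet can compare the installed version against it (illustrative; the `+cpu` local version suffix is an assumption about CPU wheels):

    import torch

    # The pin in requirements-cpu.txt is exact, so the installed release should
    # be 2.4.1; CPU wheels may report a local suffix such as "2.4.1+cpu".
    assert torch.__version__.split("+")[0] == "2.4.1", torch.__version__
    print("torch", torch.__version__)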
fix(python): move vllm to after deps, drop diffusers main deps Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-08-07 21:34:37 +00:00
transformers
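The commit subject above suggests vllm itself is installed after these base dependencies rather than being listed among them. A hedged sketch of that ordering with plain pip (illustrative only, not the repository's actual install procedure):

    # Install the pinned base requirements first, then vllm afterwards; pip
    # reuses the already-installed torch as long as it satisfies vllm's own
    # requirement, instead of pulling a different build.
    pip install -r backend/python/vllm/requirements-cpu.txt
    pip install vllm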