Mirror of https://github.com/mudler/LocalAI.git, synced 2024-12-25 07:11:03 +00:00
LocalAI / backend/python/transformers-musicgen/requirements-cublas11.txt (commit 23499ddc8a)
4 lines | 99 B | Plaintext
fix: ensure correct version of torch is always installed based on BUILD_TYPE (#2890)

* fix: ensure correct version of torch is always installed based on BUILD_TYPE
* Move causal-conv1d installation to build_types
* Move mamba-ssm install to build-type requirements.txt

Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: mudler <mudler@localai.io>
2024-08-05 16:38:33 +00:00
--extra-index-url https://download.pytorch.org/whl/cu118
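As described in the commits annotating this file, the CUDA 11.8 wheel index is declared here rather than in the generic `requirements.txt`, so pip only reaches for cu118 builds when this build-type file is selected. A minimal usage sketch, assuming the file is installed directly with pip (the exact install script or Makefile target LocalAI uses is not shown on this page):

    pip install -r backend/python/transformers-musicgen/requirements-cublas11.txt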
fix(python): move accelerate and GPU-specific libs to build-type (#3194)

Some of the dependencies in `requirements.txt`, even if generic, pull in CUDA libraries down the line. This change moves essentially all GPU-specific libs to the build-type requirements and takes a safer approach: `requirements.txt` now lists only "first-level" dependencies (for instance, grpc), while their library dependencies are moved down to the respective build-type `requirements.txt` to avoid any mixing. This should fix #2737 and #1592.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-08-07 15:02:32 +00:00
transformers
accelerate
fix(dependencies): pin pytorch version (#3872)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-18 07:11:59 +00:00
torch==2.4.1+cu118
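With the pin above, a quick post-install check can confirm that the CUDA 11.8 build of torch was actually resolved. A minimal sketch in Python, assuming the environment was installed from this file (expected values are taken from the pinned version and index URL above):

    import torch

    # Expected with this requirements file:
    #   torch.__version__  -> "2.4.1+cu118"
    #   torch.version.cuda -> "11.8"
    print(torch.__version__)
    print(torch.version.cuda)
    print(torch.cuda.is_available())  # True only if a CUDA-capable GPU and driver are present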