LocalAI/backend/python/vllm/requirements-cublas12.txt

accelerate
torch==2.4.1
transformers
bitsandbytes
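
These pins target CUDA 12 (cuBLAS 12) builds of PyTorch for the vLLM backend. As a quick sanity check after installing this file with pip, a minimal sketch (assuming torch 2.4.1 was installed with CUDA 12 support; not part of the original file):

# Verify that the installed torch build matches the expected version and can see a GPU.
import torch

print("torch:", torch.__version__)          # expected to start with 2.4.1
print("CUDA runtime:", torch.version.cuda)  # expected to be a 12.x version
print("GPU available:", torch.cuda.is_available())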