LocalAI/backend/python/vllm/requirements-cublas12.txt


accelerate
torch==2.4.1
transformers
bitsandbytes
vllm