# LocalAI/backend/python/coqui

## Creating a separate environment for the coqui project

```
make coqui
```

## Testing the gRPC server

```
make test
```
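Putting the two steps together, a typical session might look like the sketch below. This is a hedged illustration, not an exact transcript: it assumes the Makefile's `coqui` target sets up a dedicated Python environment for the backend, and that `run.sh` (present alongside the Makefile in this directory) starts the gRPC server; the `--addr` flag shown is a hypothetical placeholder for however the server address is actually configured.

```sh
# From backend/python/coqui:

# 1. Create the isolated environment for the coqui backend
#    (assumed to install the backend's Python dependencies).
make coqui

# 2. Start the gRPC server in the background
#    (the exact arguments run.sh accepts are an assumption here).
./run.sh --addr 127.0.0.1:50051 &

# 3. Run the gRPC test suite against the server.
make test
```

If `make test` exits non-zero, the server logs from step 2 are usually the first place to look.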