LocalAI/pkg/grpc
mintyleaf 2bc4b56a79
feat: stream tokens usage (#4415)
* Use pb.Reply instead of []byte (obtained via Reply.GetMessage()) in the llama gRPC client, so that in reply-streaming mode the proper usage data is available on the last [DONE] frame

* Fix 'hang' on an empty message at the start of the stream

It seems the empty-message marker trick was unnecessary

---------

Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-12-18 09:48:50 +01:00
base feat(silero): add Silero-vad backend (#4204) 2024-11-20 14:48:40 +01:00
backend.go feat: stream tokens usage (#4415) 2024-12-18 09:48:50 +01:00
client.go feat: stream tokens usage (#4415) 2024-12-18 09:48:50 +01:00
embed.go feat: stream tokens usage (#4415) 2024-12-18 09:48:50 +01:00
interface.go feat(silero): add Silero-vad backend (#4204) 2024-11-20 14:48:40 +01:00
server.go feat(silero): add Silero-vad backend (#4204) 2024-11-20 14:48:40 +01:00