LocalAI/core
mintyleaf 2bc4b56a79
feat: stream tokens usage (#4415)
* Use pb.Reply instead of []byte (via Reply.GetMessage()) in the llama gRPC backend, so the proper usage data is available in reply streaming mode on the last [DONE] frame (see the sketch after this commit entry)

* Fix a 'hang' on an empty message at the start of the stream

It seems the empty-message marker trick was unnecessary

---------

Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-12-18 09:48:50 +01:00
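The gist of the change is that streaming consumers now handle the full pb.Reply message rather than reducing each frame to its raw message bytes, so the usage counters set on the final frame before [DONE] are not lost. Below is a minimal, self-contained Go sketch of that idea; the Reply field names, the replyStream interface, and the fakeStream driver are assumptions standing in for the generated gRPC types, not LocalAI's actual code.

```go
package main

import (
	"fmt"
	"io"
)

// Reply mirrors the fields of the generated pb.Reply message that matter
// here (field names are assumptions for illustration).
type Reply struct {
	Message      []byte
	Tokens       int32 // completion tokens, populated on the final frame
	PromptTokens int32 // prompt tokens, populated on the final frame
}

func (r *Reply) GetMessage() []byte     { return r.Message }
func (r *Reply) GetTokens() int32       { return r.Tokens }
func (r *Reply) GetPromptTokens() int32 { return r.PromptTokens }

// replyStream stands in for the generated stream client's Recv method.
type replyStream interface {
	Recv() (*Reply, error)
}

// consumeStream emits each text chunk and records the usage counters,
// which only arrive on the last frame. Handling *Reply values (instead
// of keeping only the GetMessage() bytes) is what keeps those counters
// reachable here.
func consumeStream(s replyStream, emit func(string)) (prompt, completion int32, err error) {
	for {
		reply, e := s.Recv()
		if e == io.EOF {
			return prompt, completion, nil
		}
		if e != nil {
			return 0, 0, e
		}
		emit(string(reply.GetMessage()))
		if reply.GetPromptTokens() > 0 || reply.GetTokens() > 0 {
			prompt, completion = reply.GetPromptTokens(), reply.GetTokens()
		}
	}
}

// fakeStream replays canned frames so the example runs standalone.
type fakeStream struct{ frames []*Reply }

func (f *fakeStream) Recv() (*Reply, error) {
	if len(f.frames) == 0 {
		return nil, io.EOF
	}
	r := f.frames[0]
	f.frames = f.frames[1:]
	return r, nil
}

func main() {
	s := &fakeStream{frames: []*Reply{
		{Message: []byte("Hello")},
		{Message: []byte(", world")},
		{Message: []byte(""), PromptTokens: 7, Tokens: 2}, // final frame carries usage
	}}
	p, c, err := consumeStream(s, func(chunk string) { fmt.Print(chunk) })
	if err != nil {
		panic(err)
	}
	fmt.Printf("\nusage: prompt=%d completion=%d\n", p, c)
}
```

Note how the final frame can carry an empty text payload together with the usage fields, which is also why the stream no longer needs a special empty-message marker at the start.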
application feat(template): read jinja templates from gguf files (#4332) 2024-12-08 13:50:33 +01:00
backend feat: stream tokens usage (#4415) 2024-12-18 09:48:50 +01:00
cli feat(template): read jinja templates from gguf files (#4332) 2024-12-08 13:50:33 +01:00
clients feat(store): add Golang client (#1977) 2024-04-16 15:54:14 +02:00
config feat(template): read jinja templates from gguf files (#4332) 2024-12-08 13:50:33 +01:00
dependencies_manager fix: be consistent in downloading files, check for scanner errors (#3108) 2024-08-02 20:06:25 +02:00
explorer feat(explorer): make possible to run sync in a separate process (#3224) 2024-08-12 19:25:44 +02:00
gallery feat(backends): Drop bert.cpp (#4272) 2024-11-27 16:34:28 +01:00
http chore(tests): stabilize tts test (#4417) 2024-12-17 00:46:48 +01:00
p2p fix(p2p): parse maddr correctly (#4219) 2024-11-21 14:06:49 +01:00
schema feat(silero): add Silero-vad backend (#4204) 2024-11-20 14:48:40 +01:00
services Fix: listmodelservice / welcome endpoint use LOOSE_ONLY (#3791) 2024-10-11 23:49:00 +02:00