LocalAI/core

Latest commit: 0d6c3a7d57 by mintyleaf, 2024-11-28 14:47:56 +01:00
feat: include tokens usage for streamed output (#4282)

Use pb.Reply instead of the raw []byte returned by Reply.GetMessage() in the llama gRPC backend, so the proper usage data is available in reply streaming mode at the last [DONE] frame.

Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
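In plain terms, the streaming callback previously received only the message bytes, so the token counters carried on the reply were dropped before the final frame. Below is a minimal Go sketch of that idea, using a stand-in Reply struct; the real type is generated from LocalAI's backend.proto, and the usage field names shown here are assumptions, not the generated names.

```go
// Sketch: pass the whole reply through the streaming callback
// instead of just the message bytes, so usage counters survive
// to the final frame. Reply is a stand-in for pb.Reply; the
// field names are illustrative assumptions.
package main

import "fmt"

type Reply struct {
	Message      []byte // streamed text fragment
	PromptTokens int32  // assumed usage field
	Tokens       int32  // assumed usage field (completion tokens)
}

// Before (roughly): cb func(token []byte) -- usage was unreachable.
// After: the callback receives the full reply.
func stream(cb func(reply *Reply)) {
	cb(&Reply{Message: []byte("Hello")})
	cb(&Reply{Message: []byte(", world")})
	// Hypothetical final frame: no text, only usage counters.
	cb(&Reply{PromptTokens: 7, Tokens: 42})
}

func main() {
	stream(func(r *Reply) {
		if len(r.Message) == 0 {
			// Last frame: surface usage alongside the [DONE] marker.
			fmt.Printf("usage: prompt=%d completion=%d\n",
				r.PromptTokens, r.Tokens)
			return
		}
		fmt.Printf("token: %s\n", r.Message)
	})
}
```

With the whole reply in hand, the HTTP layer can attach the usage counters to the last streamed chunk before emitting [DONE], rather than losing them with a bytes-only callback.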
Name                  Last commit                                                                 Last updated
backend               feat: include tokens usage for streamed output (#4282)                     2024-11-28 14:47:56 +01:00
cli                   fix(p2p): parse correctly ExtraLLamaCPPArgs (#4220)                        2024-11-21 15:17:48 +01:00
clients               feat(store): add Golang client (#1977)                                     2024-04-16 15:54:14 +02:00
config                chore(refactor): imply modelpath (#4208)                                   2024-11-20 18:06:35 +01:00
dependencies_manager  fix: be consistent in downloading files, check for scanner errors (#3108)  2024-08-02 20:06:25 +02:00
explorer              feat(explorer): make possible to run sync in a separate process (#3224)    2024-08-12 19:25:44 +02:00
gallery               feat(backends): Drop bert.cpp (#4272)                                      2024-11-27 16:34:28 +01:00
http                  feat: include tokens usage for streamed output (#4282)                     2024-11-28 14:47:56 +01:00
p2p                   fix(p2p): parse maddr correctly (#4219)                                    2024-11-21 14:06:49 +01:00
schema                feat(silero): add Silero-vad backend (#4204)                               2024-11-20 14:48:40 +01:00
services              Fix: listmodelservice / welcome endpoint use LOOSE_ONLY (#3791)             2024-10-11 23:49:00 +02:00
startup               chore(refactor): drop unnecessary code in loader (#4096)                   2024-11-08 21:54:25 +01:00
application.go        feat(model-list): be consistent, skip known files from listing (#2760)     2024-07-10 15:28:39 +02:00