LocalAI/core/http
Latest commit: 0d6c3a7d57 by mintyleaf (2024-11-28 14:47:56 +01:00)

feat: include tokens usage for streamed output (#4282)

Use pb.Reply instead of []byte with Reply.GetMessage() in the llama gRPC
backend, so that the proper usage data is carried through in reply streaming
mode and can be reported at the last [DONE] frame.

Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
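The idea behind #4282 is easiest to see in miniature. The sketch below is not LocalAI's actual code: Reply here is a hand-rolled stand-in for the generated pb.Reply (its Message/Tokens/PromptTokens fields and getters follow the backend proto naming but are assumptions), and streamWithUsage is a hypothetical helper. It illustrates why handing the callback the whole reply, rather than only the []byte from Reply.GetMessage(), keeps the token counts available when the final [DONE] frame is written.

```go
package main

import "fmt"

// Reply is a stand-in for the generated backend pb.Reply type; the field
// and getter names mirror LocalAI's backend proto but are assumptions here.
type Reply struct {
	Message      []byte
	Tokens       int32 // completion tokens generated so far
	PromptTokens int32 // tokens in the prompt
}

func (r *Reply) GetMessage() []byte     { return r.Message }
func (r *Reply) GetTokens() int32       { return r.Tokens }
func (r *Reply) GetPromptTokens() int32 { return r.PromptTokens }

// streamWithUsage forwards each streamed chunk and keeps the usage counters
// from the most recent reply, so the totals are still at hand when the
// final [DONE] frame is emitted. Passing the whole reply, instead of only
// reply.GetMessage(), is the core of the change described above.
func streamWithUsage(replies <-chan *Reply, emit func(chunk string)) {
	var promptTokens, completionTokens int32
	for r := range replies {
		promptTokens = r.GetPromptTokens()
		completionTokens = r.GetTokens()
		emit(string(r.GetMessage()))
	}
	emit(fmt.Sprintf("[DONE] usage: prompt=%d completion=%d total=%d",
		promptTokens, completionTokens, promptTokens+completionTokens))
}

func main() {
	ch := make(chan *Reply, 2)
	ch <- &Reply{Message: []byte("Hello"), PromptTokens: 5, Tokens: 1}
	ch <- &Reply{Message: []byte(", world"), PromptTokens: 5, Tokens: 3}
	close(ch)
	streamWithUsage(ch, func(s string) { fmt.Println(s) })
}
```

Had the callback received only the []byte chunk, the usage counters on the reply would be gone by the time the stream ends; threading the full reply through is what makes the usage block on the closing frame possible.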
Name                 Last commit message                                                       Last commit date
ctx                  chore: get model also from query (#3716)                                  2024-10-02 20:20:50 +02:00
elements             feat(ui): move model detailed info to a modal (#4086)                     2024-11-06 18:25:59 +01:00
endpoints            feat: include tokens usage for streamed output (#4282)                    2024-11-28 14:47:56 +01:00
middleware           feat: add WebUI API token authorization (#4197)                           2024-11-19 18:43:02 +01:00
routes               feat(silero): add Silero-vad backend (#4204)                              2024-11-20 14:48:40 +01:00
static               feat(ui): move model detailed info to a modal (#4086)                     2024-11-06 18:25:59 +01:00
views                feat: add WebUI API token authorization (#4197)                           2024-11-19 18:43:02 +01:00
app_test.go          feat(backends): Drop bert.cpp (#4272)                                     2024-11-27 16:34:28 +01:00
app.go               feat: allow to disable '/metrics' endpoints for local stats (#3945)       2024-10-23 15:34:32 +02:00
explorer.go          feat(explorer): make possible to run sync in a separate process (#3224)   2024-08-12 19:25:44 +02:00
http_suite_test.go   fix: rename fiber entrypoint from http/api to http/app (#2096)            2024-04-21 22:39:28 +02:00
render.go            rf: centralize base64 image handling (#2595)                              2024-06-24 08:34:36 +02:00