LocalAI/core/backend

Latest commit: d5da8c3509 by Ettore Di Giacinto (2024-10-17 17:33:50 +02:00)
feat(templates): extract text from multimodal requests (#3866)
When offloading template construction to the backend, we want to keep text around in case of multimodal requests.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
| File | Last commit | Date |
|---|---|---|
| backend_suite_test.go | feat: extract output with regexes from LLMs (#3491) | 2024-09-13 13:27:36 +02:00 |
| embeddings.go | feat: track internally started models by ID (#3693) | 2024-10-02 08:55:58 +02:00 |
| image.go | feat: track internally started models by ID (#3693) | 2024-10-02 08:55:58 +02:00 |
| llm_test.go | feat: extract output with regexes from LLMs (#3491) | 2024-09-13 13:27:36 +02:00 |
| llm.go | feat(templates): extract text from multimodal requests (#3866) | 2024-10-17 17:33:50 +02:00 |
| options.go | feat: track internally started models by ID (#3693) | 2024-10-02 08:55:58 +02:00 |
| rerank.go | feat: track internally started models by ID (#3693) | 2024-10-02 08:55:58 +02:00 |
| soundgeneration.go | feat: track internally started models by ID (#3693) | 2024-10-02 08:55:58 +02:00 |
| stores.go | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00 |
| token_metrics.go | feat: track internally started models by ID (#3693) | 2024-10-02 08:55:58 +02:00 |
| tokenize.go | chore: simplify model loading (#3715) | 2024-10-02 08:59:06 +02:00 |
| transcript.go | feat: track internally started models by ID (#3693) | 2024-10-02 08:55:58 +02:00 |
| tts.go | feat: track internally started models by ID (#3693) | 2024-10-02 08:55:58 +02:00 |