LocalAI/backend
Ettore Di Giacinto 35561edb6e
feat(llama.cpp): support embeddings endpoints (#2871)
* feat(llama.cpp): add embeddings

Also enable embeddings by default for llama.cpp models

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(Makefile): prepare llama.cpp sources only once

Otherwise the llama.cpp sources are cloned again for each of the build variants
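
The single-preparation pattern this fixes can be sketched in Make as follows. This is an illustrative fragment, not the actual LocalAI Makefile: the target names, directory, and build flags are placeholders; the point is that every variant depends on one shared checkout instead of cloning its own.

```makefile
# Sketch (hypothetical targets): prepare the llama.cpp sources once,
# then let every backend variant reuse the same checkout.

LLAMA_CPP_DIR  := sources/llama.cpp
LLAMA_CPP_REPO := https://github.com/ggerganov/llama.cpp

# Single preparation step, shared by all variants.
$(LLAMA_CPP_DIR):
	git clone $(LLAMA_CPP_REPO) $(LLAMA_CPP_DIR)

# Each variant depends on the prepared sources rather than re-cloning.
build-llama-cpp-default: $(LLAMA_CPP_DIR)
	$(MAKE) -C $(LLAMA_CPP_DIR)

build-llama-cpp-avx2: $(LLAMA_CPP_DIR)
	$(MAKE) -C $(LLAMA_CPP_DIR) GGML_AVX2=1
```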

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* do not set embeddings to false

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* docs: add embeddings to the YAML config reference

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
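
A minimal model definition using the documented `embeddings` option might look like the following. The `embeddings` field is the one this commit documents; the model name and GGUF file are placeholders, not values from the commit.

```yaml
# Sketch of a LocalAI model config enabling embeddings for a
# llama.cpp model (name and file are hypothetical examples).
name: my-embedding-model
backend: llama-cpp
embeddings: true
parameters:
  model: my-embedding-model.gguf
```

A model configured this way should then be reachable through LocalAI's OpenAI-compatible embeddings endpoint, passing the model `name` and input text in the request body.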

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-15 22:54:16 +02:00
cpp feat(llama.cpp): support embeddings endpoints (#2871) 2024-07-15 22:54:16 +02:00
go feat(whisper): add translate option (#2649) 2024-06-24 19:21:22 +02:00
python Revert "chore(deps): Bump numpy from 1.26.4 to 2.0.0 in /backend/python/openvoice" (#2868) 2024-07-15 08:31:27 +02:00
backend.proto feat(whisper): add translate option (#2649) 2024-06-24 19:21:22 +02:00