LocalAI/core/config
Ettore Di Giacinto 35561edb6e
feat(llama.cpp): support embeddings endpoints (#2871)
* feat(llama.cpp): add embeddings

Also enable embeddings by default for llama.cpp models

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(Makefile): prepare llama.cpp sources only once

Otherwise we keep cloning llama.cpp for each of the variants

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* do not set embeddings to false

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* docs: add embeddings to the YAML config reference

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-15 22:54:16 +02:00
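The feature described above enables embeddings for llama.cpp models and documents the option in the YAML config reference. A minimal sketch of what such a model configuration could look like — the model name, backend identifier, and GGUF filename below are placeholders, not taken from this commit:

```yaml
# Hypothetical LocalAI model definition (names and file are illustrative).
name: my-embedding-model
backend: llama-cpp
embeddings: true        # serve this model on the embeddings endpoint
parameters:
  model: some-embedding-model.gguf
```

With a configuration like this loaded, the model would be reachable through LocalAI's OpenAI-compatible `/v1/embeddings` endpoint, which is the endpoint support this commit adds.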
File                       Latest commit                                                Date
application_config.go      feat: HF /scan endpoint (#2566)                              2024-07-10 13:18:32 +02:00
backend_config_loader.go   chore: fix go.mod module (#2635)                             2024-06-23 08:24:36 +00:00
backend_config_test.go     refactor: gallery inconsistencies (#2647)                    2024-06-24 17:32:12 +02:00
backend_config.go          feat(llama.cpp): support embeddings endpoints (#2871)        2024-07-15 22:54:16 +02:00
config_suite_test.go       dependencies(grpcio): bump to fix CI issues (#2362)          2024-05-21 14:33:47 +02:00
config_test.go             feat(llama.cpp): guess model defaults from file (#2522)      2024-06-08 22:13:02 +02:00
gallery.go                 refactor: gallery inconsistencies (#2647)                    2024-06-24 17:32:12 +02:00
guesser.go                 feat: models(gallery): add deepseek-v2-lite (#2658)          2024-07-13 17:09:59 -04:00