Mirror of https://github.com/mudler/LocalAI.git, synced 2025-03-21 11:35:21 +00:00
* feat(aio): update AIO image defaults

  cpu:
  - text-to-text: llama3.1
  - embeddings: granite-embeddings
  - vision: moondream2

  gpu/intel:
  - text-to-text: localai-functioncall-qwen2.5-7b-v0.5
  - embeddings: granite-embeddings
  - vision: minicpm

* feat(aio): use minicpm as moondream2 stopped working
  (https://github.com/ggml-org/llama.cpp/pull/12322#issuecomment-2717483759)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
12 lines · 420 B · YAML
embeddings: true
name: text-embedding-ada-002
parameters:
  model: huggingface://bartowski/granite-embedding-107m-multilingual-GGUF/granite-embedding-107m-multilingual-f16.gguf

usage: |
    You can test this model with curl like this:

    curl http://localhost:8080/embeddings -X POST -H "Content-Type: application/json" -d '{
      "input": "Your text string goes here",
      "model": "text-embedding-ada-002"
    }'
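
For reference, a minimal Python sketch of the same request made through the OpenAI client library. Only the host, port, and model name come from the curl example above; the /v1 base path, the placeholder API key, and the use of the openai package are assumptions about a typical LocalAI setup, not taken from this file.

# Sketch: query the OpenAI-compatible embeddings endpoint of a local
# LocalAI AIO container. Assumes `pip install openai` and that the
# server is reachable at localhost:8080 under the /v1 prefix.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.embeddings.create(
    model="text-embedding-ada-002",
    input="Your text string goes here",
)

# Each entry in response.data holds the embedding vector for one input.
print(len(response.data[0].embedding))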