LocalAI/aio/cpu
Latest commit 3c3050f68e (Ettore Di Giacinto, 2024-11-27 16:34:28 +01:00): feat(backends): Drop bert.cpp (#4272)

* feat(backends): Drop bert.cpp: use llama.cpp 3.2 as a drop-in replacement for bert.cpp
* chore(tests): make test more robust

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
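
As a rough illustration of that swap, an embeddings model definition can point LocalAI's embeddings support at the llama.cpp backend instead of bert.cpp. This is a minimal sketch only: it assumes LocalAI's usual model-definition fields (name, backend, embeddings, parameters.model), and the backend identifier and GGUF file name below are placeholders; the embeddings.yaml shipped in this directory is the authoritative configuration.

```yaml
# Minimal sketch of an embeddings model definition served via llama.cpp
# instead of bert.cpp. Field names follow LocalAI's usual model-config
# schema; the backend identifier and model file are assumptions, not the
# exact contents of the embeddings.yaml in this directory.
name: text-embedding-ada-002   # model name exposed on the OpenAI-compatible API
backend: llama-cpp             # llama.cpp backend standing in for bert.cpp
embeddings: true               # serve this model on the embeddings endpoint
parameters:
  model: bert-embeddings.gguf  # placeholder GGUF file loaded by llama.cpp
```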
| File | Last commit | Date |
| --- | --- | --- |
| embeddings.yaml | feat(backends): Drop bert.cpp (#4272) | 2024-11-27 16:34:28 +01:00 |
| image-gen.yaml | feat(aio): add intel profile (#1901) | 2024-03-26 18:45:25 +01:00 |
| README.md | feat(aio): entrypoint, update workflows (#1872) | 2024-03-21 22:09:04 +01:00 |
| rerank.yaml | feat(rerankers): Add new backend, support jina rerankers API (#2121) | 2024-04-25 00:19:02 +02:00 |
| speech-to-text.yaml | feat(aio): add tests, update model definitions (#1880) | 2024-03-22 21:13:11 +01:00 |
| text-to-speech.yaml | feat(aio): add tests, update model definitions (#1880) | 2024-03-22 21:13:11 +01:00 |
| text-to-text.yaml | models(gallery): add mistral-0.3 and command-r, update functions (#2388) | 2024-05-23 19:16:08 +02:00 |
| vision.yaml | chore(aio): rename gpt-4-vision-preview to gpt-4o (#3597) | 2024-09-18 15:55:46 +02:00 |

## AIO CPU size

Use this image for CPU-only setups.
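
As a usage sketch (not part of this README), the CPU-only image can be started with Docker Compose. The image tag assumes LocalAI's published latest-aio-cpu naming and the container's default /build/models path; adjust both to the release and layout you actually use.

```yaml
# Sketch of a compose service for the CPU-only AIO image.
# Image tag and models path are assumptions based on LocalAI's published
# naming; pin to a concrete release for real deployments.
services:
  local-ai:
    image: localai/localai:latest-aio-cpu
    ports:
      - "8080:8080"              # OpenAI-compatible API
    volumes:
      - ./models:/build/models   # persist downloaded models across restarts
```

Once the container is up, the model definitions in this directory (for example the gpt-4o name from vision.yaml) are exposed through the usual OpenAI-compatible endpoints.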

Please stick to C++ backends only, so the base image stays as small as possible (without CUDA, cuDNN, Python, etc.).