Mirror of https://github.com/mudler/LocalAI.git (synced 2024-12-20 21:23:10 +00:00)
be6c4e6061
* fix(llama-cpp): consistently select fallback

  We didn't take into account the case where the host advertises the CPU flagset but the corresponding optimized binaries are not actually present in the asset dir. This made it possible, for instance, for models that specified the llama-cpp backend directly in their config to never pick up the fallback binary when the optimized binaries were missing.

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: adjust and simplify selection

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: move failure recovery to BackendLoader()

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* comments

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* minor fixups

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
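To illustrate the behavior described in the commit message, here is a minimal Go sketch of fallback selection that checks whether an optimized binary actually exists on disk before choosing it, and only then falls back. The variant names (`llama-cpp-avx512`, `llama-cpp-avx2`, `llama-cpp-fallback`), the `backend-assets/grpc` layout, and the helper signature are assumptions for illustration, not the exact LocalAI implementation.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// selectLlamaCPPBinary picks the llama.cpp backend binary to launch.
// Even if the host CPU advertises AVX2/AVX512 support, the matching
// optimized binary may not have been extracted into the asset dir, so
// presence on disk is verified before selecting it; otherwise the
// plain fallback build is used. (Hypothetical names and paths.)
func selectLlamaCPPBinary(assetDir string, hasAVX2, hasAVX512 bool) (string, error) {
	var candidates []string
	if hasAVX512 {
		candidates = append(candidates, "llama-cpp-avx512")
	}
	if hasAVX2 {
		candidates = append(candidates, "llama-cpp-avx2")
	}
	// Always consider the unoptimized fallback build last.
	candidates = append(candidates, "llama-cpp-fallback")

	for _, name := range candidates {
		p := filepath.Join(assetDir, "backend-assets", "grpc", name)
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("no llama-cpp backend binary found under %s", assetDir)
}

func main() {
	bin, err := selectLlamaCPPBinary("/tmp/localai", true, false)
	if err != nil {
		fmt.Println("selection failed:", err)
		return
	}
	fmt.Println("selected backend:", bin)
}
```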
assets
concurrency
downloader
functions
grpc
langchain
library
model
oci
stablediffusion
startup
store
templates
tinydream
utils
xsync
xsysinfo