* fix(llama-cpp): consistently select fallback
We didn't take into account the case where the host exposes the CPU
flagset but the corresponding binaries are not actually present in the
asset dir. As a result, models that specified the llama-cpp backend
directly in their config could fail to pick up the fallback binary when
the optimized binaries were missing.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: adjust and simplify selection
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: move failure recovery to BackendLoader()
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* comments
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* minor fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>