LocalAI/pkg/model
Ettore Di Giacinto b82577d642
fix(llama.cpp): consider also native builds (#3839)
This is to also identify builds that are not using
capability-based alternatives.

For instance, there are cases where we build the backend only natively on
the host.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-15 09:41:53 +02:00
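The behavior described by this commit can be sketched as a selection routine that first tries capability-matched variants of the llama.cpp backend and, when none is present, also considers a plain native build with no capability suffix. This is a minimal illustration, not the actual initializers.go code: the constant names, the `selectBackend` function, and the CPU-flag parameters are all hypothetical stand-ins for LocalAI's internal logic.

```go
package main

import (
	"fmt"
	"slices"
)

// Hypothetical asset names, loosely modeled on LocalAI's llama.cpp variants.
const (
	llamaCPPNative   = "llama-cpp" // native host build, no capability suffix
	llamaCPPAVX2     = "llama-cpp-avx2"
	llamaCPPAVX      = "llama-cpp-avx"
	llamaCPPFallback = "llama-cpp-fallback"
)

// selectBackend prefers a capability-matched variant, but also considers a
// plain native build when no suffixed variant is available (the fix above).
func selectBackend(available []string, hasAVX2, hasAVX bool) (string, error) {
	if hasAVX2 && slices.Contains(available, llamaCPPAVX2) {
		return llamaCPPAVX2, nil
	}
	if hasAVX && slices.Contains(available, llamaCPPAVX) {
		return llamaCPPAVX, nil
	}
	if slices.Contains(available, llamaCPPFallback) {
		return llamaCPPFallback, nil
	}
	// A native-only build ships just the unsuffixed binary.
	if slices.Contains(available, llamaCPPNative) {
		return llamaCPPNative, nil
	}
	return "", fmt.Errorf("no llama.cpp backend found")
}

func main() {
	// Host built the backend natively: only the unsuffixed binary exists,
	// so it must still be picked up.
	backend, err := selectBackend([]string{"llama-cpp"}, true, true)
	if err != nil {
		panic(err)
	}
	fmt.Println(backend)
}
```

When capability variants do exist, they still take precedence; the native binary only acts as an additional candidate rather than a replacement for the capability-based selection.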
filters.go chore(refactor): drop duplicated shutdown logics (#3589) 2024-09-17 16:51:40 +02:00
initializers.go fix(llama.cpp): consider also native builds (#3839) 2024-10-15 09:41:53 +02:00
loader_options.go fix(llama-cpp): consistently select fallback (#3789) 2024-10-11 16:55:57 +02:00
loader_test.go feat: track internally started models by ID (#3693) 2024-10-02 08:55:58 +02:00
loader.go feat(shutdown): allow force shutdown of backends (#3733) 2024-10-05 10:41:35 +02:00
model_suite_test.go tests: add template tests (#2063) 2024-04-18 10:57:24 +02:00
model.go chore(refactor): track grpcProcess in the model structure (#3663) 2024-09-26 12:44:55 +02:00
process.go fix(initializer): correctly reap dangling processes (#3717) 2024-10-02 20:37:40 +02:00
template_test.go fix(model-loading): keep track of open GRPC Clients (#3377) 2024-08-25 14:36:09 +02:00
template.go fix(model-loading): keep track of open GRPC Clients (#3377) 2024-08-25 14:36:09 +02:00
watchdog.go fix(model-loading): keep track of open GRPC Clients (#3377) 2024-08-25 14:36:09 +02:00