* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* generate a static page instead (we force DNS redirects to it)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(gallery): install models from CLI, unify install
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
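For illustration, installing a gallery model from the command line might look like this (a minimal sketch; the `models install` subcommand name and the model identifier format are assumptions based on this changeset's description):

```bash
# Hypothetical usage sketch: install a model from the gallery via the CLI.
# The subcommand name and model identifier format are assumptions.
local-ai models install <model-name>
```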
* Make the model page graphics uniform
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Makefile: update targets
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Slightly enhance gallery view
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(llama.cpp): Enable decentralized, distributed inference
Since https://github.com/mudler/LocalAI/pull/2324 introduced distributed inferencing, thanks to
@rgerganov's implementation in https://github.com/ggerganov/llama.cpp/pull/6829 in upstream llama.cpp,
it is now possible to distribute the workload to remote llama.cpp gRPC servers.

This changeset uses mudler/edgevpn to establish a secure, distributed network between the nodes using a shared token.
The token is generated automatically when starting the server with the `--p2p` flag, and workers can then be started
with `local-ai worker p2p-llama-cpp-rpc`, passing the token via an environment variable (TOKEN) or as an argument (--token).

As per how mudler/edgevpn works, a network is established between the server and the workers using the DHT and mDNS discovery protocols;
the llama.cpp RPC server is automatically started and exposed to the underlying p2p network so the API server can connect to it.

When the HTTP server starts, it discovers the workers in the network and automatically creates local port-forwards to their services.
llama.cpp is then configured to use those services.

This feature is behind the `p2p` GO_FLAGS.
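For illustration, a minimal startup sequence following the description above (the `run` subcommand name and the token output format are assumptions; the `--p2p` flag, the worker subcommand, and the TOKEN/--token options are taken from this changeset):

```bash
# Start the API server with P2P networking enabled; a shared network token
# is generated automatically and printed at startup.
local-ai run --p2p

# On each worker node, join the P2P network and expose a llama.cpp RPC
# server to it, passing the shared token via environment variable:
TOKEN=<token-from-server> local-ai worker p2p-llama-cpp-rpc
# or equivalently via argument:
local-ai worker p2p-llama-cpp-rpc --token <token-from-server>
```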
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* go mod tidy
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: add p2p tag
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* better message
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: migrate to alecthomas/kong for CLI
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: bring in new flag for granular log levels
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
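A sketch of what the granular log level flag might look like in use (the `--log-level` flag name and its accepted values are assumptions, not confirmed by this changeset):

```bash
# Hypothetical: pick a log verbosity at startup instead of a single
# boolean debug toggle (flag name and values are assumptions).
local-ai run --log-level debug
```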
* chore: go mod tidy
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: allow loading CLI flag values from ["./localai.yaml", "~/.config/localai.yaml", "/etc/localai.yaml"], in that order
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: load from .env file instead of a yaml file
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
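As an illustrative sketch, such an .env file might look like the following (the variable names are assumptions; kong conventionally maps flags to prefixed environment variables):

```bash
# .env — hypothetical example; variable names are assumptions.
LOCALAI_ADDRESS=:8080
LOCALAI_THREADS=4
LOCALAI_MODELS_PATH=/models
```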
* feat: better loading for environment files
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat(doc): add initial documentation about configuration
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: remove test log lines
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: integrate new documentation into existing pages
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: add documentation on .env files
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: clean up some documentation table errors
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: refactor CLI logic out to its own package under core/cli
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
---------
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>