docs: fix p2p commands (#2472)
Also change icons on GPT vision page

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
parent bae2a649fd
commit 148adebe16
@@ -20,7 +20,7 @@ This functionality enables LocalAI to distribute inference requests across multi
 To start workers for distributing the computational load, run:
 
 ```bash
-local-ai llamacpp-worker <listening_address> <listening_port>
+local-ai worker llama-cpp-rpc <listening_address> <listening_port>
 ```
 
 Alternatively, you can build the RPC server following the llama.cpp [README](https://github.com/ggerganov/llama.cpp/blob/master/examples/rpc/README.md), which is compatible with LocalAI.
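For context, a minimal sketch of how the renamed worker command is typically wired to a main instance. The hosts, port, and the `LLAMACPP_GRPC_SERVERS` variable are assumptions drawn from the llama.cpp RPC workflow, not something this commit changes:

```bash
# On each worker host: expose an RPC worker that a LocalAI instance
# can offload computation to (0.0.0.0/50052 are example values).
local-ai worker llama-cpp-rpc 0.0.0.0 50052

# On the main host: point LocalAI at the workers before starting it.
# Assumption: a comma-separated address:port list, as in the llama.cpp RPC setup.
LLAMACPP_GRPC_SERVERS="192.168.1.10:50052,192.168.1.11:50052" local-ai run
```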
@@ -71,7 +71,7 @@ To reuse the same token later, restart the server with `--p2ptoken` or `P2P_TOKEN`
 2. Start the workers. Copy the `local-ai` binary to other hosts and run as many workers as needed using the token:
 
 ```bash
-TOKEN=XXX ./local-ai p2p-llama-cpp-rpc
+TOKEN=XXX ./local-ai worker p2p-llama-cpp-rpc
 # 1:06AM INF loading environment variables from file envFile=.env
 # 1:06AM INF Setting logging to info
 # {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:288","message":"connmanager disabled\n"}
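To tie the renamed p2p worker command to the token flow the hunk header refers to, a sketch under the assumption that the server prints the shared token in its startup logs when run with `--p2p` (the `XXX` placeholder stands in for that token, as in the diff):

```bash
# On the main host: start the server in p2p mode.
# Assumption: the token to share with workers appears in the startup logs.
local-ai run --p2p

# To reuse the same token on a later restart (as the docs above state):
P2P_TOKEN=XXX local-ai run --p2p

# On every other host: run a worker with the shared token (updated command form).
TOKEN=XXX ./local-ai worker p2p-llama-cpp-rpc
```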
@@ -1,7 +1,7 @@
 +++
 disableToc = false
-title = "🆕 GPT Vision"
+title = "🥽 GPT Vision"
 weight = 14
 url = "/features/gpt-vision/"
 +++