docs: fix p2p commands (#2472)

Also change icons on GPT vision page

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Ettore Di Giacinto 2024-06-03 16:58:53 +02:00 committed by GitHub
parent bae2a649fd
commit 148adebe16
2 changed files with 3 additions and 3 deletions

@@ -20,7 +20,7 @@ This functionality enables LocalAI to distribute inference requests across multi
To start workers for distributing the computational load, run:
```bash
-local-ai llamacpp-worker <listening_address> <listening_port>
+local-ai worker llama-cpp-rpc <listening_address> <listening_port>
```
Alternatively, you can build the RPC server following the llama.cpp [README](https://github.com/ggerganov/llama.cpp/blob/master/examples/rpc/README.md), which is compatible with LocalAI.
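
For illustration only (not part of this commit), with placeholder values substituted for the listening address and port, the worker invocation shown above might look like:

```bash
# 0.0.0.0 and 50052 are placeholder values; use an address and port
# reachable from the main LocalAI instance
local-ai worker llama-cpp-rpc 0.0.0.0 50052
```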
@@ -71,7 +71,7 @@ To reuse the same token later, restart the server with `--p2ptoken` or `P2P_TOKE
2. Start the workers. Copy the `local-ai` binary to other hosts and run as many workers as needed using the token:
```bash
-TOKEN=XXX ./local-ai p2p-llama-cpp-rpc
+TOKEN=XXX ./local-ai worker p2p-llama-cpp-rpc
# 1:06AM INF loading environment variables from file envFile=.env
# 1:06AM INF Setting logging to info
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:288","message":"connmanager disabled\n"}
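# Illustrative sketch, not part of this commit: as noted above, the same token
# can be reused on a later server start via the P2P_TOKEN environment variable
# or the --p2ptoken flag; "XXX" is a placeholder and the `run` subcommand is an
# assumption about the local CLI here.
P2P_TOKEN=XXX ./local-ai run
# or equivalently:
./local-ai run --p2ptoken XXX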

@@ -1,7 +1,7 @@
+++
disableToc = false
-title = "🆕 GPT Vision"
+title = "🥽 GPT Vision"
weight = 14
url = "/features/gpt-vision/"
+++