Update README.md

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>

parent 46847f3bd4 · commit 961a993b88

README.md: 19 lines changed
````diff
@@ -56,16 +56,17 @@ curl https://localai.io/install.sh | sh
 
 Or run with docker:
 ```bash
+# CPU only image:
 docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-cpu
-# Alternative images:
-# - if you have an Nvidia GPU:
-# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12
-# - without preconfigured models
-# docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
-# - without preconfigured models for Nvidia GPUs
-# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12
-## AIO images (it will pre-download a set of models ready for use, see https://localai.io/basics/container/)
-# docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
+
+# Nvidia GPU:
+docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12
+
+# CPU and GPU image (bigger size):
+docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
+
+# AIO images (it will pre-download a set of models ready for use, see https://localai.io/basics/container/)
+docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
 ```
 
 To load models:
````
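Not part of the commit itself, but as a quick sanity check of the commands above: once any of these containers is running, LocalAI serves an OpenAI-compatible API on the mapped port 8080. A minimal sketch, assuming the container was started with one of the `docker run` lines from the diff (the model name below is a placeholder, not something the commit defines):

```bash
# List the models the running instance currently knows about
# (OpenAI-compatible endpoint on the mapped port 8080).
curl http://localhost:8080/v1/models

# Send a test chat completion. "your-model-name" is a placeholder:
# replace it with a model you have loaded or installed.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "your-model-name",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```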