diff --git a/README.md b/README.md
index 44beeb71..7647105b 100644
--- a/README.md
+++ b/README.md
@@ -66,6 +66,19 @@
 docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
 # docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12
 ```
+To load models:
+
+```bash
+# Start LocalAI with the phi-2 model
+local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
+# Install and run a model from the Ollama OCI registry
+local-ai run ollama://gemma:2b
+# Run a model from a configuration file
+local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
+# Install and run a model from a standard OCI registry (e.g., Docker Hub)
+local-ai run oci://localai/phi-2:latest
+```
+
 [💻 Getting started](https://localai.io/basics/getting_started/index.html)
 
 ## 📰 Latest project news
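
Once a model is loaded, LocalAI serves it through its OpenAI-compatible API on the published port. A minimal smoke test with `curl`, assuming the server is listening on `localhost:8080` and the model registered under the name `phi-2`:

```bash
# Send a chat completion request to the running LocalAI instance
# (assumes localhost:8080 and a model named "phi-2")
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "phi-2", "messages": [{"role": "user", "content": "How are you?"}]}'
```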