+++
disableToc = false
title = "Easy Setup - Stable Diffusion"
weight = 2
+++
Setting up a Stable Diffusion model is super easy.

In your models folder, make a file called `stablediffusion.yaml`, then edit that file with the following. (You can change `Linaqruf/animagine-xl` to whatever SDXL model you would like.)
```yaml
name: animagine-xl
parameters:
  model: Linaqruf/animagine-xl
backend: diffusers

# Force CPU usage - set to true for GPU
f16: false
diffusers:
  cuda: false # Enable for GPU usage (CUDA)
  scheduler_type: dpm_2_a
```
If you are using Docker, run the following in the LocalAI folder that contains the `docker-compose.yaml` file:

```bash
docker-compose down # Windows
docker compose down # Linux/Mac
```
Then, in your `.env` file, uncomment this line:

```env
COMPEL=0
```
After that, recreate the LocalAI container by running the following in the LocalAI folder that contains the `docker-compose.yaml` file:

```bash
docker-compose up # Windows
docker compose up # Linux/Mac
```
Then, to download and set up the model, just send a normal OpenAI request! LocalAI will do the rest!

```bash
curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
  "prompt": "Two Boxes, 1blue, 1red",
  "size": "256x256"
}'
```
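If you prefer a script over `curl`, the same request can be issued from Python using only the standard library. This is a sketch, not part of LocalAI itself: it assumes the server is listening on `localhost:8080` and that the response follows the OpenAI images format, i.e. a `data` array whose items carry either a `url` or a `b64_json` field.

```python
import base64
import json
import urllib.request

# Build the same request as the curl example above; host and port
# assume a LocalAI instance running locally on 8080.
req = urllib.request.Request(
    "http://localhost:8080/v1/images/generations",
    data=json.dumps({
        "prompt": "Two Boxes, 1blue, 1red",
        "size": "256x256",
    }).encode(),
    headers={"Content-Type": "application/json"},
)

def extract_images(response_json):
    """Collect image payloads from an OpenAI-style images response.

    Returns a list of (kind, value) tuples, where kind is "url" for a
    hosted image link or "b64" for decoded base64 image bytes.
    """
    out = []
    for item in response_json.get("data", []):
        if "url" in item:
            out.append(("url", item["url"]))
        elif "b64_json" in item:
            out.append(("b64", base64.b64decode(item["b64_json"])))
    return out

# To send the request against a running server:
#   with urllib.request.urlopen(req) as resp:
#       print(extract_images(json.load(resp)))
```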