LocalAI/backend
Ettore Di Giacinto 44a5dac312
feat(backend): add stablediffusion-ggml (#4289)
* feat(backend): add stablediffusion-ggml

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(ci): track stablediffusion-ggml

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Use default scheduler and sampler if not specified

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
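
A minimal Go sketch of that fallback behavior, assuming hypothetical names (`genOptions`, `withDefaults`, and the default values are illustrative, not the backend's actual identifiers): when a request leaves the scheduler or sampler empty, a default is substituted before the options reach stable-diffusion.cpp.

```go
// Illustrative only: names and defaults are assumptions, not the real
// LocalAI backend code.
package main

import "fmt"

const (
	defaultScheduler = "discrete" // assumed library default
	defaultSampler   = "euler_a"  // assumed library default
)

type genOptions struct {
	Scheduler string
	Sampler   string
}

// withDefaults fills in a scheduler/sampler when the request omits them,
// so the generation call always receives a valid choice.
func withDefaults(o genOptions) genOptions {
	if o.Scheduler == "" {
		o.Scheduler = defaultScheduler
	}
	if o.Sampler == "" {
		o.Sampler = defaultSampler
	}
	return o
}

func main() {
	// Sampler given, scheduler omitted: only the scheduler is defaulted.
	fmt.Printf("%+v\n", withDefaults(genOptions{Sampler: "dpm++2m"}))
}
```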

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Move cfg scale out of diffusers block

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Make it work

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: set free_params_immediately to false so the model can be called in sequence

https://github.com/leejet/stable-diffusion.cpp/issues/366

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
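
The linked issue describes weights being freed right after the first generation when free_params_immediately is enabled, which breaks any follow-up call on the same context. A hedged Go sketch of that configuration choice, with illustrative names rather than the backend's real cgo bindings:

```go
// Illustrative only: the real backend passes this flag through cgo into
// stable-diffusion.cpp's context setup; the struct and constructor below
// are hypothetical stand-ins.
package main

import "fmt"

type sdContextConfig struct {
	ModelPath string
	Threads   int
	// stable-diffusion.cpp can free the model weights immediately after the
	// first generation to save memory. Keeping them resident (false) is what
	// lets the same context serve several requests in sequence
	// (see leejet/stable-diffusion.cpp#366).
	FreeParamsImmediately bool
}

func newContextConfig(modelPath string, threads int) sdContextConfig {
	return sdContextConfig{
		ModelPath:             modelPath,
		Threads:               threads,
		FreeParamsImmediately: false, // keep weights loaded between calls
	}
}

func main() {
	fmt.Printf("%+v\n", newContextConfig("stable-diffusion.gguf", 4))
}
```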

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-03 22:41:22 +01:00
cpp fix(llama.cpp): embed metal file into result binary for darwin (#4279) 2024-11-28 04:17:00 +00:00
go feat(backend): add stablediffusion-ggml (#4289) 2024-12-03 22:41:22 +01:00
python chore(deps): bump grpcio to 1.68.1 (#4301) 2024-12-02 19:13:26 +01:00
backend.proto feat(backend): add stablediffusion-ggml (#4289) 2024-12-03 22:41:22 +01:00