LocalAI/backend
Last commit: 2023-12-20 00:33:24 +00:00
Name           Last commit date            Last commit message
cpp            2023-12-15 08:26:48 +01:00  update(llama.cpp): update server, correctly propagate LLAMA_VERSION (#1440)
go             2023-11-18 08:18:43 +01:00  refactor: rename llama-stable to llama-ggml (#1287)
python         2023-12-20 00:33:24 +00:00  test only model load on petals
backend.proto  2023-12-13 19:20:22 +01:00  feat(diffusers): update, add autopipeline, controlnet (#1432)