LocalAI/pkg
Latest commit: feat(default): use number of physical cores as default (#2483)
bdd6769b2d, Ettore Di Giacinto <mudler@localai.io>, 2024-06-04 15:23:29 +02:00
Directory         Date                        Last commit
assets            2024-05-15 01:17:02 +02:00  feat(llama.cpp): add distributed llama.cpp inferencing (#2324)
downloader        2024-04-26 00:47:06 +02:00  fix: reduce chmod permissions for created files and directories (#2137)
functions         2024-05-31 22:52:02 +02:00  feat(functions): allow response_regex to be a list (#2447)
gallery           2024-05-08 00:42:34 +02:00  feat(ui): prompt for chat, support vision, enhancements (#2259)
grpc              2024-04-29 17:42:37 +00:00  refactor(application): introduce application global state (#2072)
langchain         2024-05-04 17:56:12 +02:00  feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants (#2232)
model             2024-05-21 14:33:47 +02:00  dependencies(grpcio): bump to fix CI issues (#2362)
stablediffusion   2023-06-05 17:21:38 +02:00  feat: support upscaled image generation with esrgan (#509)
startup           2024-04-23 09:22:58 +02:00  feat: Galleries UI (#2104)
store             2024-03-22 21:14:04 +01:00  feat(stores): Vector store backend (#1795)
templates         2024-04-26 00:47:06 +02:00  fix: reduce chmod permissions for created files and directories (#2137)
tinydream         2023-12-24 19:27:24 +00:00  feat: add tiny dream stable diffusion support (#1283)
utils             2024-05-20 19:17:59 +02:00  feat(llama.cpp): Totally decentralized, private, distributed, p2p inference (#2343)
xsync             2024-05-08 00:42:34 +02:00  feat(ui): prompt for chat, support vision, enhancements (#2259)
xsysinfo          2024-06-04 15:23:29 +02:00  feat(default): use number of physical cores as default (#2483)
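For context on the newest entry: #2483 makes the machine's physical core count the default thread setting, with the detection living under xsysinfo. The snippet below is only a rough sketch of how such a detection can be done in Go using the gopsutil library, falling back to the logical count reported by the runtime; it is not a copy of LocalAI's actual pkg/xsysinfo code.

```go
package main

import (
	"fmt"
	"runtime"

	"github.com/shirou/gopsutil/v3/cpu"
)

// defaultThreads returns the number of physical CPU cores, falling back to
// the logical core count from the Go runtime if detection fails.
// Illustrative sketch only, not the actual pkg/xsysinfo implementation.
func defaultThreads() int {
	// cpu.Counts(false) counts physical cores; cpu.Counts(true) would count logical ones.
	if n, err := cpu.Counts(false); err == nil && n > 0 {
		return n
	}
	return runtime.NumCPU()
}

func main() {
	fmt.Println("default threads:", defaultThreads())
}
```

Defaulting to physical rather than logical cores typically avoids oversubscribing SMT threads for compute-bound inference workloads.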