LocalAI/pkg
Ettore Di Giacinto ea330d452d
models(gallery): add mistral-0.3 and command-r, update functions ()
* models(gallery): add mistral-0.3 and command-r, update functions

Also add disable_parallel_new_lines to disable newlines in the JSON
output when forcing parallel tools. Some models (like mistral) might be
very sensitive to that when used for function calling.
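A minimal sketch of how the new flag might appear in a model's YAML config. The key placement under the function section and the model filename are assumptions inferred from this commit message, not verified against the LocalAI docs:

```yaml
# Hypothetical model config — key placement is assumed, not verified.
name: mistral-0.3
parameters:
  model: mistral-7b-instruct-v0.3.gguf  # filename is illustrative
function:
  # New flag from this commit: suppress newlines in the JSON emitted
  # when parallel tool calls are forced, since some models (e.g. mistral)
  # can be sensitive to them during function calling.
  disable_parallel_new_lines: true
```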

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* models(gallery): add aya-23-8b

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-23 19:16:08 +02:00
assets feat(llama.cpp): add distributed llama.cpp inferencing () 2024-05-15 01:17:02 +02:00
downloader fix: reduce chmod permissions for created files and directories () 2024-04-26 00:47:06 +02:00
functions models(gallery): add mistral-0.3 and command-r, update functions () 2024-05-23 19:16:08 +02:00
gallery feat(ui): prompt for chat, support vision, enhancements () 2024-05-08 00:42:34 +02:00
grpc refactor(application): introduce application global state () 2024-04-29 17:42:37 +00:00
langchain feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants () 2024-05-04 17:56:12 +02:00
model dependencies(grpcio): bump to fix CI issues () 2024-05-21 14:33:47 +02:00
stablediffusion feat: support upscaled image generation with esrgan () 2023-06-05 17:21:38 +02:00
startup feat: Galleries UI () 2024-04-23 09:22:58 +02:00
store feat(stores): Vector store backend () 2024-03-22 21:14:04 +01:00
templates fix: reduce chmod permissions for created files and directories () 2024-04-26 00:47:06 +02:00
tinydream feat: add tiny dream stable diffusion support () 2023-12-24 19:27:24 +00:00
utils feat(llama.cpp): Totally decentralized, private, distributed, p2p inference () 2024-05-20 19:17:59 +02:00
xsync feat(ui): prompt for chat, support vision, enhancements () 2024-05-08 00:42:34 +02:00
xsysinfo feat(startup): show CPU/GPU information with --debug () 2024-05-05 09:10:23 +02:00