Dave
7fb8b4191f
feat: "simple" chat/edit/completion template system prompt from config ( #856 )
2023-08-03 00:19:55 +02:00
Dave
ce8e9dc690
feature: model list :: filter query string parameter (#830)
2023-07-31 19:14:32 +02:00
Aman Gupta Karmani
12fe0932c4
feat: cancel stream generation if client disappears (#792)
2023-07-24 23:10:54 +02:00
Dave
c6bf67f446
feat(llama2): add template for chat messages (#782)
Co-authored-by: Aman Karmani <aman@tmm1.net>
Lays some of the groundwork for LLAMA2 compatibility as well as other future models with complex prompting schemes.
Starts a small refactoring of template loading in pkg/model/loader.go. Template loading is currently still part of ModelLoader, but it should now be easy to add template loading for situations other than the overall prompt templates and the new chat-specific per-message templates.
Adds support for new chat-endpoint-specific, per-message templates as an alternative to the existing Role: XYZ sprintf method.
Includes a temporary prompt template as an example, since I have a few questions before we merge in the model-gallery side changes (see )
Minor debug logging changes.
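As a rough sketch of the per-message templating described above, assuming Go's text/template and hypothetical struct fields and template text (not LocalAI's actual types or templates):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// ChatMessage carries the fields a per-message template could reference.
// The field names are hypothetical, not LocalAI's actual struct.
type ChatMessage struct {
	Role    string
	Content string
}

func main() {
	// A chat-specific, per-message template, in contrast to the older
	// "Role: Content" sprintf-style formatting.
	const msgTmpl = `{{if eq .Role "system"}}<<SYS>>{{.Content}}<</SYS>>{{else}}[INST] {{.Content}} [/INST]{{end}}`

	tmpl := template.Must(template.New("chat_message").Parse(msgTmpl))

	msgs := []ChatMessage{
		{Role: "system", Content: "You are a helpful assistant."},
		{Role: "user", Content: "Hello!"},
	}

	// Render each message through the template and join into one prompt.
	var prompt bytes.Buffer
	for _, m := range msgs {
		if err := tmpl.Execute(&prompt, m); err != nil {
			panic(err)
		}
		prompt.WriteString("\n")
	}
	fmt.Print(prompt.String())
}
```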
2023-07-22 11:31:39 -04:00
Ettore Di Giacinto
1d0ed95a54
feat: move other backends to grpc
This finally makes everything more consistent
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
b816009db0
feat: add falcon ggllm via grpc client
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
85f0f8227d
refactor: drop code dups (#234)
2023-05-11 16:34:16 +02:00
Ettore Di Giacinto
59e3c02002
make use of new bindings for gpt4all (#232)
2023-05-11 14:31:19 +02:00
Matthew Campbell
032dee256f
Keep whisper models in memory (#233)
2023-05-11 14:05:07 +02:00
Ettore Di Giacinto
11675932ac
feat: add dolly/redpajama/bloomz models support (#214)
2023-05-11 01:12:58 +02:00
Ettore Di Giacinto
f8ee20991c
feat: add bert.cpp embeddings (#222)
2023-05-10 15:20:21 +02:00
Ettore Di Giacinto
c839b334eb
feat: add embeddings for go-llama.cpp backend (#190)
2023-05-05 11:20:06 +02:00
Ettore Di Giacinto
714bfcd45b
fix: missing returning error and free callback stream (#187)
2023-05-04 19:49:43 +02:00
Ettore Di Giacinto
751b7eca62
feat: add rwkv support (#158)
Signed-off-by: mudler <mudler@mocaccino.org>
2023-05-03 11:45:22 +02:00
Ettore Di Giacinto
1ae7150810
feat: allow to specify default backend for model (#156)
Signed-off-by: mudler <mudler@c3os.io>
2023-05-03 00:31:28 +02:00
Ettore Di Giacinto
156e15a4fa
Bump llama.cpp, downgrade gpt4all-j (#149)
2023-05-02 16:07:18 +02:00
Ettore Di Giacinto
92452d46da
feat: add new gpt4all-j binding (#142)
2023-05-01 20:00:15 +02:00
Ettore Di Giacinto
c806eae0de
feat: config files and SSE (#83)
Signed-off-by: mudler <mudler@mocaccino.org>
Signed-off-by: Tyler Gillson <tyler.gillson@gmail.com>
Co-authored-by: Tyler Gillson <tyler.gillson@gmail.com>
2023-04-26 21:18:18 -07:00
Ettore Di Giacinto
f816dfae65
Add support for stablelm (#48)
Signed-off-by: mudler <mudler@mocaccino.org>
2023-04-21 00:06:55 +02:00
Ettore Di Giacinto
1c4fbaae20
Add support for cerebras (#45)
Signed-off-by: mudler <mudler@c3os.io>
2023-04-20 19:33:36 +02:00
Ettore Di Giacinto
d517a54e28
Major API enhancements (#44)
2023-04-20 18:33:02 +02:00
Ettore Di Giacinto
7fec26f5d3
Enhancements (#34)
Signed-off-by: mudler <mudler@c3os.io>
2023-04-19 17:10:29 +02:00
mudler
5556aa46dd
Small refinements and refactors
2023-04-12 00:02:39 +02:00
mudler
ae30bd346d
Reorganize repository layout
2023-04-11 23:43:43 +02:00