Commit Graph

116 Commits

SHA1 Message Date
c71c729bc2 debug 2023-07-21 10:53:26 +02:00
e459f114cd fix: fix tests, small refactors 2023-07-20 23:52:04 +02:00
982a7e86a8 feat: add huggingface embeddings backend 2023-07-20 22:10:42 +02:00
94916749c5 feat: add external grpc and model autoloading 2023-07-20 22:10:12 +02:00
1d2ae46ddc tests: clean up logs 2023-07-20 01:36:34 +02:00
3feb632eb4 refactor: rename "llama-master" and "llama" (#776) 2023-07-20 00:36:16 +02:00
6352448b72 feat: add llama-master backend (#752) 2023-07-17 23:58:15 +02:00
d0e67cce75 fix: make last stream message to send empty content 2023-07-16 00:09:28 +02:00
17294ae5e5 fix: make first stream message to send empty content (#751) 2023-07-15 22:50:52 +02:00
1d0ed95a54 feat: move other backends to grpc (This finally makes everything more consistent) 2023-07-15 01:19:43 +02:00
5dcfdbe51d feat: various refactorings 2023-07-15 01:19:43 +02:00
f2f1d7fe72 feat: use gRPC for transformers 2023-07-15 01:19:43 +02:00
ae533cadef feat: move gpt4all to a grpc service 2023-07-15 01:19:43 +02:00
58f6aab637 feat: move llama to a grpc 2023-07-15 01:19:43 +02:00
b816009db0 feat: add falcon ggllm via grpc client 2023-07-15 01:19:43 +02:00
dcf35dd25f Fixup custom role encoding 2023-07-09 11:13:19 +02:00
e70322676c Allow to customize no action behavior 2023-07-09 10:53:46 +02:00
b3f43ab938 Add a way to disable default action 2023-07-09 10:02:21 +02:00
bbc4468908 Make functions more compatible with OpenAI specs 2023-07-09 10:02:09 +02:00
55befe396a Add grammar_json to the request parameters to facilitate JSON generation 2023-07-06 19:08:04 +02:00
483fddccf9 minor fixups 2023-07-06 11:55:19 +02:00
05aed255db Customize function call in templates 2023-07-05 18:24:44 +02:00
0f1326b2bd fixups 2023-07-04 23:40:22 +02:00
b722e7eb7e feat: cleanups, small enhancements 2023-07-04 18:58:19 +02:00
f09ddd2983 feat: add grammar and functions call support 2023-07-04 18:58:19 +02:00
a6839fd238 feat: [whisper] Partial support for verbose_json format in transcribe endpoint (#721) 2023-07-04 14:31:31 +02:00
3593cb0c87 feat: update llama, enable NUMA (#684) 2023-06-27 09:00:10 +02:00
02136531a3 fix: return index and delta in stream token (#680) 2023-06-26 18:49:36 +02:00
d3a486a4f8 feat: Add '/version' endpoint and display it in the CLI (#679) 2023-06-26 15:12:43 +02:00
2b957df56c fix: rename /models/list to /models/available (#678) 2023-06-26 15:12:26 +02:00
78f3c3da48 refactor: consolidate usage of GetURI (#674) 2023-06-26 12:25:38 +02:00
60db5957d3 Gallery repository (#663) 2023-06-24 08:18:17 +02:00
a7bb029d23 feat: add tts with go-piper (#649) 2023-06-22 17:53:10 +02:00
2f5feb4841 Add LowVRAM option parameter (#642) 2023-06-20 20:33:47 +02:00
295f3030a9 feat: add typical_p to model parameters (#598) 2023-06-14 19:33:20 +02:00
10ddd72b58 fix: set default batch size (#597) 2023-06-14 19:09:27 +02:00
e37361985c deps: update gpt4all bindings, fix search path on new versions (#592) 2023-06-14 13:24:53 +02:00
84946e9275 feat: display download progress when installing models (#543) 2023-06-08 21:33:18 +02:00
c9bbba4872 tests: add llama tests with openllama (#538) 2023-06-08 00:36:11 +02:00
5abbb134d9 feat: extend model configuration for llama.cpp (#536) 2023-06-07 21:46:19 +02:00
d62aef2016 feat: add experimental support for falcon-7b (#516) 2023-06-06 17:23:19 +02:00
b503725dc7 fix: downgrade gpt4all (#503) 2023-06-05 09:42:50 +02:00
96794851b3 feat: add support for Stream: true to completionEndpoint (#465) 2023-06-03 00:27:03 +02:00
78ad4813df feat: Update gpt4all, support multiple implementations in runtime (#472) 2023-06-01 23:38:52 +02:00
c8a4a4f4e9 feat: Add new test cases for LoadConfigs (#447) 2023-06-01 16:20:45 +02:00
3ba07a5928 feat: add LangChainGo Huggingface backend (#446) 2023-06-01 12:00:06 +02:00
49ce24984c feat: Add more test-cases and remove dev container (#433) 2023-05-30 13:01:55 +02:00
f401181cb5 fix: switch back to upstream for rwkv bindings (#432) 2023-05-30 12:35:32 +02:00
aacb96df7a fix: correctly handle errors from App constructor (#430) 2023-05-30 12:00:30 +02:00
217dbb448e feat: allow to set a prompt cache path and enable saving state (#395) 2023-05-27 14:29:11 +02:00