Author | Commit | Message | Date
Ettore Di Giacinto | 9decd0813c | feat: update go-gpt2 (#359) | 2023-05-23 21:47:47 +02:00
    Signed-off-by: mudler <mudler@mocaccino.org>
Robert Hambrock | 4aa78843c0 | fix: spec compliant instantiation and termination of streams (#341) | 2023-05-21 15:24:04 +02:00
Ettore Di Giacinto | 6f54cab3f0 | feat: allow to set cors (#339) | 2023-05-21 14:38:25 +02:00
Ettore Di Giacinto | 05a3d569b0 | feat: allow to override model config (#323) | 2023-05-20 17:03:53 +02:00
Ettore Di Giacinto | 4e381cbe92 | feat: support shorter urls for github repositories (#314) | 2023-05-20 09:06:30 +02:00
Ettore Di Giacinto | 1fade53a61 | feat: minor enhancements to /models/apply (#297) | 2023-05-19 08:31:11 +02:00
Ettore Di Giacinto | cc9aa9eb3f | feat: add /models/apply endpoint to prepare models (#286) | 2023-05-18 15:59:03 +02:00
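Commits cc9aa9eb3f (#286) and 1fade53a61 (#297) above add and refine a /models/apply endpoint for preparing models, and 4e381cbe92 (#314) adds shorter GitHub-style repository URLs. Below is a minimal Go sketch of calling that endpoint; the request field names (`url`, `name`), the short gallery URL, and the localhost:8080 address are illustrative assumptions, not details taken from the commit messages.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Hypothetical request body: field names and the short "github:" URL
	// form are assumptions for illustration, not confirmed by the commits.
	body, err := json.Marshal(map[string]string{
		"url":  "github:go-skynet/model-gallery/gpt4all-j.yaml",
		"name": "gpt4all-j",
	})
	if err != nil {
		log.Fatal(err)
	}

	// POST to the /models/apply endpoint added in #286
	// (a LocalAI instance is assumed to listen on localhost:8080).
	resp, err := http.Post("http://localhost:8080/models/apply", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```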
Ettore Di Giacinto | 3f739575d8 | Minor fixes (#285) | 2023-05-17 21:01:46 +02:00
Ettore Di Giacinto | 9d051c5d4f | feat: add image generation with ncnn-stablediffusion (#272) | 2023-05-16 19:32:53 +02:00
Ettore Di Giacinto | acd03d15f2 | feat: add support for cublas/openblas in the llama.cpp backend (#258) | 2023-05-16 16:26:25 +02:00
Ettore Di Giacinto | a035de2fdd | tests: add rwkv (#261) | 2023-05-15 08:15:01 +02:00
Ettore Di Giacinto | 2488c445b6 | feat: bert.cpp token embeddings (#241) | 2023-05-12 17:16:49 +02:00
Ettore Di Giacinto | b4241d0a0d | tests: enable whisper (#239) | 2023-05-12 14:10:18 +02:00
Ettore Di Giacinto | 8250391e49 | Add support for gptneox/replit (#238) | 2023-05-12 11:36:35 +02:00
Ettore Di Giacinto | fd1df4e971 | whisper: add tests and allow to set upload size (#237) | 2023-05-12 10:04:20 +02:00
Ettore Di Giacinto | 4413defca5 | feat: add starcoder (#236) | 2023-05-11 20:20:07 +02:00
Ettore Di Giacinto | 85f0f8227d | refactor: drop code dups (#234) | 2023-05-11 16:34:16 +02:00
Ettore Di Giacinto | 59e3c02002 | make use of new bindings for gpt4all (#232) | 2023-05-11 14:31:19 +02:00
Matthew Campbell | 032dee256f | Keep whisper models in memory (#233) | 2023-05-11 14:05:07 +02:00
Matthew Campbell | 6b5e2b2bf5 | Upload transcription API wasn't reading the data from the post (#229) | 2023-05-11 10:43:05 +02:00
Ettore Di Giacinto | 11675932ac | feat: add dolly/redpajama/bloomz models support (#214) | 2023-05-11 01:12:58 +02:00
Ettore Di Giacinto | f8ee20991c | feat: add bert.cpp embeddings (#222) | 2023-05-10 15:20:21 +02:00
Ettore Di Giacinto | 9f426578cf | feat: add transcript endpoint (#211) | 2023-05-09 11:43:50 +02:00
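Commit 9f426578cf (#211) above adds a transcription endpoint, and fd1df4e971 (#237) makes the upload size configurable. A sketch of uploading an audio file from Go follows; the /v1/audio/transcriptions route and the `file`/`model` form fields mirror the OpenAI audio API shape and are assumptions here, as are the sample file and the whisper-1 model name.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	// Build a multipart form with the audio file and a model name.
	// Route and field names are assumptions, not taken from the commits.
	var buf bytes.Buffer
	w := multipart.NewWriter(&buf)

	f, err := os.Open("sample.wav")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	part, err := w.CreateFormFile("file", "sample.wav")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(part, f); err != nil {
		log.Fatal(err)
	}
	if err := w.WriteField("model", "whisper-1"); err != nil {
		log.Fatal(err)
	}
	w.Close()

	req, err := http.NewRequest("POST", "http://localhost:8080/v1/audio/transcriptions", &buf)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", w.FormDataContentType())

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```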
Ettore Di Giacinto | 89dfa0f5fc | feat: add experimental support for embeddings as arrays (#207) | 2023-05-08 19:31:18 +02:00
Dave | 07ec2e441d | mini fix - OpenAI documentation url (#200) | 2023-05-06 00:42:08 +02:00
mudler | 8c8cf38d4d | tests: use 1 core | 2023-05-05 23:29:34 +02:00
mudler | 009ee47fe2 | Don't allow 0 as thread count | 2023-05-05 22:51:20 +02:00
mudler | ec2adc2c03 | tests: use 3 cores | 2023-05-05 22:07:01 +02:00
mudler | e62ee2bc06 | fix: remove trailing 0s from embeddings | 2023-05-05 18:35:03 +02:00
    This happens when no max_tokens are set, so by default go-llama allocates more space for the slice and padding happens.
mudler | b49721cdd1 | fix: respect config from file for backends settings | 2023-05-05 18:05:10 +02:00
mudler | 64c0a7967f | fix: pass prediction options when using the model | 2023-05-05 15:56:02 +02:00
mudler | e96eadab40 | feat: support deprecated embeddings API | 2023-05-05 15:55:19 +02:00
mudler | e73283121b | feat: support arrays for prompt and input | 2023-05-05 15:54:59 +02:00
    Signed-off-by: mudler <mudler@mocaccino.org>
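Commits e73283121b (arrays for prompt and input), e96eadab40 (deprecated embeddings API), 89dfa0f5fc (#207, embeddings as arrays) and f8ee20991c (#222, bert.cpp embeddings) all touch the embeddings path. Below is a hedged Go sketch of an embeddings request with `input` sent as an array of strings; the /v1/embeddings route, field names, and model name follow the OpenAI embeddings API shape and are assumptions, not details taken from the commits.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// "input" is sent as an array of strings; one embedding vector per
	// input is expected back. Route, field names, and the model name are
	// assumptions based on the OpenAI embeddings API shape.
	payload := map[string]interface{}{
		"model": "bert-embeddings",
		"input": []string{"first sentence", "second sentence"},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post("http://localhost:8080/v1/embeddings", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```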
mudler | 857d13e8d6 | debug: wire up go-fiber debugger | 2023-05-05 15:53:57 +02:00
Ettore Di Giacinto | 961cf29217 | feat: expose mirostat to config (#193) | 2023-05-05 13:45:37 +02:00
Ettore Di Giacinto | c839b334eb | feat: add embeddings for go-llama.cpp backend (#190) | 2023-05-05 11:20:06 +02:00
Ettore Di Giacinto | 714bfcd45b | fix: missing returning error and free callback stream (#187) | 2023-05-04 19:49:43 +02:00
Ettore Di Giacinto | fdf75c6d0e | rwkv fixes and examples (#185) | 2023-05-04 17:32:23 +02:00
Ettore Di Giacinto | c974dad799 | Return usage in the API responses (#166) | 2023-05-03 17:29:18 +02:00
Ettore Di Giacinto | 67992a7d99 | feat: support slices or strings in the prompt completion endpoint (#162) | 2023-05-03 13:13:31 +02:00
    Signed-off-by: mudler <mudler@mocaccino.org>
Ettore Di Giacinto | 751b7eca62 | feat: add rwkv support (#158) | 2023-05-03 11:45:22 +02:00
    Signed-off-by: mudler <mudler@mocaccino.org>
Ettore Di Giacinto | 1ae7150810 | feat: allow to specify default backend for model (#156) | 2023-05-03 00:31:28 +02:00
    Signed-off-by: mudler <mudler@c3os.io>
Ettore Di Giacinto | 70caf9bf8c | feat: support stopwords both string and arrays (#154) | 2023-05-02 23:30:00 +02:00
Dave | 0b226ac027 | Stop parameter of OpenAIRequest changed to String Array (#153) | 2023-05-02 22:02:45 +02:00
Ettore Di Giacinto | 220d6fd59b | feat: add stream events (#152) | 2023-05-02 20:03:35 +02:00
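Commits 220d6fd59b (#152, stream events), 70caf9bf8c and 0b226ac027 (#154/#153, stop as string or array) and 67992a7d99 (#162, prompt as string or slice) together shape the completion request. The sketch below sends prompt and stop as arrays and reads the resulting server-sent events; the /v1/completions route, field names, and model name follow the OpenAI completions API shape and are assumptions for illustration only.

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// prompt and stop are sent as arrays (#162, #153, #154) and
	// stream=true requests server-sent events (#152). Route, field
	// names, and the model name are assumptions of this sketch.
	payload := map[string]interface{}{
		"model":  "ggml-gpt4all-j",
		"prompt": []string{"Once upon a time", "In a galaxy far away"},
		"stop":   []string{"\n\n"},
		"stream": true,
	}
	body, err := json.Marshal(payload)
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post("http://localhost:8080/v1/completions", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Read SSE "data: ..." lines until the stream terminates.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "data: ") {
			fmt.Println(strings.TrimPrefix(line, "data: "))
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```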
Ettore Di Giacinto | 156e15a4fa | Bump llama.cpp, downgrade gpt4all-j (#149) | 2023-05-02 16:07:18 +02:00
Ettore Di Giacinto | 92452d46da | feat: add new gpt4all-j binding (#142) | 2023-05-01 20:00:15 +02:00
Ettore Di Giacinto | 52f4d993c1 | feat: add /edit endpoint (#119) | 2023-04-29 09:22:09 +02:00
Ettore Di Giacinto | c806eae0de | feat: config files and SSE (#83) | 2023-04-26 21:18:18 -07:00
    Signed-off-by: mudler <mudler@mocaccino.org>
    Signed-off-by: Tyler Gillson <tyler.gillson@gmail.com>
    Co-authored-by: Tyler Gillson <tyler.gillson@gmail.com>
Ettore Di Giacinto | 12d83a4184 | feat: Return OpenAI errors and update docs (#80) | 2023-04-24 23:42:03 +02:00
    Signed-off-by: mudler <mudler@mocaccino.org>