Commit Graph

202 Commits

Author SHA1 Message Date
Ettore Di Giacinto
1ff30034e8
fix(deps): update go-llama.cpp (#980)
**Description**

This PR bumps llama.cpp (adding support for gguf v2) and changes the
default test model

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-30 23:01:55 +02:00
Ettore Di Giacinto
02704e38d3
feat(diffusers): Add lora (#965)
**Description**

This PR fixes #914 

Now diffusers respects the `lora_adapter` configuration parameter.

---------

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-08-27 10:11:16 +02:00
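As a rough illustration of the `lora_adapter` parameter mentioned above, here is a minimal sketch that emits a hypothetical model definition for the diffusers backend; the struct layout and every field other than `lora_adapter` are assumptions for illustration, not the actual LocalAI config schema.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// ModelConfig is a hypothetical, trimmed-down model definition used only to
// show where a lora_adapter path could be declared.
type ModelConfig struct {
	Name        string `yaml:"name"`
	Backend     string `yaml:"backend"`
	LoraAdapter string `yaml:"lora_adapter"` // path to the LoRA weights to apply
}

func main() {
	cfg := ModelConfig{
		Name:        "stablediffusion-lora",                // hypothetical model name
		Backend:     "diffusers",                           // backend this parameter now targets
		LoraAdapter: "/models/loras/my-style.safetensors",  // hypothetical path
	}

	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```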
Ettore Di Giacinto
44bc7aa3d0
feat: Allow to load lora adapters for llama.cpp (#955)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-25 21:58:46 +02:00
Ettore Di Giacinto
1120847f72
feat: bump llama.cpp, add gguf support (#943)
**Description**

This PR syncs up the `llama` backend to use `gguf`
(https://github.com/go-skynet/go-llama.cpp/pull/180). It also adds
`llama-stable` to the targets so we can still load ggml. It adapts the
current tests to use the `llama-backend` for ggml and uses a `gguf`
model to run tests on the new backend.

In order to consume the new version of go-llama.cpp, it also bumps Go to
1.21 (images, pipelines, etc.).

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-24 01:18:58 +02:00
Dave
10b0e13882
feat: backend monitor shutdown endpoint, process based (#938)
This PR adds a new endpoint to the backend monitor section,
`/backend/shutdown`, which terminates the gRPC process for the related
model.
2023-08-23 18:38:37 +02:00
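As a quick, hedged sketch of how a client might call the shutdown endpoint described above, the snippet below POSTs a model name to `/backend/shutdown`; the JSON body shape and the model name are assumptions, not the documented request format.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Assumed request body: a JSON object naming the model whose gRPC
	// backend process should be terminated.
	body := bytes.NewBufferString(`{"model": "gpt4all-j"}`)

	resp, err := http.Post("http://localhost:8080/backend/shutdown", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	fmt.Println("shutdown request status:", resp.Status)
}
```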
Dave
901f0709c5
Feat: rwkv improvements: (#937) 2023-08-22 18:48:06 +02:00
Ettore Di Giacinto
cc060a283d
fix: drop racy code, refactor and group API schema (#931)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-20 14:04:45 +02:00
Ettore Di Giacinto
28db83e17b
fix: disable usage by default (still experimental) (#929)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-19 16:15:22 +02:00
Ettore Di Giacinto
afdc0ebfd7
feat: add --single-active-backend to allow only one backend active at a time (#925)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-19 01:49:33 +02:00
Ettore Di Giacinto
1079b18ff7
feat(diffusers): be consistent with pipelines, support also depthimg2img (#926)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-18 22:06:24 +02:00
Dave
8cb1061c11
Usage Features (#863) 2023-08-18 21:23:14 +02:00
Ettore Di Giacinto
2bacd0180d
feat(diffusers): add img2img and clip_skip, support more kernel schedulers (#906)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-17 23:38:59 +02:00
Ettore Di Giacinto
37700f2d98
feat(diffusers): add DPMSolverMultistepScheduler++, DPMSolverMultistepSchedulerSDE++, guidance_scale (#903)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-16 01:11:42 +02:00
Ettore Di Giacinto
0ec695f9e4
feat: make initializer accept gRPC delay times (#900)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-16 01:11:32 +02:00
Ettore Di Giacinto
a96c3bc885
feat(diffusers): various enhancements (#895) 2023-08-14 23:12:00 +02:00
Michael Nesbitt
1d1cae8e4d
feat: add API_KEY list support (#877)
Co-authored-by: Harold Sun <sunhua@amazon.com>
2023-08-10 00:06:21 +02:00
Ettore Di Giacinto
8c781a6a44
feat: Add Diffusers (#874)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-09 08:38:51 +02:00
Ettore Di Giacinto
3c8fc37c56 feat: Add UseFastTokenizer
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-08 01:10:05 +02:00
Ettore Di Giacinto
433605e282 feat: add initial Bark backend implementation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-07 22:53:28 +02:00
Ettore Di Giacinto
a843e64fc2 feat: add initial AutoGPTQ backend implementation 2023-08-07 22:53:28 +02:00
Ettore Di Giacinto
acd829a7a0
fix: do not break on newlines on function returns (#864)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-04 21:46:36 +02:00
Ettore Di Giacinto
5ca21ee398
feat: add ngqa and RMSNormEps parameters (#860)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-03 00:51:08 +02:00
Dave
7fb8b4191f
feat: "simple" chat/edit/completion template system prompt from config (#856) 2023-08-03 00:19:55 +02:00
Ettore Di Giacinto
c309aac8f5
fix(gallery): use inline YAML (#851)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-01 19:09:32 +02:00
Ettore Di Giacinto
d603a9cbb5
fix(gallery): preload from file should be in YAML format (#846)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-31 21:13:16 +02:00
Dave
ce8e9dc690
feature: model list :: filter query string parameter (#830) 2023-07-31 19:14:32 +02:00
Dave
8e8d474ae8
refactor: Remove remaining uses of deprecated package io/ioutil (#837) 2023-07-30 11:23:43 +00:00
Ettore Di Giacinto
e70b91aaef tests: set a small context_size
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-29 10:29:47 +02:00
Ettore Di Giacinto
f085baa77d fix: set default rope if not specified
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-29 01:07:16 +02:00
Ettore Di Giacinto
dde12b492b
fix: select function calls if 'name' is set in the request (#827)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-28 01:17:11 +02:00
Ettore Di Giacinto
096d98c3d9
fix: add rope settings during model load, fix CUDA (#821)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-27 21:56:05 +02:00
Ettore Di Giacinto
b96e30e66c
fix: use bytes in gRPC proto instead of strings (#813)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-27 18:41:04 +02:00
Ettore Di Giacinto
569c1d1163
feat: add rope settings and negative prompt, drop grammar backend (#797)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-25 19:05:27 +02:00
Aman Gupta Karmani
12fe0932c4
feat: cancel stream generation if client disappears (#792) 2023-07-24 23:10:54 +02:00
Dave
c6bf67f446
feat(llama2): add template for chat messages (#782)
Co-authored-by: Aman Karmani <aman@tmm1.net>

Lays some of the groundwork for LLAMA2 compatibility, as well as for other future models with complex prompting schemes.

Starts a small refactoring in pkg/model/loader.go regarding template loading. It currently still lives in ModelLoader, but it should be easy to add template loading for situations other than overall prompt templates and the new chat-specific per-message templates.
Adds support for new chat-endpoint-specific, per-message templates as an alternative to the existing Role: XYZ sprintf method.
Includes a temporary prompt template as an example, since I have a few questions before we merge in the model-gallery side changes (see ).
Minor debug logging changes.
2023-07-22 11:31:39 -04:00
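To make the per-message template idea above concrete, here is an illustrative-only sketch using Go's text/template: each chat message is rendered through its own template instead of a fixed Role: Content sprintf. The template text and field names are hypothetical and not taken from the model gallery.

```go
package main

import (
	"os"
	"text/template"
)

// ChatMessage is a minimal stand-in for a chat request message.
type ChatMessage struct {
	Role    string
	Content string
}

func main() {
	// Hypothetical per-message template: formatting can vary by role,
	// e.g. wrapping user turns in LLAMA2-style [INST] markers.
	const perMessage = `{{if eq .Role "user"}}[INST] {{.Content}} [/INST]
{{else}}{{.Content}}
{{end}}`

	tmpl := template.Must(template.New("chat-message").Parse(perMessage))

	msgs := []ChatMessage{
		{Role: "user", Content: "Hello!"},
		{Role: "assistant", Content: "Hi! How can I help?"},
	}
	for _, m := range msgs {
		if err := tmpl.Execute(os.Stdout, m); err != nil {
			panic(err)
		}
	}
}
```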
Ettore Di Giacinto
94817b557c
fix: make completions endpoint closer to the OpenAI specification (#790)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-22 00:53:52 +02:00
Ettore Di Giacinto
c71c729bc2 debug 2023-07-21 10:53:26 +02:00
Ettore Di Giacinto
e459f114cd fix: fix tests, small refactors
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-20 23:52:04 +02:00
Ettore Di Giacinto
982a7e86a8 feat: add huggingface embeddings backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-20 22:10:42 +02:00
Ettore Di Giacinto
94916749c5 feat: add external grpc and model autoloading 2023-07-20 22:10:12 +02:00
Ettore Di Giacinto
1d2ae46ddc tests: clean up logs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-20 01:36:34 +02:00
Ettore Di Giacinto
3feb632eb4
refactor: rename "llama-master" and "llama" (#776)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-20 00:36:16 +02:00
Ettore Di Giacinto
6352448b72
feat: add llama-master backend (#752)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-17 23:58:15 +02:00
Ettore Di Giacinto
d0e67cce75 fix: make last stream message send empty content
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-16 00:09:28 +02:00
Ettore Di Giacinto
17294ae5e5
fix: make first stream message send empty content (#751)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 22:50:52 +02:00
Ettore Di Giacinto
1d0ed95a54 feat: move other backends to grpc
This finally makes everything more consistent

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
5dcfdbe51d feat: various refactorings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
f2f1d7fe72 feat: use gRPC for transformers
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
ae533cadef feat: move gpt4all to a grpc service
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
58f6aab637 feat: move llama to a grpc service
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
b816009db0 feat: add falcon ggllm via grpc client
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
mudler
dcf35dd25f Fixup custom role encoding
Signed-off-by: mudler <mudler@localai.io>
2023-07-09 11:13:19 +02:00
mudler
e70322676c Allow to customize no action behavior
Signed-off-by: mudler <mudler@localai.io>
2023-07-09 10:53:46 +02:00
mudler
b3f43ab938 Add a way to disable default action 2023-07-09 10:02:21 +02:00
mudler
bbc4468908 Make functions more compatible with OpenAI specs 2023-07-09 10:02:09 +02:00
mudler
55befe396a Add grammar_json to the request parameters to facilitate JSON generation 2023-07-06 19:08:04 +02:00
mudler
483fddccf9 minor fixups 2023-07-06 11:55:19 +02:00
mudler
05aed255db Customize function call in templates 2023-07-05 18:24:44 +02:00
mudler
0f1326b2bd fixups 2023-07-04 23:40:22 +02:00
mudler
b722e7eb7e feat: cleanups, small enhancements
Signed-off-by: mudler <mudler@localai.io>
2023-07-04 18:58:19 +02:00
mudler
f09ddd2983 feat: add grammar and functions call support 2023-07-04 18:58:19 +02:00
Luis López
a6839fd238
feat: [whisper] Partial support for verbose_json format in transcribe endpoint (#721) 2023-07-04 14:31:31 +02:00
Ettore Di Giacinto
3593cb0c87
feat: update llama, enable NUMA (#684) 2023-06-27 09:00:10 +02:00
Ettore Di Giacinto
02136531a3
fix: return index and delta in stream token (#680)
Signed-off-by: mudler <mudler@localai.io>
2023-06-26 18:49:36 +02:00
Ettore Di Giacinto
d3a486a4f8
feat: Add '/version' endpoint and display it in the CLI (#679) 2023-06-26 15:12:43 +02:00
Ettore Di Giacinto
2b957df56c
fix: rename /models/list to /models/available (#678) 2023-06-26 15:12:26 +02:00
Ettore Di Giacinto
78f3c3da48
refactor: consolidate usage of GetURI (#674)
Signed-off-by: mudler <mudler@localai.io>
2023-06-26 12:25:38 +02:00
Ettore Di Giacinto
60db5957d3
Gallery repository (#663)
Signed-off-by: mudler <mudler@localai.io>
2023-06-24 08:18:17 +02:00
Ettore Di Giacinto
a7bb029d23
feat: add tts with go-piper (#649)
Signed-off-by: mudler <mudler@localai.io>
2023-06-22 17:53:10 +02:00
Ettore Di Giacinto
2f5feb4841
Add LowVRAM option parameter (#642) 2023-06-20 20:33:47 +02:00
Ettore Di Giacinto
295f3030a9
feat: add typical_p to model parameters (#598)
Signed-off-by: mudler <mudler@mocaccino.org>
2023-06-14 19:33:20 +02:00
Ettore Di Giacinto
10ddd72b58
fix: set default batch size (#597) 2023-06-14 19:09:27 +02:00
Ettore Di Giacinto
e37361985c
deps: update gpt4all bindings, fix search path on new versions (#592) 2023-06-14 13:24:53 +02:00
Ettore Di Giacinto
84946e9275
feat: display download progress when installing models (#543) 2023-06-08 21:33:18 +02:00
Ettore Di Giacinto
c9bbba4872
tests: add llama tests with openllama (#538)
Signed-off-by: mudler <mudler@mocaccino.org>
2023-06-08 00:36:11 +02:00
Ettore Di Giacinto
5abbb134d9
feat: extend model configuration for llama.cpp (#536) 2023-06-07 21:46:19 +02:00
Ettore Di Giacinto
d62aef2016
feat: add experimental support for falcon-7b (#516)
Signed-off-by: mudler <mudler@mocaccino.org>
2023-06-06 17:23:19 +02:00
Ettore Di Giacinto
b503725dc7
fix: downgrade gpt4all (#503)
Signed-off-by: mudler <mudler@mocaccino.org>
2023-06-05 09:42:50 +02:00
Samuel Maynard
96794851b3
feat: add support for Stream: true to completionEndpoint (#465) 2023-06-03 00:27:03 +02:00
Ettore Di Giacinto
78ad4813df
feat: Update gpt4all, support multiple implementations in runtime (#472)
Signed-off-by: mudler <mudler@mocaccino.org>
2023-06-01 23:38:52 +02:00
Aisuko
c8a4a4f4e9
feat: Add new test cases for LoadConfigs (#447)
Signed-off-by: Aisuko <urakiny@gmail.com>
2023-06-01 16:20:45 +02:00
Pavel Zloi
3ba07a5928
feat: add LangChainGo Huggingface backend (#446)
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-06-01 12:00:06 +02:00
Aisuko
49ce24984c
feat: Add more test-cases and remove dev container (#433)
Signed-off-by: Aisuko <urakiny@gmail.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-05-30 13:01:55 +02:00
Ettore Di Giacinto
f401181cb5
fix: switch back to upstream for rwkv bindings (#432) 2023-05-30 12:35:32 +02:00
Ettore Di Giacinto
aacb96df7a
fix: correctly handle errors from App constructor (#430)
Signed-off-by: mudler <mudler@mocaccino.org>
2023-05-30 12:00:30 +02:00
Ettore Di Giacinto
217dbb448e
feat: allow to set a prompt cache path and enable saving state (#395)
Signed-off-by: mudler <mudler@mocaccino.org>
2023-05-27 14:29:11 +02:00
Ettore Di Giacinto
76c881043e
feat: allow to preload models before startup via env var or configs (#391) 2023-05-27 09:26:33 +02:00
Ettore Di Giacinto
bf54b78270
feat: add /healthz and /readyz endpoints for kubernetes (#374) 2023-05-24 22:19:13 +02:00
Ettore Di Giacinto
9decd0813c
feat: update go-gpt2 (#359)
Signed-off-by: mudler <mudler@mocaccino.org>
2023-05-23 21:47:47 +02:00
Robert Hambrock
4aa78843c0
fix: spec compliant instantiation and termination of streams (#341) 2023-05-21 15:24:04 +02:00
Ettore Di Giacinto
6f54cab3f0
feat: allow to set cors (#339) 2023-05-21 14:38:25 +02:00
Ettore Di Giacinto
05a3d569b0
feat: allow to override model config (#323) 2023-05-20 17:03:53 +02:00
Ettore Di Giacinto
4e381cbe92
feat: support shorter urls for github repositories (#314) 2023-05-20 09:06:30 +02:00
Ettore Di Giacinto
1fade53a61
feat: minor enhancements to /models/apply (#297) 2023-05-19 08:31:11 +02:00
Ettore Di Giacinto
cc9aa9eb3f
feat: add /models/apply endpoint to prepare models (#286) 2023-05-18 15:59:03 +02:00
Ettore Di Giacinto
3f739575d8
Minor fixes (#285) 2023-05-17 21:01:46 +02:00
Ettore Di Giacinto
9d051c5d4f
feat: add image generation with ncnn-stablediffusion (#272) 2023-05-16 19:32:53 +02:00
Ettore Di Giacinto
acd03d15f2
feat: add support for cublas/openblas in the llama.cpp backend (#258) 2023-05-16 16:26:25 +02:00
Ettore Di Giacinto
a035de2fdd
tests: add rwkv (#261) 2023-05-15 08:15:01 +02:00
Ettore Di Giacinto
2488c445b6
feat: bert.cpp token embeddings (#241) 2023-05-12 17:16:49 +02:00