LocalAI/api/backend
Ettore Di Giacinto 44bc7aa3d0
feat: Allow to load lora adapters for llama.cpp (#955)
**Description**

This PR fixes #

**Notes for Reviewers**

**[Signed commits](../CONTRIBUTING.md#signing-off-on-commits-developer-certificate-of-origin)**
- [ ] Yes, I signed my commits.


Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-25 21:58:46 +02:00
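Based on the commit title, this change lets a model definition point llama.cpp at a LoRA adapter when the backend loads the model. A minimal sketch of what such a model YAML might look like; the field names `lora_adapter` and `lora_base` are assumptions inferred from llama.cpp's LoRA options, not confirmed by this listing:

```yaml
# Hypothetical LocalAI model config illustrating the feature from #955.
# Field names are assumptions, not confirmed by this page.
name: llama-with-lora
backend: llama
parameters:
  model: ggml-base-model.bin       # base llama.cpp model weights
lora_adapter: ggml-lora-adapter.bin # LoRA weights applied at load time
lora_base: ggml-base-model.bin      # optional base model used when applying the adapter
```

With a config like this, requests addressed to `llama-with-lora` would be served by the base model with the adapter's weights applied, rather than requiring a separately merged checkpoint.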
| File | Latest commit | Date |
| --- | --- | --- |
| embeddings.go | feat: add --single-active-backend to allow only one backend active at the time (#925) | 2023-08-19 01:49:33 +02:00 |
| image.go | feat: add --single-active-backend to allow only one backend active at the time (#925) | 2023-08-19 01:49:33 +02:00 |
| llm.go | fix: disable usage by default (still experimental) (#929) | 2023-08-19 16:15:22 +02:00 |
| options.go | feat: Allow to load lora adapters for llama.cpp (#955) | 2023-08-25 21:58:46 +02:00 |
| transcript.go | fix: drop racy code, refactor and group API schema (#931) | 2023-08-20 14:04:45 +02:00 |
| tts.go | feat: add --single-active-backend to allow only one backend active at the time (#925) | 2023-08-19 01:49:33 +02:00 |