Commit Graph

17 Commits

Author SHA1 Message Date
Ettore Di Giacinto
53dbe36f32
feat(tts): respect the YAML config file, add sycl docs/examples (#1692)
* feat(refactor): refactor config and input reading

* feat(tts): read config file for TTS

* examples(kubernetes): Add simple deployment example

* examples(kubernetes): Add simple deployment for intel arc

* docs(sycl): add sycl example

* feat(tts): do not always pick the first model

* fixups to run vall-e-x in a container

* Correctly resolve backend
2024-02-10 21:37:03 +01:00
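
Reading between the lines of "read config file for TTS" and "do not always pick the first model", a minimal Go sketch of that behaviour might look as follows; the ttsConfig fields, the gopkg.in/yaml.v3 dependency, and the fallback rule are illustrative assumptions, not the actual LocalAI code.

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3" // assumed YAML library for this sketch
)

// ttsConfig is a hypothetical per-model YAML config for the TTS endpoint.
type ttsConfig struct {
	Name    string `yaml:"name"`
	Backend string `yaml:"backend"`
	Model   string `yaml:"model"`
}

// resolveTTSModel prefers the model named in the request and falls back to the
// YAML config only when the request leaves it empty, instead of always picking
// the first model that happens to be available.
func resolveTTSModel(requested, configPath string) (ttsConfig, error) {
	var cfg ttsConfig
	data, err := os.ReadFile(configPath)
	if err != nil {
		return cfg, err
	}
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return cfg, err
	}
	if requested != "" {
		cfg.Model = requested // an explicit request wins over the config default
	}
	return cfg, nil
}

func main() {
	cfg, err := resolveTTSModel("", "tts-model.yaml")
	if err != nil {
		fmt.Println("config error:", err)
		return
	}
	fmt.Printf("backend=%s model=%s\n", cfg.Backend, cfg.Model)
}
```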
Ettore Di Giacinto
db926896bd
Revert "[Refactor]: Core/API Split" (#1550)
Revert "[Refactor]: Core/API Split (#1506)"

This reverts commit ab7b4d5ee9.
2024-01-05 18:04:46 +01:00
Dave
ab7b4d5ee9
[Refactor]: Core/API Split (#1506)
Refactors the api folder to core, creating a firm split between backend code and the API frontend.
2024-01-05 15:34:56 +01:00
Ettore Di Giacinto
1fc3a375df
feat: inline templates and accept URLs in models (#1452)
* feat: Allow inline templates

* feat: Allow specifying a URL in model config files

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>

* feat: support 'huggingface://' format

* style: reuse code from the gallery

---------

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-12-18 18:58:44 +01:00
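
A rough sketch of what accepting a URL (including the 'huggingface://' shorthand) in a model config might involve. The struct layout and the URL mapping below are assumptions made for illustration; only the 'huggingface://' prefix itself comes from the commit.

```go
package main

import (
	"fmt"
	"strings"
)

// modelConfig is a hypothetical shape for a model config that carries an
// inline prompt template and a remote model URI instead of a local path.
type modelConfig struct {
	Name     string
	ModelURI string // e.g. "huggingface://org/repo/file.gguf" or a plain https:// URL
	Template string // inline template text embedded directly in the config
}

// resolveURI maps the shorthand "huggingface://" scheme onto a plain HTTPS
// download URL; the exact mapping here is a guess, not LocalAI's real logic.
func resolveURI(uri string) string {
	const scheme = "huggingface://"
	if strings.HasPrefix(uri, scheme) {
		return "https://huggingface.co/" + strings.TrimPrefix(uri, scheme)
	}
	return uri // already a direct URL or a local path
}

func main() {
	cfg := modelConfig{
		Name:     "example",
		ModelURI: "huggingface://org/repo/model.gguf",
		Template: "Below is an instruction.\n{{.Input}}\n### Response:",
	}
	fmt.Println(cfg.Name, "->", resolveURI(cfg.ModelURI))
}
```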
Ettore Di Giacinto
66a558ff41
fix: respect OpenAI spec for response format (#1289)
fix: properly respect OpenAI spec for response format

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-15 19:36:23 +01:00
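
Per the OpenAI specification, response_format is an object such as {"type": "json_object"} rather than a bare string. A hedged Go sketch of parsing that shape; the trimmed-down request struct is illustrative, not LocalAI's actual schema.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// responseFormat mirrors the OpenAI spec: an object with a "type" field
// ("text" or "json_object"), not a plain string.
type responseFormat struct {
	Type string `json:"type"`
}

// chatRequest is a trimmed-down, illustrative request body.
type chatRequest struct {
	Model          string          `json:"model"`
	ResponseFormat *responseFormat `json:"response_format,omitempty"`
}

func main() {
	body := []byte(`{"model":"gpt-test","response_format":{"type":"json_object"}}`)
	var req chatRequest
	if err := json.Unmarshal(body, &req); err != nil {
		fmt.Println("bad request:", err)
		return
	}
	wantJSON := req.ResponseFormat != nil && req.ResponseFormat.Type == "json_object"
	fmt.Println("constrain output to JSON:", wantJSON)
}
```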
Ettore Di Giacinto
0eae727366
🔥 add LLaVA support and GPT vision API, multiple requests for llama.cpp, return JSON types (#1254)
* wip

* wip

* Make it functional

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* wip

* Small fixups

* do not inject space on role encoding, encode img at beginning of messages

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add examples/config defaults

* Add include dir of current source dir

* cleanup

* fixes

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

* Revert "fixups"

This reverts commit f1a4731cca.

* fixes

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-11 13:14:59 +01:00
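
For the GPT vision API, the OpenAI schema allows a message's content to be a list of parts mixing text with image_url entries (often base64 data URLs), and the commit above notes encoding the image at the beginning of the messages. A sketch of that message shape in Go; the struct names are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// contentPart follows the OpenAI vision schema: each part is either text or
// an image reference (a URL or a base64-encoded data URL).
type contentPart struct {
	Type     string    `json:"type"` // "text" or "image_url"
	Text     string    `json:"text,omitempty"`
	ImageURL *imageURL `json:"image_url,omitempty"`
}

type imageURL struct {
	URL string `json:"url"`
}

type visionMessage struct {
	Role    string        `json:"role"`
	Content []contentPart `json:"content"`
}

func main() {
	// Image placed first, matching the "encode img at beginning of messages"
	// note from the commit above.
	msg := visionMessage{
		Role: "user",
		Content: []contentPart{
			{Type: "image_url", ImageURL: &imageURL{URL: "data:image/jpeg;base64,..."}},
			{Type: "text", Text: "What is shown in this picture?"},
		},
	}
	out, _ := json.MarshalIndent(msg, "", "  ")
	fmt.Println(string(out))
}
```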
Jesús Espino
81a5ed9f31
fix(openai): Populate ID and Created fields in OpenAI compatible responses (#1164)
Adds the extra ID and Created fields to every response from the OpenAI-compatible
API to improve compatibility.

This PR fixes #1103
2023-10-12 02:00:08 +00:00
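
The OpenAI response schema carries an id (commonly prefixed with "chatcmpl-") and a created Unix timestamp. A small sketch of populating both; the github.com/google/uuid dependency and the trimmed-down struct are assumptions, not necessarily what the PR used.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"

	"github.com/google/uuid" // assumed ID generator for this sketch
)

// openAIResponse carries the fields OpenAI clients expect to find.
type openAIResponse struct {
	ID      string `json:"id"`
	Object  string `json:"object"`
	Created int64  `json:"created"` // Unix seconds, per the OpenAI schema
	Model   string `json:"model"`
}

func newResponse(model string) openAIResponse {
	return openAIResponse{
		ID:      "chatcmpl-" + uuid.New().String(),
		Object:  "chat.completion",
		Created: time.Now().Unix(),
		Model:   model,
	}
}

func main() {
	out, _ := json.Marshal(newResponse("example-model"))
	fmt.Println(string(out))
}
```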
Ettore Di Giacinto
cc060a283d
fix: drop racy code, refactor and group API schema (#931)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-20 14:04:45 +02:00
Dave
8cb1061c11
Usage Features (#863)
2023-08-18 21:23:14 +02:00
Ettore Di Giacinto
acd829a7a0
fix: do not break on newlines on function returns (#864)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-04 21:46:36 +02:00
Dave
7fb8b4191f
feat: "simple" chat/edit/completion template system prompt from config (#856) 2023-08-03 00:19:55 +02:00
Aman Gupta Karmani
12fe0932c4
feat: cancel stream generation if client disappears (#792)
2023-07-24 23:10:54 +02:00
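
The usual Go pattern for this is to watch the request context, which is cancelled when the client disconnects, and to stop the generation loop as soon as that happens. A generic net/http sketch of the idea, not the actual change from #792 (LocalAI's HTTP stack may differ).

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// streamHandler fakes token generation and stops as soon as the client's
// connection is gone, instead of generating to completion for nobody.
func streamHandler(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context() // cancelled automatically when the client disconnects
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/event-stream")
	for i := 0; i < 1000; i++ {
		select {
		case <-ctx.Done():
			fmt.Println("client disappeared, cancelling generation")
			return
		case <-time.After(50 * time.Millisecond): // stand-in for producing a token
			fmt.Fprintf(w, "data: token-%d\n\n", i)
			flusher.Flush()
		}
	}
}

func main() {
	http.HandleFunc("/stream", streamHandler)
	_ = http.ListenAndServe(":8080", nil)
}
```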
Dave
c6bf67f446
feat(llama2): add template for chat messages (#782)
Co-authored-by: Aman Karmani <aman@tmm1.net>

Lays some of the groundwork for LLAMA2 compatibility, as well as for other future models with complex prompting schemes.

Starts a small refactoring of template loading in pkg/model/loader.go. It is currently still part of ModelLoader, but it should be easy to add template loading for situations other than overall prompt templates and the new chat-specific per-message templates.
Adds support for new chat-endpoint-specific, per-message templates as an alternative to the existing Role: XYZ sprintf method.
Includes a temporary prompt template as an example, since I have a few questions before we merge the model-gallery side changes (see ).
Minor debug logging changes.
2023-07-22 11:31:39 -04:00
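
To contrast the existing Role: XYZ sprintf encoding with a per-message template, here is a small Go text/template sketch; the RoleName/Content field names and the LLAMA2-style [INST] wrapper are illustrative assumptions, not the repository's actual template variables.

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

// chatMessage uses illustrative field names; the real template variables may differ.
type chatMessage struct {
	RoleName string
	Content  string
}

func main() {
	msg := chatMessage{RoleName: "user", Content: "Hello!"}

	// Old approach: a fixed "Role: content" sprintf per message.
	fmt.Printf("%s: %s\n\n", msg.RoleName, msg.Content)

	// New approach: a chat-specific, per-message template, here wrapping user
	// turns in a LLAMA2-style [INST] block as an example.
	tmpl := template.Must(template.New("msg").Parse(
		"{{if eq .RoleName \"user\"}}[INST] {{.Content}} [/INST]{{else}}{{.Content}}{{end}}\n"))
	_ = tmpl.Execute(os.Stdout, msg)
}
```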
Ettore Di Giacinto
94817b557c
fix: make the completions endpoint closer to the OpenAI specification (#790)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-22 00:53:52 +02:00
Ettore Di Giacinto
d0e67cce75
fix: make the last stream message send empty content
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-16 00:09:28 +02:00
Ettore Di Giacinto
17294ae5e5
fix: make the first stream message send empty content (#751)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 22:50:52 +02:00
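
These two stream fixes (this entry and the one above) match the usual shape of OpenAI-style SSE streams: an opening chunk that announces the role with empty content, content deltas in between, and a closing chunk with an empty delta plus a finish_reason before data: [DONE]. A hedged sketch of those frames; the structs below are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// delta is the incremental payload of one streamed chunk.
type delta struct {
	Role    string `json:"role,omitempty"`
	Content string `json:"content"`
}

type streamChoice struct {
	Delta        delta   `json:"delta"`
	FinishReason *string `json:"finish_reason"`
}

// emit prints one server-sent event frame.
func emit(c streamChoice) {
	b, _ := json.Marshal(c)
	fmt.Printf("data: %s\n\n", b)
}

func main() {
	stop := "stop"
	// First chunk: role only, with explicitly empty content.
	emit(streamChoice{Delta: delta{Role: "assistant", Content: ""}})
	// Content chunks in between.
	emit(streamChoice{Delta: delta{Content: "Hello"}})
	// Last chunk: empty content plus a finish reason, then the terminator.
	emit(streamChoice{Delta: delta{}, FinishReason: &stop})
	fmt.Println("data: [DONE]")
}
```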
Ettore Di Giacinto
5dcfdbe51d
feat: various refactorings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00