Commit Graph

81 Commits

Author SHA1 Message Date
f272605b95 more robust approach
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
9a0982066f WIP - improve start and end of speech detection
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
30e3c47598 Improve audio detection
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
90206830c1 WIP - to drop
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
7592984b64 Use template evaluator for preparing LLM prompt in wrapped mode
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
c526f05de5 Small adaptations
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
06e438d68b WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
3dd1b300e9 wip
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
ebfe8dd119 gRPC client stubs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
136fbd25f5 wip(vad)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
59531562a6 Fix lock handling
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
9273395e38 Move to debug calls
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
0318434b17 Attach context for VAD
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
9614422713 chore(vad): try to hook vad to received data from the API (WIP)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
a3fd8caaa6 feat(vad): hook vad detection
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
1796a1713d chore: extract realtime models into two categories
One is anyToAny models, which require a VAD model, and the other is
wrappedModel, which requires a VAD model along with the other models in
the pipeline.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
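As a rough sketch of the wrappedModel category described above, a pipeline configuration could look like the YAML below. The `vad` entry and all model names are illustrative assumptions, not taken from this commit.

```yaml
# Hypothetical wrapped-model configuration: the VAD model sits alongside
# the other pipeline stages (all names are placeholders).
name: my-realtime-model
pipeline:
  vad: silero-vad              # assumed VAD model entry
  transcription: whisper-base  # speech-to-text stage
  llm: some-chat-model         # text-generation stage
  tts: some-tts-voice          # text-to-speech stage
```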
4f69170273 feat: correctly detect when starting the vad server
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
60c99ddc50 refactor
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
b4fea58076 Load wrapper clients
Testing with:

```yaml
name: gpt-4o
pipeline:
 tts: voice-it-riccardo_fasol-x-low
 transcription: whisper-base-q5_1
 llm: llama-3.2-1b-instruct:q4_k_m
```

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
9e965033bb chore: simplify passing options to ModelOptions
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
f45d11c734 Add model interface to sessions
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
4ca7689f31 debug
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
dcb13a7e6f WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
8f507c39c0 WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-14 17:13:58 +01:00
f943c4b803 Revert "feat: include tokens usage for streamed output" (#4336)
Revert "feat: include tokens usage for streamed output (#4282)"

This reverts commit 0d6c3a7d57.
2024-12-08 17:53:36 +01:00
cea5a0ea42 feat(template): read jinja templates from gguf files (#4332)
* Read jinja templates as fallback

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Move templating out of model loader

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Test TemplateMessages

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Set role and content from transformers

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Tests: be more flexible

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* More jinja

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Small refactoring and adaptations

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-08 13:50:33 +01:00
0d6c3a7d57 feat: include tokens usage for streamed output (#4282)
Use pb.Reply instead of []byte (via Reply.GetMessage()) in the llama gRPC backend, so that the proper usage data is returned in reply streaming mode at the last [DONE] frame.

Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-11-28 14:47:56 +01:00
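To make the user-visible effect concrete: with this change, the final chunk of a streamed completion carries the usage totals. The sketch below (shown as YAML for brevity) is illustrative only; the field values and the empty `choices` list are assumptions, not taken from the commit.

```yaml
# Illustrative final stream chunk with usage totals attached
# (values are made up; only the presence of the usage block matters).
choices: []
usage:
  prompt_tokens: 12
  completion_tokens: 34
  total_tokens: 46
```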
4f1ab2366d chore(refactor): imply modelpath (#4208)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-20 18:06:35 +01:00
b425a870b0 fix(diffusers): correctly parse height and width request without parametrization (#4082)
* fix(diffusers): allow to specify width and height without enable-parameters

Let's simplify usage by not gating width and height by parameters

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: use sane defaults

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-06 08:53:02 +01:00
ccc7cb0287 feat(templates): use a single template for multimodal messages (#3892)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-22 09:34:05 +02:00
a1634b219a fix: roll out bluemonday Sanitize more widely (#3794)
* initial pass: roll out bluemonday sanitization more widely

Signed-off-by: Dave Lee <dave@gray101.com>

* add one additional sanitize - the overall modelslist used by the docs site

Signed-off-by: Dave Lee <dave@gray101.com>

---------

Signed-off-by: Dave Lee <dave@gray101.com>
2024-10-12 09:45:47 +02:00
648ffdf449 feat(multimodal): allow to template placeholders (#3728)
feat(multimodal): allow to template image placeholders

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-04 18:32:29 +02:00
307a835199 groundwork: ListModels Filtering Upgrade (#2773)
* separate the filtering from the middleware changes

---------

Signed-off-by: Dave Lee <dave@gray101.com>
2024-10-01 18:55:46 +00:00
50a3b54e34 feat(api): add correlationID to Track Chat requests (#3668)
* Add CorrelationID to chat request

Signed-off-by: Siddharth More <siddimore@gmail.com>

* remove get_token_metrics

Signed-off-by: Siddharth More <siddimore@gmail.com>

* Add CorrelationID to proto

Signed-off-by: Siddharth More <siddimore@gmail.com>

* fix correlation method name

Signed-off-by: Siddharth More <siddimore@gmail.com>

* Update core/http/endpoints/openai/chat.go

Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Siddharth More <siddimore@gmail.com>

* Update core/http/endpoints/openai/chat.go

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Siddharth More <siddimore@gmail.com>

---------

Signed-off-by: Siddharth More <siddimore@gmail.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-09-28 17:23:56 +02:00
191bc2e50a feat(api): allow to pass audios to backends (#3603)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-09-19 12:26:53 +02:00
fbb9facda4 feat(api): allow to pass videos to backends (#3601)
This prepares the API to also receive videos, for video understanding.

It works similarly to images, where the request should be in the form:

{
 "type": "video_url",
 "video_url": { "url": "url or base64 data" }
}

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-09-19 11:21:59 +02:00
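For illustration, a full user message carrying such a part could look like the sketch below (shown as YAML). The prompt text and URL are placeholders, and the surrounding message layout is assumed to mirror the existing image_url handling.

```yaml
# Hypothetical user message mixing text and a video part,
# following the video_url shape shown above (URL is a placeholder).
messages:
  - role: user
    content:
      - type: text
        text: "What happens in this clip?"
      - type: video_url
        video_url:
          url: "https://example.com/clip.mp4"
```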
cf747bcdec feat: extract output with regexes from LLMs (#3491)
* feat: extract output with regexes from LLMs

This changeset adds `extract_regex` to the LLM config. It is a list of
regexes that are matched against the output and used to re-extract text
from the LLM output. This is particularly useful for LLMs that wrap their
final results in tags.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add tests, enhance output in case of configuration error

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-09-13 13:27:36 +02:00
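Based on the description above, a model configuration using this option could look roughly like the sketch below; the model name and the pattern are illustrative assumptions.

```yaml
# Hypothetical model config using extract_regex: each pattern is applied to
# the raw LLM output and the captured text is re-extracted as the result.
name: my-model
extract_regex:
  - "<output>(.*?)</output>"   # example pattern for models that wrap answers in tags
```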
fbaae8528d fix(chat): re-generated uuid, created, and text on each request (#3359)
This was noticed with models returning content besides function calls.
Sadly we can't test that easily in the CI, so it went unnoticed.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-08-22 10:56:05 +02:00
e198347886 feat(openai): add json_schema format type and strict mode (#3193)
* feat(openai): add json_schema and strict mode

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* handle err vs _

security scanners prefer if we put these branches in, and I tend to agree.

Signed-off-by: Dave <dave@gray101.com>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Dave <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
2024-08-07 15:27:02 -04:00
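As a sketch of what this enables, an OpenAI-style chat request using the json_schema response format with strict mode could look like the following (shown as YAML; the schema and names are illustrative assumptions).

```yaml
# Hypothetical chat request body using the json_schema response format.
model: some-model
messages:
  - role: user
    content: "Give me a color"
response_format:
  type: json_schema
  json_schema:
    name: color_reply        # assumed schema name
    strict: true
    schema:
      type: object
      properties:
        color: { type: string }
      required: [color]
```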
2169c3497d feat(grammar): add llama3.1 schema (#3015)
* wip

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* get rid of panics

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* expose it properly from the config

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Simplify

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* forgot to commit

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Remove focus on test

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Small fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-26 20:11:29 +02:00
5eda7f578d refactor: break down json grammar parser in different files (#3004)
* refactor: break down json grammar parser in different files

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: patch to `refactor_grammars` - propagate errors (#3006)

propagate errors around

Signed-off-by: Dave Lee <dave@gray101.com>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Dave Lee <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
2024-07-25 08:41:00 +02:00
bf9dd1de7f feat(functions): parse broken JSON when we parse the raw results, use dynamic rules for grammar keys (#2912)
* feat(functions): enhance parsing with broken JSON when we parse the raw results

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* breaking: make function name by default

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(grammar): dynamically generate grammars with mutating keys

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactor: simplify condition

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Update docs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-18 17:52:22 +02:00
b8b0c7ad0b docs(swagger): cover more localai/openai endpoints (#2904)
* docs(swagger): cover more localai/openai endpoints

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix swagger descriptions for backend_monitor.go

Signed-off-by: Dave <dave@gray101.com>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Dave <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
2024-07-18 00:38:41 -04:00
59ef426fbf feat(model-list): be consistent, skip known files from listing (#2760)
fix(model-list): be consistent, skip known files from listing

This changeset does the following:

- Removes the dependency of model listing on the OpenAI schema.
- Reduces confusion between ListModels() in the model loader and in the
  service: there is now only one ListModels, which lives in services and
  no longer depends on the OpenAI schema.
- Moves the OpenAI-schema functions next to the OpenAI-specific endpoints
  that need the schema.
- Drops the ListModel service structure, as there was no real need for it.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-10 15:28:39 +02:00
f120a0c9f9 docs(swagger): enhance coverage of APIs (#2753)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-09 23:09:49 +02:00
03b1cf51fd feat(whisper): add translate option (#2649)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-24 19:21:22 +02:00
a181dd0ebc refactor: gallery inconsistencies (#2647)
* refactor(gallery): move under core/

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(unarchive): do not allow symlinks

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-24 17:32:12 +02:00
12513ebae0 rf: centralize base64 image handling (#2595)
Contains simple fixes to warnings and errors, removes a broken/outdated test, runs `go mod tidy`, and, as the actual change, centralizes base64 image handling.

Signed-off-by: Dave Lee <dave@gray101.com>
2024-06-24 08:34:36 +02:00
5866fc8ded chore: fix go.mod module (#2635)
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
2024-06-23 08:24:36 +00:00
aae7ad9d73 feat(llama.cpp): guess model defaults from file (#2522)
* wip: guess information from the gguf file

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* update go mod

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Small fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Identify llama3

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Do not try to guess the name, as reading gguf files can be expensive

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Allow to disable guessing

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-08 22:13:02 +02:00