* feat: extract output with regexes from LLMs
This changeset adds `extract_regex` to the LLM config. It is a list of
regexes that are matched against the LLM output and used to re-extract
the final text from it. This is particularly useful for LLMs that wrap
their final results in tags.
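A minimal Go sketch of the idea (illustrative only; the function name and the way patterns are applied are assumptions, not the actual implementation): each configured pattern is tried in order against the raw output, and the first capture group that matches becomes the returned text.

```go
package main

import (
	"fmt"
	"regexp"
)

// extractWithRegexes applies each configured pattern in order and returns the
// first capture group that matches; if nothing matches, the raw output is
// returned unchanged.
func extractWithRegexes(patterns []string, output string) string {
	for _, p := range patterns {
		re, err := regexp.Compile(p)
		if err != nil {
			continue // skip patterns that do not compile
		}
		if m := re.FindStringSubmatch(output); len(m) > 1 {
			return m[1]
		}
	}
	return output
}

func main() {
	// Hypothetical model answer that wraps the final result in tags.
	raw := "let me think...<output>42</output>"
	patterns := []string{`(?s)<output>(.*?)</output>`}
	fmt.Println(extractWithRegexes(patterns, raw)) // prints: 42
}
```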
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add tests, enhance output in case of configuration error
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* initial version of elevenlabs compatible soundgeneration api and cli command
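As a hedged Go sketch of how a client might call such an endpoint; the route and request body follow the ElevenLabs sound-generation API shape and are assumptions, not confirmed LocalAI paths.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Request body shaped after the ElevenLabs sound-generation API; the
	// route is a placeholder, not a confirmed LocalAI path.
	body := bytes.NewBufferString(`{"text": "ocean waves crashing on a beach"}`)
	resp, err := http.Post("http://localhost:8080/v1/sound-generation", "application/json", body)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()

	audio, _ := io.ReadAll(resp.Body)
	_ = os.WriteFile("sound.wav", audio, 0o644)
	fmt.Println("wrote", len(audio), "bytes")
}
```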
Signed-off-by: Dave Lee <dave@gray101.com>
* minor cleanup
Signed-off-by: Dave Lee <dave@gray101.com>
* restore TTS, add test
Signed-off-by: Dave Lee <dave@gray101.com>
* remove stray s
Signed-off-by: Dave Lee <dave@gray101.com>
* fix
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Dave Lee <dave@gray101.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* feat(llama.cpp): add embeddings
Also enable embeddings by default for llama.cpp models
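As a usage sketch (placeholder model name, assuming the OpenAI-compatible embeddings route on the default port), a Go client can request embeddings like this:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// "my-llama-model" is a placeholder model name; the route assumes the
	// OpenAI-compatible embeddings endpoint.
	payload, _ := json.Marshal(map[string]any{
		"model": "my-llama-model",
		"input": "The quick brown fox",
	})
	resp, err := http.Post("http://localhost:8080/v1/embeddings", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Data []struct {
			Embedding []float64 `json:"embedding"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err == nil && len(out.Data) > 0 {
		fmt.Println("embedding dimensions:", len(out.Data[0].Embedding))
	}
}
```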
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(Makefile): prepare llama.cpp sources only once
Otherwise we keep cloning llama.cpp for each of the variants
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* do not set embeddings to false
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs: add embeddings to the YAML config reference
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* start by checking /scan during the checksum update
Signed-off-by: Dave Lee <dave@gray101.com>
* add back in Go-side features: downloader/uri gets a struct and scan function, gallery uses it, and secscan/models calls it.
Signed-off-by: Dave Lee <dave@gray101.com>
* add a param to scan specific urls - useful for debugging
Signed-off-by: Dave Lee <dave@gray101.com>
* helpful printouts
Signed-off-by: Dave Lee <dave@gray101.com>
* fix offsets
Signed-off-by: Dave Lee <dave@gray101.com>
* fix error and naming
Signed-off-by: Dave Lee <dave@gray101.com>
* expose error
Signed-off-by: Dave Lee <dave@gray101.com>
* fix json tags
Signed-off-by: Dave Lee <dave@gray101.com>
* slight wording change
Signed-off-by: Dave Lee <dave@gray101.com>
* go mod tidy - getting warnings
Signed-off-by: Dave Lee <dave@gray101.com>
* split out python to make editing easier, add some simple code to delete contaminated entries from gallery
Signed-off-by: Dave Lee <dave@gray101.com>
* o7 to my favorite part of our old name, go-skynet
Signed-off-by: Dave Lee <dave@gray101.com>
* merge fix
Signed-off-by: Dave Lee <dave@gray101.com>
* merge fix
Signed-off-by: Dave Lee <dave@gray101.com>
* merge fix
Signed-off-by: Dave Lee <dave@gray101.com>
* address review comments
Signed-off-by: Dave Lee <dave@gray101.com>
* forgot secscan could accept multiple URLs at once
Signed-off-by: Dave Lee <dave@gray101.com>
* invert naming and actually use it
Signed-off-by: Dave Lee <dave@gray101.com>
* missed cli/models.go
Signed-off-by: Dave Lee <dave@gray101.com>
* Update .github/check_and_update.py
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Dave <dave@gray101.com>
---------
Signed-off-by: Dave Lee <dave@gray101.com>
Signed-off-by: Dave <dave@gray101.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* refactor(gallery): move under core/
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(unarchive): do not allow symlinks
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* start breaking up the giant channel refactor now that it's better understood - easier-to-merge bites
Signed-off-by: Dave Lee <dave@gray101.com>
* add concurrency and base64 back in, along with new base64 tests.
Signed-off-by: Dave Lee <dave@gray101.com>
* Automatic rename of whisper.go's Result to TranscriptResult
Signed-off-by: Dave Lee <dave@gray101.com>
* remove pkg/concurrency - significant changes coming in split 2
Signed-off-by: Dave Lee <dave@gray101.com>
* fix comments
Signed-off-by: Dave Lee <dave@gray101.com>
* add list_model service as another low-risk service to get it out of the way
Signed-off-by: Dave Lee <dave@gray101.com>
* split backend config loader into a separate file from the actual config struct. No changes yet, just reduce cognitive load with smaller files of logical blocks
Signed-off-by: Dave Lee <dave@gray101.com>
* rename state.go ==> application.go
Signed-off-by: Dave Lee <dave@gray101.com>
* fix lost import?
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Dave Lee <dave@gray101.com>
* fix(seed): generate random seed per-request if -1 is set
Also update CI with new workflows and allow the AIO tests to run with an
API key
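A minimal Go sketch of the seed behavior (illustrative only, not the actual code path): when a request leaves the seed at -1, a fresh random seed is generated for that request instead of reusing a fixed one.

```go
package main

import (
	"fmt"
	"math/rand"
)

// resolveSeed returns a fresh random seed when the request leaves it at -1,
// and the requested value otherwise.
func resolveSeed(requested int) int {
	if requested == -1 {
		return rand.Intn(1 << 30)
	}
	return requested
}

func main() {
	fmt.Println(resolveSeed(-1)) // different random seed per request
	fmt.Println(resolveSeed(42)) // deterministic: 42
}
```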
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs(openvino): Add OpenVINO example
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Certain engines need to know at model load time whether the embedding
feature has to be enabled; however, it is impractical to have to set it
for ALL the backends that support embeddings.
Backends such as transformers and sentencetransformers handle both cases
seamlessly, without this setting having to be explicitly enabled.
The issue exists only for ggml-based models, which need to enable
feature sets during model loading (and thus setting `embedding` is
required); most of the other engines do not require this.
This change disables the code-side check, making it easier to use
embeddings without having to explicitly specify `embeddings: true`.
Part of: https://github.com/mudler/LocalAI/issues/1373
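A hedged Go sketch of the kind of guard being relaxed; the names and structure are assumptions for illustration, not the real LocalAI code.

```go
package main

import (
	"errors"
	"fmt"
)

// checkEmbeddings sketches the guard that previously rejected embedding
// requests when the model config did not set `embeddings: true`.
func checkEmbeddings(enabledInConfig, strictCheck bool) error {
	if strictCheck && !enabledInConfig {
		return errors.New("embeddings are not enabled for this model")
	}
	// With the check disabled, the request is forwarded and the backend
	// decides whether it can produce embeddings.
	return nil
}

func main() {
	fmt.Println(checkEmbeddings(false, true))  // old behavior: error
	fmt.Println(checkEmbeddings(false, false)) // new behavior: <nil>
}
```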
* feat(elevenlabs): map elevenlabs API support to TTS
This allows ElevenLabs clients to work automatically with LocalAI by
supporting the ElevenLabs API.
The ElevenLabs server endpoint is implemented so that it is wired to the
TTS endpoints.
Fixes: https://github.com/mudler/LocalAI/issues/1809
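A hedged Go example of what an ElevenLabs-style client request could look like against a local instance; the route, voice ID, and model name are placeholders shaped after the ElevenLabs text-to-speech API, not confirmed LocalAI paths.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Route, voice ID, and model name are placeholders shaped after the
	// ElevenLabs text-to-speech API.
	body := bytes.NewBufferString(`{"text": "Hello from LocalAI", "model_id": "my-tts-model"}`)
	req, err := http.NewRequest(http.MethodPost,
		"http://localhost:8080/v1/text-to-speech/my-voice", body)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("xi-api-key", "placeholder") // header an ElevenLabs client would send

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()

	audio, _ := io.ReadAll(resp.Body)
	_ = os.WriteFile("speech.wav", audio, 0o644)
}
```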
* feat(openai/tts): compat layer with openai tts
Fixes: #1276
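A hedged Go example shaped after the OpenAI `/v1/audio/speech` request format; the model and voice names are placeholders.

```go
package main

import (
	"bytes"
	"io"
	"net/http"
	"os"
)

func main() {
	// Request shaped after the OpenAI /v1/audio/speech API; model and voice
	// names are placeholders.
	payload := bytes.NewBufferString(`{"model": "my-tts-model", "input": "Hello world", "voice": "my-voice"}`)
	resp, err := http.Post("http://localhost:8080/v1/audio/speech", "application/json", payload)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	audio, _ := io.ReadAll(resp.Body)
	_ = os.WriteFile("speech.wav", audio, 0o644)
}
```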
* fix: adapt tts CLI
* fix(defaults): set better defaults for inferencing
This changeset aims to have better defaults and to properly detect when
no inference settings are provided with the model.
If not specified, we default to mirostat sampling and offload all the
GPU layers (if a GPU is detected).
Related to https://github.com/mudler/LocalAI/issues/1373 and https://github.com/mudler/LocalAI/issues/1723
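A simplified Go sketch of the default-filling intent; the field names and chosen values are assumptions for illustration, not the actual implementation.

```go
package main

import "fmt"

// ModelConfig is a simplified stand-in for the real config struct; the field
// names are assumptions for illustration.
type ModelConfig struct {
	Mirostat  *int
	GPULayers *int
}

// applyDefaults fills in sampling and offloading settings only when the user
// left them unset.
func applyDefaults(cfg *ModelConfig, gpuDetected bool) {
	if cfg.Mirostat == nil {
		m := 2 // default to mirostat sampling
		cfg.Mirostat = &m
	}
	if cfg.GPULayers == nil && gpuDetected {
		all := 99999 // offload all layers when a GPU is present
		cfg.GPULayers = &all
	}
}

func main() {
	cfg := &ModelConfig{}
	applyDefaults(cfg, true)
	fmt.Println(*cfg.Mirostat, *cfg.GPULayers) // 2 99999
}
```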
* Adapt tests
* Also pre-initialize default seed
* feat(intel): add diffusers support
* try to consume upstream container image
* Debug
* Manually install deps
* Map transformers/hf cache dir to modelpath if not specified
* fix(compel): update initialization, pass by all gRPC options
* fix: add dependencies, implement transformers for xpu
* base it from the oneapi image
* Add pillow
* set threads if specified when launching the API
* Skip conda install if intel
* defaults to non-intel
* ci: add to pipelines
* prepare compel only if enabled
* Skip conda install if intel
* fix cleanup
* Disable compel by default
* Install torch 2.1.0 with Intel
* Skip conda on some setups
* Detect python
* Quiet output
* Do not override system python with conda
* Prefer python3
* Fixups
* exllama2: do not install without conda (overrides pytorch version)
* exllama/exllama2: do not install if not using cuda
* Add missing dataset dependency
* Small fixups, symlink to python, add requirements
* Add neural_speed to the deps
* correctly handle model offloading
* fix: device_map == xpu
* go back at calling python, fixed at dockerfile level
* Exllama2 restricted to only nvidia gpus
* Tokenizer to xpu
* core 1
* api/openai/files fix
* core 2 - core/config
* move over core api.go and tests to the start of core/http
* move over localai specific endpoints to core/http, begin the service/endpoint split there
* refactor big chunk on the plane
* refactor chunk 2 on plane, next step: port and modify changes to request.go
* easy fixes for request.go, major changes not done yet
* lintfix
* json tag lintfix?
* gitignore and .keep files
* strange fix attempt: rename the config dir?
This PR specifically introduces a `core` folder and moves the following packages over, without any other changes:
- `api/backend`
- `api/config`
- `api/options`
- `api/schema`
Once this is merged and we confirm there are no regressions, I can migrate over the remaining changes piece by piece to split up application startup, backend services, HTTP, and MQTT as was the goal of the earlier PRs!