fix(model-list): be consistent, skip known files from listing
This changeset does the following:
- Removes the dependency of model listing on the OpenAI schema.
- Reduces confusion between ListModels() in the model loader and in
the service - there is now a single ListModels, which lives in services
and no longer depends on the OpenAI schema (a rough sketch of the idea
follows below).
- The OpenAI-schema functions were moved next to the OpenAI-specific
endpoints that need the schema.
- Drops the ListModels service structure, as there was no real need for
it.
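A rough sketch of what a single service-level ListModels can look like, with simplified, assumed names (knownFiles, modelsPath) rather than the actual LocalAI types; the OpenAI-specific endpoints would then wrap these plain names into their own schema at the HTTP layer:
```go
// Illustrative sketch only - names and types are simplified, not the real
// LocalAI ones. A single ListModels in the services layer merges configured
// model names with loose files found on disk, skipping known non-model files,
// and returns plain strings instead of OpenAI schema objects.
package services

import (
	"os"
	"strings"
)

// knownFiles are non-model files we skip when scanning the models directory
// (hypothetical list, for illustration).
var knownFiles = map[string]bool{".keep": true, "README.md": true}

// ListModels returns configured model names plus loose model files found in
// modelsPath, de-duplicated, without referencing any OpenAI response schema.
func ListModels(configured []string, modelsPath string) ([]string, error) {
	seen := map[string]bool{}
	out := []string{}
	for _, name := range configured {
		if !seen[name] {
			seen[name] = true
			out = append(out, name)
		}
	}
	entries, err := os.ReadDir(modelsPath)
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || knownFiles[name] || strings.HasPrefix(name, ".") {
			continue // skip directories, dotfiles and known non-model files
		}
		if !seen[name] {
			seen[name] = true
			out = append(out, name)
		}
	}
	return out, nil
}
```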
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP: p2p enhancements
* get online state
* Pass the token through so it can be shown in the dashboard
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Style
* Minor fixups
* parametrize SearchID
* Refactoring
* Allow exposing/binding more services
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add federation
* Display federated mode in the WebUI
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Small fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make federated nodes visible from the WebUI
* Fix version display
* improve web page
* live page update
* visual enhancements
* enhancements
* visual enhancements
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor(gallery): move under core/
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(unarchive): do not allow symlinks
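As a minimal illustration of the idea (not the actual LocalAI unarchive helper), a tar extractor can simply refuse link entries and path traversal:
```go
// Minimal sketch, not the actual LocalAI code: extract a tar stream while
// refusing symlink/hardlink entries and paths that escape the destination.
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
)

func untarNoSymlinks(r io.Reader, dest string) error {
	cleanDest := filepath.Clean(dest)
	tr := tar.NewReader(r)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil // archive fully extracted
		}
		if err != nil {
			return err
		}
		// Reject link entries outright instead of trying to sanitize them.
		if hdr.Typeflag == tar.TypeSymlink || hdr.Typeflag == tar.TypeLink {
			return fmt.Errorf("refusing to extract link entry %q", hdr.Name)
		}
		target := filepath.Join(cleanDest, hdr.Name)
		if target != cleanDest && !strings.HasPrefix(target, cleanDest+string(os.PathSeparator)) {
			return fmt.Errorf("entry %q escapes destination", hdr.Name)
		}
		switch hdr.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(target, 0o755); err != nil {
				return err
			}
		case tar.TypeReg:
			f, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
			if err != nil {
				return err
			}
			if _, err := io.Copy(f, tr); err != nil {
				f.Close()
				return err
			}
			f.Close()
		}
	}
}
```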
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(ui): allow setting a system prompt for chat
Also make the models in the index clickable, and display them as a table.
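A small sketch of the mechanism (illustrative types, not the actual WebUI code): the user-supplied system prompt is simply prepended to the conversation before it is sent to the chat completion endpoint.
```go
// Illustrative only: prepend an optional user-provided system prompt to the
// message history before it goes to the chat completion endpoint.
package main

type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// withSystemPrompt returns the conversation with the system prompt, if any,
// as the first message.
func withSystemPrompt(systemPrompt string, history []Message) []Message {
	if systemPrompt == "" {
		return history
	}
	return append([]Message{{Role: "system", Content: systemPrompt}}, history...)
}
```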
Fixes #2257
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(vision): also support PNG with base64 input
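Roughly the idea, as a sketch (the prefixes and helper name are illustrative, not the exact LocalAI code): accept both JPEG and PNG data URIs when decoding base64 image input.
```go
// Sketch only: decode base64 image input given either a JPEG or a PNG data URI.
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

func decodeImageDataURI(s string) ([]byte, error) {
	for _, prefix := range []string{"data:image/jpeg;base64,", "data:image/png;base64,"} {
		if strings.HasPrefix(s, prefix) {
			return base64.StdEncoding.DecodeString(strings.TrimPrefix(s, prefix))
		}
	}
	return nil, fmt.Errorf("unsupported image input: expected a base64 data URI")
}
```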
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(ui): support vision and upload of files
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* display the processed image
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make trust remote code stand out
Signed-off-by: mudler <mudler@localai.io>
* feat(ui): track in progress job across index/model gallery
Signed-off-by: mudler <mudler@localai.io>
* minor fixups
Signed-off-by: mudler <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: mudler <mudler@localai.io>
* ux: change welcome when there are no models installed
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ux: filter
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ux: show tags in filter
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wip
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make tags clickable
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* allow deleting models from the list
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ui: display icon of installed models
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* gallery: remove gallery file when removing model
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(gallery): show a re-install button
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make filter buttons, rename Gallery field
Signed-off-by: mudler <mudler@localai.io>
* show buttons again at the end of operations
Signed-off-by: mudler <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: mudler <mudler@localai.io>
* start breaking up the giant channel refactor, now that it's better understood, into easier-to-merge bites
Signed-off-by: Dave Lee <dave@gray101.com>
* add concurrency and base64 back in, along with new base64 tests.
Signed-off-by: Dave Lee <dave@gray101.com>
* Automatic rename of whisper.go's Result to TranscriptResult
Signed-off-by: Dave Lee <dave@gray101.com>
* remove pkg/concurrency - significant changes coming in split 2
Signed-off-by: Dave Lee <dave@gray101.com>
* fix comments
Signed-off-by: Dave Lee <dave@gray101.com>
* add list_model service as another low-risk service to get it out of the way
Signed-off-by: Dave Lee <dave@gray101.com>
* split the backend config loader into a separate file from the actual config struct. No functional changes yet, just reducing cognitive load with smaller files of logical blocks
Signed-off-by: Dave Lee <dave@gray101.com>
* rename state.go ==> application.go
Signed-off-by: Dave Lee <dave@gray101.com>
* fix lost import?
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Dave Lee <dave@gray101.com>
* feat(gallery): op now supports deletion of models
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Wire things up with the WebUI (WIP)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* minor improvements
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(elevenlabs): map elevenlabs API support to TTS
This allows elevenlabs clients to work with LocalAI out of the box by
supporting the elevenlabs API.
The elevenlabs server endpoint is implemented by wiring it to the
existing TTS endpoints.
Fixes: https://github.com/mudler/LocalAI/issues/1809
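A rough sketch of such a compat layer (using net/http and a stand-in ttsFn for brevity; the route and field names follow the public ElevenLabs API, not necessarily the LocalAI implementation):
```go
// Rough sketch: an ElevenLabs-shaped endpoint that maps the request onto an
// existing text-to-speech function. ttsFn is a hypothetical stand-in for the
// internal TTS backend call.
package main

import (
	"encoding/json"
	"net/http"
	"strings"
)

type elevenLabsTTSRequest struct {
	Text string `json:"text"`
}

// NewElevenLabsHandler adapts POST /v1/text-to-speech/{voice_id} to ttsFn,
// which is assumed to synthesize audio for a given voice/model and text.
func NewElevenLabsHandler(ttsFn func(voice, text string) ([]byte, error)) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		voice := strings.TrimPrefix(r.URL.Path, "/v1/text-to-speech/")
		var req elevenLabsTTSRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		audio, err := ttsFn(voice, req.Text)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "audio/mpeg")
		w.Write(audio)
	}
}
```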
* feat(openai/tts): compat layer with openai tts
Fixes: #1276
* fix: adapt tts CLI
* fix(defaults): set better defaults for inferencing
This changeset aims to provide better defaults and to properly detect when
no inference settings are provided with the model.
If not specified, we default to mirostat sampling and offload all the
GPU layers (if a GPU is detected).
Related to https://github.com/mudler/LocalAI/issues/1373 and https://github.com/mudler/LocalAI/issues/1723
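Roughly the shape of such default handling, as a sketch with illustrative field names and values (not the exact defaults shipped by LocalAI):
```go
// Illustrative sketch: fill in inference defaults only for fields the model
// config leaves unset. Values are examples, not LocalAI's exact defaults.
package main

type PredictionOptions struct {
	Mirostat    *int     `yaml:"mirostat"`
	MirostatTAU *float64 `yaml:"mirostat_tau"`
	MirostatETA *float64 `yaml:"mirostat_eta"`
	NGPULayers  *int     `yaml:"gpu_layers"`
}

func setInferenceDefaults(o *PredictionOptions, gpuDetected bool) {
	if o.Mirostat == nil {
		v := 2 // enable mirostat v2 sampling when nothing is configured
		o.Mirostat = &v
	}
	if o.MirostatTAU == nil {
		v := 5.0
		o.MirostatTAU = &v
	}
	if o.MirostatETA == nil {
		v := 0.1
		o.MirostatETA = &v
	}
	if o.NGPULayers == nil && gpuDetected {
		v := 99999 // a large value: offload all layers to the GPU
		o.NGPULayers = &v
	}
}
```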
* Adapt tests
* Also pre-initialize default seed
* core 1
* api/openai/files fix
* core 2 - core/config
* move over core api.go and tests to the start of core/http
* move over localai specific endpoints to core/http, begin the service/endpoint split there
* refactor big chunk on the plane
* refactor chunk 2 on plane, next step: port and modify changes to request.go
* easy fixes for request.go, major changes not done yet
* lintfix
* json tag lintfix?
* gitignore and .keep files
* strange fix attempt: rename the config dir?