* Add CorrelationID to chat request
Signed-off-by: Siddharth More <siddimore@gmail.com>
* remove get_token_metrics
Signed-off-by: Siddharth More <siddimore@gmail.com>
* Add CorrelationID to proto
Signed-off-by: Siddharth More <siddimore@gmail.com>
* fix correlation method name
Signed-off-by: Siddharth More <siddimore@gmail.com>
* Update core/http/endpoints/openai/chat.go
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Siddharth More <siddimore@gmail.com>
* Update core/http/endpoints/openai/chat.go
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Siddharth More <siddimore@gmail.com>
---------
Signed-off-by: Siddharth More <siddimore@gmail.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
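For context on the change above: LocalAI's HTTP layer is Fiber-based, and a minimal sketch of attaching a correlation ID to each chat request could look like the following. The header name, locals key, and middleware name here are assumptions, not the actual implementation.

    package middleware

    import (
        "github.com/gofiber/fiber/v2"
        "github.com/google/uuid"
    )

    // Assumed header name; the real implementation may use a different one.
    const correlationHeader = "X-Correlation-ID"

    // CorrelationID (hypothetical name) reads a correlation ID from the
    // request, generating one when the client sends none, and makes it
    // available to downstream handlers and to the response.
    func CorrelationID() fiber.Handler {
        return func(c *fiber.Ctx) error {
            id := c.Get(correlationHeader)
            if id == "" {
                id = uuid.New().String() // no ID from the client: mint one
            }
            c.Locals("correlation_id", id) // expose to handlers
            c.Set(correlationHeader, id)   // echo back on the response
            return c.Next()
        }
    }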
This prepares the API to also receive videos, for video understanding.
It works similarly to images: the request should be in the form:
    {
      "type": "video_url",
      "video_url": { "url": "url or base64 data" }
    }
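As a rough illustration, such a content part could be modeled in Go as below; the type and field names are assumptions, mirroring the image_url shape.

    // Hypothetical sketch of a multimodal content part, mirroring image_url.
    type VideoURL struct {
        URL string `json:"url"` // remote URL or base64-encoded data
    }

    type ContentPart struct {
        Type     string    `json:"type"` // "text", "image_url", or "video_url"
        Text     string    `json:"text,omitempty"`
        VideoURL *VideoURL `json:"video_url,omitempty"`
    }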
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
fix(model-list): be consistent, skip known files from listing
This changeset does the following:
- Removes the dependency of model listing on the OpenAI schema.
- Reduces confusion between ListModels() in the model loader and in the
  service: there is now a single ListModels, which lives in services and
  no longer depends on the OpenAI schema (see the sketch after this list).
- Moves the OpenAI-schema functions next to the OpenAI-specific
  endpoints that need the schema.
- Drops the ListModel Service structure, as there was no real need for
  it.
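A hedged sketch of what the single service-level ListModels could look like; the signature and the skip mechanism are assumptions.

    package services

    import "os"

    // Hypothetical sketch: one ListModels in the services layer, returning
    // plain names and skipping known non-model files (no OpenAI types).
    func ListModels(modelDir string, skipKnown map[string]bool) ([]string, error) {
        entries, err := os.ReadDir(modelDir)
        if err != nil {
            return nil, err
        }
        var names []string
        for _, e := range entries {
            if e.IsDir() || skipKnown[e.Name()] {
                continue // directories and known auxiliary files are not models
            }
            names = append(names, e.Name())
        }
        return names, nil
    }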
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor(gallery): move under core/
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(unarchive): do not allow symlinks
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
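The gist of the symlink guard, as a sketch over archive/tar; the actual unarchive code may differ.

    package unarchive

    import (
        "archive/tar"
        "fmt"
        "io"
    )

    // Hypothetical sketch: reject symlink and hardlink entries, which could
    // otherwise point outside the extraction directory.
    func rejectLinks(tr *tar.Reader) error {
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
            if hdr.Typeflag == tar.TypeSymlink || hdr.Typeflag == tar.TypeLink {
                return fmt.Errorf("refusing to extract link entry %q", hdr.Name)
            }
            // ... extract regular files and directories as usual
        }
    }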
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Contains simple fixes to warnings and errors, removes a broken/outdated test, runs go mod tidy, and, as the actual change, centralizes base64 image handling.
Signed-off-by: Dave Lee <dave@gray101.com>
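A sketch of what the centralized base64 image handling could look like: one helper that accepts either a data URI or raw base64. The helper name is hypothetical.

    package utils

    import (
        "encoding/base64"
        "strings"
    )

    // Hypothetical helper: normalize an image string that may arrive as a
    // data URI ("data:image/...;base64,....") or as raw base64.
    func normalizeImageBase64(s string) (string, error) {
        if i := strings.Index(s, "base64,"); i >= 0 {
            s = s[i+len("base64,"):] // strip the data-URI prefix
        }
        if _, err := base64.StdEncoding.DecodeString(s); err != nil {
            return "", err // not valid base64
        }
        return s, nil
    }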
* wip: guess information from gguf file
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* update go mod
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Small fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Identify llama3
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Do not try to guess the name, as reading gguf files can be expensive
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Allow disabling guessing
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
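Reading just the fixed-size GGUF prelude is cheap; it is the full metadata walk that gets expensive. A minimal sketch of the cheap part (the magic is the ASCII bytes "GGUF", followed by a little-endian uint32 version):

    package gguf

    import (
        "encoding/binary"
        "fmt"
        "io"
        "os"
    )

    // Sketch: read only the fixed-size prelude of a GGUF file (4-byte magic
    // plus uint32 version) without parsing any metadata key/value pairs.
    func readHeader(path string) (uint32, error) {
        f, err := os.Open(path)
        if err != nil {
            return 0, err
        }
        defer f.Close()

        var magic [4]byte
        if _, err := io.ReadFull(f, magic[:]); err != nil {
            return 0, err
        }
        if string(magic[:]) != "GGUF" {
            return 0, fmt.Errorf("%s is not a GGUF file", path)
        }
        var version uint32
        if err := binary.Read(f, binary.LittleEndian, &version); err != nil {
            return 0, err
        }
        return version, nil
    }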
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(ui): allow to set system prompt for chat
Also make the models in the index clickable, and display them as a table.
Fixes #2257
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
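Under the hood, a UI-set system prompt just becomes the leading system-role message of the OpenAI-style chat request; a hedged sketch:

    // Hypothetical sketch: prepend a UI-provided system prompt to the
    // conversation before it is sent as a chat request.
    type Message struct {
        Role    string `json:"role"`
        Content string `json:"content"`
    }

    func withSystemPrompt(prompt string, history []Message) []Message {
        if prompt == "" {
            return history
        }
        return append([]Message{{Role: "system", Content: prompt}}, history...)
    }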
* feat(vision): also support png with base64 input
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
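One way to tell PNG from JPEG in base64 input is to decode and sniff the leading bytes; a sketch using the standard library:

    package utils

    import (
        "encoding/base64"
        "net/http"
    )

    // Sketch: decode base64 image data and sniff the content type (for
    // example "image/png" or "image/jpeg") from the leading bytes.
    func sniffImageType(b64 string) (string, error) {
        data, err := base64.StdEncoding.DecodeString(b64)
        if err != nil {
            return "", err
        }
        return http.DetectContentType(data), nil // inspects at most 512 bytes
    }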
* feat(ui): support vision and upload of files
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* display the processed image
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make trust remote code stand out
Signed-off-by: mudler <mudler@localai.io>
* feat(ui): track in-progress jobs across index/model gallery
Signed-off-by: mudler <mudler@localai.io>
* minor fixups
Signed-off-by: mudler <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: mudler <mudler@localai.io>
* fix(defaults): set better defaults for inferencing
This changeset aims to provide better defaults and to properly detect
when no inference settings are provided with the model.
If not specified, we default to mirostat sampling and offload all the
GPU layers (if a GPU is detected).
Related to https://github.com/mudler/LocalAI/issues/1373 and https://github.com/mudler/LocalAI/issues/1723
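A hedged sketch of the defaulting logic described above; the struct and field names are assumptions, as is the "all layers" sentinel value.

    // Hypothetical sketch: fill in sampling/offload defaults only when the
    // model config left them unset.
    type ModelConfig struct {
        Mirostat  *int
        GPULayers *int
        Seed      *int
    }

    func setDefaults(cfg *ModelConfig, gpuDetected bool) {
        if cfg.Mirostat == nil {
            m := 2
            cfg.Mirostat = &m // default to mirostat sampling
        }
        if cfg.GPULayers == nil && gpuDetected {
            all := 99999999
            cfg.GPULayers = &all // offload all layers when a GPU is present
        }
        if cfg.Seed == nil {
            s := -1
            cfg.Seed = &s // pre-initialize a default (random) seed
        }
    }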
* Adapt tests
* Also pre-initialize default seed
* core 1
* api/openai/files fix
* core 2 - core/config
* move over core api.go and tests to the start of core/http
* move over localai specific endpoints to core/http, begin the service/endpoint split there
* refactor big chunk on the plane
* refactor chunk 2 on plane, next step: port and modify changes to request.go
* easy fixes for request.go, major changes not done yet
* lintfix
* json tag lintfix?
* gitignore and .keep files
* strange fix attempt: rename the config dir?