Commit Graph

3234 Commits

Author SHA1 Message Date
LocalAI [bot]
ec239a0cd0
docs: ⬆️ update docs version mudler/LocalAI (#4359)
⬆️ Update docs version mudler/LocalAI

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-11 10:04:16 +01:00
LocalAI [bot]
b74a936178
chore: ⬆️ Update ggerganov/llama.cpp to dafae66cc242eb766797194d3c85c5e502625623 (#4360)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-10 21:45:42 +00:00
Ettore Di Giacinto
de1ddb8ba6
chore(model gallery): add b-nimita-l3-8b-v0.02 (#4357)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-10 09:42:47 +01:00
Ettore Di Giacinto
272763f625
chore(model gallery): add intellect-1-instruct (#4356)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-10 09:42:37 +01:00
Ettore Di Giacinto
3aff87a5cf
chore(model gallery): add qwen2.5-math-14b-instruct (#4355)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-10 09:42:24 +01:00
LocalAI [bot]
885118e863
chore: ⬆️ Update ggerganov/llama.cpp to 26a8406ba9198eb6fdd8329fa717555b4f77f05f (#4353)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-10 09:10:58 +01:00
dependabot[bot]
a03a9b9e51
chore(deps): Bump docs/themes/hugo-theme-relearn from be85052 to bd1f3d3 (#4348)
chore(deps): Bump docs/themes/hugo-theme-relearn

Bumps [docs/themes/hugo-theme-relearn](https://github.com/McShelby/hugo-theme-relearn) from `be85052` to `bd1f3d3`.
- [Release notes](https://github.com/McShelby/hugo-theme-relearn/releases)
- [Commits](be85052efe...bd1f3d3432)

---
updated-dependencies:
- dependency-name: docs/themes/hugo-theme-relearn
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-09 20:09:26 +00:00
Ettore Di Giacinto
f45d6c746a
chore(model gallery): add tulu-3.1-8b-supernova-smart (#4347)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-09 15:58:29 +01:00
Ettore Di Giacinto
5eceb5f67c
chore(model gallery): add impish_mind_8b (#4344)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-09 10:24:30 +01:00
Ettore Di Giacinto
a9c0dd3a1e
chore(model gallery): add qwen2.5-7b-homeranvita-nerdmix (#4343)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-09 10:24:15 +01:00
LocalAI [bot]
fb17e737f0
docs: ⬆️ update docs version mudler/LocalAI (#4341)
⬆️ Update docs version mudler/LocalAI

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-09 09:19:25 +01:00
LocalAI [bot]
b5a21202ed
chore: ⬆️ Update ggerganov/llama.cpp to e52522b8694ae73abf12feb18d29168674aa1c1b (#4342)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-08 22:54:06 +00:00
Ettore Di Giacinto
e147f1bd3e
chore(model gallery): add bio-medical-llama-3-8b (#4339)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-08 18:43:26 +01:00
Ettore Di Giacinto
61839efed2
chore(model gallery): add virtuoso-small (#4338)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-08 18:01:25 +01:00
Ettore Di Giacinto
a0fe050055
chore(model gallery): add mn-chunky-lotus-12b (#4337)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-08 18:01:16 +01:00
Ettore Di Giacinto
f943c4b803
Revert "feat: include tokens usage for streamed output" (#4336)
Revert "feat: include tokens usage for streamed output (#4282)"

This reverts commit 0d6c3a7d57.
2024-12-08 17:53:36 +01:00
Ettore Di Giacinto
cea5a0ea42
feat(template): read jinja templates from gguf files (#4332)
* Read jinja templates as fallback

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Move templating out of model loader

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Test TemplateMessages

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Set role and content from transformers

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Tests: be more flexible

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* More jinja

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Small refactoring and adaptations

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-08 13:50:33 +01:00
LocalAI [bot]
f5e1527a5a
chore: ⬆️ Update ggerganov/llama.cpp to 3573fa8e7b7f0865638b52b4e9b4d2006f0558a2 (#4335)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-07 21:51:45 +00:00
Ettore Di Giacinto
7184ca546f
chore(model gallery): add llama-3.3-70b-instruct (#4333)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-07 10:39:20 +01:00
LocalAI [bot]
5592f5e820
chore: ⬆️ Update ggerganov/llama.cpp to c5ede3849fc021174862f9c0bf8273808d8f0d39 (#4330)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-06 21:46:51 +00:00
Ettore Di Giacinto
d4c1746c7d
feat(llama.cpp): expose cache_type_k and cache_type_v for quant of kv cache (#4329)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-06 10:23:59 +01:00
LocalAI [bot]
88737e1d76
chore: ⬆️ Update ggerganov/llama.cpp to c9c6e01daedac542b174c235872569fce5385982 (#4328)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-06 09:15:21 +01:00
LocalAI [bot]
ba225f660b
docs: ⬆️ update docs version mudler/LocalAI (#4327)
⬆️ Update docs version mudler/LocalAI

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-05 21:54:00 +00:00
Ettore Di Giacinto
3127cd1352
chore(docs): update available backends (#4325)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-05 16:57:56 +01:00
PetrFlegr
b90d78d9f6
Updated links of yamls (#4324)
Updated links

Links to deployment*.yaml were changed

Signed-off-by: PetrFlegr <ptrflegr@gmail.com>
2024-12-05 16:06:51 +01:00
Ettore Di Giacinto
b86a3e4fa6
chore(model gallery): add math-iio-7b-instruct (#4323)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-05 10:05:35 +01:00
Ettore Di Giacinto
be907d993f
chore(model gallery): add loki-v2.6-8b-1024k (#4321)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-05 10:02:02 +01:00
Ettore Di Giacinto
ab0f8648a3
chore(model gallery): add rp-naughty-v1.0c-8b (#4322)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-05 10:01:49 +01:00
LocalAI [bot]
c226149503
chore: ⬆️ Update leejet/stable-diffusion.cpp to 9578fdcc4632dc3de5565f28e2fb16b7c18f8d48 (#4320)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-05 09:09:11 +01:00
LocalAI [bot]
4a079f893c
chore: ⬆️ Update ggerganov/llama.cpp to 59f4db10883a4f3e855cffbf2c3ab68430e95272 (#4319)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-04 22:19:35 +00:00
Ettore Di Giacinto
87b7648591
chore(ci): set auto-labeler for dependencies
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-12-04 18:35:54 +01:00
Ettore Di Giacinto
cf4f024420
Update README.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-12-04 11:31:08 +01:00
Ettore Di Giacinto
3c0ac49d90
chore(model gallery): add bark-cpp-small (#4318)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-04 09:32:42 +01:00
Ettore Di Giacinto
4307ae5d52
chore(model gallery): add flux.1-dev-ggml (#4317)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-04 09:26:51 +01:00
Ettore Di Giacinto
50f71f73d7
chore(model gallery): add homercreativeanvita-mix-qw7b (#4316)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-04 09:17:23 +01:00
Ettore Di Giacinto
dc04a43868
chore(model gallery): add chatty-harry_v3.0 (#4315)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-04 09:15:47 +01:00
Ettore Di Giacinto
cc04b62d3a
chore(model gallery): add cybercore-qwen-2.1-7b (#4314)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-04 09:15:35 +01:00
LocalAI [bot]
feb54e65c2
chore: ⬆️ Update ggerganov/llama.cpp to cc98896db858df7aa40d0e16a505883ef196a482 (#4312)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-12-03 23:53:11 +00:00
Ettore Di Giacinto
44a5dac312
feat(backend): add stablediffusion-ggml (#4289)
* feat(backend): add stablediffusion-ggml

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(ci): track stablediffusion-ggml

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Use default scheduler and sampler if not specified

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Move cfg scale out of diffusers block

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Make it work

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: set free_params_immediately to false to call the model in sequence

https://github.com/leejet/stable-diffusion.cpp/issues/366

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-03 22:41:22 +01:00
Ettore Di Giacinto
074b52bbfe
chore(model gallery): add bggpt-gemma-2-2.6b-it-v1.0 (#4311)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-03 18:56:53 +01:00
Ettore Di Giacinto
236a60bab8
chore(model gallery): add qwen2.5-7b-homercreative-mix (#4310)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-03 18:56:03 +01:00
Ettore Di Giacinto
7b70f0543b
chore(model gallery): add sparse-llama-3.1-8b-2of4 (#4309)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-03 18:55:49 +01:00
LocalAI [bot]
5f33962932
chore: ⬆️ Update ggerganov/llama.cpp to 8648c521010620c2daccfa1d26015c668ba2c717 (#4307)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-03 09:20:59 +01:00
dependabot[bot]
45b91d501e
chore(deps): Bump docs/themes/hugo-theme-relearn from 28fce6b to be85052 (#4305)
chore(deps): Bump docs/themes/hugo-theme-relearn

Bumps [docs/themes/hugo-theme-relearn](https://github.com/McShelby/hugo-theme-relearn) from `28fce6b` to `be85052`.
- [Release notes](https://github.com/McShelby/hugo-theme-relearn/releases)
- [Commits](28fce6b04c...be85052efe)

---
updated-dependencies:
- dependency-name: docs/themes/hugo-theme-relearn
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-02 22:47:37 +00:00
Ettore Di Giacinto
e51792784a
chore(deps): bump grpcio to 1.68.1 (#4301)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-02 19:13:26 +01:00
LocalAI [bot]
28594336e9
chore: ⬆️ Update ggerganov/llama.cpp to 5e1ed95583ca552a98d8528b73e1ff81249c2bf9 (#4299)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-12-01 21:44:59 +00:00
Ettore Di Giacinto
9c9359fc96
chore(model gallery): add teleut-7b (#4298)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-01 10:12:48 +01:00
Ettore Di Giacinto
bc5d1f255b
chore(model gallery): add skywork-o1-open-llama-3.1-8b (#4297)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-01 10:12:35 +01:00
Ettore Di Giacinto
0fcefbc168
chore(model gallery): add volare-i1 (#4296)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-01 10:12:17 +01:00
LocalAI [bot]
9044b17e4d
chore: ⬆️ Update ggerganov/llama.cpp to 0c39f44d70d058940fe2afe50cfc789e3e44d756 (#4295)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-30 21:46:07 +00:00