Commit Graph

644 Commits

Author SHA1 Message Date
LocalAI [bot]
b941732f54
⬆️ Update ggerganov/llama.cpp (#2696)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-07-01 22:52:43 +02:00
LocalAI [bot]
421eb8a727
⬆️ Update ggerganov/llama.cpp (#2689)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-07-01 00:20:11 +00:00
LocalAI [bot]
83d867ad46
⬆️ Update ggerganov/llama.cpp (#2683)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-30 01:51:51 +00:00
LocalAI [bot]
1d30955677
⬆️ Update ggerganov/llama.cpp (#2677)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-29 00:43:02 +00:00
LocalAI [bot]
8d9a452e4b
⬆️ Update ggerganov/llama.cpp (#2671)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-28 10:09:01 +02:00
LocalAI [bot]
7e562d10a3
⬆️ Update ggerganov/llama.cpp (#2652)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-28 01:30:37 +00:00
Ettore Di Giacinto
7b1e792732
deps(llama.cpp): bump to latest, update build variables (#2669)
* :arrow_up: Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* deps(llama.cpp): update build variables to follow upstream

Update build recipes with https://github.com/ggerganov/llama.cpp/pull/8006

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Disable shared libs by default in llama.cpp

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Disable shared libs in llama.cpp Makefile

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Disable metal embedding for now, until it is tested

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(mac): explicitly enable metal

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* debug

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix typo

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-27 23:10:04 +02:00
Ettore Di Giacinto
b783c811db
feat(build): only build llama.cpp relevant targets (#2659)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-26 14:58:38 +02:00
Ettore Di Giacinto
e84b31935c
feat(vulkan): add vulkan support to the llama.cpp backend (#2648)
feat(vulkan): add vulkan support to llama.cpp

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-24 20:04:58 +02:00
LocalAI [bot]
4156a4f15f
⬆️ Update ggerganov/llama.cpp (#2632)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-23 22:21:38 +00:00
LocalAI [bot]
533343c84f
⬆️ Update ggerganov/llama.cpp (#2629)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-22 02:28:06 +00:00
LocalAI [bot]
70a2bfe82e
⬆️ Update ggerganov/llama.cpp (#2617)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-21 06:41:34 +00:00
LocalAI [bot]
d0423254dd
⬆️ Update ggerganov/llama.cpp (#2606)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-20 00:58:40 +00:00
Rene Leonhardt
43f0688a95
feat: Upgrade to CUDA 12.5 (#2601)
Signed-off-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
2024-06-19 17:50:49 +02:00
LocalAI [bot]
8142bdc48f
⬆️ Update ggerganov/llama.cpp (#2603)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-19 00:28:50 +00:00
Ettore Di Giacinto
89a11e15e7
fix(single-binary): bundle ld.so (#2602)
* debug

* fix copy command/silly muscle memory

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* remove tmate

* Debugging

* Start binary with ld.so if present in libdir

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* small refactor

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-18 22:43:43 +02:00
LocalAI [bot]
c926469b9c
⬆️ Update ggerganov/llama.cpp (#2594)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-18 03:06:31 +00:00
LocalAI [bot]
2f297979a7
⬆️ Update ggerganov/llama.cpp (#2587)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-17 15:28:19 +00:00
LocalAI [bot]
68148f2a1a
⬆️ Update ggerganov/llama.cpp (#2584)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-17 00:18:44 +00:00
LocalAI [bot]
58bf8614d9
⬆️ Update ggerganov/llama.cpp (#2575)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-15 23:45:10 +00:00
LocalAI [bot]
5116d561e1
⬆️ Update ggerganov/llama.cpp (#2570)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-14 23:39:20 +00:00
Ettore Di Giacinto
96a7a3b59f
fix(Makefile): enable STATIC on dist (#2569)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-14 12:28:46 +02:00
Ettore Di Giacinto
112d0ffa45
feat(darwin): embed grpc libs (#2567)
* debug

* feat(makefile): allow bundling libs into the binary

* ci: bundle protobuf into single-binary

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(assets): correctly reference extract folder

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* bundle also abseil

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* bundle more libs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-14 08:51:25 +02:00
LocalAI [bot]
25f45827ab
⬆️ Update ggerganov/whisper.cpp (#2565)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-14 00:26:51 +00:00
LocalAI [bot]
f322f7c62d
⬆️ Update ggerganov/llama.cpp (#2564)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-13 23:47:50 +00:00
LocalAI [bot]
f183fec232
⬆️ Update ggerganov/llama.cpp (#2554)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-13 08:34:32 +00:00
Ettore Di Giacinto
882556d4db
feat(gallery): show available models in website, allow local-ai models install to install from galleries (#2555)
* WIP

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* gen a static page instead (we force DNS redirects to it)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(gallery): install models from CLI, unify install

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Uniform graphic of model page

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Makefile: update targets

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Slightly enhance gallery view

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-13 00:47:16 +02:00
LocalAI [bot]
f8382adbf7
⬆️ Update ggerganov/llama.cpp (#2551)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-12 08:54:00 +00:00
LocalAI [bot]
80298f94fa
⬆️ Update ggerganov/whisper.cpp (#2552)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-12 07:39:21 +00:00
LocalAI [bot]
5da10fb769
⬆️ Update ggerganov/llama.cpp (#2540)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-11 00:59:17 +00:00
LocalAI [bot]
bec883e3ff
⬆️ Update ggerganov/whisper.cpp (#2539)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-10 23:32:32 +00:00
LocalAI [bot]
3a5f2283ea
⬆️ Update ggerganov/llama.cpp (#2531)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-09 23:15:59 +00:00
Ettore Di Giacinto
6c087ae743
feat(arm64): enable single-binary builds (#2490)
* ci: try to build for arm64

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Allow skipping hipblas on make dist

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* use arm64 cross compiler

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* correctly target go arm64

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* create a separate target

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* cross-compile grpc

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add Protobuf include dirs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* temp disable CUDA build

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* aarch64 builds: Reduce backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Even less backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Even less backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(startup): allow to load libs from extracted assets

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* makefile: set arch

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-09 15:11:37 +02:00
LocalAI [bot]
88af1033d6
⬆️ Update ggerganov/llama.cpp (#2524)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-08 23:53:35 +00:00
LocalAI [bot]
23b3d22525
⬆️ Update ggerganov/llama.cpp (#2518)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-07 23:35:16 +00:00
LocalAI [bot]
0f9b58f2cf
⬆️ Update ggerganov/llama.cpp (#2508)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-06 23:48:17 +00:00
LocalAI [bot]
0f134d557e
⬆️ Update ggerganov/whisper.cpp (#2507)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-06 23:21:25 +00:00
Ettore Di Giacinto
4c9623f50d
deps(whisper): update, add libcufft-dev (#2501)
* :arrow_up: Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* fix(build): add libcufft-dev

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-06 08:41:04 +02:00
Ettore Di Giacinto
596cf76135
build(intel): bundle intel variants in single-binary (#2494)
* wip: try to build also intel variants

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add dependencies

* Automatically select the intel backend

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-06 08:40:51 +02:00
LocalAI [bot]
a293aa1b79
⬆️ Update ggerganov/llama.cpp (#2493)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-06 00:02:51 +00:00
Ettore Di Giacinto
17cf6c4a4d
feat(amdgpu): try to build in single binary (#2485)
* feat(amdgpu): try to build in single binary

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Release space from worker

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-05 08:44:15 +02:00
LocalAI [bot]
fab3e711ff
⬆️ Update ggerganov/llama.cpp (#2487)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-04 23:11:28 +00:00
LocalAI [bot]
67aa31faad
⬆️ Update ggerganov/llama.cpp (#2477)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-03 23:09:24 +00:00
LocalAI [bot]
5ddaa19914
⬆️ Update ggerganov/llama.cpp (#2467)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-02 21:34:29 +00:00
LocalAI [bot]
b588cae70e
⬆️ Update ggerganov/llama.cpp (#2465)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-01 22:31:32 +00:00
Chakib Benziane
b99182c8d4
TTS API improvements (#2308)
* update doc on COQUI_LANGUAGE env variable

Signed-off-by: blob42 <contact@blob42.xyz>

* return errors from tts gRPC backend

Signed-off-by: blob42 <contact@blob42.xyz>

* handle speaker_id and language in coqui TTS backend

Signed-off-by: blob42 <contact@blob42.xyz>

* TTS endpoint: add optional language parameter

Signed-off-by: blob42 <contact@blob42.xyz>

* tts fix: empty language string breaks non-multilingual models

Signed-off-by: blob42 <contact@blob42.xyz>

* allow tts param definition in config file

- consolidate TTS options under `tts` config entry

Signed-off-by: blob42 <contact@blob42.xyz>

* tts: update doc

Signed-off-by: blob42 <contact@blob42.xyz>

---------

Signed-off-by: blob42 <contact@blob42.xyz>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-01 18:26:27 +00:00
LocalAI [bot]
06b461b061
⬆️ Update ggerganov/llama.cpp (#2453)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-01 00:09:26 +02:00
LocalAI [bot]
3fe7e9f678
⬆️ Update ggerganov/whisper.cpp (#2452)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-31 21:59:48 +00:00
Ettore Di Giacinto
ff8a6962cd
build(Makefile): add back single target to build native llama-cpp (#2448)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-31 18:35:33 +02:00
LocalAI [bot]
5dc6bace49
⬆️ Update ggerganov/whisper.cpp (#2443)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-30 22:18:55 +00:00
LocalAI [bot]
3cd5918ae6
⬆️ Update ggerganov/llama.cpp (#2444)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-30 22:09:42 +00:00
LocalAI [bot]
b2fc92daa7
⬆️ Update ggerganov/whisper.cpp (#2438)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-30 06:07:28 +00:00
LocalAI [bot]
0787797961
⬆️ Update ggerganov/llama.cpp (#2437)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-29 23:15:36 +00:00
LocalAI [bot]
087bceccac
⬆️ Update ggerganov/llama.cpp (#2433)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-28 21:55:03 +00:00
LocalAI [bot]
577888f3c0
⬆️ Update ggerganov/llama.cpp (#2428)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-27 22:02:49 +00:00
LocalAI [bot]
1c80f628ff
⬆️ Update ggerganov/whisper.cpp (#2427)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-27 21:28:36 +00:00
Ettore Di Giacinto
10430a00bd
feat(hipblas): extend default hipblas GPU_TARGETS (#2426)
Makefile: extend default hipblas GPU_TARGETS

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-27 22:35:11 +02:00
LocalAI [bot]
e9c28a1ed7
⬆️ Update ggerganov/llama.cpp (#2419)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-26 21:32:05 +00:00
LocalAI [bot]
593fb62bf0
⬆️ Update ggerganov/llama.cpp (#2409)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-26 08:43:50 +00:00
LocalAI [bot]
480834f75b
⬆️ Update ggerganov/whisper.cpp (#2408)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-26 08:05:15 +00:00
LocalAI [bot]
f8cea16c03
⬆️ Update ggerganov/llama.cpp (#2399)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-24 21:52:13 +00:00
LocalAI [bot]
dce63237f2
⬆️ Update ggerganov/llama.cpp (#2360)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-23 21:02:13 +00:00
LocalAI [bot]
c8d7d14a37
⬆️ Update go-skynet/go-bert.cpp (#1225)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-22 23:42:38 +00:00
LocalAI [bot]
c56bc0de98
⬆️ Update ggerganov/whisper.cpp (#2361)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-23 01:02:57 +02:00
Ettore Di Giacinto
3a9408363b
deps(llama.cpp): update and adapt API changes (#2381)
deps(llama.cpp): update and rename function

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-23 01:02:11 +02:00
Ettore Di Giacinto
16474bfb40
build: add sha (#2356)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-20 18:02:19 +02:00
LocalAI [bot]
053531e434
⬆️ Update ggerganov/whisper.cpp (#2352)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-19 22:23:02 +00:00
LocalAI [bot]
b7ab4f25d9
⬆️ Update ggerganov/llama.cpp (#2351)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-19 22:22:03 +00:00
Ettore Di Giacinto
8ccd5ab040
feat(webui): statically embed js/css assets (#2348)
* feat(webui): statically embed js/css assets

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* update font assets

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-19 18:24:27 +02:00
Ettore Di Giacinto
8ad669339e
add openvoice backend (#2334)
Wip openvoice
2024-05-19 16:27:08 +02:00
LocalAI [bot]
5f35e85e86
⬆️ Update ggerganov/llama.cpp (#2342)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-18 21:06:29 +00:00
LocalAI [bot]
9ab8f8f5e0
⬆️ Update ggerganov/llama.cpp (#2339)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-17 21:13:01 +00:00
LocalAI [bot]
9a255d6453
⬆️ Update ggerganov/llama.cpp (#2337)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-16 21:53:19 +00:00
LocalAI [bot]
4e92569d45
⬆️ Update ggerganov/whisper.cpp (#2329)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-15 22:24:06 +00:00
LocalAI [bot]
b584dcf18a
⬆️ Update ggerganov/llama.cpp (#2316)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-15 22:20:37 +00:00
Ettore Di Giacinto
c89271b2e4
feat(llama.cpp): add distributed llama.cpp inferencing (#2324)
* feat(llama.cpp): support distributed llama.cpp

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat: allow tweaking how chat messages are merged together

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactor

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Makefile: register to ALL_GRPC_BACKENDS

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactoring, allow disable auto-detection of backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* minor fixups

Signed-off-by: mudler <mudler@localai.io>

* feat: add cmd to start rpc-server from llama.cpp

Signed-off-by: mudler <mudler@localai.io>

* ci: add ccache

Signed-off-by: mudler <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: mudler <mudler@localai.io>
2024-05-15 01:17:02 +02:00
LocalAI [bot]
566b5cf2ee
⬆️ Update ggerganov/whisper.cpp (#2326)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-14 21:17:46 +00:00
Sertaç Özercan
a670318a9f
feat: auto select llama-cpp cuda runtime (#2306)
* auto select cpu variant

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* remove cuda target for now

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* fix metal

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* fix path

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* cuda

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* auto select cuda

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* update test

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* select CUDA backend only if present

Signed-off-by: mudler <mudler@localai.io>

* ci: keep cuda bin in path

Signed-off-by: mudler <mudler@localai.io>

* Makefile: make dist now builds also cuda

Signed-off-by: mudler <mudler@localai.io>

* Keep pushing the fallback in case auto-flagset/nvidia detection fails

The default binary may also fail for other reasons: for example, we might have detected an Nvidia GPU while the user does not have the drivers/CUDA libraries installed on the system, so it would fail to start.

We keep the llama.cpp fallback at the end of the llama.cpp backends so loading can still fall back to it in case things go wrong

Signed-off-by: mudler <mudler@localai.io>

* Do not build cuda on MacOS

Signed-off-by: mudler <mudler@localai.io>

* cleanup

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* Apply suggestions from code review

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>

---------

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: mudler <mudler@localai.io>
2024-05-14 19:40:18 +02:00
LocalAI [bot]
4ac7956f68
⬆️ Update ggerganov/whisper.cpp (#2317)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-13 22:25:14 +00:00
Sertaç Özercan
e2c3ffb09b
feat: auto select llama-cpp cpu variant (#2305)
* auto select cpu variant

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* remove cuda target for now

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* fix metal

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* fix path

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

---------

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
2024-05-13 11:37:52 +02:00
LocalAI [bot]
b4cb22f444
⬆️ Update ggerganov/llama.cpp (#2303)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-12 21:18:59 +00:00
LocalAI [bot]
dfc420706c
⬆️ Update ggerganov/llama.cpp (#2290)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-11 21:16:34 +00:00
LocalAI [bot]
93e581dfd0
⬆️ Update ggerganov/llama.cpp (#2285)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-10 21:09:22 +00:00
Ettore Di Giacinto
9b09eb005f
build: do not specify a BUILD_ID by default (#2284)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-10 16:01:55 +02:00
LocalAI [bot]
18a04246fa
⬆️ Update ggerganov/llama.cpp (#2281)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-09 22:18:49 +00:00
LocalAI [bot]
d651f390cd
⬆️ Update ggerganov/whisper.cpp (#2273)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-08 22:11:10 +00:00
LocalAI [bot]
eca5200fbd
⬆️ Update ggerganov/llama.cpp (#2272)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-08 21:34:56 +00:00
LocalAI [bot]
995aa5ed21
⬆️ Update ggerganov/llama.cpp (#2263)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-07 21:39:12 +00:00
LocalAI [bot]
581b894789
⬆️ Update ggerganov/llama.cpp (#2255)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-06 21:28:07 +00:00
LocalAI [bot]
c5475020fe
⬆️ Update ggerganov/llama.cpp (#2251)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-05 21:16:00 +00:00
Ettore Di Giacinto
c5798500cb
feat(single-build): generate single binaries for releases (#2246)
* feat(single-build): generate single binaries for releases

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* drop old targets

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-05 17:20:51 +02:00
LocalAI [bot]
17e94fbcb1
⬆️ Update ggerganov/llama.cpp (#2239)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-04 21:26:22 +00:00
Ettore Di Giacinto
530bec9c64
feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants (#2232)
* feat(initializer): do not specify backends to autoload

We can simply try to autoload the backends extracted in the asset dir.
This allows building variants of the same backend (e.g. with different instruction sets),
so a single binary can ship all the variants.

Signed-off-by: mudler <mudler@localai.io>

* refactor(prepare): refactor out llama.cpp prepare steps

Make them idempotent so that we can re-build

Signed-off-by: mudler <mudler@localai.io>

* [TEST] feat(build): build noavx version along

Signed-off-by: mudler <mudler@localai.io>

* build: make build parallel

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* build: do not override CMAKE_ARGS

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* build: add fallback variant

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(huggingface-langchain): fail if no token is set

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(huggingface-langchain): rename

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: do not autoload local-store

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: give priority between the listed backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: mudler <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-04 17:56:12 +02:00
LocalAI [bot]
ac0f3d6e82
⬆️ Update ggerganov/whisper.cpp (#2230)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-03 22:16:26 +00:00
LocalAI [bot]
da0b6a89ae
⬆️ Update ggerganov/llama.cpp (#2229)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-03 21:39:28 +00:00
LocalAI [bot]
2cc1bd85af
⬆️ Update ggerganov/llama.cpp (#2224)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-02 21:23:40 +00:00
LocalAI [bot]
6a7a7996bb
⬆️ Update ggerganov/llama.cpp (#2213)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-01 21:19:44 +00:00
LocalAI [bot]
f90d56d371
⬆️ Update ggerganov/llama.cpp (#2203)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-04-30 21:53:31 +00:00
Chris Jowett
970cb3a219
chore: update go-stablediffusion to latest commit with Make jobserver fix
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
2024-04-30 15:59:28 -05:00
LocalAI [bot]
29d7812344
⬆️ Update ggerganov/whisper.cpp (#2188)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-04-29 22:16:04 +00:00