Commit Graph

2590 Commits

Author SHA1 Message Date
Ettore Di Giacinto
789cf6c599
Update model-gallery.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-22 12:10:27 +02:00
Ettore Di Giacinto
0bc82d7270
fix(install.sh): properly detect suse distros
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-22 12:08:48 +02:00
Ettore Di Giacinto
9a7ad75bff
docs: update to include installer and update advanced YAML options (#2631)
* docs: update quickstart and advanced sections

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* docs: improvements

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* examples(kubernetes): add nvidia example

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-22 12:00:38 +02:00
Ettore Di Giacinto
9fb3e4040b
Update README.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-22 10:29:46 +02:00
Ettore Di Giacinto
070fd1b9da
Update distributed_inferencing.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-22 10:06:09 +02:00
Ettore Di Giacinto
dda5b9f260
Update distributed_inferencing.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-22 10:05:48 +02:00
Ettore Di Giacinto
8d84dd4f88
fix(worker): use dynaload for single binaries (#2620)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-22 09:33:18 +02:00
Ettore Di Giacinto
f569237a50
feat(oci): support OCI images and Ollama models (#2628)
* Support specifying oci:// and ollama:// for model URLs (see the sketch after this entry)

Fixes: https://github.com/mudler/LocalAI/issues/2527
Fixes: https://github.com/mudler/LocalAI/issues/1028

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Lower watcher warnings

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Allow to install ollama models from CLI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixup tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Do not keep file ownership

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Skip test on darwin

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-22 08:17:41 +02:00
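The entry above describes the new oci:// and ollama:// model URL schemes. A minimal Go sketch of how such scheme-based dispatch could look is shown below; the helper functions are hypothetical placeholders for illustration, not LocalAI's actual API.

```go
package main

import (
	"fmt"
	"strings"
)

// fetchModel dispatches a model reference to a download strategy based on
// its URI scheme. Illustrative sketch only; the helpers below are
// hypothetical placeholders, not the real LocalAI implementation.
func fetchModel(ref, destDir string) error {
	switch {
	case strings.HasPrefix(ref, "oci://"):
		return pullOCIImage(strings.TrimPrefix(ref, "oci://"), destDir)
	case strings.HasPrefix(ref, "ollama://"):
		return pullOllamaModel(strings.TrimPrefix(ref, "ollama://"), destDir)
	case strings.HasPrefix(ref, "http://"), strings.HasPrefix(ref, "https://"):
		return downloadHTTP(ref, destDir)
	default:
		return fmt.Errorf("unsupported model reference: %s", ref)
	}
}

// Hypothetical placeholders for the actual pull/download logic.
func pullOCIImage(image, dest string) error {
	fmt.Println("pulling OCI image", image, "into", dest)
	return nil
}

func pullOllamaModel(name, dest string) error {
	fmt.Println("pulling Ollama model", name, "into", dest)
	return nil
}

func downloadHTTP(url, dest string) error {
	fmt.Println("downloading", url, "into", dest)
	return nil
}

func main() {
	_ = fetchModel("ollama://llama3", "./models")
}
```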
LocalAI [bot]
e265a618d9
models(gallery): ⬆️ update checksum (#2630)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-22 04:45:41 +00:00
LocalAI [bot]
533343c84f
⬆️ Update ggerganov/llama.cpp (#2629)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-22 02:28:06 +00:00
Ettore Di Giacinto
260f2e1d94
fix(install.sh): correctly handle systemd service installation (#2627)
Fixup install.sh systemd service installation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-21 23:56:06 +02:00
Ettore Di Giacinto
964732590d
models(gallery): add hermes-2-theta-llama-3-70b (#2626)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-21 19:41:49 +02:00
LocalAI [bot]
70a2bfe82e
⬆️ Update ggerganov/llama.cpp (#2617)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-21 06:41:34 +00:00
Ettore Di Giacinto
ba2d969c44
models(gallery): add qwen2-1.5b-ita (#2615)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-20 20:35:53 +02:00
Ettore Di Giacinto
d3c78cf4d7
models(gallery): add magnum-72b-v1 (#2614)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-20 20:31:23 +02:00
Ettore Di Giacinto
34afd891a6
models(gallery): add llama3-8b-darkidol-1.1-iq-imatrix (#2613)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-20 20:30:47 +02:00
Ettore Di Giacinto
d3137775a1
models(gallery): add llama-3-cursedstock-v1.8-8b-iq-imatrix (#2612)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-20 20:14:48 +02:00
Ettore Di Giacinto
e1772026a1
models(gallery): add llama-3-sec-chat (#2611)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-20 20:14:03 +02:00
LocalAI [bot]
d0423254dd
⬆️ Update ggerganov/llama.cpp (#2606)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-20 00:58:40 +00:00
LocalAI [bot]
db0e52ae9d
⬆️ Update docs version mudler/LocalAI (#2605)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-20 00:05:19 +00:00
LocalAI [bot]
4f030f9cd3
models(gallery): ⬆️ update checksum (#2607)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-19 22:20:17 +02:00
Ettore Di Giacinto
60fb45eb97
models(gallery): add l3-umbral-mind-rp-v1.0-8b-iq-imatrix (#2608)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-19 22:19:40 +02:00
Rene Leonhardt
43f0688a95
feat: Upgrade to CUDA 12.5 (#2601)
Signed-off-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
2024-06-19 17:50:49 +02:00
LocalAI [bot]
8142bdc48f
⬆️ Update ggerganov/llama.cpp (#2603)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-19 00:28:50 +00:00
Ettore Di Giacinto
89a11e15e7
fix(single-binary): bundle ld.so (#2602)
* debug

* fix copy command/silly muscle memory

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* remove tmate

* Debugging

* Start binary with ld.so if present in libdir (see the sketch after this entry)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* small refactor

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-18 22:43:43 +02:00
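The "start binary with ld.so if present in libdir" change can be pictured as re-executing the binary through a bundled dynamic loader so that the bundled loader and libraries are used instead of the host's. A minimal Go sketch under that assumption; the paths, loader flags and guard env var are illustrative, not the actual implementation.

```go
package main

import (
	"os"
	"path/filepath"
	"syscall"
)

// reExecWithBundledLoader re-execs the current binary through a bundled
// ld.so if one was extracted into libDir. Illustrative sketch only.
func reExecWithBundledLoader(libDir string) error {
	loader := filepath.Join(libDir, "ld.so")
	if _, err := os.Stat(loader); err != nil {
		return nil // no bundled loader, keep running as-is
	}
	if os.Getenv("BUNDLED_LOADER_ACTIVE") != "" {
		return nil // already re-executed, avoid an exec loop
	}
	self, err := os.Executable()
	if err != nil {
		return err
	}
	env := append(os.Environ(), "BUNDLED_LOADER_ACTIVE=1")
	// Replace the process image with: ld.so --library-path <libDir> <self> <args...>
	args := append([]string{loader, "--library-path", libDir, self}, os.Args[1:]...)
	return syscall.Exec(loader, args, env)
}

func main() {
	_ = reExecWithBundledLoader("/tmp/localai/lib")
}
```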
Ettore Di Giacinto
06de542032
feat(talk): display an informative box, better colors (#2600)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-18 15:10:01 +02:00
Ettore Di Giacinto
ecbb61cbf4
feat(sd-3): add stablediffusion 3 support (#2591)
* feat(sd-3): add stablediffusion 3 support

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* deps(diffusers): add sentencepiece

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* models(gallery): add stablediffusion-3

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-18 15:09:39 +02:00
Ettore Di Giacinto
7f13e3a783
docs(models): fixup top message
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-18 08:42:30 +02:00
LocalAI [bot]
c926469b9c
⬆️ Update ggerganov/llama.cpp (#2594)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-18 03:06:31 +00:00
LocalAI [bot]
c30b57a629
⬆️ Update docs version mudler/LocalAI (#2593)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-18 01:47:04 +00:00
LocalAI [bot]
2f297979a7
⬆️ Update ggerganov/llama.cpp (#2587)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-17 15:28:19 +00:00
Ettore Di Giacinto
2437a2769d
models(gallery): add gemma-1.1-7b-it (#2588)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-17 14:13:27 +02:00
Ettore Di Giacinto
b58b7cad94
models(gallery): add samantha-qwen2 (#2586)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-17 10:08:29 +02:00
LocalAI [bot]
68148f2a1a
⬆️ Update ggerganov/llama.cpp (#2584)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-17 00:18:44 +00:00
Ettore Di Giacinto
4897eb0ba2
ci: pack less libs inside the binary (#2579)
The binary quickly grew to 1.8GB - rocm alone adds over 800MB - so we
might just want to manage the GPU libs separately.

Adds a comment listing all the libraries we depend on so far; a
follow-up will likely bundle these separately.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-16 22:10:28 +02:00
Ettore Di Giacinto
1b43966c48
Update README.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-16 20:27:37 +02:00
Ettore Di Giacinto
c5f2f11503
models(gallery): add hathor_stable-v0.2-l3-8b (#2582)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-16 20:24:36 +02:00
Ettore Di Giacinto
895443d1b5
models(gallery): add tess-v2.5-phi-3-medium-128k-14b (#2581)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-16 20:22:08 +02:00
Ettore Di Giacinto
6a0802e8e6
models(gallery): add dolphin-qwen (#2580)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-16 20:11:21 +02:00
Ettore Di Giacinto
94cfaad7f4
feat(libpath): refactor and expose functions for external library paths (#2578)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-16 13:58:28 +02:00
Ettore Di Giacinto
ac4a94dd44
feat(build): bundle libs for arm64 and x86 linux binaries (#2572)
This PR bundles further libs into the arm64 and x86_64 binaries.

This can be improved a lot - it's far from perfect - but in this PR I wanted to collect the required libs and give a simple baseline to improve upon later. Doing this exercise with CI only is quite challenging, but it's the fastest way I see right now.

Once the initial list is built, I hope we can further improve this down the line and remove some of the technical debt left here, to speed things up and avoid getting stuck in the middle of CI cycles.

In this PR:

- The x86_64 binary now bundles the hipblas, nvidia and intel libraries too, so that no dependencies need to be installed on the host (see the sketch after this entry)
- Similarly, the arm64 binary now bundles all the required assets

## What's left

We should also be able to cross-compile the Nvidia libraries for arm64 - however, I haven't succeeded so far, so I've left that open. I may also have missed some libraries, but bug reports and testing with the new binaries will tell. I've tested on my arm64 board and could finally start things up.

An open point is still shipping libraries for e.g. tts and stablediffusion. This is not done yet, but with the same methodology we should be able to extend binary support to these two backends as well.
2024-06-16 09:10:44 +02:00
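One way to picture how the bundled libraries could be made visible to backend processes is to prepend the directory of extracted libs to LD_LIBRARY_PATH when spawning a backend. A minimal sketch under that assumption; the paths are hypothetical and this is not the actual LocalAI code.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runBackendWithBundledLibs starts a backend process with the extracted
// bundled-library directory prepended to LD_LIBRARY_PATH, so the backend
// resolves hipblas/nvidia/intel runtime libraries from the bundle instead
// of requiring them on the host. Illustrative sketch only.
func runBackendWithBundledLibs(backend, libDir string, args ...string) error {
	cmd := exec.Command(backend, args...)
	cmd.Env = append(os.Environ(),
		fmt.Sprintf("LD_LIBRARY_PATH=%s:%s", libDir, os.Getenv("LD_LIBRARY_PATH")))
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Hypothetical backend and library paths, for illustration only.
	_ = runBackendWithBundledLibs("/tmp/localai/backend-assets/grpc/llama-cpp", "/tmp/localai/lib")
}
```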
LocalAI [bot]
58bf8614d9
⬆️ Update ggerganov/llama.cpp (#2575)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-15 23:45:10 +00:00
Ettore Di Giacinto
3764e50b35
models(gallery): add firefly-gemma-7b (#2576)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-15 23:07:20 +02:00
Nate Harris
3f464d2d9e
Fix standard image latest Docker tags (#2574)
- Fix standard image latest Docker tags

Signed-off-by: Nate Harris <nwithan8@users.noreply.github.com>
2024-06-15 22:08:30 +02:00
LocalAI [bot]
5116d561e1
⬆️ Update ggerganov/llama.cpp (#2570)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-14 23:39:20 +00:00
Ettore Di Giacinto
96a7a3b59f
fix(Makefile): enable STATIC on dist (#2569)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-14 12:28:46 +02:00
Ettore Di Giacinto
112d0ffa45
feat(darwin): embed grpc libs (#2567)
* debug

* feat(makefile): allow to bundle libs into binary

* ci: bundle protobuf into single-binary

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(assets): correctly reference extract folder

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* bundle also abseil

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* bundle more libs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-14 08:51:25 +02:00
LocalAI [bot]
25f45827ab
⬆️ Update ggerganov/whisper.cpp (#2565)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-14 00:26:51 +00:00
LocalAI [bot]
f322f7c62d
⬆️ Update ggerganov/llama.cpp (#2564)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-13 23:47:50 +00:00
Ettore Di Giacinto
06351cbbb4
feat(binary): support extracted bundled libs on darwin (#2563)
When offering fallback libs, use the proper env var on darwin.

Note: this does not include the libraries themselves; it only sets the
proper env var so the libs are picked up on darwin.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-13 22:59:42 +02:00
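A minimal Go sketch of selecting the loader search-path variable per platform, as the entry above describes. Using DYLD_FALLBACK_LIBRARY_PATH on darwin is an assumption made for illustration; the commit only states that the "proper env var" is set.

```go
package main

import (
	"fmt"
	"runtime"
)

// libraryPathEnvVar returns the environment variable the dynamic loader
// consults for extra shared-library locations on the current platform.
// The darwin variable name here is an assumption for illustration.
func libraryPathEnvVar() string {
	if runtime.GOOS == "darwin" {
		return "DYLD_FALLBACK_LIBRARY_PATH"
	}
	return "LD_LIBRARY_PATH"
}

func main() {
	fmt.Printf("%s=/path/to/extracted/libs\n", libraryPathEnvVar())
}
```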