96306a39a0
chore(docs): extra-Usage and Machine-Tag docs (#4627)
...
Rename LocalAI-Extra-Usage -> Extra-Usage, add MACHINE_TAG as cli flag option, add docs about extra-usage and machine-tag
Signed-off-by: mintyleaf <mintyleafdev@gmail.com>
2025-01-18 08:58:38 +01:00
ab344e4f47
docs: update compatibility-table.md (#4557)
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-01-07 21:20:44 +01:00
cab9f88ca4
chore(docs): add nvidia l4t instructions (#4454)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-23 18:59:33 +01:00
ae9855a39e
chore(docs): patch p2p detail in env and docs (#4434)
...
* Update distributed_inferencing.md
Signed-off-by: jtwolfe <jamie.t.wolfe@gmail.com>
* Update .env
Signed-off-by: jtwolfe <jamie.t.wolfe@gmail.com>
* Update distributed_inferencing.md
whoops
Signed-off-by: jtwolfe <jamie.t.wolfe@gmail.com>
---------
Signed-off-by: jtwolfe <jamie.t.wolfe@gmail.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-12-19 15:19:31 +01:00
3127cd1352
chore(docs): update available backends (#4325)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-05 16:57:56 +01:00
b90d78d9f6
Updated links of yamls (#4324)
...
Updated links
Links to deployment*.yaml were changed
Signed-off-by: PetrFlegr <ptrflegr@gmail.com>
2024-12-05 16:06:51 +01:00
44a5dac312
feat(backend): add stablediffusion-ggml (#4289)
...
* feat(backend): add stablediffusion-ggml
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(ci): track stablediffusion-ggml
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Use default scheduler and sampler if not specified
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Move cfg scale out of diffusers block
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Make it work
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: set free_params_immediately to false to call the model in sequence
https://github.com/leejet/stable-diffusion.cpp/issues/366
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-03 22:41:22 +01:00
3c3050f68e
feat(backends): Drop bert.cpp (#4272)
...
* feat(backends): Drop bert.cpp
use llama.cpp 3.2 as a drop-in replacement for bert.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(tests): make test more robust
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-27 16:34:28 +01:00
9cb30bedeb
integrations: add Nextcloud (#4233)
...
Signed-off-by: Adam Monsen <haircut@gmail.com>
2024-11-24 10:33:18 +01:00
c9c58a24a8
chore(docs): integrating LocalAI with Microsoft Word (#4218)
...
Integrating LocalAI with Microsoft Word
Signed-off-by: GPTLocalhost (Word Add-in) <72584872+GPTLocalhost@users.noreply.github.com>
2024-11-22 09:57:39 +01:00
f03bbf3188
fix: #4215 404 in documentation due to migrated configuration examples (#4216)
...
update link to examples which have moved to their own repository
Signed-off-by: Philipp Seelig <philipp@daxbau.net>
Co-authored-by: Philipp Seelig <philipp@daxbau.net>
Co-authored-by: Dave <dave@gray101.com>
2024-11-21 09:47:11 +01:00
9892d7d584
feat(p2p): add support for configuration of edgevpn listen_maddrs, dht_announce_maddrs and bootstrap_peers (#4200)
...
* add support for edgevpn listen_maddrs, dht_announce_maddrs, dht_bootstrap_peers
* upd docs for libp2p loglevel
2024-11-20 14:18:52 +01:00
0b3a55b9fe
docs: Update documentation for text-to-audio feature regarding response_format (#4038)
2024-11-03 02:15:54 +00:00
7748eb6553
docs: add Homebrew as an option to install on macOS (#3946)
...
Add Homebrew as an option to install on macOS
Signed-off-by: Mauro Morales <contact@mauromorales.com>
2024-10-23 20:02:08 +02:00
97cf028175
chore: update integrations.md with LLPhant (#3838)
...
Signed-off-by: Franco Lombardo <f.lombardo69@gmail.com>
2024-10-15 09:41:39 +02:00
bf8e50a11d
chore(docs): add Vulkan images links (#3620)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-09-20 18:16:01 +02:00
11d960b2a6
chore(cli): be consistent between workers and expose ExtraLLamaCPPArgs to both (#3428)
...
* chore(cli): be consistent between workers and expose ExtraLLamaCPPArgs to both
Fixes: https://github.com/mudler/LocalAI/issues/3427
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* bump grpcio
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-08-30 00:10:17 +02:00
12950cac21
chore(docs): update links
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-08-28 10:40:41 +02:00
d2da2f1672
chore(docs): add links to demo and explorer
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-08-28 10:38:18 +02:00
de1fbdca71
Update quickstart.md (#3373)
...
fix typo.
Signed-off-by: grant-wilson <grantm.wilsonii@gmail.com>
2024-08-24 23:01:34 +02:00
0762aa5327
Update GPU-acceleration.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-08-24 09:58:49 +02:00
d3a217c254
chore(docs): update p2p env var documentation (#3350)
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-08-21 13:09:57 +02:00
2a3427e533
fix(docs): Refer to the OpenAI documentation to update the openai-functions docu… (#3317)
...
* Refer to the OpenAI documentation to update the openai-functions documentation
The official OpenAI API documentation states that the parameters `function_call` and `functions` have been replaced by `tool_choice` and `tools`, so I submitted this update. However, I haven't read the LocalAI code, so I'm not sure whether this also applies to LocalAI.
Signed-off-by: 四少爷 <sex@jermey.cn>
* Update Usage Example
The original usage example was too outdated, and calling it with the new version of the openai Python package would result in errors. Therefore, the curl example was rewritten (as curl examples are also used elsewhere).
Signed-off-by: 四少爷 <sex@jermey.cn>
* add python example
Signed-off-by: 四少爷 <sex@jermey.cn>
---------
Signed-off-by: 四少爷 <sex@jermey.cn>
2024-08-21 13:09:26 +02:00
9475a6fa05
chore: drop petals (#3316)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-08-20 10:01:38 +02:00
faadabea14
Update binaries.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-08-14 10:08:32 +02:00
89484efaed
docs: update distributed_inferencing.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-07-24 12:27:49 +02:00
153e977155
Update distributed_inferencing.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-07-22 17:35:10 +02:00
87bd831aba
docs: add federation (#2929)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-20 10:43:18 +02:00
bf9dd1de7f
feat(functions): parse broken JSON when we parse the raw results, use dynamic rules for grammar keys (#2912)
...
* feat(functions): enhance parsing with broken JSON when we parse the raw results
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* breaking: make function name by default
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(grammar): dynamically generate grammars with mutating keys
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor: simplify condition
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Update docs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-18 17:52:22 +02:00
607900a4bb
docs: more swagger, update docs (#2907)
...
* docs(swagger): finish covering gallery section
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs: add section to explain how to install models with local-ai run
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Minor docs adjustments
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-18 11:25:21 +02:00
6de12c694a
docs: update try-it-out.md (#2906)
2024-07-18 03:21:22 +00:00
35561edb6e
feat(llama.cpp): support embeddings endpoints (#2871)
...
* feat(llama.cpp): add embeddings
Also enable embeddings by default for llama.cpp models
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(Makefile): prepare llama.cpp sources only once
Otherwise we keep cloning llama.cpp for each of the variants
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* do not set embeddings to false
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs: add embeddings to the YAML config reference
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-15 22:54:16 +02:00
edea2e7c3a
docs: add a note on benchmarks (#2857)
...
Add a note on LocalAI defaults and benchmarks in our FAQ section.
See also https://github.com/mudler/LocalAI/issues/2780
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-14 12:16:04 +02:00
fc87507012
chore(deps): Update Dependencies (#2538)
...
* chore(deps): Update dependencies
Signed-off-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
* chore(deps): Upgrade github.com/imdario/mergo to dario.cat/mergo
Signed-off-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
* remove version identifiers for MeloTTS
Signed-off-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
---------
Signed-off-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
Signed-off-by: Dave <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
2024-07-12 19:54:08 +00:00
95e31fd279
feat(install.sh): support federated install (#2752)
...
* feat(install.sh): support federated install
This adds support for federation by exposing:
- FEDERATED: true/false to share the instance
- FEDERATED_SERVER: true/false to start the federated load balancer (it forwards requests to the federation)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs: update installer parameters
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Dave <dave@gray101.com>
2024-07-12 08:42:21 +02:00
d5a56f04be
feat(p2p): allow to disable DHT and use only LAN (#2751)
...
This allows LocalAI to be less noisy by avoiding outside connections.
Needed if, for example, there is no plan to use p2p across separate networks.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-09 23:10:02 +02:00
7b1e792732
deps(llama.cpp): bump to latest, update build variables (#2669)
...
* :arrow_up: Update ggerganov/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* deps(llama.cpp): update build variables to follow upstream
Update build recipes with https://github.com/ggerganov/llama.cpp/pull/8006
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Disable shared libs by default in llama.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Disable shared libs in llama.cpp Makefile
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Disable metal embedding for now, until it is tested
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(mac): explicitly enable metal
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* debug
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix typo
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-06-27 23:10:04 +02:00
5d83c8d3a2
Update quickstart.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-25 19:23:58 +02:00
8f968d0341
Update quickstart.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-25 19:18:43 +02:00
3ee5ceb9fa
Update kubernetes.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-22 12:16:55 +02:00
1bd72a3be5
Update kubernetes.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-22 12:16:27 +02:00
fbd14118bf
Update kubernetes.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-22 12:14:53 +02:00
515d98b978
Update model-gallery.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-22 12:10:49 +02:00
789cf6c599
Update model-gallery.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-22 12:10:27 +02:00
9a7ad75bff
docs: update to include installer and update advanced YAML options (#2631)
...
* docs: update quickstart and advanced sections
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs: improvements
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* examples(kubernetes): add nvidia example
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-22 12:00:38 +02:00
070fd1b9da
Update distributed_inferencing.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-22 10:06:09 +02:00
dda5b9f260
Update distributed_inferencing.md
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-22 10:05:48 +02:00
3f464d2d9e
Fix standard image latest Docker tags (#2574)
...
- Fix standard image latest Docker tags
Signed-off-by: Nate Harris <nwithan8@users.noreply.github.com>
2024-06-15 22:08:30 +02:00
148adebe16
docs: fix p2p commands (#2472)
...
Also change icons on GPT vision page
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-06-03 16:58:53 +02:00
b99182c8d4
TTS API improvements (#2308)
...
* update doc on COQUI_LANGUAGE env variable
Signed-off-by: blob42 <contact@blob42.xyz>
* return errors from tts gRPC backend
Signed-off-by: blob42 <contact@blob42.xyz>
* handle speaker_id and language in coqui TTS backend
Signed-off-by: blob42 <contact@blob42.xyz>
* TTS endpoint: add optional language parameter
Signed-off-by: blob42 <contact@blob42.xyz>
* tts fix: empty language string breaks non-multilingual models
Signed-off-by: blob42 <contact@blob42.xyz>
* allow tts param definition in config file
- consolidate TTS options under `tts` config entry
Signed-off-by: blob42 <contact@blob42.xyz>
* tts: update doc
Signed-off-by: blob42 <contact@blob42.xyz>
---------
Signed-off-by: blob42 <contact@blob42.xyz>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-06-01 18:26:27 +00:00