Contains simple fixes for warnings and errors, removes a broken/outdated test, runs `go mod tidy`, and, as the actual change, centralizes base64 image handling
Signed-off-by: Dave Lee <dave@gray101.com>
* bugfix: CUDA acceleration not working
CUDA not working after #2286.
Refactored the code to be more polished
* Update requirements.txt
Missing imports
Signed-off-by: fakezeta <fakezeta@gmail.com>
* Update requirements.txt
Signed-off-by: fakezeta <fakezeta@gmail.com>
---------
Signed-off-by: fakezeta <fakezeta@gmail.com>
update transformers
* Handle Temperature = 0 as greedy search (see the sketch below)
* Handle custom words as stop words
* Implement KV cache
* Phi 3 no longer requires trust_remote_code: true
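A minimal sketch of the behaviour described above, assuming Hugging Face transformers; the model id, token limits and helper names are illustrative, not the backend's actual code:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)

class StopWordsCriteria(StoppingCriteria):
    """Stop generation as soon as the decoded text ends with one of the stop words."""
    def __init__(self, tokenizer, stop_words):
        self.tokenizer = tokenizer
        self.stop_words = stop_words

    def __call__(self, input_ids, scores, **kwargs):
        text = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
        return any(text.endswith(word) for word in self.stop_words)

# Phi-3 loads without trust_remote_code on recent transformers releases.
model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def generate(prompt, temperature, stop_words):
    inputs = tokenizer(prompt, return_tensors="pt")
    stopping = StoppingCriteriaList([StopWordsCriteria(tokenizer, stop_words)])
    # use_cache=True keeps the KV cache enabled (the library default, shown for clarity).
    common = dict(stopping_criteria=stopping, use_cache=True, max_new_tokens=128)
    if temperature == 0:
        # 0 is not a valid sampling temperature: fall back to greedy search.
        return model.generate(**inputs, do_sample=False, **common)
    return model.generate(**inputs, do_sample=True, temperature=temperature, **common)
```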
* feat: create bash library to handle install/run/test of python backends
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* chore: minor cleanup
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: remove incorrect LIMIT_TARGETS from parler-tts
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: update runUnitests to handle running tests from a custom test file
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* chore: document runUnittests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
---------
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate diffusers backend from conda to uv
- replace conda with UV for diffusers install (prototype for all
extras backends)
- add ability to build docker with one/some/all extras backends
instead of all or nothing
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate autogtpq bark coqui from conda to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: convert exllama over to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate exllama2 to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate mamba to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate parler to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate petals to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: fix tests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate rerankers to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate sentencetransformers to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: install uv for tests-linux
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: make sure file exists before installing on intel images
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate transformers backend to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate transformers-musicgen to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate vall-e-x to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: migrate vllm to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: add uv install to the rest of test-extra.yml
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: adjust file perms on all install/run/test scripts
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: add missing accelerate dependencies
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: add some more missing dependencies to python backends
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: correct the venv python dir for parler tests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: correct filename for transformers-musicgen tests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: adjust the pwd for valle tests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: cleanup and optimization work for uv migration
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: add setuptools to requirements-install for mamba
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: more size optimization work
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: make installs and tests more consistent, cleanup some deps
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: cleanup
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: mamba backend is cublas only
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: uncomment lines in makefile
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
---------
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
Winograd convolutions were always disabled, giving an error when the inference device was CPU.
This commit implements logic to disable Winograd convolutions only if CPU or NPU is declared.
* feat(initializer): do not specify backends to autoload
We can simply try to autoload the backends extracted in the asset dir.
This will allow building variants of the same backend (e.g. with different instruction sets),
so as to have a single binary for all the variants.
Signed-off-by: mudler <mudler@localai.io>
* refactor(prepare): refactor out llama.cpp prepare steps
Make the steps idempotent so that we can re-build
Signed-off-by: mudler <mudler@localai.io>
* [TEST] feat(build): build noavx version along
Signed-off-by: mudler <mudler@localai.io>
* build: make build parallel
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* build: do not override CMAKE_ARGS
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* build: add fallback variant
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(huggingface-langchain): fail if no token is set
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(huggingface-langchain): rename
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: do not autoload local-store
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: give priority between the listed backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: mudler <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
fix: more places where we are installing grpc that need a version specified
fix: attempt to fix metal tests
fix: metal/brew is forcing an update, they don't have 1.58 available anymore
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* start breaking up the giant channel refactor now that it's better understood - easier to merge in smaller bites
Signed-off-by: Dave Lee <dave@gray101.com>
* add concurrency and base64 back in, along with new base64 tests.
Signed-off-by: Dave Lee <dave@gray101.com>
* Automatic rename of whisper.go's Result to TranscriptResult
Signed-off-by: Dave Lee <dave@gray101.com>
* remove pkg/concurrency - significant changes coming in split 2
Signed-off-by: Dave Lee <dave@gray101.com>
* fix comments
Signed-off-by: Dave Lee <dave@gray101.com>
* add list_model service as another low-risk service to get it out of the way
Signed-off-by: Dave Lee <dave@gray101.com>
* split backend config loader into a separate file from the actual config struct. No changes yet, just reduce cognitive load with smaller files of logical blocks
Signed-off-by: Dave Lee <dave@gray101.com>
* rename state.go ==> application.go
Signed-off-by: Dave Lee <dave@gray101.com>
* fix lost import?
Signed-off-by: Dave Lee <dave@gray101.com>
---------
Signed-off-by: Dave Lee <dave@gray101.com>
* Bump oneapi-basekit, optimum and openvino
* Changed PERFORMANCE_HINT to CUMULATIVE_THROUGHPUT
Minor latency change for the first token, but about a 10-15% speedup on token generation.
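A minimal sketch of the hint in question, assuming optimum-intel's OVModelForCausalLM; the model id and prompt are illustrative and the backend's actual option plumbing may differ:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "microsoft/phi-2"  # illustrative model id
model = OVModelForCausalLM.from_pretrained(
    model_id,
    export=True,  # convert the PyTorch weights to OpenVINO IR on the fly
    # CUMULATIVE_THROUGHPUT trades a little first-token latency for higher
    # sustained throughput during token generation.
    ov_config={"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"},
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```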
runCommand ==> ffmpegCommand. No functional changes, but makes it clear to the security scanner and future developers that this function cannot run arbitrary commands
Signed-off-by: Dave Lee <dave@gray101.com>
* fix regression #1971
fixes regression #1971 introduced by intel_extension_for_transformers==1.4
* UseTokenizerTemplate and StopPrompt
Implementation of use_tokenizer_template and stopwords options
* feat(parler-tts): Add new backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(parler-tts): try downgrade protobuf
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(parler-tts): add parler conda env
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Revert "feat(parler-tts): try downgrade protobuf"
This reverts commit bd5941d5cfc00676b45a99f71debf3c34249cf3c.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* deps: add grpc
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: try to gen proto with same environment
* workaround
* Revert "fix: try to gen proto with same environment"
This reverts commit 998c745e2f.
* Workaround fixup
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Dave <dave@gray101.com>
* fix: initial work towards not committing generated files to the repository
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: improve build docs
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: remove unused folder from .dockerignore and .gitignore
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: attempt to fix extra backend tests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: attempt to fix other tests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: more test fixes
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: fix apple tests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: more extras tests fixes
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: add GOBIN to PATH in docker build
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: extra tests and Dockerfile corrections
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: remove build dependency checks
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: add golang protobuf compilers to tests-linux action
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: ensure protogen is run for extra backend installs
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: use newer protobuf
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: more missing protoc binaries
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: missing dependencies during docker build
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: don't install grpc compilers in the final stage if they aren't needed
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: python-grpc-tools in 22.04 repos is too old
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: add a couple of extra build dependencies to Makefile
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: unbreak container rebuild functionality
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
---------
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* Enhance autogptq backend to support VL models
* update dependencies for autogptq
* remove redundant auto-gptq dependency
* Convert base64 to image_url for Qwen-VL model (sketched below)
* implemented model inference for qwen-vl
* remove user prompt from generated answer
* fixed write image error
* fixed use_triton issue when loading Qwen-VL model
---------
Co-authored-by: Binghua Wu <bingwu@estee.com>
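A hypothetical helper illustrating the base64-to-image conversion step mentioned above; the temporary-file approach and function name are assumptions, not the PR's exact code:

```python
import base64
import tempfile

def base64_to_image_ref(b64_payload: str) -> str:
    """Decode a (possibly data-URI prefixed) base64 image and return a local path
    that a vision model such as Qwen-VL can be pointed at."""
    data = base64.b64decode(b64_payload.split(",", 1)[-1])  # drop any "data:...;base64," prefix
    with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as handle:
        handle.write(data)
    return handle.name
```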
* Streaming working
* Small fix for regression on CUDA and XPU
* use pip version of optimum[openvino]
* Update backend/python/transformers/transformers_server.py
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Token streaming support
fix optimum[openvino] package in install.sh
* Token Streaming support
---------
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* fixes #1775 and #1774
Adds BitsAndBytes quantization and fixes embeddings on CUDA devices
* Manage 4-bit and 8-bit quantization
Manage the different BitsAndBytes options with the `quantization:` parameter in YAML (see the sketch below)
* fix compilation errors on non CUDA environment
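A minimal sketch of how a `quantization:` value could be mapped onto BitsAndBytes options, assuming Hugging Face transformers on a CUDA device; the option names and model id are illustrative:

```python
from typing import Optional

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

def build_quantization_config(quantization: str) -> Optional[BitsAndBytesConfig]:
    # Hypothetical mapping from a YAML `quantization:` value to BitsAndBytes options.
    if quantization == "bnb_4bit":
        return BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype="float16")
    if quantization == "bnb_8bit":
        return BitsAndBytesConfig(load_in_8bit=True)
    return None  # no quantization requested

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # illustrative model id
    device_map="auto",
    quantization_config=build_quantization_config("bnb_4bit"),
)
```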
* OpenVINO draft
First draft of OpenVINO integration in transformer backend
* first working implementation
* Streaming working
* Small fix for regression on CUDA and XPU
* use pip version of optimum[openvino]
* Update backend/python/transformers/transformers_server.py
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Enhance autogptq backend to support VL models
* update dependencies for autogptq
* remove redundant auto-gptq dependency
* Convert base64 to image_url for Qwen-VL model
* implemented model inference for qwen-vl
* remove user prompt from generated answer
* fixed write image error
---------
Co-authored-by: Binghua Wu <bingwu@estee.com>
* test with gguf instead of ggml. Updates testPrompt to match? Adds debugging line to Dockerfile that I've found helpful recently.
* fix testPrompt slightly
* Sad Experiment: Test GH runner without metal?
* break apart CGO_LDFLAGS
* switch runner
* upstream llama.cpp disables Metal on Github CI!
* missed a dir from clean-tests
* CGO_LDFLAGS
* tmate failure + NO_ACCELERATE
* whisper.cpp has a metal fix
* do the exact opposite of the name of this branch, but keep it around for unrelated fixes?
* add back newlines
* add tmate to linux for testing
* update fixtures
* timeout for tmate
* fix: clean up Makefile dependencies to allow for parallel builds
* refactor: remove old unused backend from Makefile
* fix: finish removing legacy backend, update piper
* fix: I broke llama... I fixed llama
* feat: give the tests and builds a few threads
* fix: ensure libraries are replaced before build, add dropreplace target
* Fix image build workflows
* feat(elevenlabs): map elevenlabs API support to TTS
This allows elevenlabs clients to work automatically with LocalAI by
supporting the elevenlabs API.
The elevenlabs server endpoint is implemented such that it is wired to the
TTS endpoints.
Fixes: https://github.com/mudler/LocalAI/issues/1809
* feat(openai/tts): compat layer with openai tts
Fixes: #1276
* fix: adapt tts CLI
* fixes #1775 and #1774
Adds BitsAndBytes quantization and fixes embeddings on CUDA devices
* Manage 4-bit and 8-bit quantization
Manage the different BitsAndBytes options with the `quantization:` parameter in YAML
* fix compilation errors on non CUDA environment
* feat(intel): add diffusers support
* try to consume upstream container image
* Debug
* Manually install deps
* Map transformers/hf cache dir to modelpath if not specified
* fix(compel): update initialization, pass by all gRPC options
* fix: add dependencies, implement transformers for xpu
* base it from the oneapi image
* Add pillow
* set threads if specified when launching the API
* Skip conda install if intel
* defaults to non-intel
* ci: add to pipelines
* prepare compel only if enabled
* Skip conda install if intel
* fix cleanup
* Disable compel by default
* Install torch 2.1.0 with Intel
* Skip conda on some setups
* Detect python
* Quiet output
* Do not override system python with conda
* Prefer python3
* Fixups
* exllama2: do not install without conda (overrides pytorch version)
* exllama/exllama2: do not install if not using cuda
* Add missing dataset dependency
* Small fixups, symlink to python, add requirements
* Add neural_speed to the deps
* correctly handle model offloading
* fix: device_map == xpu
* go back at calling python, fixed at dockerfile level
* Exllama2 restricted to only nvidia gpus
* Tokenizer to xpu
* fix: use vllm AsyncLLMEngine to bring true stream
The current vLLM implementation uses the LLMEngine, which was designed for offline batch inference; as a result, streaming mode outputs all blobs at once at the end of the inference.
This PR reworks the gRPC server to use asyncio and gRPC.aio, in combination with vLLM's AsyncLLMEngine, to bring a true streaming mode (see the sketch below).
This PR also passes more parameters to vLLM during inference (presence_penalty, frequency_penalty, stop, ignore_eos, seed, ...).
* Remove unused import
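A minimal sketch of streaming with vLLM's AsyncLLMEngine instead of the offline-batch LLMEngine; the model id and sampling values are illustrative, and the exact parameter set depends on the installed vLLM version:

```python
import asyncio

from vllm import AsyncEngineArgs, AsyncLLMEngine, SamplingParams

engine = AsyncLLMEngine.from_engine_args(AsyncEngineArgs(model="facebook/opt-125m"))

async def stream(prompt: str):
    params = SamplingParams(
        temperature=0.8,
        presence_penalty=0.0,
        frequency_penalty=0.0,
        stop=["</s>"],
        ignore_eos=False,
    )
    # Each iteration yields the generation produced so far, so partial text can be
    # forwarded over the gRPC stream instead of being returned all at once.
    async for request_output in engine.generate(prompt, params, request_id="req-0"):
        yield request_output.outputs[0].text

async def main():
    async for partial in stream("Hello, my name is"):
        print(partial)

asyncio.run(main())
```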
This PR specifically introduces a `core` folder and moves the following packages over, without any other changes:
- `api/backend`
- `api/config`
- `api/options`
- `api/schema`
Once this is merged and we confirm there are no regressions, I can migrate over the remaining changes piece by piece to split up application startup, backend services, http, and mqtt, as was the goal of the earlier PRs!
* Dockerfile changes to build for ROCm
* Adjust linker flags for ROCm
* Update conda env for diffusers and transformers to use ROCm pytorch
* Update transformers conda env for ROCm
* ci: build hipblas images
* fixup rebase
* use self-hosted
Signed-off-by: mudler <mudler@localai.io>
* specify LD_LIBRARY_PATH only when BUILD_TYPE=hipblas
---------
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: mudler <mudler@localai.io>
An infinite context loop might as well trigger an infinite loop of context
shifting if the model hallucinates and does not stop answering.
This has the unpleasant effect that the prediction never terminates,
which is especially the case on small models, which tend to hallucinate.
Works around https://github.com/mudler/LocalAI/issues/1333 by removing
context shifting.
See also upstream issue: https://github.com/ggerganov/llama.cpp/issues/3969
* feat(refactor): refactor config and input reading
* feat(tts): read config file for TTS
* examples(kubernetes): Add simple deployment example
* examples(kubernetes): Add simple deployment for intel arc
* docs(sycl): add sycl example
* feat(tts): do not always pick a first model
* fixups to run vall-e-x on container
* Correctly resolve backend
* cleanup backends
* switch image to ubuntu 22.04
* adapt commands for ubuntu
* transformers cleanup
* no contrib on ubuntu
* Change test model to gguf
* ci: disable bark tests (too cpu-intensive)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cleanup
* refinements
* use intel base image
* Makefile: Add docker targets
* Change test model
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(transformers): support also text generation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* embedded: set seed -1
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Certain backends such as vall-e-x are not meant to be used as a library, so
we want to start the process in the same folder where the backend and
all the assets are. Fixes #1394
* feat(conda): share env between diffusers and bark
* Detect if env already exists
* share diffusers and petals
* tests: add petals
* Use smaller model for tests with petals
* test only model load on petals
* tests(petals): run only load model tests
* Revert "test only model load on petals"
This reverts commit 111cfa97f1.
* move transformers and sentencetransformers to common env
* Share also transformers-musicgen
* feat(img2vid): Initial support for img2vid
* doc(SD): fix SDXL Example
* Minor fixups for img2vid
* docs(img2img): fix example curl call
* feat(txt2vid): initial support
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* diffusers: stay backwards-compatible with CUDA settings
* docs(img2vid, txt2vid): examples
* Add notice on docs
---------
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Use cuda in transformers if available
tensorflow probably needs a different check.
Signed-off-by: Erich Schubert <kno10@users.noreply.github.com>
* feat: expose CUDA at top level
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* tests: add to tests and create workflow for py extra backends
* doc: update note on how to use core images
---------
Signed-off-by: Erich Schubert <kno10@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Erich Schubert <kno10@users.noreply.github.com>
* Update docs for new requirements.txt path
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
* Fix typo (.PONY -> .PHONY) in python backend makefiles
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
---------
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
* Fix python header comments for some extra gRPC backends
When a Python script is to be executed directly via exec(3), either the platform knows how to execute
the file itself (i.e. special configuration is necessary) or the first line
contains a shebang (#!) specifying the interpreter to run it (similar to
shell scripts).
The shebang MUST be on the first line for the script to work on all platforms,
so any header comments need to be in the lines following it. Otherwise
executing these scripts as extra backends will yield an "exec format
error" message.
Changes:
* Move introductory comments below the shebang line
* Change header comment in transformers.py to refer to the correct
python module
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
* Make header comment in ttsbark.py more specific
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
---------
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
* Update huggingface.py
Switch SentenceTransformer for AutoModel in order to set trust_remote_code, which is needed to use the encode method with embedding models like jinai-v2 (see the sketch below)
Signed-off-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
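A minimal sketch of the switch, assuming the jina-embeddings-v2 model; with trust_remote_code the remotely defined encode method becomes available on the AutoModel instance:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "jinaai/jina-embeddings-v2-base-en",  # illustrative embedding model
    trust_remote_code=True,  # loads the model's own encode() implementation
)
embeddings = model.encode(["How is the weather today?"])
print(embeddings.shape)
```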
* feat(transformers): split in separate backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
* refactor: rename llama-stable to llama-ggml
* Makefile: get sources in sources/
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup path
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup sources
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups sd
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* update SD
* fixup
* fixup: create piper libdir also when not built
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix make target on linux test
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor: move backends into the backends directory
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor: move main close to implementation for every backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(llama.cpp): support lora with scale
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(llama.cpp): support yarn
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wip
* wip
* Make it functional
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wip
* Small fixups
* do not inject space on role encoding, encode img at beginning of messages
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add examples/config defaults
* Add include dir of current source dir
* cleanup
* fixes
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
* Revert "fixups"
This reverts commit f1a4731cca.
* fixes
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Current state of the branch.
* Now gRPC is built only when the BUILD_GRPC_FOR_BACKEND_LLAMA variable is defined.
* Now the local compilation of gRPC is executed on BUILD_GRPC_FOR_BACKEND_LLAMA.
* Revised the Makefile.
* Removed replace directives in go.mod.
---------
Signed-off-by: Diego <38375572+diego-minguzzi@users.noreply.github.com>
Co-authored-by: lunamidori5 <118759930+lunamidori5@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Fix backend/cpp/llama CMakeLists.txt on OSX - detect OSX and use homebrew libraries
* sneak a logging fix in too for gallery debugging
* additional logging
* wip: llama.cpp c++ gRPC server
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make it work, attach it to the build process
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* update deps
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: add protobuf dep
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* try fix protobuf on cmake
* cmake: workarounds
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add packages
* cmake: use fixed version of grpc
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cmake(grpc): install locally
* install grpc
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* install required deps for grpc on debian bullseye
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* debug
* debug
* Fixups
* no need to install cmake manually
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: fixup macOS
* use brew whenever possible
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* macOS fixups
* debug
* fix container build
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* workaround
* try mac
https://stackoverflow.com/questions/23905661/on-mac-g-clang-fails-to-search-usr-local-include-and-usr-local-lib-by-def
* Disable temp. arm64 docker image builds
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>