* Fix Python header comments for some extra gRPC backends
When a Python script is executed directly via exec(3), either the
platform itself knows how to run the file (which requires special
configuration) or the first line must contain a shebang (#!) naming the
interpreter to run it (as with shell scripts).
The shebang MUST be on the first line for the script to work on all
platforms, so any header comments have to go on the lines following it.
Otherwise, executing these scripts as extra backends yields an "exec
format error" message.
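For illustration, a corrected header looks like this (a minimal sketch;
the comment text is illustrative, not the exact content of the files
touched here):

    #!/usr/bin/env python3
    # Extra gRPC backend for LocalAI.
    # Header comments belong here, after the shebang: if anything
    # precedes the #! line, exec(3) cannot find the interpreter and
    # the script fails with "exec format error".

    import sys  # rest of the backend follows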
Changes:
* Move introductory comments below the shebang line
* Change the header comment in transformers.py to refer to the correct
Python module
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
* Make header comment in ttsbark.py more specific
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
---------
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
* Update huggingface.py
Switch from SentenceTransformer to AutoModel in order to set trust_remote_code, which is needed to use the encode method of embedding models such as jina-embeddings-v2
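A minimal sketch of the change, assuming the standard transformers API
(the model id below is only an example, not necessarily the one used
here):

    from transformers import AutoModel

    # trust_remote_code lets the model repository ship its own encode()
    # implementation; SentenceTransformer did not expose this flag.
    model = AutoModel.from_pretrained(
        "jinaai/jina-embeddings-v2-base-en",  # example embedding model
        trust_remote_code=True,
    )
    embeddings = model.encode(["some text to embed"])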
Signed-off-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
* feat(transformers): split in separate backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
* refactor: rename llama-stable to llama-ggml
* Makefile: get sources in sources/
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup path
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup sources
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup SD
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* update SD
* fixup
* fixup: create the piper libdir even when piper is not built
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix make target on Linux test
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor: move backends into the backends directory
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor: move main close to implementation for every backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(llama.cpp): support lora with scale
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(llama.cpp): support yarn
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wip
* wip
* Make it functional
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wip
* Small fixups
* do not inject a space when encoding the role; encode the image at the beginning of the messages
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add examples/config defaults
* Add include dir of current source dir
* cleanup
* fixes
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
* Revert "fixups"
This reverts commit f1a4731cca.
* fixes
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Current state of the branch.
* gRPC is now built only when the BUILD_GRPC_FOR_BACKEND_LLAMA variable is defined.
* The local compilation of gRPC is now triggered by BUILD_GRPC_FOR_BACKEND_LLAMA.
* Revised the Makefile.
* Removed replace directives in go.mod.
---------
Signed-off-by: Diego <38375572+diego-minguzzi@users.noreply.github.com>
Co-authored-by: lunamidori5 <118759930+lunamidori5@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Fix backend/cpp/llama CMakeLists.txt on macOS: detect macOS and use Homebrew libraries
* sneak a logging fix in too for gallery debugging
* additional logging
* wip: llama.cpp C++ gRPC server
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make it work, attach it to the build process
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* update deps
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: add protobuf dep
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* try to fix protobuf on cmake
* cmake: workarounds
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add packages
* cmake: use fixed version of grpc
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cmake(grpc): install locally
* install grpc
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* install required deps for gRPC on Debian Bullseye
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* debug
* debug
* Fixups
* no need to install cmake manually
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: fixup macOS
* use brew whenever possible
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* macOS fixups
* debug
* fix container build
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* workaround
* try mac
https://stackoverflow.com/questions/23905661/on-mac-g-clang-fails-to-search-usr-local-include-and-usr-local-lib-by-def
* Temporarily disable arm64 Docker image builds
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>