b3b8010930
chore(deps): Bump causal-conv1d from 1.2.0.post2 to 1.4.0 in /backend/python/mamba ( #2792 )
...
chore(deps): Bump causal-conv1d in /backend/python/mamba
Bumps [causal-conv1d](https://github.com/Dao-AILab/causal-conv1d ) from 1.2.0.post2 to 1.4.0.
- [Release notes](https://github.com/Dao-AILab/causal-conv1d/releases )
- [Commits](https://github.com/Dao-AILab/causal-conv1d/compare/v1.2.0.post2...v1.4.0 )
---
updated-dependencies:
- dependency-name: causal-conv1d
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com >
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-13 01:34:59 +00:00
30861f49a8
chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/petals ( #2799 )
...
chore(deps): Bump setuptools in /backend/python/petals
Bumps [setuptools](https://github.com/pypa/setuptools ) from 69.5.1 to 70.3.0.
- [Release notes](https://github.com/pypa/setuptools/releases )
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst )
- [Commits](https://github.com/pypa/setuptools/compare/v69.5.1...v70.3.0 )
---
updated-dependencies:
- dependency-name: setuptools
dependency-type: direct:production
update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com >
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-13 01:20:56 +00:00
5345f30a33
chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/parler-tts ( #2797 )
...
chore(deps): Bump setuptools in /backend/python/parler-tts
Bumps [setuptools](https://github.com/pypa/setuptools ) from 69.5.1 to 70.3.0.
- [Release notes](https://github.com/pypa/setuptools/releases )
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst )
- [Commits](https://github.com/pypa/setuptools/compare/v69.5.1...v70.3.0 )
---
updated-dependencies:
- dependency-name: setuptools
dependency-type: direct:production
update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com >
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-13 01:12:19 +00:00
de2bf82e09
chore(deps): Bump inflect from 7.0.0 to 7.3.1 in /backend/python/openvoice ( #2796 )
...
chore(deps): Bump inflect in /backend/python/openvoice
Bumps [inflect](https://github.com/jaraco/inflect ) from 7.0.0 to 7.3.1.
- [Release notes](https://github.com/jaraco/inflect/releases )
- [Changelog](https://github.com/jaraco/inflect/blob/main/NEWS.rst )
- [Commits](https://github.com/jaraco/inflect/compare/v7.0.0...v7.3.1 )
---
updated-dependencies:
- dependency-name: inflect
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com >
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-13 01:08:38 +00:00
67b20a7147
chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/coqui ( #2798 )
...
chore(deps): Bump setuptools in /backend/python/coqui
Bumps [setuptools](https://github.com/pypa/setuptools ) from 69.5.1 to 70.3.0.
- [Release notes](https://github.com/pypa/setuptools/releases )
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst )
- [Commits](https://github.com/pypa/setuptools/compare/v69.5.1...v70.3.0 )
---
updated-dependencies:
- dependency-name: setuptools
dependency-type: direct:production
update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com >
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-13 00:43:54 +00:00
fc87507012
chore(deps): Update Dependencies ( #2538 )
...
* chore(deps): Update dependencies
Signed-off-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com >
* chore(deps): Upgrade github.com/imdario/mergo to dario.cat/mergo
Signed-off-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com >
* remove version identifiers for MeloTTS
Signed-off-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com >
---------
Signed-off-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com >
Signed-off-by: Dave <dave@gray101.com >
Co-authored-by: Dave <dave@gray101.com >
2024-07-12 19:54:08 +00:00
a00e9a82ae
Update remaining git clones to git fetch ( #2779 )
...
Signed-off-by: Loric <117862619+LoricOSC@users.noreply.github.com >
2024-07-12 06:43:58 +00:00
17608ea6aa
Using exec when starting a backend instead of spawning a new process ( #2720 )
...
Co-authored-by: Simon Siebert <ansiebert@deloitte.de >
2024-07-05 16:59:18 +00:00
c047c19145
fix: make sure the GNUMake jobserver is passed to cmake for the llama.cpp build ( #2697 )
...
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
2024-07-02 08:46:59 +02:00
7b1e792732
deps(llama.cpp): bump to latest, update build variables ( #2669 )
...
* :arrow_up: Update ggerganov/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* deps(llama.cpp): update build variables to follow upstream
Update build recipes with https://github.com/ggerganov/llama.cpp/pull/8006
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Disable shared libs by default in llama.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Disable shared libs in llama.cpp Makefile
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Disable metal embedding for now, until it is tested
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix(mac): explicitly enable metal
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* debug
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix typo
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2024-06-27 23:10:04 +02:00
a8bfb6f9c2
feat(options): add repeat_last_n ( #2660 )
...
feat(options): add repeat_last_n
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2024-06-26 14:58:50 +02:00
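For orientation, a minimal sketch of how the new option might be set in a LocalAI model config; only the option name repeat_last_n comes from the commit, while the backend name, model file, value, and exact field placement are assumptions for illustration:

```yaml
# Hypothetical model-config sketch; placement of repeat_last_n and all values are illustrative.
name: my-llama
backend: llama-cpp
parameters:
  model: my-model.gguf
  repeat_penalty: 1.1
repeat_last_n: 64   # how many recent tokens the repeat penalty looks back over (assumed placement)
```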
b783c811db
feat(build): only build llama.cpp relevant targets ( #2659 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2024-06-26 14:58:38 +02:00
03b1cf51fd
feat(whisper): add translate option ( #2649 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2024-06-24 19:21:22 +02:00
12513ebae0
rf: centralize base64 image handling ( #2595 )
...
Contains simple fixes to warnings and errors, removes a broken/outdated test, runs go mod tidy, and, as the actual change, centralizes base64 image handling.
Signed-off-by: Dave Lee <dave@gray101.com >
2024-06-24 08:34:36 +02:00
5866fc8ded
chore: fix go.mod module ( #2635 )
...
Signed-off-by: Sertac Ozercan <sozercan@gmail.com >
2024-06-23 08:24:36 +00:00
ecbb61cbf4
feat(sd-3): add stablediffusion 3 support ( #2591 )
...
* feat(sd-3): add stablediffusion 3 support
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* deps(diffusers): add sentencepiece
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* models(gallery): add stablediffusion-3
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2024-06-18 15:09:39 +02:00
6ef78ef7f6
bugfix: CUDA acceleration not working ( #2475 )
...
* bugfix: CUDA acceleration not working
CUDA not working after #2286.
Refactored the code to be more polished.
* Update requirements.txt
Missing imports
Signed-off-by: fakezeta <fakezeta@gmail.com >
* Update requirements.txt
Signed-off-by: fakezeta <fakezeta@gmail.com >
---------
Signed-off-by: fakezeta <fakezeta@gmail.com >
2024-06-03 22:41:42 +02:00
4a239a4bff
feat(transformers): various enhancements to the transformers backend ( #2468 )
...
update transformers
* Handle Temperature = 0 as greedy search
* Handle custom words as stop words
* Implement KV cache
* Phi 3 no longer requires trust_remote_code: true
2024-06-03 08:52:55 +02:00
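A hedged sketch of how the behaviors above could look from the model-config side; only temperature, trust_remote_code, and the idea of custom stop words come from the commit message, and every other name and value here is an assumption:

```yaml
# Hypothetical transformers-backend model config illustrating the commit above.
name: phi-3-mini
backend: transformers
parameters:
  model: microsoft/Phi-3-mini-4k-instruct
  temperature: 0          # per the commit, 0 now maps to greedy search
stopwords:
  - "<|end|>"             # illustrative custom stop word handled by the backend
# trust_remote_code: true  # reportedly no longer required for Phi-3
```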
b99182c8d4
TTS API improvements ( #2308 )
...
* update doc on COQUI_LANGUAGE env variable
Signed-off-by: blob42 <contact@blob42.xyz >
* return errors from tts gRPC backend
Signed-off-by: blob42 <contact@blob42.xyz >
* handle speaker_id and language in coqui TTS backend
Signed-off-by: blob42 <contact@blob42.xyz >
* TTS endpoint: add optional language parameter
Signed-off-by: blob42 <contact@blob42.xyz >
* tts fix: empty language string breaks non-multilingual models
Signed-off-by: blob42 <contact@blob42.xyz >
* allow tts param definition in config file
- consolidate TTS options under `tts` config entry
Signed-off-by: blob42 <contact@blob42.xyz >
* tts: update doc
Signed-off-by: blob42 <contact@blob42.xyz >
---------
Signed-off-by: blob42 <contact@blob42.xyz >
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2024-06-01 18:26:27 +00:00
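Since the commit consolidates TTS options under a `tts` config entry, a minimal sketch might look like the following; the `tts` entry name comes from the commit, but the field inside it and all values are illustrative assumptions:

```yaml
# Hypothetical coqui model config sketch; the voice field name and values are assumed.
name: coqui-xtts
backend: coqui
parameters:
  model: tts_models/multilingual/multi-dataset/xtts_v2
tts:
  voice: "Claribel Dervla"   # speaker selection, assumed field name
```

The optional language parameter added to the TTS endpoint would then be supplied per request for multilingual models; per the fix above, an empty language string is no longer passed to non-multilingual models.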
ba984c7097
fix: pin version of setuptools for intel builds to work around #2406 ( #2414 )
...
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
2024-05-26 18:27:07 +00:00
16433d2e8e
fix: install pytorch from proper index for hipblas builds ( #2413 )
...
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
2024-05-26 18:05:52 +00:00
3a9408363b
deps(llama.cpp): update and adapt API changes ( #2381 )
...
deps(llama.cpp): update and rename function
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2024-05-23 01:02:11 +02:00
1a3dedece0
dependencies(grpcio): bump to fix CI issues ( #2362 )
...
feat(grpcio): bump to fix CI issues
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2024-05-21 14:33:47 +02:00
8ad669339e
add openvoice backend ( #2334 )
...
Wip openvoice
2024-05-19 16:27:08 +02:00
86627b27f7
fix: add setuptools to all requirements-intel.txt files for python backends ( #2333 )
...
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
2024-05-16 19:15:46 +02:00
c89271b2e4
feat(llama.cpp): add distributed llama.cpp inferencing ( #2324 )
...
* feat(llama.cpp): support distributed llama.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* feat: let users tweak how chat messages are merged together
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* refactor
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Makefile: register to ALL_GRPC_BACKENDS
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* refactoring, allow disable auto-detection of backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* minor fixups
Signed-off-by: mudler <mudler@localai.io >
* feat: add cmd to start rpc-server from llama.cpp
Signed-off-by: mudler <mudler@localai.io >
* ci: add ccache
Signed-off-by: mudler <mudler@localai.io >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
Signed-off-by: mudler <mudler@localai.io >
2024-05-15 01:17:02 +02:00
e49ea0123b
feat(llama.cpp): add flash_attention and no_kv_offloading ( #2310 )
...
feat(llama.cpp): add flash_attn and no_kv_offload
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2024-05-13 19:07:51 +02:00
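A minimal sketch of how the two new llama.cpp options might be switched on in a model config; only the two option names come from the commit title, while placement and values are assumed:

```yaml
# Hypothetical llama.cpp model config sketch for the options above.
name: my-llama
backend: llama-cpp
parameters:
  model: my-model.gguf
flash_attention: true    # enable llama.cpp flash attention (assumed placement)
no_kv_offloading: false  # leave KV-cache offloading enabled; true would disable it
```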
5b79bd04a7
add setuptools for openvino ( #2301 )
2024-05-12 19:31:43 +00:00
88942e4761
fix: add missing openvino/optimum/etc libraries for Intel, fixes #2289 ( #2292 )
...
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
2024-05-12 09:01:45 +02:00
e2de8a88f7
feat: create bash library to handle install/run/test of python backends ( #2286 )
...
* feat: create bash library to handle install/run/test of python backends
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* chore: minor cleanup
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: remove incorrect LIMIT_TARGETS from parler-tts
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: update runUnittests to handle running tests from a custom test file
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* chore: document runUnittests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
---------
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
2024-05-11 18:32:46 +02:00
28a421cb1d
feat: migrate python backends from conda to uv ( #2215 )
...
* feat: migrate diffusers backend from conda to uv
- replace conda with UV for diffusers install (prototype for all
extras backends)
- add ability to build docker with one/some/all extras backends
instead of all or nothing
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: migrate autogtpq bark coqui from conda to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: convert exllama over to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: migrate exllama2 to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: migrate mamba to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: migrate parler to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: migrate petals to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: fix tests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: migrate rerankers to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: migrate sentencetransformers to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: install uv for tests-linux
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: make sure file exists before installing on intel images
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: migrate transformers backend to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: migrate transformers-musicgen to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: migrate vall-e-x to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: migrate vllm to uv
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: add uv install to the rest of test-extra.yml
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: adjust file perms on all install/run/test scripts
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: add missing accelerate dependencies
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: add some more missing dependencies to python backends
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: parler tests venv py dir fix
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: correct filename for transformers-musicgen tests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: adjust the pwd for valle tests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: cleanup and optimization work for uv migration
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: add setuptools to requirements-install for mamba
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: more size optimization work
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* feat: make installs and tests more consistent, cleanup some deps
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: cleanup
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: mamba backend is cublas only
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
* fix: uncomment lines in makefile
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
---------
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
2024-05-10 15:08:08 +02:00
fea9522982
fix: OpenVINO winograd always disabled ( #2252 )
...
Winograd convolutions were always disabled, giving an error when the inference device was CPU.
This commit implements logic to disable Winograd convolutions only if CPU or NPU is declared.
2024-05-07 08:38:58 +02:00
530bec9c64
feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants ( #2232 )
...
* feat(initializer): do not specify backends to autoload
We can simply try to autoload the backends extracted in the asset dir.
This will allow building variants of the same backend (e.g. with different instruction sets),
so that we have a single binary for all the variants.
Signed-off-by: mudler <mudler@localai.io >
* refactor(prepare): refactor out llama.cpp prepare steps
Make them idempotent so that we can re-build
Signed-off-by: mudler <mudler@localai.io >
* [TEST] feat(build): build noavx version along
Signed-off-by: mudler <mudler@localai.io >
* build: make build parallel
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* build: do not override CMAKE_ARGS
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* build: add fallback variant
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix(huggingface-langchain): fail if no token is set
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix(huggingface-langchain): rename
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix: do not autoload local-store
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix: give priority between the listed backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: mudler <mudler@localai.io >
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2024-05-04 17:56:12 +02:00
4690b534e0
feat: user defined inference device for CUDA and OpenVINO ( #2212 )
...
User-defined inference device, configured via the main_gpu parameter
2024-05-02 09:54:29 +02:00
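The commit exposes device selection through a main_gpu parameter; below is a sketch under the assumption that it sits in the model config alongside other backend options. Only the main_gpu name comes from the commit, and the device value shown is illustrative:

```yaml
# Hypothetical sketch of user-defined inference device selection.
name: my-accelerated-model
backend: transformers
parameters:
  model: some/model
main_gpu: "GPU.1"   # e.g. an OpenVINO device string; for CUDA a device index could be used instead (assumed)
```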
f7aabf1b50
fix: bring everything onto the same GRPC version to fix tests ( #2199 )
...
fix: more places where we are installing grpc that need a version specified
fix: attempt to fix metal tests
fix: metal/brew is forcing an update, they don't have 1.58 available anymore
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com >
2024-04-30 19:12:15 +00:00
e38610e521
feat: OpenVINO acceleration for embeddings in transformer backend ( #2190 )
...
OpenVINO acceleration for embeddings
New argument type: OVModelForFeatureExtraction
2024-04-30 10:13:04 +02:00
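Given the new OVModelForFeatureExtraction argument type, an embeddings model might be declared roughly as follows; the type name comes from the commit, while the surrounding fields and values are assumptions:

```yaml
# Hypothetical OpenVINO embeddings model config sketch.
name: ov-embedder
backend: transformers
type: OVModelForFeatureExtraction   # new argument type from the commit
embeddings: true
parameters:
  model: sentence-transformers/all-MiniLM-L6-v2
```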
c4f958e11b
refactor(application): introduce application global state ( #2072 )
...
* start breaking up the giant channel refactor now that it's better understood - easier to merge bites
Signed-off-by: Dave Lee <dave@gray101.com >
* add concurrency and base64 back in, along with new base64 tests.
Signed-off-by: Dave Lee <dave@gray101.com >
* Automatic rename of whisper.go's Result to TranscriptResult
Signed-off-by: Dave Lee <dave@gray101.com >
* remove pkg/concurrency - significant changes coming in split 2
Signed-off-by: Dave Lee <dave@gray101.com >
* fix comments
Signed-off-by: Dave Lee <dave@gray101.com >
* add list_model service as another low-risk service to get it out of the way
Signed-off-by: Dave Lee <dave@gray101.com >
* split backend config loader into a separate file from the actual config struct. No changes yet, just reduce cognitive load with smaller files of logical blocks
Signed-off-by: Dave Lee <dave@gray101.com >
* rename state.go ==> application.go
Signed-off-by: Dave Lee <dave@gray101.com >
* fix lost import?
Signed-off-by: Dave Lee <dave@gray101.com >
---------
Signed-off-by: Dave Lee <dave@gray101.com >
2024-04-29 17:42:37 +00:00
b7ea9602f5
fix: undefined symbol: iJIT_NotifyEvent when importing torch #2153 ( #2179 )
...
* add extra index to Intel repository
* Update install.sh
2024-04-29 15:11:09 +02:00
c9451cb604
Bump oneapi-basekit, optimum and openvino ( #2139 )
...
* Bump oneapi-basekit, optimum and openvino
* Changed PERFORMANCE_HINT to CUMULATIVE_THROUGHPUT
Minor latency change for first token but about 10-15% speedup on token generation.
2024-04-26 16:20:43 +02:00
44bc540bb5
fix: security scanner dislikes runCommand function arguments ( #2140 )
...
runCommand ==> ffmpegCommand. No functional changes, but makes it clear to the security scanner and future developers that this function cannot run arbitrary commands
Signed-off-by: Dave Lee <dave@gray101.com >
2024-04-26 10:33:12 +02:00
b664edde29
feat(rerankers): Add new backend, support jina rerankers API ( #2121 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2024-04-25 00:19:02 +02:00
2fb34b00b5
Incl ocv pkg for diffusers utils ( #2115 )
...
* Update diffusers.yml
Signed-off-by: jtwolfe <jamie.t.wolfe@gmail.com >
* Update diffusers-rocm.yml
Signed-off-by: jtwolfe <jamie.t.wolfe@gmail.com >
---------
Signed-off-by: jtwolfe <jamie.t.wolfe@gmail.com >
2024-04-24 09:17:49 +02:00
f718a391c0
fix missing TrustRemoteCode in OpenVINO model load ( #2114 )
2024-04-24 00:45:37 +00:00
8e36fe9b6f
Transformers Backend: max_tokens adherence to OpenAI API ( #2108 )
...
max_tokens adherence to the OpenAI API
improve adherence to the OpenAI API when max_tokens is omitted or equal to 0 in the request
2024-04-23 18:42:17 +02:00
66b002458d
Transformer Backend: Implementing use_tokenizer_template and stop_prompts options ( #2090 )
...
* fix regression #1971
fixes regression #1971 introduced by intel_extension_for_transformers==1.4
* UseTokenizerTemplate and StopPrompt
Implementation of use_tokenizer_template and stopwords options
2024-04-21 16:20:25 +00:00
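A sketch of the two options as they might appear in a transformers-backend model config; the option names come from the commit, while placement and values are assumptions:

```yaml
# Hypothetical sketch of use_tokenizer_template and stopwords.
name: my-chat-model
backend: transformers
use_tokenizer_template: true   # build prompts from the tokenizer's chat template
stopwords:
  - "<|im_end|>"               # illustrative stop word
parameters:
  model: some/chat-model
```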
03adc1f60d
Add tensor_parallel_size setting to vllm setting items ( #2085 )
...
Signed-off-by: Taikono-Himazin <kazu@po.harenet.ne.jp >
2024-04-20 14:37:02 +00:00
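For the vLLM backend, the new setting would plausibly be set like this; only the tensor_parallel_size name comes from the commit, and its placement and the value of 2 are assumptions:

```yaml
# Hypothetical vLLM model config sketch.
name: my-vllm-model
backend: vllm
parameters:
  model: mistralai/Mistral-7B-Instruct-v0.2
tensor_parallel_size: 2   # shard the model across two GPUs (illustrative)
```

The name mirrors vLLM's own tensor_parallel_size engine argument, which controls how many GPUs the model is sharded across.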
af9e5a2d05
Revert #1963 ( #2056 )
...
* Revert "fix(fncall): fix regression introduced in #1963 (#2048 )"
This reverts commit 6b06d4e0af.
* Revert "fix: action-tmate back to upstream, dead code removal (#2038 )"
This reverts commit fdec8a9d00.
* Revert "feat(grpc): return consumed token count and update response accordingly (#2035 )"
This reverts commit e843d7df0e.
* Revert "refactor: backend/service split, channel-based llm flow (#1963 )"
This reverts commit eed5706994.
* feat(grpc): return consumed token count and update response accordingly
Fixes: #1920
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2024-04-17 23:33:49 +02:00
e843d7df0e
feat(grpc): return consumed token count and update response accordingly ( #2035 )
...
Fixes: #1920
2024-04-15 19:47:11 +02:00
0fdff26924
feat(parler-tts): Add new backend ( #2027 )
...
* feat(parler-tts): Add new backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* feat(parler-tts): try downgrade protobuf
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* feat(parler-tts): add parler conda env
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Revert "feat(parler-tts): try downgrade protobuf"
This reverts commit bd5941d5cfc00676b45a99f71debf3c34249cf3c.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* deps: add grpc
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix: try to gen proto with same environment
* workaround
* Revert "fix: try to gen proto with same environment"
This reverts commit 998c745e2f.
* Workaround fixup
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
Co-authored-by: Dave <dave@gray101.com >
2024-04-13 18:59:21 +02:00
eed5706994
refactor: backend/service split, channel-based llm flow ( #1963 )
...
Refactor: channel-based llm flow and services split
---------
Signed-off-by: Dave Lee <dave@gray101.com >
2024-04-13 09:45:34 +02:00