#!/usr/bin/env python3
from concurrent import futures
import traceback
import argparse
from collections import defaultdict
from enum import Enum
import signal
import sys
import time
import os
from PIL import Image
import torch
import backend_pb2
import backend_pb2_grpc
import grpc
from diffusers import StableDiffusion3Pipeline, StableDiffusionXLPipeline, StableDiffusionDepth2ImgPipeline, DPMSolverMultistepScheduler, StableDiffusionPipeline, DiffusionPipeline, \
    EulerAncestralDiscreteScheduler, FluxPipeline, FluxTransformer2DModel
from diffusers import StableDiffusionImg2ImgPipeline, AutoPipelineForText2Image, ControlNetModel, StableVideoDiffusionPipeline
from diffusers.pipelines.stable_diffusion import safety_checker
from diffusers.utils import load_image, export_to_video
from compel import Compel, ReturnedEmbeddingsType
from optimum.quanto import freeze, qfloat8, quantize
from transformers import CLIPTextModel, T5EncoderModel
from safetensors.torch import load_file

_ONE_DAY_IN_SECONDS = 60 * 60 * 24
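
# Runtime behaviour of this backend is tuned through environment variables;
# unset variables fall back to the defaults used below.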
COMPEL = os.environ.get("COMPEL", "0") == "1"
XPU = os.environ.get("XPU", "0") == "1"
CLIPSKIP = os.environ.get("CLIPSKIP", "1") == "1"
SAFETENSORS = os.environ.get("SAFETENSORS", "1") == "1"
CHUNK_SIZE = os.environ.get("CHUNK_SIZE", "8")
FPS = os.environ.get("FPS", "7")
DISABLE_CPU_OFFLOAD = os.environ.get("DISABLE_CPU_OFFLOAD", "0") == "1"
FRAMES = os.environ.get("FRAMES", "64")
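
# When XPU=1, Intel's extension for PyTorch (ipex) is imported so the Intel
# GPU ("xpu") device can be used; the detected device name is printed at startup.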
if XPU:
    import intel_extension_for_pytorch as ipex

    print(ipex.xpu.get_device_name(0))

# If PYTHON_GRPC_MAX_WORKERS is specified in the environment, use it; otherwise default to 1
MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))


# https://github.com/CompVis/stable-diffusion/issues/239#issuecomment-1627615287
def sc(self, clip_input, images): return images, [False for i in images]


# edit the StableDiffusionSafetyChecker class so that, when called, it just returns the images and an array of False values (i.e. no image is flagged as NSFW)
safety_checker.StableDiffusionSafetyChecker.forward = sc
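# Note: because forward is replaced on the class itself, NSFW filtering is
# effectively disabled for every pipeline loaded by this backend.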
2023-08-17 21:38:59 +00:00
from diffusers . schedulers import (
DDIMScheduler ,
DPMSolverMultistepScheduler ,
DPMSolverSinglestepScheduler ,
EulerAncestralDiscreteScheduler ,
EulerDiscreteScheduler ,
HeunDiscreteScheduler ,
KDPM2AncestralDiscreteScheduler ,
KDPM2DiscreteScheduler ,
LMSDiscreteScheduler ,
PNDMScheduler ,
UniPCMultistepScheduler ,
)
2024-07-16 16:58:45 +00:00
2023-08-17 21:38:59 +00:00
# The scheduler list mapping was taken from here: https://github.com/neggles/animatediff-cli/blob/6f336f5f4b5e38e85d7f06f1744ef42d0a45f2a7/src/animatediff/schedulers.py#L39
# Credits to https://github.com/neggles
# See https://github.com/huggingface/diffusers/issues/4167 for more details on sched mapping from A1111
class DiffusionScheduler(str, Enum):
    ddim = "ddim"  # DDIM
    pndm = "pndm"  # PNDM
    heun = "heun"  # Heun
    unipc = "unipc"  # UniPC
    euler = "euler"  # Euler
    euler_a = "euler_a"  # Euler a
    lms = "lms"  # LMS
    k_lms = "k_lms"  # LMS Karras
    dpm_2 = "dpm_2"  # DPM2
    k_dpm_2 = "k_dpm_2"  # DPM2 Karras
    dpm_2_a = "dpm_2_a"  # DPM2 a
    k_dpm_2_a = "k_dpm_2_a"  # DPM2 a Karras
    dpmpp_2m = "dpmpp_2m"  # DPM++ 2M
    k_dpmpp_2m = "k_dpmpp_2m"  # DPM++ 2M Karras
    dpmpp_sde = "dpmpp_sde"  # DPM++ SDE
    k_dpmpp_sde = "k_dpmpp_sde"  # DPM++ SDE Karras
    dpmpp_2m_sde = "dpmpp_2m_sde"  # DPM++ 2M SDE
    k_dpmpp_2m_sde = "k_dpmpp_2m_sde"  # DPM++ 2M SDE Karras


def get_scheduler(name: str, config: dict = {}):
    is_karras = name.startswith("k_")
    if is_karras:
        # strip the k_ prefix and add the karras sigma flag to config
        name = name.lstrip("k_")
        config["use_karras_sigmas"] = True

    if name == DiffusionScheduler.ddim:
        sched_class = DDIMScheduler
    elif name == DiffusionScheduler.pndm:
        sched_class = PNDMScheduler
    elif name == DiffusionScheduler.heun:
        sched_class = HeunDiscreteScheduler
    elif name == DiffusionScheduler.unipc:
        sched_class = UniPCMultistepScheduler
    elif name == DiffusionScheduler.euler:
        sched_class = EulerDiscreteScheduler
    elif name == DiffusionScheduler.euler_a:
        sched_class = EulerAncestralDiscreteScheduler
    elif name == DiffusionScheduler.lms:
        sched_class = LMSDiscreteScheduler
    elif name == DiffusionScheduler.dpm_2:
        # Equivalent to DPM2 in K-Diffusion
        sched_class = KDPM2DiscreteScheduler
    elif name == DiffusionScheduler.dpm_2_a:
        # Equivalent to `DPM2 a` in K-Diffusion
        sched_class = KDPM2AncestralDiscreteScheduler
    elif name == DiffusionScheduler.dpmpp_2m:
        # Equivalent to `DPM++ 2M` in K-Diffusion
        sched_class = DPMSolverMultistepScheduler
        config["algorithm_type"] = "dpmsolver++"
        config["solver_order"] = 2
    elif name == DiffusionScheduler.dpmpp_sde:
        # Equivalent to `DPM++ SDE` in K-Diffusion
        sched_class = DPMSolverSinglestepScheduler
    elif name == DiffusionScheduler.dpmpp_2m_sde:
        # Equivalent to `DPM++ 2M SDE` in K-Diffusion
        sched_class = DPMSolverMultistepScheduler
        config["algorithm_type"] = "sde-dpmsolver++"
    else:
        raise ValueError(f"Invalid scheduler '{'k_' if is_karras else ''}{name}'")

    return sched_class.from_config(config)
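
# Usage sketch (illustrative; mirrors how SchedulerType is applied in LoadModel below):
# swap a loaded pipeline's scheduler for the Karras variant of DPM++ 2M, reusing the
# pipeline's current scheduler configuration:
#   pipe.scheduler = get_scheduler("k_dpmpp_2m", pipe.scheduler.config)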

# Implement the BackendServicer class with the service methods
class BackendServicer(backend_pb2_grpc.BackendServicer):
    def Health(self, request, context):
        return backend_pb2.Reply(message=bytes("OK", 'utf-8'))

    def LoadModel(self, request, context):
        try:
            print(f"Loading model {request.Model}...", file=sys.stderr)
            print(f"Request {request}", file=sys.stderr)
            torchType = torch.float32
            variant = None

            if request.F16Memory:
                torchType = torch.float16
                variant = "fp16"

            local = False
            modelFile = request.Model

            self.cfg_scale = 7
            self.PipelineType = request.PipelineType

            if request.CFGScale != 0:
                self.cfg_scale = request.CFGScale

            clipmodel = "Lykon/dreamshaper-8"
            if request.CLIPModel != "":
                clipmodel = request.CLIPModel
            clipsubfolder = "text_encoder"
            if request.CLIPSubfolder != "":
                clipsubfolder = request.CLIPSubfolder

            # Check if ModelFile exists
            if request.ModelFile != "":
                if os.path.exists(request.ModelFile):
                    local = True
                    modelFile = request.ModelFile

            fromSingleFile = request.Model.startswith("http") or request.Model.startswith("/") or local
            self.img2vid = False
            self.txt2vid = False
            ## img2img
            if (request.PipelineType == "StableDiffusionImg2ImgPipeline") or (request.IMG2IMG and request.PipelineType == ""):
                if fromSingleFile:
                    self.pipe = StableDiffusionImg2ImgPipeline.from_single_file(modelFile,
                                                                                torch_dtype=torchType)
                else:
                    self.pipe = StableDiffusionImg2ImgPipeline.from_pretrained(request.Model,
                                                                               torch_dtype=torchType)
            elif request.PipelineType == "StableDiffusionDepth2ImgPipeline":
                self.pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(request.Model,
                                                                             torch_dtype=torchType)
            ## img2vid
            elif request.PipelineType == "StableVideoDiffusionPipeline":
                self.img2vid = True
                self.pipe = StableVideoDiffusionPipeline.from_pretrained(
                    request.Model, torch_dtype=torchType, variant=variant
                )
                if not DISABLE_CPU_OFFLOAD:
                    self.pipe.enable_model_cpu_offload()
            ## text2img
            elif request.PipelineType == "AutoPipelineForText2Image" or request.PipelineType == "":
                self.pipe = AutoPipelineForText2Image.from_pretrained(request.Model,
                                                                      torch_dtype=torchType,
                                                                      use_safetensors=SAFETENSORS,
                                                                      variant=variant)
            elif request.PipelineType == "StableDiffusionPipeline":
                if fromSingleFile:
                    self.pipe = StableDiffusionPipeline.from_single_file(modelFile,
                                                                         torch_dtype=torchType)
                else:
                    self.pipe = StableDiffusionPipeline.from_pretrained(request.Model,
                                                                        torch_dtype=torchType)
            elif request.PipelineType == "DiffusionPipeline":
                self.pipe = DiffusionPipeline.from_pretrained(request.Model,
                                                              torch_dtype=torchType)
            elif request.PipelineType == "VideoDiffusionPipeline":
                self.txt2vid = True
                self.pipe = DiffusionPipeline.from_pretrained(request.Model,
                                                              torch_dtype=torchType)
            elif request.PipelineType == "StableDiffusionXLPipeline":
                if fromSingleFile:
                    self.pipe = StableDiffusionXLPipeline.from_single_file(modelFile,
                                                                           torch_dtype=torchType,
                                                                           use_safetensors=True)
                else:
                    self.pipe = StableDiffusionXLPipeline.from_pretrained(
                        request.Model,
                        torch_dtype=torchType,
                        use_safetensors=True,
                        variant=variant)
            elif request.PipelineType == "StableDiffusion3Pipeline":
                if fromSingleFile:
                    self.pipe = StableDiffusion3Pipeline.from_single_file(modelFile,
                                                                          torch_dtype=torchType,
                                                                          use_safetensors=True)
                else:
                    self.pipe = StableDiffusion3Pipeline.from_pretrained(
                        request.Model,
                        torch_dtype=torchType,
                        use_safetensors=True,
                        variant=variant)
            elif request.PipelineType == "FluxPipeline":
                if fromSingleFile:
                    self.pipe = FluxPipeline.from_single_file(modelFile,
                                                              torch_dtype=torchType,
                                                              use_safetensors=True)
                else:
                    self.pipe = FluxPipeline.from_pretrained(
                        request.Model,
                        torch_dtype=torch.bfloat16)
                if request.LowVRAM:
                    self.pipe.enable_model_cpu_offload()
            elif request.PipelineType == "FluxTransformer2DModel":
                dtype = torch.bfloat16
                # specify from environment or default to "ChuckMcSneed/FLUX.1-dev"
                bfl_repo = os.environ.get("BFL_REPO", "ChuckMcSneed/FLUX.1-dev")
                # quantize (qfloat8) and freeze the transformer and the T5 text encoder to reduce VRAM usage
                transformer = FluxTransformer2DModel.from_single_file(modelFile, torch_dtype=dtype)
                quantize(transformer, weights=qfloat8)
                freeze(transformer)
                text_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)
                quantize(text_encoder_2, weights=qfloat8)
                freeze(text_encoder_2)
                self.pipe = FluxPipeline.from_pretrained(bfl_repo, transformer=None, text_encoder_2=None, torch_dtype=dtype)
                self.pipe.transformer = transformer
                self.pipe.text_encoder_2 = text_encoder_2
                if request.LowVRAM:
                    self.pipe.enable_model_cpu_offload()

            if CLIPSKIP and request.CLIPSkip != 0:
                self.clip_skip = request.CLIPSkip
            else:
                self.clip_skip = 0

            # torch_dtype needs to be customized. float16 for GPU, float32 for CPU
            # TODO: this needs to be customized
            if request.SchedulerType != "":
                self.pipe.scheduler = get_scheduler(request.SchedulerType, self.pipe.scheduler.config)

            if COMPEL:
                self.compel = Compel(
                    tokenizer=[self.pipe.tokenizer, self.pipe.tokenizer_2],
                    text_encoder=[self.pipe.text_encoder, self.pipe.text_encoder_2],
                    returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED,
                    requires_pooled=[False, True]
                )
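            # Note (illustrative addition, not in the original source): when COMPEL is enabled,
            # prompts may use the compel library's weighting syntax, for example
            # "a portrait of a cat++ in a (dimly lit)0.8 room"; the prompt is converted into
            # prompt embeddings in GenerateImage below.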

            if request.ControlNet:
                self.controlnet = ControlNetModel.from_pretrained(
                    request.ControlNet, torch_dtype=torchType, variant=variant
                )
                self.pipe.controlnet = self.controlnet
            else:
                self.controlnet = None

            if request.LoraAdapter and not os.path.isabs(request.LoraAdapter):
                # modify LoraAdapter to be relative to modelFileBase
                request.LoraAdapter = os.path.join(request.ModelPath, request.LoraAdapter)

            device = "cpu" if not request.CUDA else "cuda"
            self.device = device
            if request.LoraAdapter:
                # Check if it's a local file and not a directory (we load LoRA differently for a safetensors file)
                if os.path.exists(request.LoraAdapter) and not os.path.isdir(request.LoraAdapter):
                    self.pipe.load_lora_weights(request.LoraAdapter)
                else:
                    self.pipe.unet.load_attn_procs(request.LoraAdapter)

            if len(request.LoraAdapters) > 0:
                i = 0
                adapters_name = []
                adapters_weights = []
                for adapter in request.LoraAdapters:
                    if not os.path.isabs(adapter):
                        adapter = os.path.join(request.ModelPath, adapter)
                    self.pipe.load_lora_weights(adapter, adapter_name=f"adapter_{i}")
                    adapters_name.append(f"adapter_{i}")
                    i += 1

                for adapters_weight in request.LoraScales:
                    adapters_weights.append(adapters_weight)

                self.pipe.set_adapters(adapters_name, adapter_weights=adapters_weights)

            if request.CUDA:
                self.pipe.to('cuda')
                if self.controlnet:
                    self.controlnet.to('cuda')
            if XPU:
                self.pipe = self.pipe.to("xpu")
        except Exception as err:
            return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
        # Implement your logic here for the LoadModel service
        # Replace this with your desired response
        return backend_pb2.Result(message="Model loaded successfully", success=True)
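
    # Illustrative client-side sketch (assumptions: the ModelOptions message name and the
    # BackendStub class come from the generated backend stubs; field names are taken from
    # their usage above):
    #   channel = grpc.insecure_channel("localhost:50051")
    #   stub = backend_pb2_grpc.BackendStub(channel)
    #   stub.LoadModel(backend_pb2.ModelOptions(Model="Lykon/dreamshaper-8",
    #                                           PipelineType="StableDiffusionPipeline",
    #                                           F16Memory=True, CUDA=True))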

    # https://github.com/huggingface/diffusers/issues/3064
    def load_lora_weights(self, checkpoint_path, multiplier, device, dtype):
        LORA_PREFIX_UNET = "lora_unet"
        LORA_PREFIX_TEXT_ENCODER = "lora_te"
        # load LoRA weight from .safetensors
        state_dict = load_file(checkpoint_path, device=device)

        updates = defaultdict(dict)
        for key, value in state_dict.items():
            # it is suggested to print out the key, it usually will be something like below
            # "lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_down.weight"
            layer, elem = key.split('.', 1)
            updates[layer][elem] = value

        # directly update weight in diffusers model
        for layer, elems in updates.items():

            if "text" in layer:
                layer_infos = layer.split(LORA_PREFIX_TEXT_ENCODER + "_")[-1].split("_")
                curr_layer = self.pipe.text_encoder
            else:
                layer_infos = layer.split(LORA_PREFIX_UNET + "_")[-1].split("_")
                curr_layer = self.pipe.unet

            # find the target layer
            temp_name = layer_infos.pop(0)
            while len(layer_infos) > -1:
                try:
                    curr_layer = curr_layer.__getattr__(temp_name)
                    if len(layer_infos) > 0:
                        temp_name = layer_infos.pop(0)
                    elif len(layer_infos) == 0:
                        break
                except Exception:
                    if len(temp_name) > 0:
                        temp_name += "_" + layer_infos.pop(0)
                    else:
                        temp_name = layer_infos.pop(0)

            # get elements for this layer
            weight_up = elems['lora_up.weight'].to(dtype)
            weight_down = elems['lora_down.weight'].to(dtype)
            alpha = elems['alpha'] if 'alpha' in elems else None
            if alpha:
                alpha = alpha.item() / weight_up.shape[1]
            else:
                alpha = 1.0

            # update weight
            if len(weight_up.shape) == 4:
                curr_layer.weight.data += multiplier * alpha * torch.mm(weight_up.squeeze(3).squeeze(2), weight_down.squeeze(3).squeeze(2)).unsqueeze(2).unsqueeze(3)
            else:
                curr_layer.weight.data += multiplier * alpha * torch.mm(weight_up, weight_down)
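
    # Usage sketch (illustrative; the path is hypothetical and this manual merge helper is not
    # called from LoadModel above, which relies on the diffusers pipeline loaders instead):
    #   self.load_lora_weights("/models/my_lora.safetensors", multiplier=1.0,
    #                          device="cpu", dtype=torch.float16)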

    def GenerateImage(self, request, context):
        prompt = request.positive_prompt

        steps = 1
        if request.step != 0:
            steps = request.step

        # create a dictionary of values for the parameters
        options = {
            "negative_prompt": request.negative_prompt,
            "num_inference_steps": steps,
        }

        if request.src != "" and not self.controlnet and not self.img2vid:
            image = Image.open(request.src)
            options["image"] = image
        elif self.controlnet and request.src:
            pose_image = load_image(request.src)
            options["image"] = pose_image

        if CLIPSKIP and self.clip_skip != 0:
            options["clip_skip"] = self.clip_skip

        # Get the keys that we will build the args for our pipe for
        keys = options.keys()
        if request.EnableParameters != "":
            keys = [key.strip() for key in request.EnableParameters.split(",")]
        if request.EnableParameters == "none":
            keys = []

        # create a dictionary of parameters by using the keys from EnableParameters and the values from defaults
        kwargs = {key: options.get(key) for key in keys if key in options}
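        # Example (illustrative): EnableParameters="negative_prompt,num_inference_steps" keeps
        # both defaults built above, while EnableParameters="none" calls the pipeline with the
        # prompt only and no extra kwargs.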
        # Set seed
        if request.seed > 0:
            kwargs["generator"] = torch.Generator(device=self.device).manual_seed(
                request.seed
            )
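        # The seed comes from the image generation request; per the original PR description it can
        # be set through the API (request.seed defaults to 0 when the client omits it):
        #   curl http://localhost:8080/v1/images/generations \
        #     -H "Content-Type: application/json" \
        #     -d '{"model": "stablediffusion", "prompt": "prompt", "n": 1, "step": 51, "size": "512x512", "seed": 3}'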

        if self.PipelineType == "FluxPipeline":
            kwargs["max_sequence_length"] = 256

        if request.width:
            kwargs["width"] = request.width

        if request.height:
            kwargs["height"] = request.height

        if self.PipelineType == "FluxTransformer2DModel":
            kwargs["output_type"] = "pil"
            kwargs["generator"] = torch.Generator("cpu").manual_seed(0)

        if self.img2vid:
            # Load the conditioning image
            image = load_image(request.src)
            image = image.resize((1024, 576))

            generator = torch.manual_seed(request.seed)
            frames = self.pipe(image, guidance_scale=self.cfg_scale, decode_chunk_size=CHUNK_SIZE, generator=generator).frames[0]
            export_to_video(frames, request.dst, fps=FPS)
            return backend_pb2.Result(message="Media generated successfully", success=True)

        if self.txt2vid:
            video_frames = self.pipe(prompt, guidance_scale=self.cfg_scale, num_inference_steps=steps, num_frames=int(FRAMES)).frames
            export_to_video(video_frames, request.dst)
            return backend_pb2.Result(message="Media generated successfully", success=True)

        print(f"Generating image with {kwargs=}", file=sys.stderr)

        image = {}
        if COMPEL:
            conditioning, pooled = self.compel.build_conditioning_tensor(prompt)
            kwargs["prompt_embeds"] = conditioning
            kwargs["pooled_prompt_embeds"] = pooled
            # pass the kwargs dictionary to the self.pipe method
            image = self.pipe(
                guidance_scale=self.cfg_scale,
                **kwargs
            ).images[0]
        else:
            # pass the kwargs dictionary to the self.pipe method
            image = self.pipe(
                prompt,
                guidance_scale=self.cfg_scale,
                **kwargs
            ).images[0]

        # save the result
        image.save(request.dst)

        return backend_pb2.Result(message="Media generated", success=True)
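
    # Illustrative client-side sketch (assumptions: the GenerateImageRequest message name comes
    # from the generated backend stubs; field names are taken from their usage above):
    #   stub.GenerateImage(backend_pb2.GenerateImageRequest(
    #       positive_prompt="a lighthouse at dusk", negative_prompt="blurry",
    #       step=25, width=512, height=512, seed=3, dst="/tmp/out.png"))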


def serve(address):
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=MAX_WORKERS))
    backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
    server.add_insecure_port(address)
    server.start()
    print("Server started. Listening on: " + address, file=sys.stderr)

    # Define the signal handler function
    def signal_handler(sig, frame):
        print("Received termination signal. Shutting down...")
        server.stop(0)
        sys.exit(0)

    # Set the signal handlers for SIGINT and SIGTERM
    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)

    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run the gRPC server.")
    parser.add_argument(
        "--addr", default="localhost:50051", help="The address to bind the server to."
    )
    args = parser.parse_args()

    serve(args.addr)