Mirror of https://github.com/mudler/LocalAI.git (synced 2024-12-21 13:37:51 +00:00)

Commit f347e51927
* feat(autogptq): add a separate conda environment for autogptq (#1137)

  **Description**

  This PR relates to #1117.

  **Notes for Reviewers**

  Here we pin the versions of the dependencies, so the backend keeps working even when upstream releases new versions. I changed the import order to satisfy pylint; the logic of the code is unchanged. I will investigate writing test cases for every backend: I can run the service in my environment, but there is currently no way to test it, so I am not confident about it. A README.md is added at the `grpc` root with the common commands for creating `conda` environments; it can serve as a reference for documenting extra gRPC backends.

  Signed-off-by: GitHub <noreply@github.com>
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* [Extra backend] Add separate environment for ttsbark (#1141)

  **Description**

  This PR relates to #1117.

  **Notes for Reviewers**

  Same as the previous PR:

  * The code changes are limited to the import order, plus some added code comments.
  * Adds a configuration for the `conda` environment.
  * Adds a simple test case checking that the service can start up in the current `conda` environment. It succeeds in VSCode but does not work out of the box in the terminal, so it is hard to say how useful the test case really is.

  **[Signed commits](../CONTRIBUTING.md#signing-off-on-commits-developer-certificate-of-origin)**
  - [x] Yes, I signed my commits.

  <!--
  Thank you for contributing to LocalAI!

  Contributing Conventions
  -------------------------

  The draft above helps to give a quick overview of your PR.

  Remember to remove this comment and to at least:

  1. Include descriptive PR titles with [<component-name>] prepended. We use [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/).
  2. Build and test your changes before submitting a PR (`make build`).
  3. Sign your commits
  4. **Tag maintainer:** for a quicker response, tag the relevant maintainer (see below).
  5. **X/Twitter handle:** we announce bigger features on X/Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!

  By following the community's contribution conventions upfront, the review process will be accelerated and your PR merged more quickly. If no one reviews your PR within a few days, please @-mention @mudler.
  -->

  Signed-off-by: GitHub <noreply@github.com>
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): add make target and entrypoints for the dockerfile

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): Add separate conda env for diffusers (#1145)

  **Description**

  This PR relates to #1117.

  **Notes for Reviewers**

  * Adds the `conda` env `diffusers.yml`
  * Adds a Makefile to create it automatically
  * Adds `run.sh` to support running it as an extra backend
  * Also adds it to the main Dockerfile
  * Adds a make command to the root Makefile
  * Tested: the server can start up under the env

  Signed-off-by: GitHub <noreply@github.com>
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): Add separate env for vllm (#1148)

  **Description**

  This PR relates to #1117.

  **Notes for Reviewers**

  * The gRPC server can be started as normal
  * The test case can be triggered in VSCode
  * As in the other PRs of this kind: adds `vllm.yml` and a Makefile, adds `run.sh` to the main Dockerfile, and adds a command to the main Makefile

  **[Signed commits](../CONTRIBUTING.md#signing-off-on-commits-developer-certificate-of-origin)**
  - [x] Yes, I signed my commits.

  Signed-off-by: GitHub <noreply@github.com>
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): Add separate env for huggingface (#1146)

  **Description**

  This PR relates to #1117.

  **Notes for Reviewers**

  * Adds the conda env `huggingface.yml`
  * Changes the import order and removes unused packages
  * Adds `run.sh` and a make command to the main Dockerfile and Makefile
  * Adds test cases. They can be triggered and succeed under the VSCode Python extension, but hang when run with `python -m unittest test_huggingface.py` in the terminal:

  ```
  Running tests (unittest): /workspaces/LocalAI/extra/grpc/huggingface
  Running tests: /workspaces/LocalAI/extra/grpc/huggingface/test_huggingface.py::TestBackendServicer::test_embedding
    /workspaces/LocalAI/extra/grpc/huggingface/test_huggingface.py::TestBackendServicer::test_load_model
    /workspaces/LocalAI/extra/grpc/huggingface/test_huggingface.py::TestBackendServicer::test_server_startup
  ./test_huggingface.py::TestBackendServicer::test_embedding Passed
  ./test_huggingface.py::TestBackendServicer::test_load_model Passed
  ./test_huggingface.py::TestBackendServicer::test_server_startup Passed
  Total number of tests expected to run: 3
  Total number of tests run: 3
  Total number of tests passed: 3
  Total number of tests failed: 0
  Total number of tests failed with errors: 0
  Total number of tests skipped: 0
  Finished running tests!
  ```

  **[Signed commits](../CONTRIBUTING.md#signing-off-on-commits-developer-certificate-of-origin)**
  - [x] Yes, I signed my commits.

  Signed-off-by: GitHub <noreply@github.com>
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): Add the separate conda env for VALL-E X (#1147)

  **Description**

  This PR relates to #1117.

  **Notes for Reviewers**

  * The gRPC server cannot start up:

  ```
  (ttsvalle) @Aisuko ➜ /workspaces/LocalAI (feat/vall-e-x) $ /opt/conda/envs/ttsvalle/bin/python /workspaces/LocalAI/extra/grpc/vall-e-x/ttsvalle.py
  Traceback (most recent call last):
    File "/workspaces/LocalAI/extra/grpc/vall-e-x/ttsvalle.py", line 14, in <module>
      from utils.generation import SAMPLE_RATE, generate_audio, preload_models
  ModuleNotFoundError: No module named 'utils'
  ```

  The installation steps follow https://github.com/Plachtaa/VALL-E-X#-installation, run under the `ttsvalle` conda env:

  ```
  git clone https://github.com/Plachtaa/VALL-E-X.git
  cd VALL-E-X
  pip install -r requirements.txt
  ```

  **[Signed commits](../CONTRIBUTING.md#signing-off-on-commits-developer-certificate-of-origin)**
  - [x] Yes, I signed my commits.

  Signed-off-by: GitHub <noreply@github.com>
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: set image type

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): Add separate conda env for exllama (#1149)

  Signed-off-by: Aisuko <urakiny@gmail.com>
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Setup conda
* Set image_type arg
* ci: prepare only conda env in tests
* Dockerfile: comment manual pip calls
* conda: add conda to PATH
* fixes
* add shebang
* Fixups
* file perms
* debug
* Install new conda in the worker
* Disable GPU tests for now until the worker is back
* Rename workflows
* debug
* Fixup conda install
* fixup(wrapper): pass args

---------

Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Aisuko <urakiny@gmail.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Aisuko <urakiny@gmail.com>
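The backend smoke tests described above use the stock `unittest` pattern (run as `python -m unittest test_huggingface.py`). A minimal self-contained sketch of that shape — `BackendServicer` here is a dummy stand-in with the same method names, not the real gRPC servicer from `extra/grpc/huggingface`:

```python
# Sketch of the unittest-based backend smoke tests described above.
# BackendServicer is a hypothetical stand-in; the real servicer talks gRPC.
import unittest


class BackendServicer:
    """Stand-in for a backend gRPC servicer (hypothetical)."""

    def __init__(self):
        self.loaded = False

    def LoadModel(self, name):
        self.loaded = True
        return {"success": True, "model": name}

    def Embedding(self, text):
        if not self.loaded:
            raise RuntimeError("model not loaded")
        return [float(len(text))]  # dummy fixed-shape embedding


class TestBackendServicer(unittest.TestCase):
    def setUp(self):
        self.servicer = BackendServicer()

    def test_server_startup(self):
        # the real test would start the gRPC server and probe its port
        self.assertFalse(self.servicer.loaded)

    def test_load_model(self):
        self.assertTrue(self.servicer.LoadModel("bert")["success"])

    def test_embedding(self):
        self.servicer.LoadModel("bert")
        self.assertEqual(self.servicer.Embedding("hi"), [2.0])


if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Because the suite is plain `unittest`, the same file runs under the VSCode Python extension and in the terminal — which is what makes the "passes in VSCode, hangs in the terminal" behavior noted above worth investigating.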
100 lines
2.6 KiB
YAML
name: vllm
channels:
  - defaults
dependencies:
  - _libgcc_mutex=0.1=main
  - _openmp_mutex=5.1=1_gnu
  - bzip2=1.0.8=h7b6447c_0
  - ca-certificates=2023.08.22=h06a4308_0
  - ld_impl_linux-64=2.38=h1181459_1
  - libffi=3.4.4=h6a678d5_0
  - libgcc-ng=11.2.0=h1234567_1
  - libgomp=11.2.0=h1234567_1
  - libstdcxx-ng=11.2.0=h1234567_1
  - libuuid=1.41.5=h5eee18b_0
  - ncurses=6.4=h6a678d5_0
  - openssl=3.0.11=h7f8727e_2
  - pip=23.2.1=py311h06a4308_0
  - python=3.11.5=h955ad1f_0
  - readline=8.2=h5eee18b_0
  - setuptools=68.0.0=py311h06a4308_0
  - sqlite=3.41.2=h5eee18b_0
  - tk=8.6.12=h1ccaba5_0
  - wheel=0.41.2=py311h06a4308_0
  - xz=5.4.2=h5eee18b_0
  - zlib=1.2.13=h5eee18b_0
  - pip:
      - aiosignal==1.3.1
      - anyio==3.7.1
      - attrs==23.1.0
      - certifi==2023.7.22
      - charset-normalizer==3.3.0
      - click==8.1.7
      - cmake==3.27.6
      - fastapi==0.103.2
      - filelock==3.12.4
      - frozenlist==1.4.0
      - fsspec==2023.9.2
      - grpcio==1.59.0
      - h11==0.14.0
      - httptools==0.6.0
      - huggingface-hub==0.17.3
      - idna==3.4
      - jinja2==3.1.2
      - jsonschema==4.19.1
      - jsonschema-specifications==2023.7.1
      - lit==17.0.2
      - markupsafe==2.1.3
      - mpmath==1.3.0
      - msgpack==1.0.7
      - networkx==3.1
      - ninja==1.11.1
      - numpy==1.26.0
      - nvidia-cublas-cu11==11.10.3.66
      - nvidia-cuda-cupti-cu11==11.7.101
      - nvidia-cuda-nvrtc-cu11==11.7.99
      - nvidia-cuda-runtime-cu11==11.7.99
      - nvidia-cudnn-cu11==8.5.0.96
      - nvidia-cufft-cu11==10.9.0.58
      - nvidia-curand-cu11==10.2.10.91
      - nvidia-cusolver-cu11==11.4.0.1
      - nvidia-cusparse-cu11==11.7.4.91
      - nvidia-nccl-cu11==2.14.3
      - nvidia-nvtx-cu11==11.7.91
      - packaging==23.2
      - pandas==2.1.1
      - protobuf==4.24.4
      - psutil==5.9.5
      - pyarrow==13.0.0
      - pydantic==1.10.13
      - python-dateutil==2.8.2
      - python-dotenv==1.0.0
      - pytz==2023.3.post1
      - pyyaml==6.0.1
      - ray==2.7.0
      - referencing==0.30.2
      - regex==2023.10.3
      - requests==2.31.0
      - rpds-py==0.10.4
      - safetensors==0.4.0
      - sentencepiece==0.1.99
      - six==1.16.0
      - sniffio==1.3.0
      - starlette==0.27.0
      - sympy==1.12
      - tokenizers==0.14.1
      - torch==2.0.1
      - tqdm==4.66.1
      - transformers==4.34.0
      - triton==2.0.0
      - typing-extensions==4.8.0
      - tzdata==2023.3
      - urllib3==2.0.6
      - uvicorn==0.23.2
      - uvloop==0.17.0
      - vllm==0.2.0
      - watchfiles==0.20.0
      - websockets==11.0.3
      - xformers==0.0.22
prefix: /opt/conda/envs/vllm
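The file above follows conda's standard environment-spec layout: a `name`, `channels`, pinned conda packages under `dependencies` (pinned down to the exact build string, which is what keeps the backend working when upstream releases new versions, as the commit message notes), and a nested `pip:` list for PyPI packages. A small sketch (using PyYAML, assumed available) of how the two package groups separate when the spec is parsed — the embedded string is a trimmed excerpt, not the full file:

```python
# Parse a trimmed excerpt of the environment spec above and split the
# pinned conda packages from the nested pip requirements.
import yaml  # PyYAML, assumed installed

SPEC = """
name: vllm
channels:
  - defaults
dependencies:
  - python=3.11.5=h955ad1f_0
  - pip=23.2.1=py311h06a4308_0
  - pip:
      - grpcio==1.59.0
      - vllm==0.2.0
prefix: /opt/conda/envs/vllm
"""

env = yaml.safe_load(SPEC)
# conda entries are plain strings; the pip section parses as a one-key dict
conda_pkgs = [d for d in env["dependencies"] if isinstance(d, str)]
pip_pkgs = next(d["pip"] for d in env["dependencies"] if isinstance(d, dict))

print(env["name"])   # vllm
print(conda_pkgs)    # ['python=3.11.5=h955ad1f_0', 'pip=23.2.1=py311h06a4308_0']
print(pip_pkgs)      # ['grpcio==1.59.0', 'vllm==0.2.0']
```

`conda env create -f vllm.yml` materializes such a spec as a named environment, which is what the per-backend Makefile targets and `run.sh` wrappers described in the commit message build on.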