feat(entrypoint): optionally prepare extra endpoints (#1405)
entrypoint: optionally prepare extra endpoints

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
parent b181503c30
commit 3a4fb6fa4b
@@ -348,6 +348,7 @@ there are additional environment variables available that modify the behavior of
| `BUILD_TYPE` | | Build type. Available: `cublas`, `openblas`, `clblas` |
| `GO_TAGS` | | Go tags. Available: `stablediffusion` |
| `HUGGINGFACEHUB_API_TOKEN` | | Special token for interacting with HuggingFace Inference API, required only when using the `langchain-huggingface` backend |
| `EXTRA_BACKENDS` | | A space separated list of backends to prepare. For example `EXTRA_BACKENDS="backend/python/diffusers backend/python/transformers"` prepares the conda environment on start |
Here is how to configure these variables:
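A minimal sketch of passing some of these variables when starting the container (the image tag, port, and values below are illustrative placeholders):

```bash
docker run -p 8080:8080 \
  --env EXTRA_BACKENDS="backend/python/diffusers" \
  --env HUGGINGFACEHUB_API_TOKEN="<your-token>" \
  quay.io/go-skynet/local-ai:master-ffmpeg-core
```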
@@ -372,7 +373,7 @@ By default, all the backends are built.
LocalAI can be extended with extra backends. The backends are implemented as `gRPC` services and can be written in any language. The container images built and published on [quay.io](https://quay.io/repository/go-skynet/local-ai?tab=tags) are split into core and extra images. By default, images bring all the dependencies and backends supported by LocalAI (we call those `extra` images). The `-core` images instead bring only the dependencies strictly necessary to run LocalAI, with only a core set of backends.
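For instance, a sketch of pulling each flavor (assuming `master` is a full `extra` image and `master-ffmpeg-core` the corresponding core image, as published on the quay.io repository above):

```bash
# `extra` image: all supported backends and their dependencies (larger)
docker pull quay.io/go-skynet/local-ai:master

# `-core` image: minimal dependencies, core backends only (smaller)
docker pull quay.io/go-skynet/local-ai:master-ffmpeg-core
```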
If you wish to build a custom container image with extra backends, you can use the core images and build only the backends you are interested in, or prepare the environment on startup by using the `EXTRA_BACKENDS` environment variable. For instance, to use the diffusers backend:
```Dockerfile
FROM quay.io/go-skynet/local-ai:master-ffmpeg-core
```
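A hypothetical way to build and run such a custom image (the image name `local-ai-custom` is illustrative):

```bash
docker build -t local-ai-custom .
docker run -p 8080:8080 local-ai-custom
```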
@@ -395,3 +396,11 @@ ENV EXTERNAL_GRPC_BACKENDS="diffusers:/build/backend/python/diffusers/run.sh"
You can specify remote external backends or paths to local files. The syntax is `backend-name:/path/to/backend` or `backend-name:host:port`.
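For example, a sketch combining both forms (the `sd-remote` name, host, and port are placeholders; this assumes multiple entries are comma separated):

```bash
docker run --env EXTERNAL_GRPC_BACKENDS="diffusers:/build/backend/python/diffusers/run.sh,sd-remote:remote-host:50051" \
  quay.io/go-skynet/local-ai:master-ffmpeg-core
```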
{{% /notice %}}
#### In runtime
When using the `-core` container image it is possible to prepare the Python backends you are interested in by using the `EXTRA_BACKENDS` variable, for instance:
```bash
docker run --env EXTRA_BACKENDS="backend/python/diffusers" quay.io/go-skynet/local-ai:master-ffmpeg-core
```
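A variation preparing more than one backend at startup (space separated, as shown in the table above):

```bash
docker run --env EXTRA_BACKENDS="backend/python/diffusers backend/python/transformers" \
  quay.io/go-skynet/local-ai:master-ffmpeg-core
```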
@@ -3,6 +3,16 @@ set -e
cd /build
# If we have set EXTRA_BACKENDS, then we need to prepare the backends
if [ -n "$EXTRA_BACKENDS" ]; then
    echo "EXTRA_BACKENDS: $EXTRA_BACKENDS"
    # Space separated list of backends
    for backend in $EXTRA_BACKENDS; do
        echo "Preparing backend: $backend"
        make -C "$backend"
    done
fi
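# Rebuild the local-ai binary from source unless REBUILD is set to "false"; BUILD_PARALLELISM controls make -j (default 1)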
if [ "$REBUILD" != "false" ]; then
    rm -rf ./local-ai
    make build -j${BUILD_PARALLELISM:-1}