+++
disableToc = false
title = "Easy Setup - GPU Docker"
weight = 2
+++
{{% notice Note %}}
- You will need about 10GB of free RAM
- You will need about 15GB of free space on your `C:` drive for `docker compose`
{{% /notice %}}
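If you want to confirm you have enough free RAM and disk space before starting, on Linux or WSL you can check with these standard commands (compare against the numbers in the note above):

```bash
free -h    # shows free and used RAM
df -h .    # shows free disk space on the current drive
```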
We are going to run `LocalAI` with `docker compose` for this setup.

Let's set up our folders for `LocalAI`:
{{< tabs >}}
{{% tab name="Windows (Batch)" %}}
```batch
mkdir "LocalAI"
cd LocalAI
mkdir "models"
mkdir "images"
```
{{% /tab %}}

{{% tab name="Linux (Bash / WSL)" %}}
```bash
mkdir -p "LocalAI"
cd LocalAI
mkdir -p "models"
mkdir -p "images"
```
{{% /tab %}}
{{< /tabs >}}
At this point we want to set up our `.env` file. Here is a copy you can use if you wish; make sure it is saved in the `LocalAI` folder.
```bash
## Set number of threads.
## Note: prefer the number of physical cores. Overbooking the CPU degrades performance notably.
THREADS=2
## Specify a different bind address (defaults to ":8080")
# ADDRESS=127.0.0.1:8080
## Define galleries.
## Models available to install will be visible in `/models/available`
GALLERIES=[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.yaml"}, {"url": "github:go-skynet/model-gallery/huggingface.yaml","name":"huggingface"}]
## Default path for models
MODELS_PATH=/models
## Enable debug mode
# DEBUG=true
## Disable COMPEL (lets Stable Diffusion work; uncomment if you plan on using it)
# COMPEL=0
## Enable/Disable single backend (useful if only one GPU is available)
# SINGLE_ACTIVE_BACKEND=true
## Specify a build type. Available: cublas, openblas, clblas.
BUILD_TYPE=cublas
## Uncomment and set to true to enable rebuilding from source
# REBUILD=true
## Enable go tags, available: stablediffusion, tts
## stablediffusion: image generation with stablediffusion
## tts: enables text-to-speech with go-piper
## (requires REBUILD=true)
#
#GO_TAGS=tts
## Path where to store generated images
# IMAGE_PATH=/tmp
## Specify a default upload limit in MB (whisper)
# UPLOAD_LIMIT
# HUGGINGFACEHUB_API_TOKEN=Token here
```
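The `THREADS` note above recommends matching your number of physical cores. If you are not sure what that is, one way to check on Linux or WSL is with the standard `lscpu` and `nproc` commands (on Windows, Task Manager's Performance tab shows the core count):

```bash
# Physical cores = Socket(s) x Core(s) per socket
lscpu | grep -E '^(Socket|Core)'
# Logical processors (includes hyper-threads):
nproc
```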
Now that we have the `.env` set up, let's set up our `docker-compose` file.
It will use a container image from [quay.io](https://quay.io/repository/go-skynet/local-ai?tab=tags).
Also note this `docker-compose` file is for `CUDA` only.
Please change the image tag to the one you need.
{{< tabs >}}
{{% tab name="GPU Images CUDA 11" %}}
- `master-cublas-cuda11`
- `master-cublas-cuda11-core`
- `v2.0.0-cublas-cuda11`
- `v2.0.0-cublas-cuda11-core`
- `v2.0.0-cublas-cuda11-ffmpeg`
- `v2.0.0-cublas-cuda11-ffmpeg-core`

Core images (`-core` tags) are smaller images that ship without pre-downloaded Python dependencies.
{{% /tab %}}
{{% tab name="GPU Images CUDA 12" %}}
- `master-cublas-cuda12`
- `master-cublas-cuda12-core`
- `v2.0.0-cublas-cuda12`
- `v2.0.0-cublas-cuda12-core`
- `v2.0.0-cublas-cuda12-ffmpeg`
- `v2.0.0-cublas-cuda12-ffmpeg-core`

Core images (`-core` tags) are smaller images that ship without pre-downloaded Python dependencies.
{{% /tab %}}
{{< /tabs >}}
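For example, if you wanted the CUDA 12 image with ffmpeg support from the list above, the `image:` line in the compose file below would read as follows (swap in whichever tag matches your GPU and needs):

```yaml
image: quay.io/go-skynet/local-ai:v2.0.0-cublas-cuda12-ffmpeg
```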
```yaml
version: '3.6'
services:
api:
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
image: quay.io/go-skynet/local-ai:[CHANGEMETOIMAGENEEDED]
tty: true # enable colorized logs
restart: always # should this be on-failure ?
ports:
- 8080:8080
env_file:
- .env
volumes:
- ./models:/models
- ./images/:/tmp/generated/images/
command: ["/usr/bin/local-ai" ]
```
Make sure to save that as `docker-compose.yaml` in the root of the `LocalAI` folder. Then let's spin up the Docker container; run this in a `CMD` or `bash` prompt:
```bash
docker compose up -d --pull always
```
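While the image pulls and the container starts, you can follow the logs to watch progress (a standard Docker Compose command; the service is named `api` in the compose file above):

```bash
docker compose logs -f api
```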
Now we let LocalAI finish setting up. Once it is done, let's check that our Hugging Face / LocalAI galleries are working. Wait until you see the following screen before doing so:
```
┌───────────────────────────────────────────────────┐
│ Fiber v2.42.0 │
│ http://127.0.0.1:8080 │
│ (bound on host 0.0.0.0 and port 8080) │
│ │
│ Handlers ............. 1 Processes ........... 1 │
│ Prefork ....... Disabled PID ................. 1 │
└───────────────────────────────────────────────────┘
```
```bash
curl http://localhost:8080/models/available
```
Output will look like this:
![](https://cdn.discordapp.com/attachments/1116933141895053322/1134037542845566976/image.png)
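If you have `jq` installed, you can also pretty-print the JSON response in the terminal, which makes the gallery list easier to scan (optional; `jq` is a common command-line JSON processor):

```bash
curl http://localhost:8080/models/available | jq '.'
```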
Now that we have that set up, let's go set up a [model]({{% relref "easy-model" %}}).