AMD/ROCm Documentation update + formatting fix (#2100)
* Update aio-images.md
* Update GPU-acceleration.md

---------

Signed-off-by: jtwolfe <jamie.t.wolfe@gmail.com>
This commit is contained in:
parent 220958a87c
commit 729378ca98

@@ -12,7 +12,7 @@ Section under construction

This section contains instructions on how to use LocalAI with GPU acceleration.

{{% alert icon="⚡" context="warning" %}}
Acceleration support for AMD and Metal hardware is still in development; for additional details, see the [build]({{%relref "docs/getting-started/build#Acceleration" %}}) documentation.
{{% /alert %}}

@@ -110,6 +110,143 @@ llama_model_load_internal: total VRAM used: 1598 MB
llama_init_from_file: kv self size = 512.00 MB
```

## ROCm (AMD) acceleration

There are a limited number of tested configurations for ROCm systems; however, most newer dedicated consumer-grade GPU devices appear to be supported under the current ROCm 6 implementation.

Due to the nature of ROCm, it is best to run all implementations in containers, as this limits the number of packages required for installation on the host system. Compatibility and dependency package versions across all OS variations must be tested independently if desired; please refer to the [build]({{%relref "docs/getting-started/build#Acceleration" %}}) documentation.

### Requirements

- `ROCm 6.x.x` compatible GPU/accelerator
- OS: `Ubuntu` (22.04, 20.04), `RHEL` (9.3, 9.2, 8.9, 8.8), `SLES` (15.5, 15.4)
- Installed on the host: `amdgpu-dkms` and `rocm` >= 6.0.0, as per the ROCm documentation
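
A quick way to confirm the host meets these requirements is to check that the kernel driver is loaded and that the device nodes the container will need are present. A minimal sketch, assuming a standard `amdgpu`/ROCm installation:

```bash
# Confirm the amdgpu kernel module is loaded
lsmod | grep amdgpu

# Confirm the device nodes that will later be passed through to containers exist
ls -l /dev/kfd /dev/dri
```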

### Recommendations

- Do not use on a system running Wayland.
- If running with Xorg, do not use the GPU assigned for compute for desktop rendering.
- Ensure at least 100GB of free space on the disk hosting the container runtime and storing images prior to installation.

### Limitations

Verification testing of ROCm compatibility with the integrated backends is ongoing.
Please note the following list of verified backends and devices.

### Verified

The devices in the following list have been tested with `hipblas` images running `ROCm 6.0.0`.

| Backend | Verified | Devices |
| ---- | ---- | ---- |
| llama.cpp | yes | Radeon VII (gfx906) |
| diffusers | yes | Radeon VII (gfx906) |
| piper | yes | Radeon VII (gfx906) |
| whisper | no | none |
| autogptq | no | none |
| bark | no | none |
| coqui | no | none |
| transformers | no | none |
| exllama | no | none |
| exllama2 | no | none |
| mamba | no | none |
| petals | no | none |
| sentencetransformers | no | none |
| transformers-musicgen | no | none |
| vall-e-x | no | none |
| vllm | no | none |

**You can help by expanding this list.**

### System Prep

1. Check that your GPU's LLVM target is compatible with your version of ROCm. This can be found in the [LLVM Docs](https://llvm.org/docs/AMDGPUUsage.html).
2. Check which ROCm version is compatible with your LLVM target and your chosen OS (pay special attention to supported kernel versions). See the compatibility documentation for [ROCm 6.0.0](https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.0.0/reference/system-requirements.html) or [ROCm 6.0.2](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html).
3. Install your chosen version of `amdgpu-dkms` and `rocm` (it is recommended that the native package manager be used for this process on any OS, as version changes are executed more easily via this method if updates are required). Take care to restart after installing `amdgpu-dkms` and before installing `rocm`; for details, see the installation documentation for your chosen OS ([6.0.2](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/native-install/index.html) or [6.0.0](https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.0.0/how-to/native-install/index.html)).
4. Verify the installation (see the sketch below), then deploy. Yes, it's that easy.
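
As a sanity check before deploying containers, the ROCm user-space tools can confirm that the driver and runtime see the GPU and report its LLVM target. A minimal sketch, assuming `rocm` was installed via the native package manager:

```bash
# Report detected agents and their gfx targets (e.g. gfx906 for Radeon VII)
rocminfo | grep -E "Name|gfx"

# Show utilisation, temperature, and VRAM for each detected GPU
rocm-smi

# Confirm the amdgpu-dkms module built successfully against the running kernel
dkms status | grep amdgpu
```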

#### Setup Example (Docker/containerd)

The following are examples of the ROCm-specific configuration elements required.

```yaml
# docker-compose.yaml
# For full functionality select a non-'core' image; version-locking the image is recommended for debugging purposes.
image: quay.io/go-skynet/local-ai:master-aio-gpu-hipblas
environment:
  - DEBUG=true
  # If your GPU is not already included in the default target list, the following build details are required.
  - REBUILD=true
  - BUILD_TYPE=hipblas
  - GPU_TARGETS=gfx906 # Example for Radeon VII
devices:
  # AMD GPUs only require the following device nodes to be passed through to the container for offloading to occur.
  - /dev/dri
  - /dev/kfd
```

The same can also be executed as a `run` command for your container runtime:

```bash
docker run \
 -e DEBUG=true \
 -e REBUILD=true \
 -e BUILD_TYPE=hipblas \
 -e GPU_TARGETS=gfx906 \
 --device /dev/dri \
 --device /dev/kfd \
 quay.io/go-skynet/local-ai:master-aio-gpu-hipblas
```

Please ensure that you add all other required environment variables, port forwardings, etc. to your `compose` file or `run` command.
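
For illustration, a fuller `compose` service might look like the following. The port mapping and the `/build/models` mount path are assumptions based on the common LocalAI container layout; adjust them to match your deployment:

```yaml
# Hypothetical fuller example - verify ports and paths against your LocalAI version
services:
  local-ai:
    image: quay.io/go-skynet/local-ai:master-aio-gpu-hipblas
    environment:
      - DEBUG=true
    ports:
      - "8080:8080" # LocalAI's default API port
    volumes:
      - ./models:/build/models # persist downloaded models across restarts
    devices:
      - /dev/dri
      - /dev/kfd
```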

The rebuild process will take some time to complete when deploying these containers. It is recommended that you `pull` the image prior to deployment, as, depending on the version, these images may be ~20GB in size.

#### Example (k8s) (Advanced Deployment/WIP)

For k8s deployments there is an additional step required before deployment: the [ROCm/k8s-device-plugin](https://artifacthub.io/packages/helm/amd-gpu-helm/amd-gpu) must be deployed first (see the sketch below).
For any k8s environment, following the documentation provided by AMD for the ROCm project should be sufficient. If you use rke2 or OpenShift, it is recommended that you deploy the SUSE- or Red Hat-provided version of this resource to ensure compatibility.
After this has been completed, the [helm chart from go-skynet](https://github.com/go-skynet/helm-charts) can be configured and deployed mostly unedited.
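
As one way to deploy the device plugin, the ROCm/k8s-device-plugin repository ships a DaemonSet manifest that can be applied directly. The manifest URL below is an assumption based on that repository's layout; prefer the Helm chart linked above for production and verify the path before use:

```bash
# Deploy the AMD GPU device plugin as a DaemonSet (hypothetical manifest path - verify against the repo)
kubectl create -f https://raw.githubusercontent.com/ROCm/k8s-device-plugin/master/k8s-ds-amdgpu-dp.yaml

# Confirm the plugin is advertising amd.com/gpu on your nodes
kubectl describe node | grep amd.com/gpu
```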

The following are details of the changes that should be made to ensure proper function.
While these details may be configurable in the `values.yaml`, development of this Helm chart is ongoing and it is subject to change.

The following details indicate the final state of the LocalAI deployment relevant to GPU function.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {NAME}-local-ai
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - env:
            - name: HIP_VISIBLE_DEVICES
              value: '0'
              # This variable indicates the devices available to the container (0: device1, 1: device2, 2: device3, etc.).
              # For multiple devices (say, devices 1 and 3) the value would be equivalent to HIP_VISIBLE_DEVICES="0,2".
              # Please take note of this when an iGPU is present in the host system, as compatibility is not assured.
          ...
          resources:
            limits:
              amd.com/gpu: '1'
            requests:
              amd.com/gpu: '1'
```

This configuration has been tested on a 'custom' cluster managed by SUSE Rancher that was deployed on top of Ubuntu 22.04.4; certification of other configurations is ongoing, and compatibility is not guaranteed.

### Notes

- When installing the ROCm kernel driver on your system, ensure that you are installing a version equal to or newer than that which is currently implemented in LocalAI (6.0.0 at time of writing).
  - AMD documentation indicates that this will ensure functionality; however, your mileage may vary depending on the GPU and distro you are using.
- If you encounter an `Error 413` when attempting to upload an audio file or image for whisper or llava/bakllava on a k8s deployment, note that the ingress for your deployment may require the annotation `nginx.ingress.kubernetes.io/proxy-body-size: "25m"` to allow larger uploads (see the example below). This may be included in future versions of the helm chart.
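
For reference, the annotation from the note above would be applied to the ingress resource like so. The ingress name here is a placeholder:

```yaml
# Hypothetical ingress excerpt - only the annotation is taken from the note above
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: local-ai # placeholder
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "25m" # allow uploads up to 25MB
...
```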

## Intel acceleration (sycl)

### Requirements

@@ -9,13 +9,14 @@ All-In-One images are images that come pre-configured with a set of models and b

In the AIO images there are models configured with the names of OpenAI models; however, they are really backed by open source models. You can find the mapping in the table below:

| Category | Model name | Real model (CPU) | Real model (GPU) |
| ---- | ---- | ---- | ---- |
| Text Generation | `gpt-4` | `phi-2` | `hermes-2-pro-mistral` |
| Multimodal Vision | `gpt-4-vision-preview` | `bakllava` | `llava-1.6-mistral` |
| Image Generation | `stablediffusion` | `stablediffusion` | `dreamshaper-8` |
| Speech to Text | `whisper-1` | `whisper` with `whisper-base` model | <= same |
| Text to Speech | `tts-1` | `en-us-amy-low.onnx` from `rhasspy/piper` | <= same |
| Embeddings | `text-embedding-ada-002` | `all-MiniLM-L6-v2` in Q4 | `all-MiniLM-L6-v2` |
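
Since these model names mirror OpenAI's, any OpenAI-compatible client can use them directly against LocalAI's API. A minimal sketch, assuming LocalAI is listening on its default port 8080:

```bash
# Ask the 'gpt-4' alias (phi-2 on CPU images, hermes-2-pro-mistral on GPU images) for a reply
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Say hello!"}]
  }'
```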

## Usage