mirror of https://github.com/mudler/LocalAI.git (synced 2024-12-18 20:27:57 +00:00)

docs: enhancements (#133)

This commit is contained in:
parent 2539867247
commit d129fabe3b

README.md | 53
@@ -152,27 +152,48 @@ Currently LocalAI comes as container images and can be used with docker or a con

 ### Run LocalAI in Kubernetes

 LocalAI can be installed inside Kubernetes with helm.

 <details>
-The local-ai Helm chart supports two options for the LocalAI server's models directory:
-
-1. Basic deployment with no persistent volume. You must manually update the Deployment to configure your own models directory.
-
-Install the chart with `.Values.deployment.volumes.enabled == false` and `.Values.dataVolume.enabled == false`.
-
-2. Advanced, two-phase deployment to provision the models directory using a DataVolume. Requires [Containerized Data Importer CDI](https://github.com/kubevirt/containerized-data-importer) to be pre-installed in your cluster.
-
-First, install the chart with `.Values.deployment.volumes.enabled == false` and `.Values.dataVolume.enabled == true`:
-
+1. Add the helm repo:
 ```bash
-helm install local-ai charts/local-ai -n local-ai --create-namespace
+helm repo add go-skynet https://go-skynet.github.io/helm-charts/
 ```
-
-Wait for CDI to create an importer Pod for the DataVolume and for the importer pod to finish provisioning the model archive inside the PV.
-
-Once the PV is provisioned and the importer Pod removed, set `.Values.deployment.volumes.enabled == true` and `.Values.dataVolume.enabled == false` and upgrade the chart:
-
-```bash
-helm upgrade local-ai -n local-ai charts/local-ai
-```
-
-This will update the local-ai Deployment to mount the PV that was provisioned by the DataVolume.
+2. Create a values file with your settings:
+```bash
+cat <<EOF > values.yaml
+deployment:
+  image: quay.io/go-skynet/local-ai:latest
+  env:
+    threads: 4
+    contextSize: 1024
+    modelsPath: "/models"
+# Optionally create a PVC, mount the PV to the LocalAI Deployment,
+# and download a model to prepopulate the models directory
+modelsVolume:
+  enabled: true
+  url: "https://gpt4all.io/models/ggml-gpt4all-j.bin"
+  pvc:
+    size: 6Gi
+    accessModes:
+      - ReadWriteOnce
+  auth:
+    # Optional value for HTTP basic access authentication header
+    basic: "" # 'username:password' base64 encoded
+service:
+  type: ClusterIP
+  annotations: {}
+  # If using an AWS load balancer, you'll need to override the default 60s load balancer idle timeout
+  # service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"
+EOF
+```
+3. Install the helm chart:
+```bash
+helm repo update
+helm install local-ai go-skynet/local-ai -f values.yaml
+```
+
+Check out also the [helm chart repository on GitHub](https://github.com/go-skynet/helm-charts).

 </details>
@@ -8,15 +8,13 @@ git clone https://github.com/go-skynet/LocalAI

 cd LocalAI/examples/discord-bot

-git clone https://github.com/go-skynet/gpt-discord-bot.git
-
 # (optional) Checkout a specific LocalAI tag
 # git checkout -b build <TAG>

 # Download gpt4all-j to models/
 wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

-# Set the discord bot options
+# Set the discord bot options (see: https://github.com/go-skynet/gpt-discord-bot#setup)
 cp -rfv .env.example .env
 vim .env
@@ -24,5 +22,53 @@ vim .env

 docker-compose up -d --build
 ```
-
-Note: see setup options here: https://github.com/go-skynet/gpt-discord-bot#setup

 Open up the URL in the console and give permission to the bot in your server. Start a thread with `/chat ..`
+
+## Kubernetes
+
+- install the local-ai chart first
+- change OPENAI_API_BASE to point to the API address and apply the discord-bot manifest:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: discord-bot
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: localai
+  namespace: discord-bot
+  labels:
+    app: localai
+spec:
+  selector:
+    matchLabels:
+      app: localai
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: localai
+      name: localai
+    spec:
+      containers:
+        - name: localai-discord
+          env:
+          - name: OPENAI_API_KEY
+            value: "x"
+          - name: DISCORD_BOT_TOKEN
+            value: ""
+          - name: DISCORD_CLIENT_ID
+            value: ""
+          - name: OPENAI_API_BASE
+            value: "http://local-ai.default.svc.cluster.local:8080"
+          - name: ALLOWED_SERVER_IDS
+            value: "xx"
+          - name: SERVER_TO_MODERATION_CHANNEL
+            value: "1:1"
+          image: quay.io/go-skynet/gpt-discord-bot:main
+```
@@ -16,8 +16,6 @@ services:
     command: ["/usr/bin/local-ai" ]

   bot:
-    build:
-      context: ./gpt-discord-bot
-      dockerfile: Dockerfile
+    image: quay.io/go-skynet/gpt-discord-bot:main
     env_file:
       - .env
prompt-templates/wizardlm.tmpl | 3 (new file)
@@ -0,0 +1,3 @@
+{{.Input}}
+
+### Response: