diff --git a/README.md b/README.md
index 1432b596..a2cb6009 100644
--- a/README.md
+++ b/README.md
@@ -152,27 +152,48 @@ Currently LocalAI comes as container images and can be used with docker or a con
### Run LocalAI in Kubernetes
-LocalAI can be installed inside Kubernetes with helm.
-
+LocalAI can be installed inside Kubernetes with Helm.
-The local-ai Helm chart supports two options for the LocalAI server's models directory:
-1. Basic deployment with no persistent volume. You must manually update the Deployment to configure your own models directory.
- Install the chart with `.Values.deployment.volumes.enabled == false` and `.Values.dataVolume.enabled == false`.
-
-2. Advanced, two-phase deployment to provision the models directory using a DataVolume. Requires [Containerized Data Importer CDI](https://github.com/kubevirt/containerized-data-importer) to be pre-installed in your cluster.
-
- First, install the chart with `.Values.deployment.volumes.enabled == false` and `.Values.dataVolume.enabled == true`:
+1. Add the Helm repository:
```bash
- helm install local-ai charts/local-ai -n local-ai --create-namespace
+ helm repo add go-skynet https://go-skynet.github.io/helm-charts/
```
- Wait for CDI to create an importer Pod for the DataVolume and for the importer pod to finish provisioning the model archive inside the PV.
+2. Create a values file with your settings:
+```bash
+cat <<EOF > values.yaml
+deployment:
+ image: quay.io/go-skynet/local-ai:latest
+ env:
+ threads: 4
+ contextSize: 1024
+ modelsPath: "/models"
+# Optionally create a PVC, mount the PV to the LocalAI Deployment,
+# and download a model to prepopulate the models directory
+modelsVolume:
+ enabled: true
+ url: "https://gpt4all.io/models/ggml-gpt4all-j.bin"
+ pvc:
+ size: 6Gi
+ accessModes:
+ - ReadWriteOnce
+ auth:
+ # Optional value for HTTP basic access authentication header
+ basic: "" # 'username:password' base64 encoded
+service:
+ type: ClusterIP
+ annotations: {}
+ # If using an AWS load balancer, you'll need to override the default 60s load balancer idle timeout
+ # service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"
+EOF
+```
+3. Install the Helm chart:
+```bash
+helm repo update
+helm install local-ai go-skynet/local-ai -f values.yaml
+```
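+
+Once the release is up, you can sanity-check the API. This is a minimal sketch: it assumes the Deployment and Service are named after the release (`local-ai`) and that LocalAI listens on its default port 8080:
+
+```bash
+# Wait for the deployment to finish rolling out, then forward the service port locally
+kubectl rollout status deployment/local-ai
+kubectl port-forward service/local-ai 8080:8080 &
+
+# List the models LocalAI found in its models directory
+curl http://localhost:8080/v1/models
+```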
- Once the PV is provisioned and the importer Pod removed, set `.Values.deployment.volumes.enabled == true` and `.Values.dataVolume.enabled == false` and upgrade the chart:
- ```bash
- helm upgrade local-ai -n local-ai charts/local-ai
- ```
- This will update the local-ai Deployment to mount the PV that was provisioned by the DataVolume.
+See also the [Helm chart repository on GitHub](https://github.com/go-skynet/helm-charts).
diff --git a/examples/discord-bot/README.md b/examples/discord-bot/README.md
index 6053ae87..6628c354 100644
--- a/examples/discord-bot/README.md
+++ b/examples/discord-bot/README.md
@@ -8,15 +8,13 @@ git clone https://github.com/go-skynet/LocalAI
cd LocalAI/examples/discord-bot
-git clone https://github.com/go-skynet/gpt-discord-bot.git
-
# (optional) Checkout a specific LocalAI tag
# git checkout -b build
# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
-# Set the discord bot options
+# Set the discord bot options (see: https://github.com/go-skynet/gpt-discord-bot#setup)
cp -rfv .env.example .env
vim .env
@@ -24,5 +22,53 @@ vim .env
docker-compose up -d --build
```
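+
+As a rough sketch, the resulting `.env` looks like this. The variable names come from the bot's setup docs (they match the Kubernetes manifest below); the `OPENAI_API_BASE` value assumes the LocalAI service in `docker-compose.yaml` is named `api`:
+
+```bash
+# LocalAI does not validate the key by default, so any non-empty value works
+OPENAI_API_KEY=x
+# From the Discord developer portal
+DISCORD_BOT_TOKEN=
+DISCORD_CLIENT_ID=
+# Point the bot at LocalAI instead of api.openai.com
+OPENAI_API_BASE=http://api:8080
+# Comma-separated Discord server IDs the bot is allowed to serve
+ALLOWED_SERVER_IDS=
+# A server_id:channel_id pair for moderation messages
+SERVER_TO_MODERATION_CHANNEL=
+```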
+
Open up the URL in the console and give permission to the bot in your server. Start a thread with `/chat ..`
+## Kubernetes
+
+- Install the local-ai Helm chart first (see the instructions in the main README)
+- Change `OPENAI_API_BASE` to point at the LocalAI service address, then apply the discord-bot manifest:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: discord-bot
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: localai
+ namespace: discord-bot
+ labels:
+ app: localai
+spec:
+ selector:
+ matchLabels:
+ app: localai
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: localai
+ name: localai
+ spec:
+ containers:
+ - name: localai-discord
+ env:
+ - name: OPENAI_API_KEY
+ value: "x"
+ - name: DISCORD_BOT_TOKEN
+ value: ""
+ - name: DISCORD_CLIENT_ID
+ value: ""
+ - name: OPENAI_API_BASE
+ value: "http://local-ai.default.svc.cluster.local:8080"
+ - name: ALLOWED_SERVER_IDS
+ value: "xx"
+ - name: SERVER_TO_MODERATION_CHANNEL
+ value: "1:1"
+ image: quay.io/go-skynet/gpt-discord-bot:main
+```
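+
+Before applying, fill in `DISCORD_BOT_TOKEN` and `DISCORD_CLIENT_ID`, set `ALLOWED_SERVER_IDS` to a comma-separated list of server IDs, and set `SERVER_TO_MODERATION_CHANNEL` to a `server_id:channel_id` pair. Then, assuming the manifest above is saved as `discord-bot.yaml`:
+
+```bash
+kubectl apply -f discord-bot.yaml
+
+# Follow the bot's logs to confirm it connects to Discord and reaches LocalAI
+kubectl logs -n discord-bot -f deployment/localai
+```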
\ No newline at end of file
diff --git a/examples/discord-bot/docker-compose.yaml b/examples/discord-bot/docker-compose.yaml
index 19056d50..6a10f306 100644
--- a/examples/discord-bot/docker-compose.yaml
+++ b/examples/discord-bot/docker-compose.yaml
@@ -16,8 +16,6 @@ services:
command: ["/usr/bin/local-ai" ]
bot:
- build:
- context: ./gpt-discord-bot
- dockerfile: Dockerfile
+ image: quay.io/go-skynet/gpt-discord-bot:main
env_file:
- .env
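+    # The bot now runs from the prebuilt image above instead of a local build;
+    # to pick up a newer image: docker-compose pull bot && docker-compose up -d bot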
diff --git a/prompt-templates/wizardlm.tmpl b/prompt-templates/wizardlm.tmpl
new file mode 100644
index 00000000..e7b1985c
--- /dev/null
+++ b/prompt-templates/wizardlm.tmpl
@@ -0,0 +1,3 @@
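+{{- /* WizardLM prompt format: pass the instruction through, then cue the model with "### Response:". The trim markers keep this comment out of the rendered prompt. */ -}}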
+{{.Input}}
+
+### Response:
\ No newline at end of file