Mirror of https://github.com/mudler/LocalAI.git (synced 2024-12-18 20:27:57 +00:00)
Cleaning up examples/ models and starter .env files (#1124)
Closes https://github.com/go-skynet/LocalAI/issues/1066 and https://github.com/go-skynet/LocalAI/issues/1065

Standardizes all `examples/`:
- Models in one place (except `rwkv`, which is a one-off)
- Env files shipped as `.env.example`, copied into place with `cp`
- Also standardizes comments and links to the docs
This commit is contained in:
parent c223364816
commit e34b5f0119
@@ -1,5 +1,9 @@
+# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
+# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
+
 OPENAI_API_KEY=sk---anystringhere
 OPENAI_API_BASE=http://api:8080/v1
 # Models to preload at start
-# Here we configure gpt4all as gpt-3.5-turbo and bert as embeddings
+# Here we configure gpt4all as gpt-3.5-turbo and bert as embeddings,
+# see other options in the model gallery at https://github.com/go-skynet/model-gallery
 PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"}, { "url": "github:go-skynet/model-gallery/bert-embeddings.yaml", "name": "text-embedding-ada-002"}]
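The `PRELOAD_MODELS` value above must parse as a JSON array of gallery entries. A minimal pre-flight check can be sketched as follows (a hypothetical helper, not part of this change; assumes `python3` is on `PATH`):

```shell
# Validate PRELOAD_MODELS before starting the stack (hypothetical helper).
# Any JSON parse error aborts with a non-zero exit code.
PRELOAD_MODELS='[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"}]'
echo "$PRELOAD_MODELS" | python3 -c 'import json, sys; json.load(sys.stdin); print("ok")'  # prints "ok"
```

A malformed value (e.g. a missing bracket) makes `json.load` raise, so the mistake surfaces before `docker-compose` ever reads the file.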
@@ -10,12 +10,16 @@ git clone https://github.com/go-skynet/LocalAI
 
 cd LocalAI/examples/autoGPT
 
+cp -rfv .env.example .env
+
+# Edit the .env file to set a different model by editing `PRELOAD_MODELS`.
+vim .env
 
 docker-compose run --rm auto-gpt
 ```
 
 Note: The example automatically downloads the `gpt4all` model as it is under a permissive license. The GPT4All model does not seem to be enough to run AutoGPT. WizardLM-7b-uncensored seems to perform better (with `f16: true`).
 
 See the `.env` configuration file to set a different model with the [model-gallery](https://github.com/go-skynet/model-gallery) by editing `PRELOAD_MODELS`.
 
 ## Without docker
|
1
examples/chatbot-ui-manual/models
Symbolic link
1
examples/chatbot-ui-manual/models
Symbolic link
@ -0,0 +1 @@
|
|||||||
|
../models
|
@@ -1,3 +1,6 @@
+# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
+# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
+
 OPENAI_API_KEY=x
 DISCORD_BOT_TOKEN=x
 DISCORD_CLIENT_ID=x
@@ -1 +1 @@
-../chatbot-ui/models/
+../models
@@ -1,7 +1,11 @@
+# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
+# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
+
 OPENAI_API_KEY=sk---anystringhere
 OPENAI_API_BASE=http://api:8080/v1
 # Models to preload at start
-# Here we configure gpt4all as gpt-3.5-turbo and bert as embeddings
+# Here we configure gpt4all as gpt-3.5-turbo and bert as embeddings,
+# see other options in the model gallery at https://github.com/go-skynet/model-gallery
 PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/openllama-7b-open-instruct.yaml", "name": "gpt-3.5-turbo"}]
 
 ## Change the default number of threads
@@ -10,9 +10,12 @@ git clone https://github.com/go-skynet/LocalAI
 
 cd LocalAI/examples/functions
 
+cp -rfv .env.example .env
+# Edit the .env file to set a different model by editing `PRELOAD_MODELS`.
+vim .env
 
 docker-compose run --rm functions
 ```
 
 Note: The example automatically downloads the `openllama` model as it is under a permissive license.
 
 See the `.env` configuration file to set a different model with the [model-gallery](https://github.com/go-skynet/model-gallery) by editing `PRELOAD_MODELS`.
@@ -1,3 +1,6 @@
+# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
+# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
+
 THREADS=4
 CONTEXT_SIZE=512
 MODELS_PATH=/models
|
1
examples/langchain-chroma/models
Symbolic link
1
examples/langchain-chroma/models
Symbolic link
@ -0,0 +1 @@
|
|||||||
|
../models
|
@@ -1 +0,0 @@
-{{.Input}}
@@ -1,16 +0,0 @@
-name: gpt-3.5-turbo
-parameters:
-  model: ggml-gpt4all-j
-  top_k: 80
-  temperature: 0.2
-  top_p: 0.7
-context_size: 1024
-stopwords:
-- "HUMAN:"
-- "GPT:"
-roles:
-  user: " "
-  system: " "
-template:
-  completion: completion
-  chat: gpt4all
@@ -1,4 +0,0 @@
-The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
-### Prompt:
-{{.Input}}
-### Response:
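The deleted `gpt4all.tmpl` is a Go text/template in which LocalAI substitutes the user prompt for `{{.Input}}`. The expansion can be approximated like this (purely illustrative, not LocalAI's actual renderer; the sample prompt is made up):

```shell
# Illustrative only: approximate {{.Input}} substitution with sed.
TEMPLATE='### Prompt:
{{.Input}}
### Response:'
printf '%s\n' "$TEMPLATE" | sed 's/{{\.Input}}/How are you?/'
```

The real server renders these templates with Go's text/template package, so features beyond plain substitution are not captured by this sketch.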
examples/langchain-huggingface/models (symbolic link)
@@ -0,0 +1 @@
+../models
@@ -1 +0,0 @@
-{{.Input}}
@@ -1,17 +0,0 @@
-name: gpt-3.5-turbo
-parameters:
-  model: gpt2
-  top_k: 80
-  temperature: 0.2
-  top_p: 0.7
-context_size: 1024
-backend: "langchain-huggingface"
-stopwords:
-- "HUMAN:"
-- "GPT:"
-roles:
-  user: " "
-  system: " "
-template:
-  completion: completion
-  chat: gpt4all
@@ -1,4 +0,0 @@
-The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
-### Prompt:
-{{.Input}}
-### Response:
examples/langchain/models (symbolic link)
@@ -0,0 +1 @@
+../models
@@ -1 +0,0 @@
-{{.Input}}
@@ -1,17 +0,0 @@
-name: gpt-3.5-turbo
-parameters:
-  model: ggml-gpt4all-j # ggml-koala-13B-4bit-128g
-  top_k: 80
-  temperature: 0.2
-  top_p: 0.7
-context_size: 1024
-stopwords:
-- "HUMAN:"
-- "GPT:"
-roles:
-  user: " "
-  system: " "
-backend: "gptj"
-template:
-  completion: completion
-  chat: gpt4all
@@ -1,4 +0,0 @@
-The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
-### Prompt:
-{{.Input}}
-### Response:
@@ -8,8 +8,6 @@ services:
       dockerfile: Dockerfile
     ports:
       - 8080:8080
-    env_file:
-      - .env
    volumes:
       - ./models:/models:cached
     command: ["/usr/bin/local-ai"]
examples/models/.gitignore (vendored, new file)
@@ -0,0 +1,7 @@
+# Ignore everything but predefined models
+*
+!.gitignore
+!completion.tmpl
+!embeddings.yaml
+!gpt4all.tmpl
+!gpt-3.5-turbo.yaml
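The new `.gitignore` uses the ignore-everything-then-whitelist pattern: `*` ignores every file in `examples/models/`, and each `!` rule re-includes one predefined file, so downloaded model binaries stay untracked. A sketch of the behavior in a throwaway scratch repository (assumes `git` is installed; the file list is abbreviated):

```shell
# Demonstrate the whitelist-style .gitignore in a scratch repository.
cd "$(mktemp -d)"
git init -q .
mkdir models
printf '*\n!.gitignore\n!gpt-3.5-turbo.yaml\n' > models/.gitignore
touch models/ggml-gpt4all-j models/gpt-3.5-turbo.yaml
git check-ignore -q models/ggml-gpt4all-j && echo "model binary is ignored"
git check-ignore -q models/gpt-3.5-turbo.yaml || echo "predefined yaml is tracked"
```

Because the later `!` rules override the earlier `*`, new predefined files must each get their own `!` entry to be tracked.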
examples/query_data/models (symbolic link)
@@ -0,0 +1 @@
+../models
@@ -1 +0,0 @@
-{{.Input}}
@@ -1,6 +0,0 @@
-name: text-embedding-ada-002
-parameters:
-  model: bert
-threads: 14
-backend: bert-embeddings
-embeddings: true
@@ -1,16 +0,0 @@
-name: gpt-3.5-turbo
-parameters:
-  model: ggml-gpt4all-j
-  top_k: 80
-  temperature: 0.2
-  top_p: 0.7
-context_size: 1024
-stopwords:
-- "HUMAN:"
-- "GPT:"
-roles:
-  user: " "
-  system: " "
-template:
-  completion: completion
-  chat: gpt4all
@@ -1,3 +1,6 @@
+# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
+# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
+
 SLACK_APP_TOKEN=xapp-1-...
 SLACK_BOT_TOKEN=xoxb-...
 OPENAI_API_KEY=sk-...
@@ -18,7 +18,7 @@ git clone https://github.com/seratch/ChatGPT-in-Slack
 # Download gpt4all-j to models/
 wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
 
-# Set the discord bot options (see: https://github.com/seratch/ChatGPT-in-Slack)
+# Set the Slack bot options (see: https://github.com/seratch/ChatGPT-in-Slack)
 cp -rfv .env.example .env
 vim .env
 
@@ -1 +1 @@
-../chatbot-ui/models
+../models
@@ -1,3 +1,6 @@
+# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
+# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
+
 # Create an app-level token with connections:write scope
 SLACK_APP_TOKEN=xapp-1-...
 # Install the app into your workspace to grab this token