# LocalAI
LocalAI is a drop-in replacement REST API compatible with OpenAI for local CPU inferencing. It allows you to run models locally or on-prem with consumer-grade hardware. It is based on llama.cpp, gpt4all, rwkv.cpp and ggml, including support for GPT4ALL-J, which is licensed under Apache 2.0.
- OpenAI-compatible API
- Supports multiple models
- Once loaded the first time, it keeps models in memory for faster inference
- Support for prompt templates
- Doesn't shell out, but uses C bindings for faster inference and better performance
LocalAI is a community-driven project focused on making AI accessible to anyone. Any contribution, feedback and PR is welcome! It was initially created by mudler at the SpectroCloud OSS Office.
## News

- 02-05-2023: Support for rwkv.cpp models ( https://github.com/go-skynet/LocalAI/pull/158 ) and for the `/edits` endpoint
- 01-05-2023: Support for SSE stream of tokens in llama.cpp backends ( https://github.com/go-skynet/LocalAI/pull/152 )
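As an illustration, streaming only needs the OpenAI-compatible `stream` field in the request; a minimal sketch, assuming the API is already running (see Usage below) with a llama.cpp-backed model named `your-model.bin`:

```bash
# sketch: request SSE token streaming via the OpenAI-compatible "stream" field
# ("your-model.bin" is a placeholder for whatever you have in models/)
curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.bin",
     "prompt": "A long time ago in a galaxy far, far away",
     "stream": true
   }'
```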
## Socials and community chatter

- Follow @LocalAI_API on Twitter.
- Reddit post about LocalAI.
- Hacker News post - help us out by voting if you like this project.
- Tutorial to use k8sgpt with LocalAI - an excellent use case for LocalAI, using AI to analyse Kubernetes clusters.
## Model compatibility

It is compatible with the models supported by llama.cpp, and also supports GPT4ALL-J and cerebras-GPT ggml models.
Tested with:
- Vicuna
- Alpaca
- GPT4ALL
- GPT4ALL-J
- Koala
- cerebras-GPT with ggml
- RWKV models with rwkv.cpp
It should also be compatible with StableLM and GPTNeoX ggml models (untested).

Note: you might need to convert older models to the new format; see here, for instance, for how to run gpt4all.
### RWKV

For rwkv models, you also need to place the associated tokenizer alongside the ggml model:

```
ls models
36464540 -rw-r--r--  1 mudler mudler 1.2G May  3 10:51 rwkv_small
36464543 -rw-r--r--  1 mudler mudler 2.4M May  3 10:51 rwkv_small.tokenizer.json
```
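A sketch of the naming convention: the tokenizer file takes the model's file name plus a `.tokenizer.json` suffix (the source paths below are hypothetical):

```bash
# sketch: tokenizer file name = <model file name> + ".tokenizer.json"
# (source paths are placeholders)
cp /path/to/rwkv-model.bin models/rwkv_small
cp /path/to/tokenizer.json models/rwkv_small.tokenizer.json
```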
## Usage

LocalAI comes by default as a container image. You can check out all the available images with corresponding tags here.

The easiest way to run LocalAI is with docker-compose:
```bash
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# copy your models to models/
cp your-model.bin models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --build

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.bin",
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'
```
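As a sketch, a `.env` file could set the variables documented in the CLI section below (whether a given variable is forwarded to the container depends on the compose file):

```bash
# sketch of a possible .env; variable names follow the CLI table below
THREADS=4
CONTEXT_SIZE=512
DEBUG=true
```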
### Example: Use GPT4ALL-J model
```bash
# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --build

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9
   }'

# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}
```
To build locally, run `make build` (see below).
### Other examples

To see examples of how to integrate with other projects, for instance chatbot-ui, see: examples.
## Advanced configuration

LocalAI can be configured to serve user-defined models with a set of default parameters and templates.

You can create multiple YAML files in the models path, or specify a single YAML configuration file.

Consider the following models folder in `examples/chatbot-ui`:
```
base ❯ ls -liah examples/chatbot-ui/models
36487587 drwxr-xr-x 2 mudler mudler 4.0K May  3 12:27 .
36487586 drwxr-xr-x 3 mudler mudler 4.0K May  3 10:42 ..
36465214 -rw-r--r-- 1 mudler mudler   10 Apr 27 07:46 completion.tmpl
36464855 -rw-r--r-- 1 mudler mudler 3.6G Apr 27 00:08 ggml-gpt4all-j
36464537 -rw-r--r-- 1 mudler mudler  245 May  3 10:42 gpt-3.5-turbo.yaml
36467388 -rw-r--r-- 1 mudler mudler  180 Apr 27 07:46 gpt4all.tmpl
```
The `gpt-3.5-turbo.yaml` file defines the `gpt-3.5-turbo` model, which is an alias for `gpt4all-j` with pre-defined options.

For instance, consider the following, which declares `gpt-3.5-turbo` backed by the `ggml-gpt4all-j` model:
```yaml
name: gpt-3.5-turbo
# Default model parameters
parameters:
  # Relative to the models path
  model: ggml-gpt4all-j
  # temperature
  temperature: 0.3
  # all the OpenAI request options here..

# Default context size
context_size: 512
threads: 10
# Define a backend (optional). By default it will try to guess the backend the first time the model is interacted with.
backend: gptj # available: llama, stablelm, gpt2, gptj, rwkv
# stopwords (if supported by the backend)
stopwords:
- "HUMAN:"
- "### Response:"
# define chat roles
roles:
  user: "HUMAN:"
  system: "GPT:"
template:
  # template file ".tmpl" with the prompt template to use by default on the endpoint call. Note there is no extension in the files
  completion: completion
  chat: ggml-gpt4all-j
```
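For reference, a plausible sketch of the `completion.tmpl` referenced above (consistent with its 10-byte size in the listing, though the actual file may differ):

```
{{.Input}}
```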
Specifying a config-file via CLI allows you to declare models in a single file as a list, for instance:
```yaml
- name: list1
  parameters:
    model: testmodel
  context_size: 512
  threads: 10
  stopwords:
  - "HUMAN:"
  - "### Response:"
  roles:
    user: "HUMAN:"
    system: "GPT:"
  template:
    completion: completion
    chat: ggml-gpt4all-j
- name: list2
  parameters:
    model: testmodel
  context_size: 512
  threads: 10
  stopwords:
  - "HUMAN:"
  - "### Response:"
  roles:
    user: "HUMAN:"
    system: "GPT:"
  template:
    completion: completion
    chat: ggml-gpt4all-j
```
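A minimal sketch of launching with such a file, assuming it is saved as `config.yaml` (the flag name follows the CLI table below):

```bash
# sketch: serve all models declared in a single config file
local-ai --models-path ./models --config-file ./config.yaml
```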
See also chatbot-ui as an example of how to use config files.
## Prompt templates

The API doesn't inject a default prompt for talking to the model. You have to use a prompt similar to what's described in the stanford-alpaca docs: https://github.com/tatsu-lab/stanford_alpaca#data-release.

```
The below instruction describes a task. Write a response that appropriately completes the request.

### Instruction:
{{.Input}}

### Response:
```
See the prompt-templates directory in this repository for templates for some of the most popular models.
For the edit endpoint, an example template for alpaca-based models can be:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{{.Instruction}}

### Input:
{{.Input}}

### Response:
```
## CLI

You can control LocalAI with command line arguments to specify a binding address or the number of threads.

Usage:

```
local-ai --models-path <model_path> [--address <address>] [--threads <num_threads>]
```
| Parameter | Environment Variable | Default Value | Description |
|---|---|---|---|
| models-path | MODELS_PATH | | The path where you have models (ending with `.bin`). |
| threads | THREADS | Number of physical cores | The number of threads to use for text generation. |
| address | ADDRESS | :8080 | The address and port to listen on. |
| context-size | CONTEXT_SIZE | 512 | Default token context size. |
| debug | DEBUG | false | Enable debug mode. |
| config-file | CONFIG_FILE | empty | Path to a LocalAI config file. |
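For illustration, the same configuration can be expressed with flags or with the corresponding environment variables; a sketch:

```bash
# equivalent sketches: flags vs. environment variables
local-ai --models-path ./models --context-size 700 --threads 4

MODELS_PATH=./models CONTEXT_SIZE=700 THREADS=4 local-ai
```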
## Setup

Currently LocalAI comes as a container image and can be used with docker or a container engine of choice. You can check out all the available images with corresponding tags here.

### Docker

```bash
docker run -p 8080:8080 -ti --rm quay.io/go-skynet/local-ai:latest --models-path /path/to/models --context-size 700 --threads 4
```
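Note that `/path/to/models` refers to a path inside the container; to use models from the host, you will likely want a volume mount. A sketch:

```bash
# sketch: mount a host models directory into the container
docker run -p 8080:8080 -v $PWD/models:/models -ti --rm \
  quay.io/go-skynet/local-ai:latest \
  --models-path /models --context-size 700 --threads 4
```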
You should see:
```
┌───────────────────────────────────────────────────┐
│                   Fiber v2.42.0                   │
│               http://127.0.0.1:8080               │
│       (bound on host 0.0.0.0 and port 8080)       │
│                                                   │
│ Handlers ............. 1  Processes ........... 1 │
│ Prefork ....... Disabled  PID ................. 1 │
└───────────────────────────────────────────────────┘
```
### Build locally

In order to build the `LocalAI` container image locally you can use docker:

```bash
# build the image
docker build -t LocalAI .
docker run LocalAI
```

Or you can build the binary with make:

```bash
make build
```
### Build on Mac

Building on Mac (M1 or M2) works, but you may need to install some prerequisites using brew.

The below has been tested by one Mac user and found to work. Note that this doesn't use docker to run the server:
```bash
# install build dependencies
brew install cmake
brew install go

# clone the repo
git clone https://github.com/go-skynet/LocalAI.git

cd LocalAI

# build the binary
make build

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# Run LocalAI
./local-ai --models-path ./models/ --debug

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9
   }'
```
### Windows compatibility
It should work, however you need to make sure you give enough resources to the container. See https://github.com/go-skynet/LocalAI/issues/2
### Run LocalAI in Kubernetes

LocalAI can be installed inside Kubernetes with helm.

- Add the helm repo:

```bash
helm repo add go-skynet https://go-skynet.github.io/helm-charts/
```

- Create a values file with your settings:
```bash
cat <<EOF > values.yaml
deployment:
  image: quay.io/go-skynet/local-ai:latest
  env:
    threads: 4
    contextSize: 1024
    modelsPath: "/models"
# Optionally create a PVC, mount the PV to the LocalAI Deployment,
# and download a model to prepopulate the models directory
modelsVolume:
  enabled: true
  url: "https://gpt4all.io/models/ggml-gpt4all-j.bin"
  pvc:
    size: 6Gi
    accessModes:
    - ReadWriteOnce
  auth:
    # Optional value for HTTP basic access authentication header
    basic: "" # 'username:password' base64 encoded
service:
  type: ClusterIP
  annotations: {}
  # If using an AWS load balancer, you'll need to override the default 60s load balancer idle timeout
  # service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"
EOF
```
- Install the helm chart:

```bash
helm repo update
helm install local-ai go-skynet/local-ai -f values.yaml
```
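To check that the deployment came up, a sketch using `kubectl` (exact resource names depend on the chart and release name, so adjust as needed):

```bash
# sketch: verify the pods, then reach the service from your workstation
kubectl get pods
kubectl port-forward service/local-ai 8080:8080   # service name may differ
curl http://localhost:8080/v1/models
```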
Check out also the helm chart repository on GitHub.
## Supported OpenAI API endpoints

You can check out the OpenAI API reference.

Below is the list of supported endpoints and parameters.

Note:

- You can also specify the model as part of the OpenAI token.
- If only one model is available, the API will use it for all requests.
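For example, a sketch of the first note above, passing the model name as the OpenAI token instead of in the request body (assumes `ggml-gpt4all-j` is in your models path):

```bash
# sketch: model name supplied via the Authorization bearer token
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ggml-gpt4all-j" \
  -d '{"messages": [{"role": "user", "content": "How are you?"}]}'
```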
Chat completions
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"messages": [{"role": "user", "content": "Say this is a test!"}],
"temperature": 0.7
}'
Available additional parameters: top_p
, top_k
, max_tokens
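A sketch of the same request with those additional parameters set (the values are illustrative):

```bash
# sketch: chat completion with the extra sampling parameters
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "top_p": 0.9,
     "top_k": 40,
     "max_tokens": 64
   }'
```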
### Edit completions

```bash
curl http://localhost:8080/v1/edits -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "instruction": "rephrase",
     "input": "Black cat jumped out of the window",
     "temperature": 0.7
   }'
```

Available additional parameters: `top_p`, `top_k`, `max_tokens`.
### Completions

To generate a completion, send a POST request to the `/v1/completions` endpoint with the instruction as per the request body:

```bash
curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'
```

Available additional parameters: `top_p`, `top_k`, `max_tokens`
### List models

```bash
curl http://localhost:8080/v1/models
```
## Frequently asked questions
Here are answers to some of the most common questions.
### How do I get models?

Most ggml-based models should work, but newer models may require additions to the API. If a model doesn't work, please feel free to open up issues. However, be cautious about downloading models from the internet and directly onto your machine, as there may be security vulnerabilities in llama.cpp or ggml that could be maliciously exploited. Some models can be found on Hugging Face: https://huggingface.co/models?search=ggml, and models from gpt4all should also work: https://github.com/nomic-ai/gpt4all.
### What's the difference with Serge, or XXX?

LocalAI is a multi-model solution that doesn't focus on a specific model type (e.g., llama.cpp or alpaca.cpp); it handles all of them internally for faster inference, and it is easy to set up locally and deploy to Kubernetes.
### Can I use it with a Discord bot, or XXX?

Yes! If the client uses OpenAI and supports setting a different base URL for requests, you can point it at the LocalAI endpoint. This makes it possible to use LocalAI with any application built for OpenAI, without changing the application!
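For instance, many OpenAI clients honor environment variables for the base URL; a sketch (the variable names follow the OpenAI Python library's convention, and LocalAI does not validate the key by default):

```bash
# sketch: point an OpenAI client at LocalAI instead of api.openai.com
export OPENAI_API_BASE=http://localhost:8080/v1
export OPENAI_API_KEY=sk-dummy   # placeholder; not checked by LocalAI by default
```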
### Can this leverage GPUs?
Not currently, as ggml doesn't support GPUs yet: https://github.com/ggerganov/llama.cpp/discussions/915.
### Where is the webUI?
### Does it work with AutoGPT?

AutoGPT currently doesn't allow setting a different API URL, but there is a PR open for it, so this should be possible soon!
## Projects already using LocalAI to run local models
Feel free to open up a PR to get your project listed!
## Blog posts and other articles
- https://medium.com/@tyler_97636/k8sgpt-localai-unlock-kubernetes-superpowers-for-free-584790de9b65
- https://kairos.io/docs/examples/localai/
## Short-term roadmap
- Mimic OpenAI API (https://github.com/go-skynet/LocalAI/issues/10)
- Binary releases (https://github.com/go-skynet/LocalAI/issues/6)
- Upstream our golang bindings to llama.cpp (https://github.com/ggerganov/llama.cpp/issues/351) and gpt4all
- Multi-model support
- Have a webUI!
- Allow configuration of defaults for models.
- Enable automatic downloading of models from a curated gallery, with only free-licensed models, directly from the webui.
## Star history
## License
LocalAI is a community-driven project. It was initially created by mudler at the SpectroCloud OSS Office.
MIT
## Golang bindings used
## Acknowledgements
- llama.cpp
- https://github.com/tatsu-lab/stanford_alpaca
- https://github.com/cornelk/llama-go for the initial ideas
- https://github.com/antimatter15/alpaca.cpp for the light model version (this is compatible and tested only with that checkpoint model!)