LocalAI

⚠️ This project has been renamed from llama-cli to LocalAI to reflect the fact that we are focusing on a fast drop-in OpenAI API rather than on the CLI interface. We think there are already many projects that can be used as a CLI interface, for instance llama.cpp and gpt4all. If you were using llama-cli for CLI interactions and want to keep using it, use older versions or please open up an issue - contributions are welcome!


LocalAI is a straightforward, drop-in replacement API compatible with OpenAI for local CPU inferencing. It is based on llama.cpp, gpt4all and ggml, and includes support for GPT4All-J, which is Apache 2.0 licensed and can be used for commercial purposes.

  • OpenAI-compatible API
  • Supports multiple models
  • Once loaded the first time, models are kept in memory for faster inference
  • Support for prompt templates
  • Doesn't shell out, but uses C bindings for faster inference and better performance, via go-llama.cpp and go-gpt4all-j.cpp.

Reddit post: https://www.reddit.com/r/selfhosted/comments/12w4p2f/localai_openai_compatible_api_to_run_llm_models/

Model compatibility

It is compatible with the models supported by llama.cpp, and also supports GPT4All-J and Cerebras-GPT in ggml format.

Tested with:

It should also be compatible with StableLM and GPTNeoX ggml models (untested).

Note: You might need to convert older models to the new ggml format; see the "Using other models" section below for an example of converting gpt4all models.

Usage

LocalAI comes by default as a container image. You can check out all the available images and their corresponding tags at quay.io/go-skynet/local-ai.

The easiest way to run LocalAI is by using docker-compose:


git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# copy your models to models/
cp your-model.bin models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --build

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.bin",            
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'

Example: Use GPT4ALL-J model

# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --build

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'

# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}

Prompt templates

The API doesn't inject a default prompt for talking to the model. You have to use a prompt similar to what's described in the Stanford Alpaca docs: https://github.com/tatsu-lab/stanford_alpaca#data-release.

You can use a default template for every model present in your model path by creating a corresponding file with the `.tmpl` suffix next to your model. For instance, if the model is called `foo.bin`, you can create a sibling file, `foo.bin.tmpl`, which will be used as the default prompt. For example, this template can be used with Alpaca:
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{{.Input}}

### Response:

See the prompt-templates directory in this repository for templates for most popular models.
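
For example, assuming your model file is named `foo.bin` (an illustrative name), you can copy one of the bundled templates next to it so that it is picked up automatically; the template chosen should match your model's prompt format:

# copy the model into the models directory
cp foo.bin models/

# copy a matching template next to it, named after the model file plus the .tmpl suffix
cp prompt-templates/ggml-gpt4all-j.tmpl models/foo.bin.tmpl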

API

LocalAI provides an API for running text generation as a service that follows the OpenAI API reference and can be used as a drop-in replacement. Once loaded for the first time, models are kept in memory for faster inference.

Example of starting the API with `docker`:
docker run -p 8080:8080 -ti --rm quay.io/go-skynet/local-ai:latest --models-path /path/to/models --context-size 700 --threads 4

And you'll see:

┌───────────────────────────────────────────────────┐ 
│                   Fiber v2.42.0                   │ 
│               http://127.0.0.1:8080               │ 
│       (bound on host 0.0.0.0 and port 8080)       │ 
│                                                   │ 
│ Handlers ............. 1  Processes ........... 1 │ 
│ Prefork ....... Disabled  PID ................. 1 │ 
└───────────────────────────────────────────────────┘ 

You can control the API server options with command line arguments:

local-ai --models-path <model_path> [--address <address>] [--threads <num_threads>]

The API takes the following parameters:

| Parameter | Environment Variable | Default Value | Description |
|---|---|---|---|
| models-path | MODELS_PATH | | The path where you have models (ending with `.bin`). |
| threads | THREADS | Number of physical cores | The number of threads to use for text generation. |
| address | ADDRESS | :8080 | The address and port to listen on. |
| context-size | CONTEXT_SIZE | 512 | Default token context size. |
| debug | DEBUG | false | Enable debug mode. |
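
The same options can also be set through the corresponding environment variables, for instance when running the container directly (a sketch with illustrative values, assuming the image honors the variables listed above, as the docker-compose .env file does):

docker run -p 8080:8080 -ti --rm \
  -v $PWD/models:/models \
  -e MODELS_PATH=/models -e CONTEXT_SIZE=700 -e THREADS=4 \
  quay.io/go-skynet/local-ai:latest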

Once the server is running, you can start making requests to it over HTTP, using the OpenAI-compatible API.

Supported OpenAI API endpoints

You can check out the OpenAI API reference.

Below is the list of supported endpoints and parameters.

Note:

  • You can also specify the model as part of the OpenAI token (see the example below).
  • If only one model is available, the API will use it for all the requests.
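
For example, since OpenAI clients send their API key as a Bearer token, the model name can be supplied there instead of in the request body (a hedged sketch based on the note above; `your-model.bin` is a placeholder, and the exact token format accepted may vary):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-model.bin" \
  -d '{"messages": [{"role": "user", "content": "How are you?"}], "temperature": 0.7}'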

Chat completions

For example, to generate a chat completion, you can send a POST request to the `/v1/chat/completions` endpoint with the instruction as the request body:
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'

Available additional parameters: top_p, top_k, max_tokens
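
These can be added to the same request body, for example (the values shown are illustrative):

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7,
     "top_p": 0.9,
     "top_k": 40,
     "max_tokens": 100
   }'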

Completions

For example, to generate a completion, you can send a POST request to the `/v1/completions` endpoint with the instruction as the request body:

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'

Available additional parameters: top_p, top_k, max_tokens

List models

You can list all the models available with:
curl http://localhost:8080/v1/models

Using other models

gpt4all (https://github.com/nomic-ai/gpt4all) works as well; however, the original model needs to be converted (the same applies to old Alpaca models):

# download the LLaMA tokenizer used by the conversion script
wget -O tokenizer.model https://huggingface.co/decapoda-research/llama-30b-hf/resolve/main/tokenizer.model

# place the gpt4all model in models/
mkdir models
cp gpt4all.. models/

# fetch the conversion script (a gist) and its dependency, then convert
git clone https://gist.github.com/eiz/828bddec6162a023114ce19146cb2b82
pip install sentencepiece
python 828bddec6162a023114ce19146cb2b82/gistfile1.txt models tokenizer.model

# There will be a new model with the ".tmp" extension, you have to use that one!

Helm Chart Installation (run LocalAI in Kubernetes)

The local-ai Helm chart supports two options for the LocalAI server's models directory:

  1. Basic deployment with no persistent volume. You must manually update the Deployment to configure your own models directory.

    Install the chart with .Values.deployment.volumes.enabled == false and .Values.dataVolume.enabled == false.

  2. Advanced, two-phase deployment to provision the models directory using a DataVolume. Requires the Containerized Data Importer (CDI) to be pre-installed in your cluster.

    First, install the chart with .Values.deployment.volumes.enabled == false and .Values.dataVolume.enabled == true:

    helm install local-ai charts/local-ai -n local-ai --create-namespace
    

    Wait for CDI to create an importer Pod for the DataVolume and for the importer pod to finish provisioning the model archive inside the PV.

    Once the PV is provisioned and the importer Pod removed, set .Values.deployment.volumes.enabled == true and .Values.dataVolume.enabled == false and upgrade the chart:

    helm upgrade local-ai -n local-ai charts/local-ai
    

    This will update the local-ai Deployment to mount the PV that was provisioned by the DataVolume.
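
As a sketch, the two values above can also be toggled from the command line with --set instead of editing the chart's values file (the value paths are the ones quoted in the steps above):

# phase one: provision the models directory via the DataVolume
helm install local-ai charts/local-ai -n local-ai --create-namespace \
  --set deployment.volumes.enabled=false --set dataVolume.enabled=true

# phase two: mount the provisioned PV
helm upgrade local-ai charts/local-ai -n local-ai \
  --set deployment.volumes.enabled=true --set dataVolume.enabled=false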

Windows compatibility

It should work; however, you need to make sure you give the container enough resources. See https://github.com/go-skynet/LocalAI/issues/2

Build locally

Pre-built images should work well on most modern hardware; however, you can also build the images manually, and in some cases may need to.

In order to build the LocalAI container image locally you can use docker:

# build the image (Docker image names must be lowercase)
docker build -t local-ai .

# run it, exposing the API on port 8080
docker run -p 8080:8080 local-ai

Or build the binary with make:

make build
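
After building, the server can be started directly; a minimal sketch, assuming the Makefile produces a binary named local-ai in the repository root:

./local-ai --models-path ./models/ --context-size 700 --threads 4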

Frequently asked questions

Here are answers to some of the most common questions.

How do I get models?

Most ggml-based models should work, but newer models may require additions to the API. If a model doesn't work, please feel free to open up an issue. However, be cautious about downloading models from the internet directly onto your machine, as there may be security vulnerabilities in llama.cpp or ggml that could be maliciously exploited. Some models can be found on Hugging Face: https://huggingface.co/models?search=ggml, and models from gpt4all should also work: https://github.com/nomic-ai/gpt4all.

What's the difference with Serge, or XXX?

LocalAI is a multi-model solution that doesn't focus on a specific model type (e.g., llama.cpp or alpaca.cpp); it handles all of them internally for faster inference, making it easy to set up locally and deploy to Kubernetes.

Can I use it with a Discord bot, or XXX?

Yes! If the client uses the OpenAI API and supports setting a different base URL for requests, you can point it at the LocalAI endpoint. This lets you use LocalAI with any application that was built to work with OpenAI, without changing the application itself!
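
With many OpenAI clients this amounts to overriding the base URL, often via environment variables (a hedged sketch; the exact variable or setting name depends on the client):

# point an OpenAI-compatible client at LocalAI instead of api.openai.com
export OPENAI_API_BASE=http://localhost:8080/v1
# many clients still require a key to be set; LocalAI does not validate it
export OPENAI_API_KEY=sk-placeholder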

Can this leverage GPUs?

Not currently, as ggml doesn't support GPUs yet: https://github.com/ggerganov/llama.cpp/discussions/915.

Where is the webUI?

We are working on providing a good out-of-the-box experience. However, since LocalAI is an API, you can already plug it into existing projects that provide UI front-ends for OpenAI's APIs. There are several on GitHub already, and they should be compatible with LocalAI, as it mimics the OpenAI API.

Does it work with AutoGPT?

AutoGPT currently doesn't allow setting a different API URL, but there is an open PR for it, so this should be possible soon!

Short-term roadmap

License

MIT

Acknowledgements