LocalAI


LocalAI is a drop-in replacement REST API that's compatible with OpenAI API specifications for local inferencing. It allows you to run models locally or on-prem with consumer grade hardware, supporting multiple model families that are compatible with the ggml format.

For a list of the supported model families, please see the model compatibility table below.

In a nutshell:

  • Local, OpenAI drop-in alternative REST API. You own your data.
  • NO GPU required. NO Internet access is required either. Optional GPU acceleration is available for llama.cpp-compatible LLMs. See building instructions.
  • Supports multiple models: audio transcription, text generation with GPTs, image generation with stable diffusion (experimental).
  • Once models are loaded the first time, they are kept in memory for faster inference.
  • Doesn't shell out, but uses C++ bindings for faster inference and better performance.

LocalAI is a community-driven project, focused on making AI accessible to anyone. Contributions, feedback and PRs are welcome! It was initially created by mudler at the SpectroCloud OSS Office.

See the usage and examples sections to learn how to use LocalAI.

How does it work?

LocalAI is an API written in Go that serves as an OpenAI shim, enabling software already developed with OpenAI SDKs to seamlessly integrate with LocalAI. It can be effortlessly implemented as a substitute, even on consumer-grade hardware. This capability is achieved by employing various C++ backends, including ggml, to perform inference on LLMs using both CPU and, if desired, GPU.

LocalAI uses C++ bindings for optimizing speed. It is based on llama.cpp, gpt4all, rwkv.cpp, ggml, whisper.cpp for audio transcriptions, bert.cpp for embeddings and StableDiffusion-NCNN for image generation. See the model compatibility table to learn about all the components of LocalAI.


News

Now LocalAI can generate images too:

(example images comparing generation with mode=0 and mode=1 (winograd/sgemm))

Twitter: @LocalAI_API and @mudler_it

Blogs and articles

Contribute and help

To help the project you can:

  • Upvote the Reddit post about LocalAI.

  • Hacker news post - help us out by voting if you like this project.

  • If you have technological skills and want to contribute to development, have a look at the open issues. If you are new you can have a look at the good-first-issue and help-wanted labels.

  • If you don't have technological skills you can still help by improving the documentation, adding examples, or sharing your user stories with our community; any help and contribution is welcome!

Model compatibility

It is compatible with the models supported by llama.cpp, and also supports GPT4ALL-J and cerebras-GPT ggml models.

Tested with:

Note: you might need to convert some older models to the new format; for instructions, see for instance the README in llama.cpp on how to run gpt4all.

RWKV

A full example on how to run a rwkv model is in the examples.

Note: rwkv models need to specify the rwkv backend in their YAML config file, and an associated tokenizer needs to be provided alongside the model:

36464540 -rw-r--r--  1 mudler mudler 1.2G May  3 10:51 rwkv_small
36464543 -rw-r--r--  1 mudler mudler 2.4M May  3 10:51 rwkv_small.tokenizer.json
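
For illustration only, a minimal config matching the files above might look like the following (field names follow the model config reference later in this README; the tokenizer simply sits next to the model file):

cat <<EOF > models/rwkv_small.yaml
name: rwkv_small
# rwkv models must declare the rwkv backend explicitly
backend: rwkv
parameters:
  # relative to the models path; rwkv_small.tokenizer.json is expected alongside the model file
  model: rwkv_small
EOF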

Others

It should also be compatible with StableLM and GPTNeoX ggml models (untested).

Hardware requirements

Depending on the model you are attempting to run, you might need more RAM or CPU resources. Check out also here for ggml-based backends. rwkv is less expensive on resources.

Model compatibility table

| Backend and Bindings | Compatible models | Completion/Chat endpoint | Audio transcription/Image | Embeddings support | Token stream support |
|---|---|---|---|---|---|
| llama (binding) | Vicuna, Alpaca, LLaMa | yes | no | yes (doesn't seem to be accurate) | yes |
| gpt4all-llama | Vicuna, Alpaca, LLaMa | yes | no | no | yes |
| gpt4all-mpt | MPT | yes | no | no | yes |
| gpt4all-j | GPT4ALL-J | yes | no | no | yes |
| gpt2 (binding) | GPT/NeoX, Cerebras | yes | no | no | no |
| dolly (binding) | Dolly | yes | no | no | no |
| redpajama (binding) | RedPajama | yes | no | no | no |
| stableLM (binding) | StableLM GPT/NeoX | yes | no | no | no |
| replit (binding) | Replit | yes | no | no | no |
| gptneox (binding) | GPT NeoX | yes | no | no | no |
| starcoder (binding) | Starcoder | yes | no | no | no |
| bloomz (binding) | Bloom | yes | no | no | no |
| rwkv (binding) | rwkv | yes | no | no | yes |
| bert (binding) | bert | no | no | yes | no |
| whisper | whisper | no | Audio | no | no |
| stablediffusion (binding) | stablediffusion | no | Image | no | no |

Usage

LocalAI comes by default as a container image. You can check out all the available images with corresponding tags here.

The easiest way to run LocalAI is by using docker-compose (to build locally, see building LocalAI):


git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# copy your models to models/
cp your-model.bin models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.bin",            
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'

Example: Use GPT4ALL-J model

# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build
# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'

# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}

Advanced: prepare models using the API

Instead of installing models manually, you can use the LocalAI API endpoints and a model definition to install models programmatically at runtime.

A curated collection of model files is in the model-gallery (work in progress!).

To install for example gpt4all-j, you can send a POST call to the /models/apply endpoint with the model definition url (url) and the name the model should have in LocalAI (name, optional):

curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{
     "url": "https://raw.githubusercontent.com/go-skynet/model-gallery/main/gpt4all-j.yaml",
     "name": "gpt4all-j"
   }'  

Other examples


To see other examples of how to integrate with other projects, for instance for question answering or for using it with chatbot-ui, see: examples.

Advanced configuration

LocalAI can be configured to serve user-defined models with a set of default parameters and templates.

You can either create multiple YAML files in the models path or specify a single YAML configuration file. Consider the following models folder in the examples/chatbot-ui example:

$ ls -liah examples/chatbot-ui/models
36487587 drwxr-xr-x 2 mudler mudler 4.0K May  3 12:27 .
36487586 drwxr-xr-x 3 mudler mudler 4.0K May  3 10:42 ..
36465214 -rw-r--r-- 1 mudler mudler   10 Apr 27 07:46 completion.tmpl
36464855 -rw-r--r-- 1 mudler mudler 3.6G Apr 27 00:08 ggml-gpt4all-j
36464537 -rw-r--r-- 1 mudler mudler  245 May  3 10:42 gpt-3.5-turbo.yaml
36467388 -rw-r--r-- 1 mudler mudler  180 Apr 27 07:46 gpt4all.tmpl

The gpt-3.5-turbo.yaml file defines the gpt-3.5-turbo model, which is an alias for using gpt4all-j with pre-defined options.

For instance, consider the following that declares gpt-3.5-turbo backed by the ggml-gpt4all-j model:

name: gpt-3.5-turbo
# Default model parameters
parameters:
  # Relative to the models path
  model: ggml-gpt4all-j
  # temperature
  temperature: 0.3
  # all the OpenAI request options here..

# Default context size
context_size: 512
threads: 10
# Define a backend (optional). By default it will try to guess the backend the first time the model is interacted with.
backend: gptj # available: llama, stablelm, gpt2, gptj, rwkv
# stopwords (if supported by the backend)
stopwords:
- "HUMAN:"
- "### Response:"
# define chat roles
roles:
  user: "HUMAN:"
  system: "GPT:"
template:
  # template file ".tmpl" with the prompt template to use by default on the endpoint call. Note there is no extension in the files
  completion: completion
  chat: ggml-gpt4all-j

Specifying a config file via the CLI allows declaring models in a single file as a list, for instance:

- name: list1
  parameters:
    model: testmodel
  context_size: 512
  threads: 10
  stopwords:
  - "HUMAN:"
  - "### Response:"
  roles:
    user: "HUMAN:"
    system: "GPT:"
  template:
    completion: completion
    chat: ggml-gpt4all-j
- name: list2
  parameters:
    model: testmodel
  context_size: 512
  threads: 10
  stopwords:
  - "HUMAN:"
  - "### Response:"
  roles:
    user: "HUMAN:"
    system: "GPT:"
  template:
    completion: completion
    chat: ggml-gpt4all-j
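
Assuming the list above is saved to a file (the file name here is illustrative), it can be passed at start-up with the config-file flag documented in the CLI section below:

local-ai --models-path ./models --config-file ./models-list.yaml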

See also chatbot-ui as an example on how to use config files.

Full config model file reference

name: gpt-3.5-turbo

# Default model parameters
parameters:
  # Relative to the models path
  model: ggml-gpt4all-j
  # temperature
  temperature: 0.3
  # all the OpenAI request options here..
  top_k: 
  top_p: 
  max_tokens:
  batch:
  f16: true
  ignore_eos: true
  n_keep: 10
  seed: 
  mode: 
  step: 

# Default context size
context_size: 512
# Default number of threads
threads: 10
# Define a backend (optional). By default it will try to guess the backend the first time the model is interacted with.
backend: gptj # available: llama, stablelm, gpt2, gptj, rwkv
# stopwords (if supported by the backend)
stopwords:
- "HUMAN:"
- "### Response:"
# string to trim space to
trimspace:
- string
# Strings to cut from the response
cutstrings:
- "string"
# define chat roles
roles:
  user: "HUMAN:"
  system: "GPT:"
  assistant: "ASSISTANT:"
template:
  # template file ".tmpl" with the prompt template to use by default on the endpoint call. Note there is no extension in the files
  completion: completion
  chat: ggml-gpt4all-j
  edit: edit_template

# Enable F16 if backend supports it
f16: true
# Enable debugging
debug: true
# Enable embeddings
embeddings: true
# Mirostat configuration (llama.cpp only)
mirostat_eta: 0.8
mirostat_tau: 0.9
mirostat: 1

# GPU Layers (only used when built with cublas)
gpu_layers: 22

# Directory used to store additional assets (used for stablediffusion)
asset_dir: ""

Prompt templates

The API doesn't inject a default prompt for talking to the model. You have to use a prompt similar to what's described in the stanford-alpaca docs: https://github.com/tatsu-lab/stanford_alpaca#data-release.

You can use a default template for every model present in your model path, by creating a corresponding file with the `.tmpl` suffix next to your model. For instance, if the model is called `foo.bin`, you can create a sibling file, `foo.bin.tmpl` which will be used as a default prompt and can be used with alpaca:
The below instruction describes a task. Write a response that appropriately completes the request.

### Instruction:
{{.Input}}

### Response:

See the prompt-templates directory in this repository for templates for some of the most popular models.

For the edit endpoint, an example template for alpaca-based models can be:

Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{{.Instruction}}

### Input:
{{.Input}}

### Response:

CLI

You can control LocalAI with command line arguments, for instance to specify a binding address or the number of threads.

Usage:

local-ai --models-path <model_path> [--address <address>] [--threads <num_threads>]
| Parameter | Environment Variable | Default Value | Description |
|---|---|---|---|
| models-path | MODELS_PATH | | The path where you have models (ending with .bin). |
| threads | THREADS | Number of physical cores | The number of threads to use for text generation. |
| address | ADDRESS | :8080 | The address and port to listen on. |
| context-size | CONTEXT_SIZE | 512 | Default token context size. |
| debug | DEBUG | false | Enable debug mode. |
| config-file | CONFIG_FILE | empty | Path to a LocalAI config file. |
| upload_limit | UPLOAD_LIMIT | 5MB | Upload limit for whisper. |
| image-path | IMAGE_PATH | empty | Image directory to store and serve processed images. |
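
The same settings can also be supplied through the environment variables listed above instead of command line flags; a minimal sketch (values are illustrative, assuming the variables are read directly by the binary):

MODELS_PATH=/path/to/models ADDRESS=":8080" THREADS=4 DEBUG=true ./local-ai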

Setup

Currently LocalAI comes as a container image and can be used with docker or a container engine of choice. You can check out all the available images with corresponding tags here.

Docker

Example of starting the API with `docker`:
docker run -p 8080:8080 -ti --rm quay.io/go-skynet/local-ai:latest --models-path /path/to/models --context-size 700 --threads 4

You should see:

┌───────────────────────────────────────────────────┐ 
│                   Fiber v2.42.0                   │ 
│               http://127.0.0.1:8080               │ 
│       (bound on host 0.0.0.0 and port 8080)       │ 
│                                                   │ 
│ Handlers ............. 1  Processes ........... 1 │ 
│ Prefork ....... Disabled  PID ................. 1 │ 
└───────────────────────────────────────────────────┘ 

Note: the binary inside the image is rebuilt at container start to enable CPU optimizations for the execution environment. You can set the environment variable REBUILD to false to prevent this behavior.
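
For example, the docker run invocation above can be started without the rebuild step by setting the variable explicitly:

docker run -p 8080:8080 -e REBUILD=false -ti --rm quay.io/go-skynet/local-ai:latest --models-path /path/to/models --context-size 700 --threads 4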

Build locally

In order to build the LocalAI container image locally you can use docker:

# build the image
docker build -t localai .
docker run localai

Or you can build the binary with make:

make build

Build on mac

Building on Mac (M1 or M2) works, but you may need to install some prerequisites using brew.

The below has been tested by one mac user and found to work. Note that this doesn't use docker to run the server:

# install build dependencies
brew install cmake
brew install go

# clone the repo
git clone https://github.com/go-skynet/LocalAI.git

cd LocalAI

# build the binary
make build

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# Run LocalAI
./local-ai --models-path ./models/ --debug

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'

Build with Image generation support

Requirements: OpenCV, Gomp

Image generation is experimental and requires GO_TAGS=stablediffusion to be set during build:

make GO_TAGS=stablediffusion rebuild

Acceleration

OpenBLAS

Requirements: OpenBLAS

make BUILD_TYPE=openblas build

CuBLAS

Requirement: Nvidia CUDA toolkit

Note: CuBLAS support is experimental and has not been tested on real hardware. Please report any issues you find!

make BUILD_TYPE=cublas build

More information is available in the upstream PR: https://github.com/ggerganov/llama.cpp/pull/1412

Windows compatibility

It should work; however, you need to make sure you give enough resources to the container. See https://github.com/go-skynet/LocalAI/issues/2

Run LocalAI in Kubernetes

LocalAI can be installed inside Kubernetes with helm.

  1. Add the helm repo
    helm repo add go-skynet https://go-skynet.github.io/helm-charts/
    
  2. Create a values file with your settings:
cat <<EOF > values.yaml
deployment:
  image: quay.io/go-skynet/local-ai:latest
  env:
    threads: 4
    contextSize: 1024
    modelsPath: "/models"
# Optionally create a PVC, mount the PV to the LocalAI Deployment,
# and download a model to prepopulate the models directory
modelsVolume:
  enabled: true
  url: "https://gpt4all.io/models/ggml-gpt4all-j.bin"
  pvc:
    size: 6Gi
    accessModes:
    - ReadWriteOnce
  auth:
    # Optional value for HTTP basic access authentication header
    basic: "" # 'username:password' base64 encoded
service:
  type: ClusterIP
  annotations: {}
  # If using an AWS load balancer, you'll need to override the default 60s load balancer idle timeout
  # service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"
EOF
  3. Install the helm chart:
helm repo update
helm install local-ai go-skynet/local-ai -f values.yaml

Check out also the helm chart repository on GitHub.

Supported OpenAI API endpoints

You can check out the OpenAI API reference.

Below is the list of supported endpoints and parameters.

Note:

  • You can also specify the model as part of the OpenAI token (see the sketch below).
  • If only one model is available, the API will use it for all the requests.
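
As a sketch of the first note, the model name can be sent as the OpenAI API key (bearer token) instead of in the request body; this is only an illustration, reusing the model name from the earlier gpt4all-j example:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer ggml-gpt4all-j" -d '{
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9
   }'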

Chat completions

For example, to generate a chat completion, you can send a POST request to the `/v1/chat/completions` endpoint with the instruction as the request body:
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'

Available additional parameters: top_p, top_k, max_tokens
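
For instance, a request sketch that sets the additional parameters in the same body (values are illustrative):

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "top_p": 0.9,
     "top_k": 40,
     "max_tokens": 100
   }'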

Edit completions

To generate an edit completion you can send a POST request to the `/v1/edits` endpoint with the instruction as the request body:
curl http://localhost:8080/v1/edits -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "instruction": "rephrase",
     "input": "Black cat jumped out of the window",
     "temperature": 0.7
   }'

Available additional parameters: top_p, top_k, max_tokens.

Completions

To generate a completion, you can send a POST request to the /v1/completions endpoint with the instruction as the request body:

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'

Available additional parameters: top_p, top_k, max_tokens

List models

You can list all the models available with:
curl http://localhost:8080/v1/models

Embeddings

OpenAI docs: https://platform.openai.com/docs/api-reference/embeddings

The embedding endpoint is experimental and enabled only if the model is configured with embeddings: true in its yaml file, for example:

name: text-embedding-ada-002
parameters:
  model: bert
embeddings: true
backend: "bert-embeddings"

There is an example available here.

Note: embeddings are supported only with llama.cpp-compatible models and bert models. bert is more performant and available independently of the LLM model.

Transcriptions endpoint

Note: this requires ffmpeg in the container image, which is currently not shipped due to licensing issues. We will prepare separate images with ffmpeg (stay tuned!).

Download one of the models from https://huggingface.co/ggerganov/whisper.cpp/tree/main in the models folder, and create a YAML file for your model:

name: whisper-1
backend: whisper
parameters:
  model: whisper-en
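
For instance, a download sketch using one of the files from the Hugging Face repository above (the exact file name is an assumption; the file is saved under the name referenced by the YAML):

wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin -O models/whisper-en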

The transcriptions endpoint can then be tested like so:

wget --quiet --show-progress -O gb1.ogg https://upload.wikimedia.org/wikipedia/commons/1/1f/George_W_Bush_Columbia_FINAL.ogg

curl http://localhost:8080/v1/audio/transcriptions -H "Content-Type: multipart/form-data" -F file="@$PWD/gb1.ogg" -F model="whisper-1"                                                     

{"text":"My fellow Americans, this day has brought terrible news and great sadness to our country.At nine o'clock this morning, Mission Control in Houston lost contact with our Space ShuttleColumbia.A short time later, debris was seen falling from the skies above Texas.The Columbia's lost.There are no survivors.One board was a crew of seven.Colonel Rick Husband, Lieutenant Colonel Michael Anderson, Commander Laurel Clark, Captain DavidBrown, Commander William McCool, Dr. Kultna Shavla, and Elon Ramon, a colonel in the IsraeliAir Force.These men and women assumed great risk in the service to all humanity.In an age when spaceflight has come to seem almost routine, it is easy to overlook thedangers of travel by rocket and the difficulties of navigating the fierce outer atmosphere ofthe Earth.These astronauts knew the dangers, and they faced them willingly, knowing they had a highand noble purpose in life.Because of their courage and daring and idealism, we will miss them all the more.All Americans today are thinking as well of the families of these men and women who havebeen given this sudden shock and grief.You're not alone.Our entire nation agrees with you, and those you loved will always have the respect andgratitude of this country.The cause in which they died will continue.Mankind has led into the darkness beyond our world by the inspiration of discovery andthe longing to understand.Our journey into space will go on.In the skies today, we saw destruction and tragedy.As farther than we can see, there is comfort and hope.In the words of the prophet Isaiah, \"Lift your eyes and look to the heavens who createdall these, he who brings out the starry hosts one by one and calls them each by name.\"Because of his great power and mighty strength, not one of them is missing.The same creator who names the stars also knows the names of the seven souls we mourntoday.The crew of the shuttle Columbia did not return safely to Earth yet we can pray that all aresafely home.May God bless the grieving families and may God continue to bless America.[BLANK_AUDIO]"}

Image generation

OpenAI docs: https://platform.openai.com/docs/api-reference/images/create

LocalAI supports generating images with Stable diffusion, running on CPU.

(example images: generation with mode=0 and mode=1 (winograd/sgemm) variants)

To generate an image you can send a POST request to the /v1/images/generations endpoint with the instruction as the request body:

# 512x512 is supported too
curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
            "prompt": "A cute baby sea otter",
            "size": "256x256" 
          }'

Available additional parameters: mode, step.

Note: To set a negative prompt, you can split the prompt with |, for instance: a cute baby sea otter|malformed.

curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
            "prompt": "floating hair, portrait, ((loli)), ((one girl)), cute face, hidden hands, asymmetrical bangs, beautiful detailed eyes, eye shadow, hair ornament, ribbons, bowties, buttons, pleated skirt, (((masterpiece))), ((best quality)), colorful|((part of the head)), ((((mutated hands and fingers)))), deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, missing limb, blurry, floating limbs, disconnected limbs, malformed hands, blur, out of focus, long neck, long body, Octane renderer, lowres, bad anatomy, bad hands, text",
            "size": "256x256"
          }'

Note: the image generator supports images up to 512x512. You can however use other tools to upscale the image, for instance: https://github.com/upscayl/upscayl.

Setup

Note: In order to use the images/generation endpoint, you need to build LocalAI with GO_TAGS=stablediffusion.

  1. Create a model file stablediffusion.yaml in the models folder:
name: stablediffusion
backend: stablediffusion
asset_dir: stablediffusion_assets
  2. Create a stablediffusion_assets directory inside your models directory
  3. Download the ncnn assets from https://github.com/EdVince/Stable-Diffusion-NCNN#out-of-box and place them in stablediffusion_assets.

The models directory should look like the following:

models
├── stablediffusion_assets
│   ├── AutoencoderKL-256-256-fp16-opt.param
│   ├── AutoencoderKL-512-512-fp16-opt.param
│   ├── AutoencoderKL-base-fp16.param
│   ├── AutoencoderKL-encoder-512-512-fp16.bin
│   ├── AutoencoderKL-fp16.bin
│   ├── FrozenCLIPEmbedder-fp16.bin
│   ├── FrozenCLIPEmbedder-fp16.param
│   ├── log_sigmas.bin
│   ├── tmp-AutoencoderKL-encoder-256-256-fp16.param
│   ├── UNetModel-256-256-MHA-fp16-opt.param
│   ├── UNetModel-512-512-MHA-fp16-opt.param
│   ├── UNetModel-base-MHA-fp16.param
│   ├── UNetModel-MHA-fp16.bin
│   └── vocab.txt
└── stablediffusion.yaml

LocalAI API endpoints

Besides the OpenAI endpoints, there are additional LocalAI-only API endpoints.

Applying a model - /models/apply

This endpoint can be used to install a model in runtime.

LocalAI will create a batch process that downloads the required files from a model definition and automatically reloads itself to include the new model.

Input: url, name (optional), files (optional)

curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{
     "url": "<MODEL_DEFINITION_URL>",
     "name": "<MODEL_NAME>",
     "files": [
        {
            "uri": "<additional_file>",
            "sha256": "<additional_file_hash>",
            "name": "<additional_file_name>"
        }
     ]
   }'

An optional list of additional files to download can be specified. The name allows overriding the model name.

Returns a uuid and a url to follow up the state of the process:

{ "uid":"251475c9-f666-11ed-95e0-9a8a4480ac58", "status":"http://localhost:8080/models/jobs/251475c9-f666-11ed-95e0-9a8a4480ac58"}

To see a collection example of curated models definition files, see the model-gallery.

Checking a model job state - /models/jobs/<uid>

This endpoint returns the state of the batch job associated with a model.

This endpoint can be used with the uuid returned by /models/apply to check a job state:

curl http://localhost:8080/models/jobs/251475c9-f666-11ed-95e0-9a8a4480ac58

Returns a JSON object containing the error (if any), whether the job has been processed, and a status message:

{"error":null,"processed":true,"message":"completed"}

Clients

OpenAI clients are already compatible with LocalAI by overriding the basePath, or the target URL.

Javascript

https://github.com/openai/openai-node/

import { Configuration, OpenAIApi } from 'openai';

const configuration = new Configuration({
  basePath: `http://localhost:8080/v1`
});
const openai = new OpenAIApi(configuration);

Python

https://github.com/openai/openai-python

Set the OPENAI_API_BASE environment variable, or set the base URL in code:

import openai

openai.api_base = "http://localhost:8080/v1"

# create a chat completion
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}])

# print the completion
print(chat_completion.choices[0].message.content)

Frequently asked questions

Here are answers to some of the most common questions.

How do I get models?

Most ggml-based models should work, but newer models may require additions to the API. If a model doesn't work, please feel free to open up issues. However, be cautious about downloading models from the internet and directly onto your machine, as there may be security vulnerabilities in llama.cpp or ggml that could be maliciously exploited. Some models can be found on Hugging Face: https://huggingface.co/models?search=ggml, or models from gpt4all should also work: https://github.com/nomic-ai/gpt4all.

What's the difference with Serge, or XXX?

LocalAI is a multi-model solution that doesn't focus on a specific model type (e.g., llama.cpp or alpaca.cpp); it handles all of these internally for faster inference and is easy to set up locally and to deploy to Kubernetes.

Can I use it with a Discord bot, or XXX?

Yes! If the client uses the OpenAI API and supports setting a different base URL for requests, you can use the LocalAI endpoint. This lets you use LocalAI with any application that was built to work with OpenAI, without changing the application!

Can this leverage GPUs?

There is partial GPU support, see build instructions above.

Where is the webUI?

localai-webui and chatbot-ui are available in the examples section and can be set up as per the instructions. However, as LocalAI is an API, you can already plug it into existing projects that provide UI interfaces to OpenAI's APIs. There are several already on GitHub, and they should be compatible with LocalAI out of the box (as it mimics the OpenAI API).

Does it work with AutoGPT?

AutoGPT currently doesn't allow setting a different API URL, but there is a PR open for it, so this should be possible soon!

Projects already using LocalAI to run local models

Feel free to open up a PR to get your project listed!

Short-term roadmap

Star history

LocalAI Star history Chart

License

LocalAI is a community-driven project. It was initially created by Ettore Di Giacinto at the SpectroCloud OSS Office.

MIT

Golang bindings used

Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

Contributors