🤖 The free, Open Source alternative to OpenAI, Claude, and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers, and many more model architectures. Features: text, audio, video, and image generation, voice cloning, and distributed P2P inference.

LocalAI


Documentation website

LocalAI is a drop-in replacement REST API compatible with the OpenAI API specification for local inferencing. It allows you to run LLMs (and not only LLMs) locally or on-prem on consumer-grade hardware, and it supports multiple model families compatible with the ggml format. It does not require a GPU.

For a list of the supported model families, please see the model compatibility table.
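
Because the API follows the OpenAI specification, most existing OpenAI clients and tools can be pointed at LocalAI by overriding the API base URL. A minimal sketch (OPENAI_API_BASE is the variable read by the official Python client; other clients use different names):

# Point OpenAI-compatible tooling at a local LocalAI instance.
# LocalAI ignores the API key, but some clients insist on one being set.
export OPENAI_API_BASE=http://localhost:8080/v1
export OPENAI_API_KEY=sk-local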

In a nutshell:

  • Local, OpenAI drop-in alternative REST API. You own your data.
  • NO GPU required. NO Internet access is required either.
    • Optional GPU acceleration is available for llama.cpp-compatible LLMs. See also the build section.
  • Supports multiple models:
    • 📖 Text generation with GPTs (llama.cpp, gpt4all.cpp, ... and more)
    • 🗣 Text to Audio 🎺🆕
    • 🔈 Audio to Text (audio transcription with whisper.cpp; see the example after this list)
    • 🎨 Image generation with stable diffusion
  • 🏃 Once loaded the first time, it keeps models in memory for faster inference.
  • Doesn't shell out, but uses C++ bindings for faster inference and better performance.
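
For example, audio transcription is exposed through the OpenAI-style /v1/audio/transcriptions endpoint. A minimal sketch, assuming a whisper.cpp model has been configured under the name whisper-1 (see the documentation on configuring audio transcription) and an audio.wav file in the current directory:

curl http://localhost:8080/v1/audio/transcriptions \
  -H "Content-Type: multipart/form-data" \
  -F file="@audio.wav" \
  -F model="whisper-1"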

LocalAI was created by Ettore Di Giacinto and is a community-driven project focused on making AI accessible to anyone. Contributions, feedback, and PRs are welcome!

See the Getting started and examples sections to learn how to use LocalAI. For a list of curated models check out the model gallery.

[Screenshots: ChatGPT OSS alternative · Image generation · Telegram bot · Flowise]

Hot topics / Roadmap

News

For the latest news, also follow @LocalAI_API and @mudler_it on Twitter.

Media, Blogs, Social

Contribute and help

To help the project you can:

  • Hacker News post - help us out by upvoting if you like this project.

  • If you have technical skills and want to contribute to development, have a look at the open issues. If you are new, the good-first-issue and help-wanted labels are a good place to start.

  • If you don't have technical skills, you can still help by improving the documentation, adding examples, or sharing your user stories with our community. Any help and contribution is welcome!

Usage

Check out the Getting started section. Below you will find quick, generic instructions to get LocalAI up and running.

The easiest way to run LocalAI is by using docker-compose (to build locally, see building LocalAI):


git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# copy your models to models/
cp your-model.bin models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env
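# Typical settings (variable names as in the repository's .env file;
# the values below are examples, not defaults):
#   THREADS=4
#   CONTEXT_SIZE=512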

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.bin",            
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'

Example: Use the GPT4All-J model

# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build
# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'

# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}

Build locally

To build the LocalAI container image locally you can use docker:

# build the image
docker build -t localai .
# run it, publishing the API port (the server listens on 8080)
docker run -p 8080:8080 localai

Or you can build the binary with make:

make build
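
You can then start the server directly from the freshly built binary. A minimal sketch (flag names as documented at the time of writing; run ./local-ai --help for the authoritative list):

# serve models from ./models on the default port 8080
./local-ai --models-path ./models/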

See the build section in our documentation for detailed instructions.

Run LocalAI in Kubernetes

LocalAI can be installed inside Kubernetes with helm. See installation instructions.
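
As a minimal sketch of the helm flow (the repository URL and chart name below are assumptions based on the go-skynet helm-charts repository; follow the linked installation instructions for the authoritative steps):

# add the chart repository and install LocalAI (release name is arbitrary)
helm repo add go-skynet https://go-skynet.github.io/helm-charts/
helm repo update
helm install local-ai go-skynet/local-ai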

Supported API endpoints

See the list of the supported API endpoints and how to configure image generation and audio transcription.
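
As with the other endpoints, image generation mirrors the OpenAI API. A minimal sketch, assuming a stable diffusion backend has been configured as described in the linked documentation:

curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
     "prompt": "a cute baby sea otter",
     "size": "256x256"
   }'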

Frequently asked questions

See the FAQ section for a list of common questions.

Projects already using LocalAI to run local models

Feel free to open up a PR to get your project listed!

Sponsors

Do you find LocalAI useful?

Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.

A huge thank you to our generous sponsors who support this project:

Spectro Cloud logo_600x600px_transparent bg
Spectro Cloud
Spectro Cloud kindly supports LocalAI by providing GPU and computing resources to run tests on Lambda Labs!

Star history

LocalAI Star history Chart

License

LocalAI is a community-driven project created by Ettore Di Giacinto.

MIT

Author

Ettore Di Giacinto and others

Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

Contributors