🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: text, audio, video and image generation, voice cloning, and distributed, P2P inference



LocalAI


💡 Get help - FAQ 💭Discussions 💬 Discord 📖 Documentation website

💻 Quickstart 📣 News 🛫 Examples 🖼️ Models


LocalAI is a drop-in replacement REST API compatible with the OpenAI API specifications for local inferencing. It lets you run LLMs (and more) locally or on-prem on consumer-grade hardware, supporting multiple model families compatible with the ggml format. It does not require a GPU.



In a nutshell:

  • Local, OpenAI drop-in alternative REST API. You own your data.
  • NO GPU required. NO Internet access required either.
    • Optional GPU acceleration is available for llama.cpp-compatible LLMs. See also the build section.
  • Supports multiple model families
  • 🏃 Once loaded the first time, it keeps models in memory for faster inference
  • Doesn't shell out, but uses C++ bindings for faster inference and better performance.
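Because the API mirrors OpenAI's, an existing OpenAI client can often be redirected with just a base-URL override. A minimal sketch; note the environment variable name is client-specific (`OPENAI_API_BASE` is honored by the official Python client as of 2023, an assumption to verify for your client):

```shell
# Point an OpenAI-compatible client at a locally running LocalAI instance.
# (Variable names below are client-specific assumptions, not part of LocalAI.)
export OPENAI_API_BASE=http://localhost:8080/v1
export OPENAI_API_KEY=sk-local   # LocalAI ignores the key, but some clients require one to be set

echo "$OPENAI_API_BASE"
```

After this, requests made by the client go to LocalAI instead of api.openai.com, with no code changes.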

LocalAI was created by Ettore Di Giacinto and is a community-driven project focused on making AI accessible to anyone. Contributions, feedback, and PRs are welcome!

This started as a fun weekend project, an attempt to build the pieces needed for a full AI assistant like ChatGPT. The community is growing fast and we are working hard to make LocalAI better and more stable. If you want to help, please consider contributing (see below)!

🚀 Features

🔥🔥 Hot topics / Roadmap

📖 🎥 Media, Blogs, Social

💻 Usage

Check out the Getting started section. Below you will find quick, generic instructions for getting LocalAI up and running.

The easiest way to run LocalAI is by using docker-compose (to build locally, see building LocalAI):


git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# copy your models to models/
cp your-model.bin models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build

# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.bin",            
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'
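Responses come back as JSON. A small sketch of pulling the generated text out of a completion response; the response body below is illustrative (a real one comes from the curl call above), and jq is a more robust choice than sed if it is available:

```shell
# Illustrative completion response (shape matches the OpenAI-style API).
response='{"object":"text_completion","choices":[{"text":" it was a period of civil war."}]}'

# Extract the first choice's text with POSIX sed, so no extra tools are needed.
text=$(printf '%s' "$response" | sed -n 's/.*"text":"\([^"]*\)".*/\1/p')
echo "$text"
```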

💡 Example: Use the GPT4All-J model

# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp prompt-templates/ggml-gpt4all-j.tmpl models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build
# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'

# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}
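The request body above can be templated for reuse. A minimal sketch using a hypothetical `make_chat_body` helper (not part of LocalAI) that builds a single-turn chat request:

```shell
# Hypothetical helper: build a one-message chat completion request body.
make_chat_body() {
  model=$1
  content=$2
  printf '{"model":"%s","messages":[{"role":"user","content":"%s"}],"temperature":0.9}' \
    "$model" "$content"
}

body=$(make_chat_body "ggml-gpt4all-j" "How are you?")
echo "$body"

# The body can then be sent exactly as in the example above:
# curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d "$body"
```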

🔗 Resources

❤️ Sponsors

Do you find LocalAI useful?

Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.

A huge thank you to our generous sponsors who support this project:

Spectro Cloud logo_600x600px_transparent bg
Spectro Cloud
Spectro Cloud kindly supports LocalAI by providing GPU and computing resources to run tests on Lambda Labs!

🌟 Star history

LocalAI Star history Chart

📖 License

LocalAI is a community-driven project created by Ettore Di Giacinto.

MIT - Author Ettore Di Giacinto

🙇 Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

🤗 Contributors

This is a community project, a special thanks to our contributors! 🤗