🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: text, audio, video and image generation, voice cloning, distributed and P2P inference.
LocalAI


💡 Get help - FAQ 💭 Discussions 💬 Discord 📖 Documentation website

LocalAI is a drop-in replacement REST API compatible with the OpenAI API specifications for local inferencing. It allows you to run LLMs (and not only LLMs) locally or on-prem on consumer-grade hardware, supporting multiple model families compatible with the ggml format. No GPU is required.


LocalAI was created by Ettore Di Giacinto and is a community-driven project focused on making AI accessible to anyone. Any contribution, feedback, and PR is welcome!

Note that this started as a fun weekend project to build the pieces needed for a full AI assistant like ChatGPT: the community is growing fast and we are working hard to make it better and more stable. If you want to help, please consider contributing (see below)!

See the Getting started and examples sections to learn how to use LocalAI. For a list of curated models check out the model gallery.

ChatGPT OSS alternative · Image generation · Telegram bot · Flowise (screenshots)

Hot topics / Roadmap

News

Check the news and release notes in the dedicated section.

  • 🔥🔥🔥 23-07-2023: v1.22.0: LLaMA 2, Hugging Face embeddings, and more! Changelog

For the latest news, also follow @LocalAI_API and @mudler_it on Twitter.

Media, Blogs, Social

Contribute and help

To help the project you can:

  • Hacker News post: help us out by upvoting if you like this project.

  • If you have technical skills and want to contribute to development, have a look at the open issues. If you are new, the good-first-issue and help-wanted labels are good places to start.

  • If you don't have technical skills you can still help by improving the documentation, adding examples, or sharing your user stories with our community; any help and contribution is welcome!

Usage

Check out the Getting started section. Below you will find quick, generic instructions to get up and running with LocalAI.

The easiest way to run LocalAI is by using docker-compose (to build locally, see building LocalAI):


git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# copy your models to models/
cp your-model.bin models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build

# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.bin",            
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'
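The completions endpoint returns a JSON body in the OpenAI schema, with the generated text under `choices`. As a minimal sketch of extracting it (the response value below is made up for illustration, not real model output; `jq` is more robust than `sed` in practice):

```shell
# Illustrative response body in the OpenAI completions schema
# (the "text" value here is invented for the example).
response='{"model":"your-model.bin","choices":[{"text":" the Rebels fought the Empire."}]}'

# Pull out just the generated text with a quick sed pattern
echo "$response" | sed -n 's/.*"text":"\([^"]*\)".*/\1/p'
```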

Example: Use GPT4ALL-J model

# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build
# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'

# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}
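The `messages` array carries the whole conversation, so a follow-up request re-sends the previous turns plus the assistant's reply before the new user message. A sketch of how the next request body is assembled (the assistant content and follow-up question are illustrative; nothing is sent here):

```shell
# Next-turn request body: previous turns are carried forward and the new
# user message is appended (assistant content below is illustrative).
next_payload='{
  "model": "ggml-gpt4all-j",
  "messages": [
    {"role": "user", "content": "How are you?"},
    {"role": "assistant", "content": "Doing well, thanks."},
    {"role": "user", "content": "Glad to hear it!"}
  ],
  "temperature": 0.9
}'

# This body would be POSTed to /v1/chat/completions exactly as above.
echo "$next_payload"
```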

Build locally

In order to build the LocalAI container image locally you can use docker:

# build the image
docker build -t localai .
# run it, publishing the API port (mount your models directory
# as in docker-compose.yaml so the API can find your models)
docker run -p 8080:8080 -v $PWD/models:/models localai

Or you can build the binary with make:

make build

See the build section in our documentation for detailed instructions.

Run LocalAI in Kubernetes

LocalAI can be installed inside Kubernetes with Helm. See the installation instructions.
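A sketch of the Helm flow, assuming the community chart repository (the repository URL and chart name here are assumptions; check the linked installation instructions for the current values):

```shell
# Add the chart repository (URL assumed; see the installation docs)
helm repo add go-skynet https://go-skynet.github.io/helm-charts/
helm repo update

# Install with default values (chart name assumed;
# customize with -f values.yaml as needed)
helm install local-ai go-skynet/local-ai
```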

Supported API endpoints

See the list of the LocalAI features for a full tour of the available API endpoints.

Frequently asked questions

See the FAQ section for a list of common questions.

Projects already using LocalAI to run local models

Feel free to open up a PR to get your project listed!

Sponsors

Do you find LocalAI useful?

Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.

A huge thank you to our generous sponsors who support this project:

Spectro Cloud logo_600x600px_transparent bg
Spectro Cloud
Spectro Cloud kindly supports LocalAI by providing GPU and computing resources to run tests on Lambda Labs!

Star history

LocalAI Star history Chart

License

LocalAI is a community-driven project created by Ettore Di Giacinto.

MIT

Author

Ettore Di Giacinto and others

Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

Contributors