🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: text, audio, video and image generation, voice cloning, and distributed P2P inference.



LocalAI


💡 Get help - FAQ 💭 Discussions 💬 Discord 📖 Documentation website

💻 Quickstart 📣 News 🛫 Examples 🖼️ Models 🚀 Roadmap


LocalAI is the free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API that is compatible with OpenAI (Elevenlabs, Anthropic...) API specifications for local AI inferencing. It allows you to run LLMs, generate images, audio (and more) locally or on-prem with consumer-grade hardware, supporting multiple model families. It does not require a GPU. It is created and maintained by Ettore Di Giacinto.


Run the installer script:

curl https://localai.io/install.sh | sh

Or run with docker:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
# Alternative images:
# - if you have an Nvidia GPU:
# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
# - without preconfigured models
# docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
# - without preconfigured models for Nvidia GPUs
# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12 
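Once a container is up, LocalAI serves an OpenAI-compatible REST API on port 8080, so any OpenAI-style client works against it. As a minimal sketch using only the Python standard library (the model name `gpt-4` is an assumption here: the AIO images ship preconfigured model aliases, and the name you use depends on which models are installed):

```python
import json
import urllib.request

# Build an OpenAI-style chat-completion request aimed at a local
# LocalAI instance. The endpoint path follows the OpenAI API spec;
# the model name is an assumption and depends on installed models.
payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "How are you doing?"}],
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With a running server, urllib.request.urlopen(req) would return the
# JSON completion; the request is only constructed here, not sent.
```

Because the API mirrors the OpenAI specification, the same request works with any OpenAI SDK by pointing its base URL at `http://localhost:8080/v1`.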

💻 Getting started

🔥🔥 Hot topics / Roadmap

Roadmap

Hot topics (looking for contributors):

If you want to help and contribute, issues up for grabs: https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3A%22up+for+grabs%22

🚀 Features

💻 Usage

Check out the Getting started section in our documentation.
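For longer-running setups, the `docker run` commands from the quickstart can also be expressed as a Compose file. A hedged sketch (the volume mount path is an assumption about where the image looks for models; check the repository's own docker-compose.yaml for the authoritative version):

```yaml
# Sketch equivalent of the quickstart `docker run` command.
services:
  local-ai:
    image: localai/localai:latest-aio-cpu
    ports:
      - "8080:8080"
    volumes:
      - ./models:/build/models   # assumption: persist downloaded models across restarts
```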

🔗 Community and integrations

Build and deploy custom containers:

WebUIs:

Model galleries

Other:

🔗 Resources

📖 🎥 Media, Blogs, Social

Citation

If you utilize this repository or its data in a downstream project, please consider citing it with:

@misc{localai,
  author = {Ettore Di Giacinto},
  title = {LocalAI: The free, Open source OpenAI alternative},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/go-skynet/LocalAI}},
}

❤️ Sponsors

Do you find LocalAI useful?

Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.

A huge thank you to our generous sponsors who support this project by covering CI expenses, and to everyone on our Sponsor list:


🌟 Star history

LocalAI Star history Chart

📖 License

LocalAI is a community-driven project created by Ettore Di Giacinto.

MIT - Author Ettore Di Giacinto mudler@localai.io

🙇 Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

🤗 Contributors

This is a community project, a special thanks to our contributors! 🤗