LocalAI
💡 Get help - ❓FAQ 💭Discussions 💬 Discord 📖 Documentation website
LocalAI is a drop-in replacement REST API compatible with the OpenAI API specifications for local inferencing. It allows you to run LLMs (and not only LLMs) locally or on-prem on consumer-grade hardware, supporting multiple model families compatible with the ggml format. It does not require a GPU.
Follow LocalAI
Connect with the Creator
Share LocalAI Repository
In a nutshell:
- Local, OpenAI drop-in alternative REST API. You own your data.
- NO GPU required. NO Internet access is required either.
- Optional GPU acceleration is available for llama.cpp-compatible LLMs. See also the build section.
- Supports multiple models
- 🏃 Once loaded the first time, it keeps models in memory for faster inference
- ⚡ Doesn't shell-out, but uses C++ bindings for faster inference and better performance.
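Because LocalAI is an OpenAI drop-in, existing clients only need to point at a different base URL; the wire format stays the same. A minimal sketch of building a chat completion request (the host/port and model name below are assumptions — use whatever your local instance serves):

```python
import json

def chat_request(base_url, model, prompt):
    """Build an OpenAI-style chat completion request for a LocalAI server.

    Returns the target URL and the JSON body to POST to it.
    """
    url = f"{base_url}/v1/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return url, json.dumps(body)

# Hypothetical local instance (LocalAI commonly listens on port 8080).
url, body = chat_request("http://localhost:8080", "ggml-gpt4all-j", "How are you?")
print(url)
```

The same body works against the official OpenAI endpoint, which is what makes migration a matter of swapping the base URL.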
LocalAI was created by Ettore Di Giacinto and is a community-driven project, focused on making AI accessible to anyone. Contributions, feedback, and PRs are welcome!
Note that this started as just a fun weekend project, an attempt to create the necessary pieces for a full AI assistant like ChatGPT: the community is growing fast and we are working hard to make it better and more stable. If you want to help, please consider contributing (see below)!
🔥🔥 Hot topics / Roadmap
🚀 Features
- 📖 Text generation with GPTs (llama.cpp, gpt4all.cpp, ... 📖 and more)
- 🗣 Text to Audio
- 🔈 Audio to Text (audio transcription with whisper.cpp)
- 🎨 Image generation with stable diffusion
- 🔥 OpenAI functions 🆕
- 🧠 Embeddings generation for vector databases
- ✍️ Constrained grammars
- 🖼️ Download Models directly from Huggingface
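Among the features above, embeddings generation also follows the OpenAI wire format, so a vector-database pipeline can target a local instance directly. A hedged sketch (the endpoint path follows the OpenAI spec; the base URL and model name are assumptions):

```python
import json

def embeddings_request(base_url, model, texts):
    """Build an OpenAI-style /v1/embeddings request for a LocalAI server.

    Returns the target URL and the JSON body to POST to it.
    """
    url = f"{base_url}/v1/embeddings"
    body = {"model": model, "input": texts}
    return url, json.dumps(body)

# Hypothetical local instance and model name.
url, body = embeddings_request("http://localhost:8080", "my-embedding-model",
                               ["LocalAI runs models locally"])
print(url)
```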
📖 🎥 Media, Blogs, Social
- Create a slackbot for teams and OSS projects that answers questions from documentation
- LocalAI meets k8sgpt
- Question Answering on Documents locally with LangChain, LocalAI, Chroma, and GPT4All
- Tutorial to use k8sgpt with LocalAI
💻 Usage
Check out the Getting started section in our documentation.
💡 Example: Use GPT4ALL-J model
See the documentation
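As a rough illustration of what such a request looks like from the command line, the snippet below renders an equivalent curl command (the model name and local address are assumptions — use the name of the model file in your models directory, and see the documentation for the exact setup):

```python
import json

def curl_for_chat(base_url, model, prompt):
    """Render an equivalent curl command for a LocalAI chat completion call."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return (f"curl {base_url}/v1/chat/completions "
            f"-H 'Content-Type: application/json' -d '{body}'")

print(curl_for_chat("http://localhost:8080", "ggml-gpt4all-j", "How are you?"))
```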
🔗 Resources
❤️ Sponsors
Do you find LocalAI useful?
Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.
A huge thank you to our generous sponsors who support this project:
Spectro Cloud
Spectro Cloud kindly supports LocalAI by providing GPU and computing resources to run tests on Lambda Labs!
🌟 Star history
📖 License
LocalAI is a community-driven project created by Ettore Di Giacinto.
MIT - Author Ettore Di Giacinto
🙇 Acknowledgements
LocalAI couldn't have been built without the help of great software already available from the community. Thank you!
- llama.cpp
- https://github.com/tatsu-lab/stanford_alpaca
- https://github.com/cornelk/llama-go for the initial ideas
- https://github.com/antimatter15/alpaca.cpp
- https://github.com/EdVince/Stable-Diffusion-NCNN
- https://github.com/ggerganov/whisper.cpp
- https://github.com/saharNooby/rwkv.cpp
- https://github.com/rhasspy/piper
- https://github.com/cmp-nct/ggllm.cpp
🤗 Contributors
This is a community project, a special thanks to our contributors! 🤗