diff --git a/README.md b/README.md
index 776fb117..be1f58e6 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,6 @@


-
- LocalAI +

@@ -48,25 +47,58 @@
 [![tests](https://github.com/go-skynet/LocalAI/actions/workflows/test.yml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/test.yml)[![Build and Release](https://github.com/go-skynet/LocalAI/actions/workflows/release.yaml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/release.yaml)[![build container images](https://github.com/go-skynet/LocalAI/actions/workflows/image.yml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/image.yml)[![Bump dependencies](https://github.com/go-skynet/LocalAI/actions/workflows/bump_deps.yaml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/bump_deps.yaml)[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/localai)](https://artifacthub.io/packages/search?repo=localai)
-**LocalAI** is the free, Open Source OpenAI alternative. LocalAI act as a drop-in replacement REST API that’s compatible with OpenAI (Elevenlabs, Anthropic... ) API specifications for local AI inferencing. It allows you to run LLMs, generate images, audio (and not only) locally or on-prem with consumer grade hardware, supporting multiple model families. Does not require GPU. It is created and maintained by [Ettore Di Giacinto](https://github.com/mudler).
+**LocalAI** is the free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API that's compatible with the OpenAI (Elevenlabs, Anthropic...) API specifications for local AI inferencing. It allows you to run LLMs, generate images and audio (and not only) locally or on-prem with consumer-grade hardware, supporting multiple model families. It does not require a GPU. It is created and maintained by [Ettore Di Giacinto](https://github.com/mudler).
+
+
+## 📚🆕 Local Stack Family
+
+🆕 LocalAI is now part of a comprehensive suite of AI tools designed to work together:
+
+[LocalAGI Logo]
+
+**[LocalAGI](https://github.com/mudler/LocalAGI)**
+
+A powerful Local AI agent management platform that serves as a drop-in replacement for OpenAI's Responses API, enhanced with advanced agentic capabilities.
+
+[LocalRecall Logo]
+
+**[LocalRecall](https://github.com/mudler/LocalRecall)**
+
+A REST-ful API and knowledge base management system that provides persistent memory and storage capabilities for AI agents.
+
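Since LocalAI exposes an OpenAI-compatible REST API, existing OpenAI clients only need their base URL pointed at the local instance. A minimal sketch, assuming the default port 8080 and a model name taken from the quickstart below (adjust to whatever is actually installed):

```bash
# Chat completion against a local LocalAI instance; the model name is a placeholder.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.2-1b-instruct:q4_k_m",
    "messages": [{"role": "user", "content": "Say hello from my own hardware."}]
  }'
```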
+ +## Screenshots + | Talk Interface | Generate Audio | | --- | --- | -| ![Screenshot 2025-03-31 at 12-01-36 LocalAI - Talk](https://github.com/user-attachments/assets/9841b1ee-88af-4b96-8ec0-41b17364efa7) | ![Screenshot 2025-03-31 at 12-01-29 LocalAI - Generate audio with voice-en-us-ryan-low](https://github.com/user-attachments/assets/d729f6f4-0621-4715-bda3-35fe6e159524) | +| ![Screenshot 2025-03-31 at 12-01-36 LocalAI - Talk](./docs/assets/images/screenshots/screenshot_tts.png) | ![Screenshot 2025-03-31 at 12-01-29 LocalAI - Generate audio with voice-en-us-ryan-low](./docs/assets/images/screenshots/screenshot_tts.png) | | Models Overview | Generate Images | | --- | --- | -| ![Screenshot 2025-03-31 at 12-01-20 LocalAI - Models](https://github.com/user-attachments/assets/3cf0b918-ba8e-498a-a3cd-485db5984325) | ![Screenshot 2025-03-31 at 12-31-41 LocalAI - Generate images with flux 1-dev](https://github.com/user-attachments/assets/6753d23d-218b-4e07-94b8-9e6c5a4f2311) | +| ![Screenshot 2025-03-31 at 12-01-20 LocalAI - Models](./docs/assets/images/screenshots/screenshot_gallery.png) | ![Screenshot 2025-03-31 at 12-31-41 LocalAI - Generate images with flux 1-dev](./docs/assets/images/screenshots/screenshot_image.png) | -| Chat Interface | API Overview | +| Chat Interface | Home | | --- | --- | -| ![Screenshot 2025-03-31 at 11-57-44 LocalAI - Chat with localai-functioncall-qwen2 5-7b-v0 5](https://github.com/user-attachments/assets/048eab31-0f0c-4d52-a920-3715233f9bf3) | ![Screenshot 2025-03-31 at 11-57-23 LocalAI API - c2a39e3 (c2a39e3639227cfd94ffffe9f5691239acc275a8)](https://github.com/user-attachments/assets/2540e8ce-1a2c-4c12-800c-763bd9be247f) | +| ![Screenshot 2025-03-31 at 11-57-44 LocalAI - Chat with localai-functioncall-qwen2 5-7b-v0 5](./docs/assets/images/screenshots/screenshot_chat.png) | ![Screenshot 2025-03-31 at 11-57-23 LocalAI API - c2a39e3 (c2a39e3639227cfd94ffffe9f5691239acc275a8)](./docs/assets/images/screenshots/screenshot_home.png) | | Login | Swarm | | --- | --- | -|![Screenshot 2025-03-31 at 12-09-59 ](https://github.com/user-attachments/assets/5af681b0-dd8e-4fe8-a234-a22f8a040547) | ![Screenshot 2025-03-31 at 12-10-39 LocalAI - P2P dashboard](https://github.com/user-attachments/assets/b9527176-63d6-4d2e-8ed1-7fde13a9b0ad) | +|![Screenshot 2025-03-31 at 12-09-59 ](./docs/assets/images/screenshots/screenshot_login.png) | ![Screenshot 2025-03-31 at 12-10-39 LocalAI - P2P dashboard](./docs/assets/images/screenshots/screenshot_p2p.png) | -## Quickstart +## πŸ’» Quickstart Run the installer script: @@ -108,10 +140,11 @@ local-ai run https://gist.githubusercontent.com/.../phi-2.yaml local-ai run oci://localai/phi-2:latest ``` -[πŸ’» Getting started](https://localai.io/basics/getting_started/index.html) +For more information, see [πŸ’» Getting started](https://localai.io/basics/getting_started/index.html) ## πŸ“° Latest project news +- Apr 2025: [LocalAGI](https://github.com/mudler/LocalAGI) and [LocalRecall](https://github.com/mudler/LocalRecall) join the LocalAI family stack. 
- Apr 2025: WebUI overhaul, AIO images updates - Feb 2025: Backend cleanup, Breaking changes, new backends (kokoro, OutelTTS, faster-whisper), Nvidia L4T images - Jan 2025: LocalAI model release: https://huggingface.co/mudler/LocalAI-functioncall-phi-4-v0.3, SANA support in diffusers: https://github.com/mudler/LocalAI/pull/4603 @@ -127,19 +160,6 @@ local-ai run oci://localai/phi-2:latest Roadmap items: [List of issues](https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap) -## πŸ”₯πŸ”₯ Hot topics (looking for help): - -- Multimodal with vLLM and Video understanding: https://github.com/mudler/LocalAI/pull/3729 -- Realtime API https://github.com/mudler/LocalAI/issues/3714 -- WebUI improvements: https://github.com/mudler/LocalAI/issues/2156 -- Backends v2: https://github.com/mudler/LocalAI/issues/1126 -- Improving UX v2: https://github.com/mudler/LocalAI/issues/1373 -- Assistant API: https://github.com/mudler/LocalAI/issues/1273 -- Vulkan: https://github.com/mudler/LocalAI/issues/1647 -- Anthropic API: https://github.com/mudler/LocalAI/issues/1808 - -If you want to help and contribute, issues up for grabs: https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3A%22up+for+grabs%22 - ## πŸš€ [Features](https://localai.io/features/) - πŸ“– [Text generation with GPTs](https://localai.io/features/text-generation/) (`llama.cpp`, `transformers`, `vllm` ... [:book: and more](https://localai.io/model-compatibility/index.html#model-compatibility-table)) @@ -153,12 +173,10 @@ If you want to help and contribute, issues up for grabs: https://github.com/mudl - πŸ₯½ [Vision API](https://localai.io/features/gpt-vision/) - πŸ“ˆ [Reranker API](https://localai.io/features/reranker/) - πŸ†•πŸ–§ [P2P Inferencing](https://localai.io/features/distribute/) +- [Agentic capabilities](https://github.com/mudler/LocalAGI) - πŸ”Š Voice activity detection (Silero-VAD support) - 🌍 Integrated WebUI! -## πŸ’» Usage - -Check out the [Getting started](https://localai.io/basics/getting_started/index.html) section in our documentation. 
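Most of the features listed above are served through the same OpenAI-style endpoints as chat. Two small sketches, assuming an image-generation backend and an embedding model are already installed; both model names are placeholders:

```bash
# Image generation (OpenAI-compatible endpoint); "flux.1-dev" is a placeholder model name.
curl http://localhost:8080/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{"model": "flux.1-dev", "prompt": "a cat wearing sunglasses", "size": "512x512"}'

# Embeddings for vector databases; the model name is a placeholder.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-ada-002", "input": "LocalAI runs on your own hardware"}'
```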
### πŸ”— Community and integrations diff --git a/core/http/app.go b/core/http/app.go index 9cbeefff..57f95465 100644 --- a/core/http/app.go +++ b/core/http/app.go @@ -142,9 +142,9 @@ func API(application *application.Application) (*fiber.App, error) { httpFS := http.FS(embedDirStatic) router.Use(favicon.New(favicon.Config{ - URL: "/favicon.ico", + URL: "/favicon.svg", FileSystem: httpFS, - File: "static/favicon.ico", + File: "static/favicon.svg", })) router.Use("/static", filesystem.New(filesystem.Config{ diff --git a/core/http/explorer.go b/core/http/explorer.go index 36609add..e3001f3a 100644 --- a/core/http/explorer.go +++ b/core/http/explorer.go @@ -29,9 +29,9 @@ func Explorer(db *explorer.Database) *fiber.App { httpFS := http.FS(embedDirStatic) app.Use(favicon.New(favicon.Config{ - URL: "/favicon.ico", + URL: "/favicon.svg", FileSystem: httpFS, - File: "static/favicon.ico", + File: "static/favicon.svg", })) app.Use("/static", filesystem.New(filesystem.Config{ diff --git a/core/http/static/favicon.ico b/core/http/static/favicon.ico deleted file mode 100644 index 05a5fa9e..00000000 Binary files a/core/http/static/favicon.ico and /dev/null differ diff --git a/core/http/static/favicon.svg b/core/http/static/favicon.svg new file mode 100644 index 00000000..5e881d4b --- /dev/null +++ b/core/http/static/favicon.svg @@ -0,0 +1,171 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/core/http/static/logo.png b/core/http/static/logo.png new file mode 100644 index 00000000..de98e67b Binary files /dev/null and b/core/http/static/logo.png differ diff --git a/core/http/static/logo_horizontal.png b/core/http/static/logo_horizontal.png new file mode 100644 index 00000000..97df1a8b Binary files /dev/null and b/core/http/static/logo_horizontal.png differ diff --git a/core/http/views/login.html b/core/http/views/login.html index aed40d2f..f167dc60 100644 --- a/core/http/views/login.html +++ b/core/http/views/login.html @@ -12,7 +12,7 @@
- + LocalAI Logo
diff --git a/core/http/views/partials/head.html b/core/http/views/partials/head.html index 00fd4101..77b6d34e 100644 --- a/core/http/views/partials/head.html +++ b/core/http/views/partials/head.html @@ -3,7 +3,7 @@ {{.Title}} - + diff --git a/core/http/views/partials/navbar.html b/core/http/views/partials/navbar.html index 7fabae37..efec457b 100644 --- a/core/http/views/partials/navbar.html +++ b/core/http/views/partials/navbar.html @@ -4,10 +4,9 @@
- LocalAI Logo - LocalAI + class="h-14 mr-3 brightness-110 transition-all duration-300 group-hover:brightness-125">
diff --git a/core/http/views/partials/navbar_explorer.html b/core/http/views/partials/navbar_explorer.html index 80fd9758..3b9e5a8f 100644 --- a/core/http/views/partials/navbar_explorer.html +++ b/core/http/views/partials/navbar_explorer.html @@ -4,10 +4,9 @@
- LocalAI Logo - LocalAI
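Since the server now ships `favicon.svg` in place of `favicon.ico` (see the `favicon.Config` changes above), a quick smoke test against a running instance, assuming the default port:

```bash
# Expect an HTTP 200 for the new asset once the server is rebuilt with this change.
curl -I http://localhost:8080/favicon.svg
```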
diff --git a/docs/assets/images/imagen.png b/docs/assets/images/imagen.png new file mode 100644 index 00000000..7d9808f4 Binary files /dev/null and b/docs/assets/images/imagen.png differ diff --git a/docs/assets/images/localai_screenshot.png b/docs/assets/images/localai_screenshot.png new file mode 100644 index 00000000..17774d1a Binary files /dev/null and b/docs/assets/images/localai_screenshot.png differ diff --git a/docs/assets/images/logos/logo.png b/docs/assets/images/logos/logo.png new file mode 100644 index 00000000..de98e67b Binary files /dev/null and b/docs/assets/images/logos/logo.png differ diff --git a/docs/assets/images/logos/logo.svg b/docs/assets/images/logos/logo.svg new file mode 100644 index 00000000..5e881d4b --- /dev/null +++ b/docs/assets/images/logos/logo.svg @@ -0,0 +1,171 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/assets/images/screenshots/screenshot_chat.png b/docs/assets/images/screenshots/screenshot_chat.png new file mode 100644 index 00000000..bc621ba7 Binary files /dev/null and b/docs/assets/images/screenshots/screenshot_chat.png differ diff --git a/docs/assets/images/screenshots/screenshot_gallery.png b/docs/assets/images/screenshots/screenshot_gallery.png new file mode 100644 index 00000000..c8a33642 Binary files /dev/null and b/docs/assets/images/screenshots/screenshot_gallery.png differ diff --git a/docs/assets/images/screenshots/screenshot_home.png b/docs/assets/images/screenshots/screenshot_home.png new file mode 100644 index 00000000..18777a46 Binary files /dev/null and b/docs/assets/images/screenshots/screenshot_home.png differ diff --git a/docs/assets/images/screenshots/screenshot_image.png b/docs/assets/images/screenshots/screenshot_image.png new file mode 100644 index 00000000..7d9808f4 Binary files /dev/null and b/docs/assets/images/screenshots/screenshot_image.png differ diff --git a/docs/assets/images/screenshots/screenshot_login.png b/docs/assets/images/screenshots/screenshot_login.png new file mode 100644 index 00000000..82b1614a Binary files /dev/null and b/docs/assets/images/screenshots/screenshot_login.png differ diff --git a/docs/assets/images/screenshots/screenshot_p2p.png b/docs/assets/images/screenshots/screenshot_p2p.png new file mode 100644 index 00000000..fbeb75ef Binary files /dev/null and b/docs/assets/images/screenshots/screenshot_p2p.png differ diff --git a/docs/assets/images/screenshots/screenshot_talk.png b/docs/assets/images/screenshots/screenshot_talk.png new file mode 100644 index 00000000..956b59d1 Binary files /dev/null and b/docs/assets/images/screenshots/screenshot_talk.png differ diff --git a/docs/assets/images/screenshots/screenshot_tts.png b/docs/assets/images/screenshots/screenshot_tts.png new file mode 100644 index 00000000..8df68f70 Binary files /dev/null and b/docs/assets/images/screenshots/screenshot_tts.png differ diff --git a/docs/assets/jsconfig.json b/docs/assets/jsconfig.json index 9f2d1c43..f3bd7ab2 100644 --- a/docs/assets/jsconfig.json +++ b/docs/assets/jsconfig.json @@ -3,7 +3,7 @@ "baseUrl": ".", "paths": { "*": [ - "../../../../.cache/hugo_cache/modules/filecache/modules/pkg/mod/github.com/gohugoio/hugo-mod-jslibs-dist/popperjs/v2@v2.21100.20000/package/dist/cjs/popper.js/*", + "../../../../.cache/hugo_cache/modules/filecache/modules/pkg/mod/github.com/gohugoio/hugo-mod-jslibs-dist/popperjs/v2@v2.21100.20000/package/dist/cjs/*", 
"../../../../.cache/hugo_cache/modules/filecache/modules/pkg/mod/github.com/twbs/bootstrap@v5.3.2+incompatible/js/*" ] } diff --git a/docs/config.toml b/docs/config.toml index 52602750..97e89ce6 100644 --- a/docs/config.toml +++ b/docs/config.toml @@ -48,9 +48,9 @@ defaultContentLanguage = 'en' [params.docs] # Parameters for the /docs 'template' - logo = "https://github.com/go-skynet/LocalAI/assets/2420543/0966aa2a-166e-4f99-a3e5-6c915fc997dd" - logo_text = "LocalAI" - title = "LocalAI documentation" # default html title for documentation pages/sections + logo = "https://raw.githubusercontent.com/mudler/LocalAI/refs/heads/master/core/http/static/logo.png" + logo_text = "" + title = "LocalAI" # default html title for documentation pages/sections pathName = "docs" # path name for documentation site | default "docs" @@ -108,6 +108,7 @@ defaultContentLanguage = 'en' # indexName = "" # Index Name to perform search on (or set env variable HUGO_PARAM_DOCSEARCH_indexName) [params.analytics] # Parameters for Analytics (Google, Plausible) + # google = "G-XXXXXXXXXX" # Replace with your Google Analytics ID # plausibleURL = "/docs/s" # (or set via env variable HUGO_PARAM_ANALYTICS_plausibleURL) # plausibleAPI = "/docs/s" # optional - (or set via env variable HUGO_PARAM_ANALYTICS_plausibleAPI) # plausibleDomain = "" # (or set via env variable HUGO_PARAM_ANALYTICS_plausibleDomain) diff --git a/docs/content/docs/features/distributed_inferencing.md b/docs/content/docs/features/distributed_inferencing.md index 71d29f39..d599de87 100644 --- a/docs/content/docs/features/distributed_inferencing.md +++ b/docs/content/docs/features/distributed_inferencing.md @@ -13,6 +13,8 @@ LocalAI supports two modes of distributed inferencing via p2p: - **Federated Mode**: Requests are shared between the cluster and routed to a single worker node in the network based on the load balancer's decision. - **Worker Mode** (aka "model sharding" or "splitting weights"): Requests are processed by all the workers which contributes to the final inference result (by sharing the model weights). +A list of global instances shared by the community is available at [explorer.localai.io](https://explorer.localai.io). + ## Usage Starting LocalAI with `--p2p` generates a shared token for connecting multiple instances: and that's all you need to create AI clusters, eliminating the need for intricate network setups. 
diff --git a/docs/content/docs/getting-started/quickstart.md b/docs/content/docs/getting-started/quickstart.md index 4e14c505..0d962d3c 100644 --- a/docs/content/docs/getting-started/quickstart.md +++ b/docs/content/docs/getting-started/quickstart.md @@ -18,14 +18,45 @@ To access the WebUI with an API_KEY, browser extensions such as [Requestly](http {{% /alert %}} -## Using the Bash Installer +## Quickstart -Install LocalAI easily using the bash installer with the following command: -```sh +### Using the Bash Installer +```bash curl https://localai.io/install.sh | sh ``` +### Run with docker: +```bash +# CPU only image: +docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-cpu + +# Nvidia GPU: +docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12 + +# CPU and GPU image (bigger size): +docker run -ti --name local-ai -p 8080:8080 localai/localai:latest + +# AIO images (it will pre-download a set of models ready for use, see https://localai.io/basics/container/) +docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu +``` + +### Load models: + +```bash +# From the model gallery (see available models with `local-ai models list`, in the WebUI from the model tab, or visiting https://models.localai.io) +local-ai run llama-3.2-1b-instruct:q4_k_m +# Start LocalAI with the phi-2 model directly from huggingface +local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf +# Install and run a model from the Ollama OCI registry +local-ai run ollama://gemma:2b +# Run a model from a configuration file +local-ai run https://gist.githubusercontent.com/.../phi-2.yaml +# Install and run a model from a standard OCI registry (e.g., Docker Hub) +local-ai run oci://localai/phi-2:latest +``` + + For a full list of options, refer to the [Installer Options]({{% relref "docs/advanced/installer" %}}) documentation. Binaries can also be [manually downloaded]({{% relref "docs/reference/binaries" %}}). diff --git a/docs/content/docs/overview.md b/docs/content/docs/overview.md index 11b9ce7d..981ba765 100644 --- a/docs/content/docs/overview.md +++ b/docs/content/docs/overview.md @@ -1,4 +1,3 @@ - +++ title = "Overview" weight = 1 @@ -7,162 +6,96 @@ description = "What is LocalAI?" tags = ["Beginners"] categories = [""] author = "Ettore Di Giacinto" -# This allows to overwrite the landing page -url = '/' icon = "info" +++ -
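Once one of the quickstart paths above is running (installer, binary, or Docker), a quick sanity check is to see which models the instance knows about; port 8080 is the default used throughout the quickstart:

```bash
# Models available for installation (gallery) ...
local-ai models list

# ... and models the running server currently exposes, via the OpenAI-compatible endpoint.
curl http://localhost:8080/v1/models
```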

- -

+# Welcome to LocalAI -

- -LocalAI forks - - -LocalAI stars - - -LocalAI pull-requests - - - - -

+LocalAI is your complete AI stack for running AI models locally. It's designed to be simple, efficient, and accessible, providing a drop-in replacement for OpenAI's API while keeping your data private and secure. -

- -LocalAI Docker hub - - -LocalAI Quay.io - -

+## Why LocalAI? -

-mudler%2FLocalAI | Trendshift -

+In today's AI landscape, privacy, control, and flexibility are paramount. LocalAI addresses these needs by: -

- -Follow LocalAI_API - - -Join LocalAI Discord Community - -

+- **Privacy First**: Your data never leaves your machine +- **Complete Control**: Run models on your terms, with your hardware +- **Open Source**: MIT licensed and community-driven +- **Flexible Deployment**: From laptops to servers, with or without GPUs +- **Extensible**: Add new models and features as needed -> πŸ’‘ Get help - [❓FAQ](https://localai.io/faq/) [πŸ’­Discussions](https://github.com/go-skynet/LocalAI/discussions) [πŸ’­Discord](https://discord.gg/uJAeKSAGDy) -> -> [πŸ’» Quickstart](https://localai.io/basics/getting_started/) [πŸ–ΌοΈ Models](https://models.localai.io/) [πŸš€ Roadmap](https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap) [πŸ₯½ Demo](https://demo.localai.io) [🌍 Explorer](https://explorer.localai.io) [πŸ›« Examples](https://github.com/go-skynet/LocalAI/tree/master/examples/) +## Core Components +LocalAI is more than just a single tool - it's a complete ecosystem: -**LocalAI** is the free, Open Source OpenAI alternative. LocalAI act as a drop-in replacement REST API that's compatible with OpenAI API specifications for local inferencing. It allows you to run LLMs, generate images, audio (and not only) locally or on-prem with consumer grade hardware, supporting multiple model families and architectures. Does not require GPU. It is created and maintained by [Ettore Di Giacinto](https://github.com/mudler). +1. **[LocalAI Core](https://github.com/mudler/LocalAI)** + - OpenAI-compatible API + - Multiple model support (LLMs, image, audio) + - No GPU required + - Fast inference with native bindings + - [Github repository](https://github.com/mudler/LocalAI) +2. **[LocalAGI](https://github.com/mudler/LocalAGI)** + - Autonomous AI agents + - No coding required + - WebUI and REST API support + - Extensible agent framework + - [Github repository](https://github.com/mudler/LocalAGI) -## Start LocalAI +3. **[LocalRecall](https://github.com/mudler/LocalRecall)** + - Semantic search + - Memory management + - Vector database + - Perfect for AI applications + - [Github repository](https://github.com/mudler/LocalRecall) -Start the image with Docker to have a functional clone of OpenAI! πŸš€: +## Getting Started -```bash -docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu -# Do you have a Nvidia GPUs? Use this instead -# CUDA 11 -# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-11 -# CUDA 12 -# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-12 -``` - -Or just use the bash installer: +The fastest way to get started is with our one-line installer: ```bash curl https://localai.io/install.sh | sh ``` -See the [πŸ’» Quickstart](https://localai.io/basics/getting_started/) for all the options and way you can run LocalAI! +Or use Docker for a quick start: -## What is LocalAI? +```bash +docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu +``` -In a nutshell: +For more detailed installation options and configurations, see our [Getting Started guide](/basics/getting_started/). -- Local, OpenAI drop-in alternative REST API. You own your data. -- NO GPU required. NO Internet access is required either - - Optional, GPU Acceleration is available. See also the [build section](https://localai.io/basics/build/index.html). -- Supports multiple models -- πŸƒ Once loaded the first time, it keep models loaded in memory for faster inference -- ⚑ Doesn't shell-out, but uses bindings for a faster inference and better performance. 
+## Key Features -LocalAI is focused on making the AI accessible to anyone. Any contribution, feedback and PR is welcome! +- **Text Generation**: Run various LLMs locally +- **Image Generation**: Create images with stable diffusion +- **Audio Processing**: Text-to-speech and speech-to-text +- **Vision API**: Image understanding and analysis +- **Embeddings**: Vector database support +- **Functions**: OpenAI-compatible function calling +- **P2P**: Distributed inference capabilities -Note that this started just as a fun weekend project by [mudler](https://github.com/mudler) in order to try to create the necessary pieces for a full AI assistant like `ChatGPT`: the community is growing fast and we are working hard to make it better and more stable. If you want to help, please consider contributing (see below)! +## Community and Support -### πŸš€ Features +LocalAI is a community-driven project. You can: -- πŸ“– [Text generation with GPTs](https://localai.io/features/text-generation/) (`llama.cpp`, `gpt4all.cpp`, ... [:book: and more](https://localai.io/model-compatibility/index.html#model-compatibility-table)) -- πŸ—£ [Text to Audio](https://localai.io/features/text-to-audio/) -- πŸ”ˆ [Audio to Text](https://localai.io/features/audio-to-text/) (Audio transcription with `whisper.cpp`) -- 🎨 [Image generation with stable diffusion](https://localai.io/features/image-generation) -- πŸ”₯ [OpenAI functions](https://localai.io/features/openai-functions/) πŸ†• -- 🧠 [Embeddings generation for vector databases](https://localai.io/features/embeddings/) -- ✍️ [Constrained grammars](https://localai.io/features/constrained_grammars/) -- πŸ–ΌοΈ [Download Models directly from Huggingface ](https://localai.io/models/) -- πŸ₯½ [Vision API](https://localai.io/features/gpt-vision/) -- πŸ’Ύ [Stores](https://localai.io/stores) -- πŸ“ˆ [Reranker](https://localai.io/features/reranker/) -- πŸ†•πŸ–§ [P2P Inferencing](https://localai.io/features/distribute/) +- Join our [Discord community](https://discord.gg/uJAeKSAGDy) +- Check out our [GitHub repository](https://github.com/mudler/LocalAI) +- Contribute to the project +- Share your use cases and examples -## Contribute and help +## Next Steps -To help the project you can: +Ready to dive in? Here are some recommended next steps: -- If you have technological skills and want to contribute to development, have a look at the open issues. If you are new you can have a look at the [good-first-issue](https://github.com/go-skynet/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) and [help-wanted](https://github.com/go-skynet/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) labels. +1. [Install LocalAI](/basics/getting_started/) +2. [Explore available models](https://models.localai.io) +3. [Model compatibility](/model-compatibility/) +4. [Try out examples](https://github.com/mudler/LocalAI-examples) +5. [Join the community](https://discord.gg/uJAeKSAGDy) +6. [Check the LocalAI Github repository](https://github.com/mudler/LocalAI) +7. [Check the LocalAGI Github repository](https://github.com/mudler/LocalAGI) -- If you don't have technological skills you can still help improving documentation or [add examples](https://github.com/go-skynet/LocalAI/tree/master/examples) or share your user-stories with our community, any help and contribution is welcome! 
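The feature lists above mention OpenAI-compatible function calling; requests follow the standard OpenAI `tools` shape. A hedged sketch, with the model name as a placeholder for whatever function-calling model is installed:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "localai-functioncall-phi-4-v0.3",
    "messages": [{"role": "user", "content": "What is the weather like in Rome?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }],
    "tool_choice": "auto"
  }'
```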
-## 🌟 Star history +## License -[![LocalAI Star history Chart](https://api.star-history.com/svg?repos=mudler/LocalAI&type=Date)](https://star-history.com/#mudler/LocalAI&Date) - -## ❀️ Sponsors - -> Do you find LocalAI useful? - -Support the project by becoming [a backer or sponsor](https://github.com/sponsors/mudler). Your logo will show up here with a link to your website. - -A huge thank you to our generous sponsors who support this project covering CI expenses, and our [Sponsor list](https://github.com/sponsors/mudler): - -


- -## πŸ“– License - -LocalAI is a community-driven project created by [Ettore Di Giacinto](https://github.com/mudler/). - -MIT - Author Ettore Di Giacinto - -## πŸ™‡ Acknowledgements - -LocalAI couldn't have been built without the help of great software already available from the community. Thank you! - -- [llama.cpp](https://github.com/ggerganov/llama.cpp) -- https://github.com/tatsu-lab/stanford_alpaca -- https://github.com/cornelk/llama-go for the initial ideas -- https://github.com/antimatter15/alpaca.cpp -- https://github.com/EdVince/Stable-Diffusion-NCNN -- https://github.com/ggerganov/whisper.cpp -- https://github.com/saharNooby/rwkv.cpp -- https://github.com/rhasspy/piper - -## πŸ€— Contributors - -This is a community project, a special thanks to our contributors! πŸ€— - - - +LocalAI is MIT licensed, created and maintained by [Ettore Di Giacinto](https://github.com/mudler). diff --git a/docs/data/landing.yaml b/docs/data/landing.yaml index 76246c0e..ff3f02e6 100644 --- a/docs/data/landing.yaml +++ b/docs/data/landing.yaml @@ -2,38 +2,212 @@ # Hero hero: - enable: false + enable: true weight: 10 template: hero + backgroundImage: + path: "images/templates/hero" + filename: + desktop: "gradient-desktop.webp" + mobile: "gradient-mobile.webp" + + badge: + text: "⭐ 31.7k+ stars on GitHub!" + color: primary + pill: false + soft: true + + titleLogo: + path: "images/logos" + filename: "logo.png" + alt: "LocalAI Logo" + height: 540px + + title: "" + subtitle: | + **The free, OpenAI, Anthropic alternative. Your All-in-One Complete AI Stack** - Run powerful language models, autonomous agents, and document intelligence **locally** on your hardware. + + **No cloud, no limits, no compromise.** + + image: + path: "images" + filename: "localai_screenshot.png" + alt: "LocalAI Screenshot" + boxShadow: true + rounded: true + + ctaButton: + icon: rocket_launch + btnText: "Get Started" + url: "/basics/getting_started/" + cta2Button: + icon: code + btnText: "View on GitHub" + url: "https://github.com/mudler/LocalAI" + + info: | + **Drop-in replacement for OpenAI API** - modular suite of tools that work seamlessly together or independently. + + Start with **[LocalAI](https://localai.io)**'s OpenAI-compatible API, extend with **[LocalAGI](https://github.com/mudler/LocalAGI)**'s autonomous agents, and enhance with **[LocalRecall](https://github.com/mudler/LocalRecall)**'s semantic search - all running locally on your hardware. + + **Open Source** MIT Licensed. + # Feature Grid featureGrid: - enable: false + enable: true weight: 20 template: feature grid + title: Why choose LocalAI? + subtitle: | + **OpenAI API Compatible** - Run AI models locally with our modular ecosystem. From language models to autonomous agents and semantic search, build your complete AI stack without the cloud. + + items: + - title: LLM Inferencing + icon: memory_alt + description: LocalAI is a free, **Open Source** OpenAI alternative. Run **LLMs**, generate **images**, **audio** and more **locally** with consumer grade hardware. + ctaLink: + text: learn more + url: /basics/getting_started/ + - title: Agentic-first + icon: smart_toy + description: | + Extend LocalAI with LocalAGI, an autonomous AI agent platform that runs locally, no coding required. + Build and deploy autonomous agents with ease. Interact with REST APIs or use the WebUI. 
+ ctaLink: + text: learn more + url: https://github.com/mudler/LocalAGI + + - title: Memory and Knowledge base + icon: psychology + description: + Extend LocalAI with LocalRecall, A local rest api for semantic search and memory management. Perfect for AI applications. + ctaLink: + text: learn more + url: https://github.com/mudler/LocalRecall + + - title: OpenAI Compatible + icon: api + description: Drop-in replacement for OpenAI API. Compatible with existing applications and libraries. + ctaLink: + text: learn more + url: /basics/getting_started/ + + - title: No GPU Required + icon: memory + description: Run on consumer grade hardware. No need for expensive GPUs or cloud services. + ctaLink: + text: learn more + url: /basics/getting_started/ + + - title: Multiple Models + icon: hub + description: | + Support for various model families including LLMs, image generation, and audio models. + Supports multiple backends for inferencing, including vLLM, llama.cpp, and more. + You can switch between them as needed and install them from the Web interface or the CLI. + ctaLink: + text: learn more + url: /model-compatibility + + - title: Privacy Focused + icon: security + description: Keep your data local. No data leaves your machine, ensuring complete privacy. + ctaLink: + text: learn more + url: /basics/container/ + + - title: Easy Setup + icon: settings + description: Simple installation and configuration. Get started in minutes with Binaries installation, Docker, Podman, Kubernetes or local installation. + ctaLink: + text: learn more + url: /basics/getting_started/ + + - title: Community Driven + icon: groups + description: Active community support and regular updates. Contribute and help shape the future of LocalAI. + ctaLink: + text: learn more + url: https://github.com/mudler/LocalAI + + + + - title: Extensible + icon: extension + description: Easy to extend and customize. Add new models and features as needed. + ctaLink: + text: learn more + url: /docs/integrations/ + + - title: Peer 2 Peer + icon: hub + description: | + LocalAI is designed to be a decentralized LLM inference, powered by a peer-to-peer system based on libp2p. + It is designed to be used in a local or remote network, and is compatible with any LLM model. + It works both in federated mode or by splitting models weights. + ctaLink: + text: learn more + url: /features/distribute/ + + - title: Open Source + icon: code + description: MIT licensed. Free to use, modify, and distribute. Community contributions welcome. + ctaLink: + text: learn more + url: https://github.com/mudler/LocalAI + imageText: enable: true weight: 25 template: image text - title: LocalAI - subtitle: The Free, Open Source OpenAI Alternative + title: Run AI models locally with ease + subtitle: | + LocalAI makes it simple to run various AI models on your own hardware. From text generation to image creation, autonomous agents to semantic search - all orchestrated through a unified API. 
list: - - text: Optimized, fast inference - icon: speed + - text: OpenAI API compatibility + icon: api - - text: Comprensive support for many models architectures - icon: area_chart + - text: Multiple model support + icon: hub - - text: Easy to deploy with Docker - icon: accessibility + - text: Image understanding + icon: image + + - text: Image generation + icon: image + + - text: Audio generation + icon: music_note + + - text: Voice activity detection + icon: mic + + - text: Speech recognition + icon: mic + + - text: Video generation + icon: movie + + - text: Privacy focused + icon: security + + - text: Autonomous agents with [LocalAGI](https://github.com/mudler/LocalAGI) + icon: smart_toy + + - text: Semantic search with [LocalRecall](https://github.com/mudler/LocalRecall) + icon: psychology + + - text: Agent orchestration + icon: hub image: - path: "images/logos" - filename: "logo.png" - alt: "LocalAI logo" # Optional but recommended + path: "images" + filename: "imagen.png" + alt: "LocalAI Image generation" imgOrder: desktop: 2 @@ -41,10 +215,62 @@ imageText: ctaButton: text: Learn more - url: "/docs/" + url: "/basics/getting_started/" # Image compare imageCompare: enable: false weight: 30 template: image compare + + title: LocalAI in Action + subtitle: See how LocalAI can transform your local AI experience with various models and capabilities. + + items: + - title: Text Generation + config: { + startingPoint: 50, + addCircle: true, + addCircleBlur: false, + showLabels: true, + labelOptions: { + before: 'Dark', + after: 'Light', + onHover: false + } + } + imagePath: "images/screenshots" + imageBefore: "text_generation_input.webp" + imageAfter: "text_generation_output.webp" + + - title: Image Generation + config: { + startingPoint: 50, + addCircle: true, + addCircleBlur: true, + showLabels: true, + labelOptions: { + before: 'Prompt', + after: 'Result', + onHover: true + } + } + imagePath: "images/screenshots" + imageBefore: "imagen_before.webp" + imageAfter: "imagen_after.webp" + + - title: Audio Generation + config: { + startingPoint: 50, + addCircle: true, + addCircleBlur: false, + showLabels: true, + labelOptions: { + before: 'Text', + after: 'Audio', + onHover: false + } + } + imagePath: "images/screenshots" + imageBefore: "audio_generation_text.webp" + imageAfter: "audio_generation_waveform.webp" \ No newline at end of file diff --git a/docs/layouts/index.html b/docs/layouts/index.html deleted file mode 100644 index e69de29b..00000000 diff --git a/docs/layouts/partials/docs/top-header.html b/docs/layouts/partials/docs/top-header.html index 375ff779..4bc974a8 100644 --- a/docs/layouts/partials/docs/top-header.html +++ b/docs/layouts/partials/docs/top-header.html @@ -82,7 +82,7 @@ {{ end -}} - {{ if .Site.IsMultiLingual }} + {{ if hugo.IsMultilingual }} +
+ + + + + \ No newline at end of file diff --git a/docs/layouts/partials/logo.html b/docs/layouts/partials/logo.html index 6b9f500b..f6b7cf6f 100644 --- a/docs/layouts/partials/logo.html +++ b/docs/layouts/partials/logo.html @@ -1 +1 @@ - + diff --git a/docs/netlify.toml b/docs/netlify.toml index 24a7cb29..fb4d98cd 100644 --- a/docs/netlify.toml +++ b/docs/netlify.toml @@ -1,4 +1,4 @@ [build] [build.environment] -HUGO_VERSION = "0.121.2" +HUGO_VERSION = "0.146.3" GO_VERSION = "1.22.2" diff --git a/docs/static/android-chrome-192x192.png b/docs/static/android-chrome-192x192.png index f83d7f00..b462084e 100644 Binary files a/docs/static/android-chrome-192x192.png and b/docs/static/android-chrome-192x192.png differ diff --git a/docs/static/android-chrome-512x512.png b/docs/static/android-chrome-512x512.png index c5a3be50..2db366d4 100644 Binary files a/docs/static/android-chrome-512x512.png and b/docs/static/android-chrome-512x512.png differ diff --git a/docs/static/apple-touch-icon.png b/docs/static/apple-touch-icon.png index 9628a2da..b820e718 100644 Binary files a/docs/static/apple-touch-icon.png and b/docs/static/apple-touch-icon.png differ diff --git a/docs/static/favicon-16x16.png b/docs/static/favicon-16x16.png index ee4b2846..c1407a0e 100644 Binary files a/docs/static/favicon-16x16.png and b/docs/static/favicon-16x16.png differ diff --git a/docs/static/favicon-32x32.png b/docs/static/favicon-32x32.png index 9580f203..9762f16b 100644 Binary files a/docs/static/favicon-32x32.png and b/docs/static/favicon-32x32.png differ diff --git a/docs/static/favicon.ico b/docs/static/favicon.ico index 05a5fa9e..6d838e78 100644 Binary files a/docs/static/favicon.ico and b/docs/static/favicon.ico differ diff --git a/docs/static/favicon.svg b/docs/static/favicon.svg new file mode 100644 index 00000000..5e881d4b --- /dev/null +++ b/docs/static/favicon.svg @@ -0,0 +1,171 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/static/site.webmanifest b/docs/static/site.webmanifest new file mode 100644 index 00000000..45dc8a20 --- /dev/null +++ b/docs/static/site.webmanifest @@ -0,0 +1 @@ +{"name":"","short_name":"","icons":[{"src":"/android-chrome-192x192.png","sizes":"192x192","type":"image/png"},{"src":"/android-chrome-512x512.png","sizes":"512x512","type":"image/png"}],"theme_color":"#ffffff","background_color":"#ffffff","display":"standalone"} \ No newline at end of file diff --git a/docs/themes/lotusdocs b/docs/themes/lotusdocs index f5785a23..975da91e 160000 --- a/docs/themes/lotusdocs +++ b/docs/themes/lotusdocs @@ -1 +1 @@ -Subproject commit f5785a2399ca09e7fb4e7e3d69b397f85df42a24 +Subproject commit 975da91e839cfdb5c20fb66961468e77b8a9f8fd