diff --git a/README.md b/README.md
index 3ad268ce..893e7983 100644
--- a/README.md
+++ b/README.md
@@ -46,12 +46,29 @@
 **LocalAI** is the free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API that’s compatible with OpenAI (Elevenlabs, Anthropic...) API specifications for local AI inferencing. It allows you to run LLMs, generate images, audio (and not only) locally or on-prem with consumer-grade hardware, supporting multiple model families. It does not require a GPU. It is created and maintained by [Ettore Di Giacinto](https://github.com/mudler).
+![screen](https://github.com/mudler/LocalAI/assets/2420543/20b5ccd2-8393-44f0-aaf6-87a23806381e)
+
+```bash
+docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
+# Alternative images:
+# - if you have an Nvidia GPU:
+# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
+# - without preconfigured models:
+# docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
+# - without preconfigured models, for Nvidia GPUs:
+# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12
+```
+
+[💻 Getting started](https://localai.io/basics/getting_started/index.html)
+
 ## 🔥🔥 Hot topics / Roadmap
 
 [Roadmap](https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap)
 
-- Function calls without grammars and mixed mode: https://github.com/mudler/LocalAI/pull/2328
-- Distributed inferencing: https://github.com/mudler/LocalAI/pull/2324
+- 🔥🔥 Decentralized llama.cpp: https://github.com/mudler/LocalAI/pull/2343 (peer-to-peer llama.cpp!)
+- 🔥🔥 OpenVoice: https://github.com/mudler/LocalAI/pull/2334
+- 🆕 Function calls without grammars and mixed mode: https://github.com/mudler/LocalAI/pull/2328
+- 🔥🔥 Distributed inferencing: https://github.com/mudler/LocalAI/pull/2324
 - Chat, TTS, and Image generation in the WebUI: https://github.com/mudler/LocalAI/pull/2222
 - Reranker API: https://github.com/mudler/LocalAI/pull/2121
@@ -66,18 +83,6 @@
 Hot topics (looking for contributors): If you want to help and contribute, issues up for grabs: https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3A%22up+for+grabs%22
 
-## 💻 [Getting started](https://localai.io/basics/getting_started/index.html)
-
-For a detailed step-by-step introduction, refer to the [Getting Started](https://localai.io/basics/getting_started/index.html) guide.
-
-For those in a hurry, here's a straightforward one-liner to launch a LocalAI AIO(All-in-one) Image using `docker`:
-
-```bash
-docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
-# or, if you have an Nvidia GPU:
-# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
-```
-
 ## 🚀 [Features](https://localai.io/features/)
 
 - 📖 [Text generation with GPTs](https://localai.io/features/text-generation/) (`llama.cpp`, `gpt4all.cpp`, ... [:book: and more](https://localai.io/model-compatibility/index.html#model-compatibility-table))