mirror of https://github.com/mudler/LocalAI.git (synced 2025-04-09 04:14:47 +00:00)
docs(aio-usage): update docs to show examples (#1921)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
This commit is contained in: parent 23b833d171, commit 13ccd2afef
@@ -68,8 +68,8 @@ services:
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
       interval: 1m
-      timeout: 120m
-      retries: 120
+      timeout: 20m
+      retries: 5
     ports:
       - 8080:8080
     environment:
@@ -89,8 +89,208 @@ services:

For a list of all the container images available, see [Container images]({{%relref "docs/reference/container-images" %}}). To learn more about the All-in-One images, see [All-in-one Images]({{%relref "docs/reference/aio-images" %}}).
{{% alert icon="💡 Models caching" %}}

The **AIO** image will download the needed models on the first run if they are not already present, and store them in `/build/models` inside the container. The AIO models are automatically updated with new versions of the AIO images.

You can change the directory inside the container by specifying a `MODELS_PATH` environment variable (or `--models-path`).

If you want to use a named model or a local directory, you can mount it as a volume to `/build/models`:

```bash
docker run -p 8080:8080 --name local-ai -ti -v $PWD/models:/build/models localai/localai:latest-aio-cpu
```

or associate a volume:

```bash
docker volume create localai-models
docker run -p 8080:8080 --name local-ai -ti -v localai-models:/build/models localai/localai:latest-aio-cpu
```

{{% /alert %}}
## Try it out

LocalAI does not ship a webui by default, but you can use third-party projects to interact with it (see also [Integrations]({{%relref "docs/integrations" %}})). You can also test out the API endpoints using `curl`.

### Text Generation

Creates a model response for the given chat conversation. [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat/create).

<details>

```bash
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{ "model": "gpt-4", "messages": [{"role": "user", "content": "How are you doing?"}], "temperature": 0.1 }'
```

</details>
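The same request can be sent from Python using only the standard library. This is a minimal sketch: the endpoint, model name, and payload shape follow the `curl` example above, and `BASE_URL` assumes a locally running AIO container.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # assumes the AIO container runs locally


def build_chat_payload(prompt: str, model: str = "gpt-4", temperature: float = 0.1) -> dict:
    """Build the JSON body for the /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def chat(prompt: str) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses carry the text under choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

Calling `chat("How are you doing?")` requires the container to be up; the payload builder itself has no network dependency.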
### GPT Vision

Understand images.

<details>

```bash
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "gpt-4-vision-preview",
      "messages": [
        {
          "role": "user", "content": [
            {"type": "text", "text": "What is in the image?"},
            {
              "type": "image_url",
              "image_url": {
                "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
              }
            }
          ]
        }
      ],
      "temperature": 0.9
    }'
```

</details>
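Assembling the nested `content` array by hand is error-prone, so a small helper can build it. This is a sketch assuming the same message shape as the `curl` example above; the image URL used here is a hypothetical placeholder.

```python
def vision_message(text: str, image_url: str) -> dict:
    """Assemble a single user message mixing a text part and an image part."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }


# Hypothetical image URL, for illustration only:
payload = {
    "model": "gpt-4-vision-preview",
    "messages": [vision_message("What is in the image?", "https://example.com/boardwalk.jpg")],
    "temperature": 0.9,
}
```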
### Function calling

Call functions.

<details>

```bash
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "gpt-4",
      "messages": [
        {
          "role": "user",
          "content": "What is the weather like in Boston?"
        }
      ],
      "tools": [
        {
          "type": "function",
          "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
              "type": "object",
              "properties": {
                "location": {
                  "type": "string",
                  "description": "The city and state, e.g. San Francisco, CA"
                },
                "unit": {
                  "type": "string",
                  "enum": ["celsius", "fahrenheit"]
                }
              },
              "required": ["location"]
            }
          }
        }
      ],
      "tool_choice": "auto"
    }'
```

</details>
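When the model decides to call a function, the response carries a `tool_calls` list whose `arguments` field is a JSON-encoded string, not an object. A minimal sketch for unpacking it, following the OpenAI tool-calling response format; the `sample` response below is illustrative, not actual server output.

```python
import json


def extract_tool_calls(response: dict) -> list:
    """Return (function name, decoded arguments) pairs from a chat response."""
    message = response["choices"][0]["message"]
    return [
        (call["function"]["name"], json.loads(call["function"]["arguments"]))
        for call in message.get("tool_calls", [])
    ]


# Illustrative response shape (not real server output):
sample = {
    "choices": [{
        "message": {
            "tool_calls": [{
                "function": {
                    "name": "get_current_weather",
                    "arguments": "{\"location\": \"Boston, MA\", \"unit\": \"celsius\"}",
                }
            }]
        }
    }]
}

calls = extract_tool_calls(sample)
# calls[0] == ("get_current_weather", {"location": "Boston, MA", "unit": "celsius"})
```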
### Image Generation

Creates an image given a prompt. [OpenAI documentation](https://platform.openai.com/docs/api-reference/images/create).

<details>

```bash
curl http://localhost:8080/v1/images/generations \
    -H "Content-Type: application/json" -d '{
      "prompt": "A cute baby sea otter",
      "size": "256x256"
    }'
```

</details>
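With the OpenAI-style `"response_format": "b64_json"` request parameter, each entry in the response's `data` list carries the image as base64. A minimal sketch for decoding those entries to disk, assuming the response follows the OpenAI schema; the sample used in practice would come from the endpoint above.

```python
import base64


def save_images(response: dict, prefix: str = "image") -> list:
    """Decode base64 image payloads from an images/generations response.

    Returns the list of file paths written.
    """
    paths = []
    for i, item in enumerate(response.get("data", [])):
        path = f"{prefix}-{i}.png"
        with open(path, "wb") as f:
            f.write(base64.b64decode(item["b64_json"]))
        paths.append(path)
    return paths
```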
### Text to speech

Generates audio from the input text. [OpenAI documentation](https://platform.openai.com/docs/api-reference/audio/createSpeech).

<details>

```bash
curl http://localhost:8080/v1/audio/speech \
    -H "Content-Type: application/json" \
    -d '{
      "model": "tts-1",
      "input": "The quick brown fox jumped over the lazy dog.",
      "voice": "alloy"
    }' \
    --output speech.mp3
```

</details>
### Audio Transcription

Transcribes audio into the input language. [OpenAI documentation](https://platform.openai.com/docs/api-reference/audio/createTranscription).

<details>

First, download a sample to transcribe:

```bash
wget --quiet --show-progress -O gb1.ogg https://upload.wikimedia.org/wikipedia/commons/1/1f/George_W_Bush_Columbia_FINAL.ogg
```

Then send the audio file to the transcriptions endpoint:

```bash
curl http://localhost:8080/v1/audio/transcriptions \
    -H "Content-Type: multipart/form-data" \
    -F file="@$PWD/gb1.ogg" -F model="whisper-1"
```

</details>
### Embeddings Generation

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms. [OpenAI documentation](https://platform.openai.com/docs/api-reference/embeddings).

<details>

```bash
curl http://localhost:8080/v1/embeddings \
    -X POST -H "Content-Type: application/json" \
    -d '{
      "input": "Your text string goes here",
      "model": "text-embedding-ada-002"
    }'
```

</details>
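Embedding vectors are typically compared with cosine similarity. A dependency-free sketch (in practice you would likely use numpy):

```python
import math


def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0:
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 5.0]))  # → 0.0
```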
{{% alert icon="💡" %}}

Don't use the model file as `model` in the request unless you want to handle the prompt template yourself.

Use the model names as you would with OpenAI, as in the examples above. For instance, `gpt-4-vision-preview` or `gpt-4`.

{{% /alert %}}

## What's next?

There is much more to explore! Run any model from Hugging Face, generate video, and clone voices with LocalAI; check out the [features]({{%relref "docs/features" %}}) section for a full overview.

Explore further resources and community contributions:

- [Build LocalAI and the container image]({{%relref "docs/getting-started/build" %}})
@@ -7,15 +7,28 @@ weight = 26

All-In-One images come pre-configured with a set of models and backends to fully leverage almost the entire LocalAI feature set. These images are available for both CPU and GPU environments. The AIO images are designed to be easy to use and require no configuration. The model configurations can be found [here](https://github.com/mudler/LocalAI/tree/master/aio), separated by size.

What you can find configured out of the box:

In the AIO images, models are configured with the names of OpenAI models; however, they are actually backed by open-source models. See the mapping in the table below.
- Image generation
- Text generation
- Text to audio
- Audio transcription
- Embeddings
- GPT Vision

| Category | Model name | Real model |
| --- | --- | --- |
| Text Generation | `gpt-4` | `phi-2` (CPU) or `hermes-2-pro-mistral` (GPU) |
| Multimodal | `gpt-4-vision-preview` | `bakllava` (CPU) or `llava-1.6-mistral` (GPU) |
| Image generation | `stablediffusion` | `stablediffusion` (CPU) or `dreamshaper-8` (GPU) |
| Audio transcription | `whisper-1` | `whisper` with the `whisper-base` model |
| Text to Audio | `tts-1` | the `en-us-amy-low.onnx` model with `rhasspy` |
| Embeddings | `text-embedding-ada-002` | |
## Usage

Select the image (CPU or GPU) and start the container with Docker:

```bash
# CPU example
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu
```

LocalAI will automatically download all the required models, and the API will be available at [localhost:8080](http://localhost:8080/v1/models).
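The first start can take a while because models are being downloaded; the `/readyz` endpoint used by the compose healthcheck earlier in this page can also be polled from a script. The sketch below separates the polling loop from the HTTP probe, so the loop itself can be exercised without a running server; the retry budget mirrors the healthcheck values.

```python
import time
import urllib.error
import urllib.request


def http_probe(url: str = "http://localhost:8080/readyz") -> bool:
    """Return True once the readiness endpoint answers with a 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False


def wait_for_ready(probe=http_probe, retries=120, interval=1.0) -> bool:
    """Poll `probe` until it succeeds or the retry budget is spent."""
    for _ in range(retries):
        if probe():
            return True
        time.sleep(interval)
    return False
```

For example, `wait_for_ready()` blocks until the container reports ready, or returns `False` after the retries are exhausted.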
## Available images

| Description | Quay | Docker Hub |
| --- | --- | --- |
@@ -37,12 +50,3 @@ The AIO Images are inheriting the same environment variables as the base images

| `MODELS` | Auto-detected | A list of model YAML configuration file URIs/URLs (see also [running models]({{%relref "docs/getting-started/run-other-models" %}})) |

## Example

Start the image with Docker:

```bash
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu
```

LocalAI will automatically download all the required models, and the API will be available at [localhost:8080](http://localhost:8080/v1/models).