docs: more swagger, update docs (#2907)
* docs(swagger): finish covering the gallery section
* docs: add a section explaining how to install models with `local-ai run`
* Minor docs adjustments

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
commit 607900a4bb
parent 53c8ab1020
@@ -10,6 +10,7 @@ import (
 	"dario.cat/mergo"
 	"github.com/mudler/LocalAI/core/config"
 	"github.com/mudler/LocalAI/pkg/downloader"
+	"github.com/mudler/LocalAI/pkg/utils"
 	"github.com/rs/zerolog/log"
 	"gopkg.in/yaml.v2"
 )
@@ -189,6 +190,12 @@ func DeleteModelFromSystem(basePath string, name string, additionalFiles []string) error {
 	galleryFile := filepath.Join(basePath, galleryFileName(name))
 
+	for _, f := range []string{configFile, galleryFile} {
+		if err := utils.VerifyPath(f, basePath); err != nil {
+			return fmt.Errorf("failed to verify path %s: %w", f, err)
+		}
+	}
+
 	var err error
 	// Delete all the files associated to the model
 	// read the model config
@@ -34,6 +34,10 @@ func CreateModelGalleryEndpointService(galleries []config.Gallery, modelPath string
 	}
 }
 
+// GetOpStatusEndpoint returns the job status
+// @Summary Returns the job status
+// @Success 200 {object} gallery.GalleryOpStatus "Response"
+// @Router /models/jobs/{uuid} [get]
 func (mgs *ModelGalleryEndpointService) GetOpStatusEndpoint() func(c *fiber.Ctx) error {
 	return func(c *fiber.Ctx) error {
 		status := mgs.galleryApplier.GetStatus(c.Params("uuid"))
@@ -44,6 +48,10 @@ func (mgs *ModelGalleryEndpointService) GetOpStatusEndpoint() func(c *fiber.Ctx)
 	}
 }
 
+// GetAllStatusEndpoint returns the status progress of all jobs
+// @Summary Returns the status progress of all jobs
+// @Success 200 {object} map[string]gallery.GalleryOpStatus "Response"
+// @Router /models/jobs [get]
 func (mgs *ModelGalleryEndpointService) GetAllStatusEndpoint() func(c *fiber.Ctx) error {
 	return func(c *fiber.Ctx) error {
 		return c.JSON(mgs.galleryApplier.GetAllStatus())
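As a quick illustration of the routes these `@Router` annotations document, the following sketch queries them on a local instance (the base URL reflects LocalAI's default port, and the UUID is a placeholder; both are assumptions, not part of this change):

```bash
# List the status of all gallery jobs (@Router /models/jobs [get])
curl http://localhost:8080/models/jobs

# Query a single job by UUID (@Router /models/jobs/{uuid} [get]);
# substitute a real job id returned when a model install is started
curl http://localhost:8080/models/jobs/123e4567-e89b-12d3-a456-426614174000
```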
@@ -8,9 +8,9 @@ icon = "rocket_launch"
 
 ## Running other models
 
-> _Do you have already a model file? Skip to [Run models manually]({{%relref "docs/getting-started/manual" %}})_.
+> _Do you already have a model file? Skip to [Run models manually]({{%relref "docs/getting-started/models" %}})_.
 
-To load models into LocalAI, you can either [use models manually]({{%relref "docs/getting-started/manual" %}}) or configure LocalAI to pull the models from external sources, like Huggingface and configure the model.
+To load models into LocalAI, you can either [use models manually]({{%relref "docs/getting-started/models" %}}) or configure LocalAI to pull the models from external sources, like Huggingface, and configure the model.
 
 To do that, you can point LocalAI to a URL of a YAML configuration file; however, LocalAI also has some popular model configurations embedded in the binary. Below you can find a list of the model configurations that LocalAI has pre-built; see [Model customization]({{%relref "docs/getting-started/customize-model" %}}) for how to configure models from URLs.
@@ -1,21 +1,69 @@
----
-disableToc: false
-title: "Run models manually"
-weight: 5
-icon: "rocket_launch"
----
++++
+disableToc = false
+title = "Install and Run Models"
+weight = 4
+icon = "rocket_launch"
++++
 
-# Run Models Manually
+To install models with LocalAI, you can:
+
+- Browse the Model Gallery from the Web Interface and install models with a couple of clicks. For more details, refer to the [Gallery Documentation]({{% relref "docs/features/model-gallery" %}}).
+- Specify a model from the LocalAI gallery during startup, e.g., `local-ai run <model_gallery_name>`.
+- Use a URI to specify a model file (e.g., `huggingface://...`, `oci://`, or `ollama://`) when starting LocalAI, e.g., `local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf`.
+- Specify a URL to a model configuration file when starting LocalAI, e.g., `local-ai run https://gist.githubusercontent.com/.../phi-2.yaml`.
+- Manually install the models by copying the files into the models directory (`--models`).
+
+## Run and Install Models via the Gallery
+
+To run models available in the LocalAI gallery, you can use the WebUI or specify the model name when starting LocalAI. Models can be found in the gallery via the Web interface, the [model gallery](https://models.localai.io), or the CLI with `local-ai models list`.
+
+To install a model from the gallery, use the model name as the URI. For example, to run LocalAI with the Hermes model, execute:
+
+```bash
+local-ai run hermes-2-theta-llama-3-8b
+```
+
+To install only the model, use:
+
+```bash
+local-ai models install hermes-2-theta-llama-3-8b
+```
+
+Note: The galleries available in LocalAI can be customized to point to a different URL or a local directory. For more information on how to set up your own gallery, see the [Gallery Documentation]({{% relref "docs/features/model-gallery" %}}).
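As a rough, hedged sketch of that customization (the `GALLERIES` environment variable and the JSON shape shown here are assumptions to verify against the Gallery Documentation; they are not introduced by this change):

```bash
# Point LocalAI at a custom gallery index instead of the default one.
# The variable name and JSON shape are assumptions; check the Gallery Documentation.
GALLERIES='[{"name":"my-gallery","url":"file:///path/to/my/index.yaml"}]' \
  local-ai run
```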
+
+## Run Models via URI
+
+To run models via URI, specify a URI to a model file or a configuration file when starting LocalAI. Valid syntax includes:
+
+- `file://path/to/model`
+- `huggingface://repository_id/model_file` (e.g., `huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf`)
+- From OCI registries: `oci://container_image:tag`, `ollama://model_id:tag`
+- From configuration files: `https://gist.githubusercontent.com/.../phi-2.yaml`
+
+Configuration files can be used to customize the model defaults and settings. For advanced configurations, refer to the [Customize Models section]({{% relref "docs/getting-started/customize-model" %}}).
+
+### Examples
+
+```bash
+# Start LocalAI with the phi-2 model
+local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
+# Install and run a model from the Ollama OCI registry
+local-ai run ollama://gemma:2b
+# Run a model from a configuration file
+local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
+# Install and run a model from a standard OCI registry (e.g., Docker Hub)
+local-ai run oci://localai/phi-2:latest
+```
+
+## Run Models Manually
 
 Follow these steps to manually run models using LocalAI:
 
 1. **Prepare Your Model and Configuration Files**:
-   Ensure you have a model file and a configuration YAML file, if necessary. Customize model defaults and specific settings with a configuration file. For advanced configurations, refer to the [Advanced Documentation]({{% relref "docs/advanced" %}}).
+   Ensure you have a model file and, if necessary, a configuration YAML file (a concrete sketch of this flow follows below). Customize model defaults and settings with a configuration file. For advanced configurations, refer to the [Advanced Documentation]({{% relref "docs/advanced" %}}).
 
 2. **GPU Acceleration**:
-   For instructions on GPU acceleration, visit the [GPU acceleration]({{% relref "docs/features/gpu-acceleration" %}}) page.
+   For instructions on GPU acceleration, visit the [GPU Acceleration]({{% relref "docs/features/gpu-acceleration" %}}) page.
 
 3. **Run LocalAI**:
    Choose one of the following methods to run LocalAI:
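To ground step 1 above, here is a minimal, hedged sketch of the manual flow; the model URL and YAML fields are illustrative, and the flag spelling is taken from the `--models` mention in the list above (check `local-ai run --help` for the exact name):

```bash
# 1. Put a model file into a local models directory
mkdir -p models
wget -O models/phi-2.Q8_0.gguf \
  "https://huggingface.co/TheBloke/phi-2-GGUF/resolve/main/phi-2.Q8_0.gguf"

# 2. (Optional) add a configuration YAML for the model; the fields shown are illustrative
cat > models/phi-2.yaml <<'EOF'
name: phi-2
parameters:
  model: phi-2.Q8_0.gguf
EOF

# 3. Start LocalAI pointing at that directory (flag name per the text above)
local-ai run --models ./models
```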
@@ -160,5 +208,3 @@ For instructions on building LocalAI from source, see the [Build Section]({{% relref "docs/getting-started/build" %}}).
 {{< /tabs >}}
 
 For more model configurations, visit the [Examples Section](https://github.com/mudler/LocalAI/tree/master/examples/configurations).
-
----
@@ -38,13 +38,13 @@ For detailed instructions, see [Using container images]({{% relref "docs/getting-started/container-images" %}})
 
 ## Running LocalAI with All-in-One (AIO) Images
 
-> _Already have a model file? Skip to [Run models manually]({{% relref "docs/getting-started/manual" %}})_.
+> _Already have a model file? Skip to [Run models manually]({{% relref "docs/getting-started/models" %}})_.
 
 LocalAI's All-in-One (AIO) images are pre-configured with a set of models and backends to fully leverage almost all the features of LocalAI. If pre-configured models are not required, you can use the standard [images]({{% relref "docs/getting-started/container-images" %}}).
 
 These images are available for both CPU and GPU environments. AIO images are designed for ease of use and require no additional configuration.
 
-It is recommended to use AIO images if you prefer not to configure the models manually or via the web interface. For running specific models, refer to the [manual method]({{% relref "docs/getting-started/manual" %}}).
+It is recommended to use AIO images if you prefer not to configure the models manually or via the web interface. For running specific models, refer to the [manual method]({{% relref "docs/getting-started/models" %}}).
 
 The AIO images come pre-configured with the following features:
 - Text to Speech (TTS)
@@ -66,5 +66,5 @@ Explore additional resources and community contributions:
 - [Run from Container images]({{% relref "docs/getting-started/container-images" %}})
 - [Examples to try from the CLI]({{% relref "docs/getting-started/try-it-out" %}})
 - [Build LocalAI and the container image]({{% relref "docs/getting-started/build" %}})
-- [Run models manually]({{% relref "docs/getting-started/manual" %}})
+- [Run models manually]({{% relref "docs/getting-started/models" %}})
 - [Examples](https://github.com/mudler/LocalAI/tree/master/examples#examples)
@@ -17,7 +17,7 @@ After installation, install new models by navigating the model gallery, or by us
 To install models with the WebUI, see the [Models section]({{%relref "docs/features/model-gallery" %}}).
 With the CLI you can list the models with `local-ai models list` and install them with `local-ai models install <model-name>`.
 
-You can also [run models manually]({{%relref "docs/getting-started/manual" %}}) by copying files into the `models` directory.
+You can also [run models manually]({{%relref "docs/getting-started/models" %}}) by copying files into the `models` directory.
 {{% /alert %}}
 
 You can test out the API endpoints using `curl`; a few examples are listed below. The models referred to here (`gpt-4`, `gpt-4-vision-preview`, `tts-1`, `whisper-1`) are the default models that come with the AIO images, but you can also use any other model you have installed.
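For instance, a chat completion against the AIO default `gpt-4` alias might look like the following (a sketch assuming the OpenAI-compatible `/v1/chat/completions` route on the default port; adjust host and port to your setup):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "How are you doing?"}]
  }'
```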