Mirror of https://github.com/mudler/LocalAI.git (synced 2024-12-18 20:27:57 +00:00)

Commit b6b8ab6c21
* feat(models): pull models from URLs

  When using `run`, we can now point directly to Hugging Face models via URL, for instance:

  ```bash
  local-ai run huggingface://TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/tinyllama-1.1b-chat-v0.3.Q2_K.gguf
  ```

  This pulls the GGUF model and places it in the models folder. Of course, this depends on the GGUF file being automatically detected by our guesser mechanism in order for this to be effective. Similarly, galleries can now refer to single files in API requests.

  This also changes the download code: `yaml` files are now treated the same way, so config files are saved with the appropriate name (and are not hashed anymore).

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Adapt tests

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
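To illustrate the `huggingface://` URI scheme described above, here is a minimal Go sketch of how such a URI could be mapped to a direct download URL. The helper name `parseHuggingfaceURI` and the `/resolve/main/` URL layout are assumptions for illustration; this is not LocalAI's actual downloader code.

```go
package main

import (
	"fmt"
	"strings"
)

// parseHuggingfaceURI is a hypothetical helper (not LocalAI's real
// downloader) that turns a huggingface://owner/repo/file URI into the
// direct download URL Hugging Face serves under /resolve/main/.
func parseHuggingfaceURI(uri string) (string, error) {
	rest, ok := strings.CutPrefix(uri, "huggingface://")
	if !ok {
		return "", fmt.Errorf("not a huggingface:// URI: %s", uri)
	}
	// Expect exactly owner/repo/path-to-file.
	parts := strings.SplitN(rest, "/", 3)
	if len(parts) != 3 {
		return "", fmt.Errorf("expected owner/repo/file, got %q", rest)
	}
	return fmt.Sprintf("https://huggingface.co/%s/%s/resolve/main/%s",
		parts[0], parts[1], parts[2]), nil
}

func main() {
	u, err := parseHuggingfaceURI(
		"huggingface://TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/tinyllama-1.1b-chat-v0.3.Q2_K.gguf")
	if err != nil {
		panic(err)
	}
	fmt.Println(u)
}
```

The downloaded file would then be placed in the models folder and handed to the format guesser, as the commit message describes.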
Directories:
- assets
- downloader
- functions
- grpc
- langchain
- library
- model
- oci
- stablediffusion
- startup
- store
- templates
- tinydream
- utils
- xsync
- xsysinfo