mirror of
https://github.com/mudler/LocalAI.git
synced 2025-02-20 01:16:14 +00:00
docs: add a note on benchmarks (#2857)
Add a note on LocalAI defaults and benchmarks in our FAQ section. See also https://github.com/mudler/LocalAI/issues/2780

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
commit edea2e7c3a (parent 2a2ef49b74)
@@ -16,6 +16,10 @@ Here are answers to some of the most common questions.
Most gguf-based models should work, but newer models may require additions to the API. If a model doesn't work, please feel free to open up issues. However, be cautious about downloading models from the internet directly onto your machine, as there may be security vulnerabilities in llama.cpp or ggml that could be maliciously exploited. Some models can be found on Hugging Face: https://huggingface.co/models?search=gguf, and models from gpt4all are compatible too: https://github.com/nomic-ai/gpt4all.
### Benchmarking LocalAI and llama.cpp shows different results!
LocalAI applies a set of defaults when loading models with the llama.cpp backend; one of these is mirostat sampling, which improves output quality but slows down inference. You can disable it by setting `mirostat: 0` in the model config file. See also the advanced section ({{%relref "docs/advanced/advanced-usage" %}}) for more information and [this issue](https://github.com/mudler/LocalAI/issues/2780).
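A minimal sketch of such a model config file (the file name, model name, and gguf file are hypothetical placeholders; `mirostat: 0` is the override described above):

```yaml
# my-model.yaml - hypothetical LocalAI model config
name: my-model
parameters:
  model: my-model.gguf
mirostat: 0  # disable mirostat sampling so results are comparable to plain llama.cpp
```

With mirostat disabled, sampling behavior should be closer to llama.cpp's own defaults when benchmarking.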
### What's the difference with Serge, or XXX?
LocalAI is a multi-model solution that doesn't focus on a specific model type (e.g., llama.cpp or alpaca.cpp); it handles all of these internally for faster inference, making it easy to set up locally and deploy to Kubernetes.