https://github.com/mudler/LocalAI.git
8ccf5b2044
**Description**

This PR fixes #1013. It adds `draft_model` and `n_draft` to the model YAML config in order to load models with speculative sampling. This should also be compatible with grammars.

Example:

```yaml
backend: llama
context_size: 1024
name: my-model-name
parameters:
  model: foo-bar
n_draft: 16
draft_model: model-name
```

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
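For illustration only, here is a minimal sketch of how the two new fields could be read from a model YAML file in Go. The struct and function names are hypothetical and do not reflect LocalAI's actual config types, which may differ; only the `n_draft` and `draft_model` keys come from this PR.

```go
package config

import (
	"os"

	"gopkg.in/yaml.v3"
)

// ModelConfig is a hypothetical subset of a model YAML config,
// including the speculative-sampling fields added by this PR.
type ModelConfig struct {
	Backend     string `yaml:"backend"`
	ContextSize int    `yaml:"context_size"`
	Name        string `yaml:"name"`
	Parameters  struct {
		Model string `yaml:"model"`
	} `yaml:"parameters"`
	NDraft     int    `yaml:"n_draft"`     // number of draft tokens proposed per step
	DraftModel string `yaml:"draft_model"` // smaller model used to generate draft tokens
}

// LoadModelConfig reads and parses a model YAML file from disk.
func LoadModelConfig(path string) (*ModelConfig, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg ModelConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}
```

With the example config above, `cfg.DraftModel` would be `model-name` and `cfg.NDraft` would be `16`, which a backend could then use to enable speculative sampling.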
backend
config
localai
openai
options
schema
api_test.go
api.go
apt_suite_test.go