Mirror of https://github.com/mudler/LocalAI.git — commit 8ccf5b2044
**Description**

This PR fixes #1013. It adds `draft_model` and `n_draft` to the model YAML config so that models can be loaded with speculative sampling. This should also be compatible with grammars.

Example:

```yaml
backend: llama
context_size: 1024
name: my-model-name
parameters:
  model: foo-bar
n_draft: 16
draft_model: model-name
```

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
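For readers trying this out, here is a hedged sketch of what a fuller model definition could look like with the two new keys. The file names are hypothetical placeholders, not files shipped with LocalAI; the only change over a regular llama-backend config is `draft_model`, which points at a smaller model (it typically needs to share the target model's vocabulary), and `n_draft`, the number of tokens the draft model proposes per step before the main model verifies them.

```yaml
# Hypothetical sketch: speculative sampling with a small draft model.
# Both model files are placeholders and are assumed to live in the models directory.
name: my-model-name                    # name exposed through the LocalAI API
backend: llama                         # the llama backend handles speculative sampling
context_size: 1024
parameters:
  model: llama-13b.Q4_K_M.gguf         # main (target) model
draft_model: llama-160m.Q4_K_M.gguf    # smaller draft model, same vocabulary as the target
n_draft: 16                            # tokens drafted per step, then checked by the main model
```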
Files in this directory:

- backend_diffusers.py
- backend_pb2_grpc.py
- backend_pb2.py