Mirror of https://github.com/mudler/LocalAI.git (synced 2025-02-28 19:45:46 +00:00)
* wip
* wip
* Make it functional

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* wip
* Small fixups
* do not inject space on role encoding, encode img at beginning of messages

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add examples/config defaults
* Add include dir of current source dir
* cleanup
* fixes

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups
* Revert "fixups"

  This reverts commit f1a4731ccadf7226c6589d6d39131376f0811625.

* fixes

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
20 lines | 314 B | YAML
context_size: 4096
f16: true
threads: 11
gpu_layers: 90
name: llava
mmap: true
backend: llama-cpp
roles:
  user: "USER:"
  assistant: "ASSISTANT:"
  system: "SYSTEM:"
parameters:
  model: ggml-model-q4_k.gguf
  temperature: 0.2
  top_k: 40
  top_p: 0.95
template:
  chat: chat-simple
mmproj: mmproj-model-f16.gguf
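
For context, a minimal sketch of how a config like this could be exercised once the model is loaded: LocalAI serves an OpenAI-compatible chat completions endpoint, so an image-plus-text request can be sent with the standard openai Python client. The local server address (http://localhost:8080), the placeholder image URL, and the dummy API key are assumptions about a default local setup; the model name matches the name: field in the YAML above.

# Hedged usage sketch: query the llava config above through LocalAI's
# OpenAI-compatible API. Assumes LocalAI runs locally on port 8080 and that
# no API key is enforced (the default); the image URL is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local LocalAI address
    api_key="not-needed",                 # LocalAI typically requires no key by default
)

response = client.chat.completions.create(
    model="llava",  # matches the name: field in the config above
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this picture?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},  # placeholder
            ],
        }
    ],
    temperature=0.2,  # same sampling temperature as the config
)

print(response.choices[0].message.content)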