---
config_file: |
  mmap: true
  backend: llama-cpp
  template:
    chat_message: |
      <|im_start|>{{if eq .RoleName "assistant"}}assistant{{else if eq .RoleName "system"}}system{{else if eq .RoleName "tool"}}tool{{else if eq .RoleName "user"}}user{{end}}
      {{- if .FunctionCall }}
      <tool_call>
      {{- else if eq .RoleName "tool" }}
      <tool_response>
      {{- end }}
      {{- if .Content}}
      {{.Content }}
      {{- end }}
      {{- if .FunctionCall}}
      {{toJson .FunctionCall}}
      {{- end }}
      {{- if .FunctionCall }}
      </tool_call>
      {{- else if eq .RoleName "tool" }}
      </tool_response>
      {{- end }}<|im_end|>
    # https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF#prompt-format-for-function-calling
    function: |
      <|im_start|>system
      You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
      <tools>
      {{range .Functions}}
      {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }}
      {{end}}
      </tools>
      Use the following pydantic model json schema for each tool call you will make:
      {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']}
      For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
      <tool_call>
      {'arguments': <args-dict>, 'name': <function-name>}
      </tool_call><|im_end|>
      {{.Input -}}
      <|im_start|>assistant
      <tool_call>
    chat: |
      {{.Input -}}
      <|im_start|>assistant
    completion: |
      {{.Input}}
  context_size: 4096
  f16: true
  stopwords:
    - <|im_end|>
    - <dummy32000>
    - "\n</tool_call>"
    - "\n\n\n"
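# Quick smoke test (a minimal sketch, not part of the config itself).
# Assumptions: LocalAI is running on its default port 8080 and this model is
# registered under the name "hermes-2-pro-mistral"; adjust both to your setup.
# "get_current_weather" is a hypothetical tool used only for illustration.
# The request uses the standard OpenAI-compatible `tools` field; the
# `function` template above renders it into the <tools>...</tools> system
# prompt, and the "\n</tool_call>" stopword ends generation once the model
# has emitted its tool call.
#
#   curl http://localhost:8080/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d '{
#       "model": "hermes-2-pro-mistral",
#       "messages": [{"role": "user", "content": "What is the weather like in Boston?"}],
#       "tools": [{
#         "type": "function",
#         "function": {
#           "name": "get_current_weather",
#           "description": "Get the current weather in a given location",
#           "parameters": {
#             "type": "object",
#             "properties": {
#               "location": {"type": "string", "description": "City and state, e.g. Boston, MA"}
#             },
#             "required": ["location"]
#           }
#         }
#       }]
#     }'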