* Use tokenizer.apply_chat_template() in vLLM (Signed-off-by: Ludovic LEROUX <ludovic@inpher.io>); see the usage sketch after this list
* feat(petals): add backend, plus fixups (Signed-off-by: Ettore Di Giacinto <mudler@localai.io>); see the Petals sketch after this list
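
A minimal sketch of the apply_chat_template() pattern referenced above, assuming a Hugging Face chat model (the model ID below is only an example, and this is not LocalAI's actual backend code): the tokenizer renders the chat messages with the template shipped in the model's tokenizer config, and the resulting prompt string is handed to vLLM for generation.

```python
# Sketch only: build a prompt with the model's chat template, then generate with vLLM.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model; any chat model with a template works

tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Hello, who are you?"}]

# apply_chat_template renders the messages exactly as the model expects,
# instead of hand-writing the prompt format per model family.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

llm = LLM(model=model_id)
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=128))
print(outputs[0].outputs[0].text)
```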
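
For the Petals backend, a standalone sketch of what inference over the Petals swarm looks like using the public petals client API; the model ID is an example and this is not the backend's actual implementation.

```python
# Sketch only: run a causal LM whose transformer blocks are served by remote Petals peers.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_id = "petals-team/StableBeluga2"  # example model available on the public swarm

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Only the embeddings and LM head run locally; the blocks are fetched from the swarm.
model = AutoDistributedModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("A short note on distributed inference:", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```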