From 3c24a70a1b1eb6d11c89ff04d9d617e7df31f6df Mon Sep 17 00:00:00 2001
From: Ivan Smirnov <46977173+Wansmer@users.noreply.github.com>
Date: Fri, 2 Feb 2024 20:18:03 +0300
Subject: [PATCH] fix (docs): fixed broken links `github/` -> `github.com/`
 (#1672)

fix broken links
---
 docs/content/docs/advanced/fine-tuning.md | 2 +-
 examples/e2e-fine-tuning/README.md        | 2 +-
 examples/e2e-fine-tuning/notebook.ipynb   | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/content/docs/advanced/fine-tuning.md b/docs/content/docs/advanced/fine-tuning.md
index 550cb025..0680a279 100644
--- a/docs/content/docs/advanced/fine-tuning.md
+++ b/docs/content/docs/advanced/fine-tuning.md
@@ -23,7 +23,7 @@ Fine-tuning a language model is a process that requires a lot of computational p
 
 Currently LocalAI doesn't support the fine-tuning endpoint as LocalAI but there are are [plans](https://github.com/mudler/LocalAI/issues/596) to support that. For the time being a guide is proposed here to give a simple starting point on how to fine-tune a model and use it with LocalAI (but also with llama.cpp).
 
-There is an e2e example of fine-tuning a LLM model to use with [LocalAI](https://github/mudler/LocalAI) written by [@mudler](https://github.com/mudler) available [here](https://github.com/mudler/LocalAI/tree/master/examples/e2e-fine-tuning/).
+There is an e2e example of fine-tuning a LLM model to use with [LocalAI](https://github.com/mudler/LocalAI) written by [@mudler](https://github.com/mudler) available [here](https://github.com/mudler/LocalAI/tree/master/examples/e2e-fine-tuning/).
 
 The steps involved are:
 
diff --git a/examples/e2e-fine-tuning/README.md b/examples/e2e-fine-tuning/README.md
index 2674b5af..af3ab8a3 100644
--- a/examples/e2e-fine-tuning/README.md
+++ b/examples/e2e-fine-tuning/README.md
@@ -1,4 +1,4 @@
-This is an example of fine-tuning a LLM model to use with [LocalAI](https://github/mudler/LocalAI) written by [@mudler](https://github.com/mudler).
+This is an example of fine-tuning a LLM model to use with [LocalAI](https://github.com/mudler/LocalAI) written by [@mudler](https://github.com/mudler).
 
 Specifically, this example shows how to use [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) to fine-tune a LLM model to consume with LocalAI as a `gguf` model.
 
diff --git a/examples/e2e-fine-tuning/notebook.ipynb b/examples/e2e-fine-tuning/notebook.ipynb
index 9efb57d2..4996da5d 100644
--- a/examples/e2e-fine-tuning/notebook.ipynb
+++ b/examples/e2e-fine-tuning/notebook.ipynb
@@ -6,7 +6,7 @@
    "source": [
     "## Finetuning a model and using it with LocalAI\n",
     "\n",
-    "This is an example of fine-tuning a LLM model to use with [LocalAI](https://github/mudler/LocalAI) written by [@mudler](https://github.com/mudler).\n",
+    "This is an example of fine-tuning a LLM model to use with [LocalAI](https://github.com/mudler/LocalAI) written by [@mudler](https://github.com/mudler).\n",
     "\n",
     "Specifically, this example shows how to use [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) to fine-tune a LLM model to consume with LocalAI as a `gguf` model."
    ]
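
Note (not part of the patch itself): this class of broken link can also be caught mechanically. Below is a minimal Python sketch of how one might scan a checkout for any remaining `https://github/` occurrences and rewrite them to `https://github.com/`. The script, its name, and the assumption that only Markdown and notebook files need checking are illustrative, not something the patch author used.

#!/usr/bin/env python3
"""Illustrative helper (not from the patch): rewrite broken
'https://github/' links to 'https://github.com/'."""
from pathlib import Path

BROKEN = "https://github/"
FIXED = "https://github.com/"

def fix_links(root: str = ".") -> None:
    # Assumption: only .md and .ipynb files need fixing, matching the
    # three files touched by this patch.
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".md", ".ipynb"}:
            continue
        text = path.read_text(encoding="utf-8")
        hits = text.count(BROKEN)
        if hits:
            path.write_text(text.replace(BROKEN, FIXED), encoding="utf-8")
            print(f"fixed {hits} link(s) in {path}")

if __name__ == "__main__":
    fix_links()

Run from the repository root, this should reproduce the three one-line changes in the diff above, assuming no other files in the tree contain the broken `https://github/` form.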