# LocalAI/gallery/index.yaml
# commit 9429a53db7, Ettore Di Giacinto
# chore(model gallery): add neumind-math-7b-instruct (#4388)
# Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
# 2024-12-15 10:07:56 +01:00


---
- &intellect1
  name: "intellect-1-instruct"
  url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master"
  icon: https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct/resolve/main/intellect-1-map.png
  urls:
    - https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct
    - https://huggingface.co/bartowski/INTELLECT-1-Instruct-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - intellect
  license: apache-2.0
  description: |
    INTELLECT-1 is the first collaboratively trained 10 billion parameter language model trained from scratch on 1 trillion tokens of English text and code.
    This is an instruct model. The base model associated with it is INTELLECT-1.
    INTELLECT-1 was trained on up to 14 concurrent nodes distributed across 3 continents, with contributions from 30 independent community contributors providing compute. The training code utilizes the prime framework, a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers. The key abstraction that allows dynamic scaling is the ElasticDeviceMesh, which manages dynamic global process groups for fault-tolerant communication across the internet and local process groups for communication within a node. The model was trained using the DiLoCo algorithm with 100 inner steps. The global all-reduce was done with custom int8 all-reduce kernels to reduce the communication payload required, greatly reducing the communication overhead by a factor of 400x.
  overrides:
    parameters:
      model: INTELLECT-1-Instruct-Q4_K_M.gguf
  files:
    - filename: INTELLECT-1-Instruct-Q4_K_M.gguf
      sha256: 5df236fe570e5998d07fb3207788eac811ef3b77dd2a0ad04a2ef5c6361f3030
      uri: huggingface://bartowski/INTELLECT-1-Instruct-GGUF/INTELLECT-1-Instruct-Q4_K_M.gguf
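# Entries in this index share a model family's common fields through YAML
# anchors (e.g. "&intellect1" above) and merge keys ("!!merge <<: *anchor"
# below). A minimal sketch of the pattern, with illustrative names only,
# kept as comments so this file stays valid YAML:
#
#   - &family                  # anchor: defines the shared fields
#     url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
#     license: apache-2.0
#   - !!merge <<: *family      # merge key: inherits url and license
#     name: "family-variant"   # keys set here override inherited ones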
- &llama33
  url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png
  license: llama3.3
  description: |
    The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model in 70B (text in/text out). The Llama 3.3 instruction-tuned text-only model is optimized for multilingual dialogue use cases and outperforms many of the available open-source and closed chat models on common industry benchmarks.
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3.3
  name: "llama-3.3-70b-instruct"
  urls:
    - https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
    - https://huggingface.co/MaziyarPanahi/Llama-3.3-70B-Instruct-GGUF
  overrides:
    parameters:
      model: Llama-3.3-70B-Instruct.Q4_K_M.gguf
  files:
    - filename: Llama-3.3-70B-Instruct.Q4_K_M.gguf
      sha256: 4f3b04ecae278bdb0fd545b47c210bc5edf823e5ebf7d41e0b526c81d54b1ff3
      uri: huggingface://MaziyarPanahi/Llama-3.3-70B-Instruct-GGUF/Llama-3.3-70B-Instruct.Q4_K_M.gguf
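# Every file entry in this index carries a sha256 digest; a downloaded GGUF can
# also be verified by hand. A minimal sketch (shell, filename taken from the
# entry above), kept as comments so this file stays valid YAML:
#
#   sha256sum Llama-3.3-70B-Instruct.Q4_K_M.gguf
#   # compare the printed digest with the sha256 field of the entry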
- !!merge <<: *llama33
  name: "l3.3-70b-euryale-v2.3"
  icon: https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3/resolve/main/Eury.png
  urls:
    - https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3
    - https://huggingface.co/bartowski/L3.3-70B-Euryale-v2.3-GGUF
  description: |
    A direct replacement / successor to Euryale v2.2, not Hanami-x1, though it is slightly better than them in my opinion.
  overrides:
    parameters:
      model: L3.3-70B-Euryale-v2.3-Q4_K_M.gguf
  files:
    - filename: L3.3-70B-Euryale-v2.3-Q4_K_M.gguf
      sha256: 4e78bb0e65886bfcff89b829f6d38aa6f6846988bb8291857e387e3f60b3217b
      uri: huggingface://bartowski/L3.3-70B-Euryale-v2.3-GGUF/L3.3-70B-Euryale-v2.3-Q4_K_M.gguf
- !!merge <<: *llama33
  name: "l3.3-ms-evayale-70b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/HFCaVzRpiE05Y46p41qRy.webp
  urls:
    - https://huggingface.co/Steelskull/L3.3-MS-Evayale-70B
    - https://huggingface.co/bartowski/L3.3-MS-Evayale-70B-GGUF
  description: |
    This model was created because I liked the storytelling of EVA but the prose and scene detail of EURYALE; my goal is to merge the robust storytelling of both models while attempting to maintain the positives of each.
  overrides:
    parameters:
      model: L3.3-MS-Evayale-70B-Q4_K_M.gguf
  files:
    - filename: L3.3-MS-Evayale-70B-Q4_K_M.gguf
      sha256: f941d88870fec8343946517a1802d159d23f3971eeea50b6cf12295330bd29cc
      uri: huggingface://bartowski/L3.3-MS-Evayale-70B-GGUF/L3.3-MS-Evayale-70B-Q4_K_M.gguf
- &rwkv
  url: "github:mudler/LocalAI/gallery/rwkv.yaml@master"
  name: "rwkv-6-world-7b"
  license: apache-2.0
  urls:
    - https://huggingface.co/RWKV/rwkv-6-world-7b
    - https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF
  tags:
    - llm
    - rwkv
    - cpu
    - gpu
    - rnn
  description: |
    RWKV (pronounced RwaKuv) is an RNN with GPT-level LLM performance, and can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7.
    So it's combining the best of RNN and transformer - great performance, fast inference, fast training, saves VRAM, "infinite" ctxlen, and free text embedding. Moreover it's 100% attention-free, and a Linux Foundation AI project.
  overrides:
    parameters:
      model: rwkv-6-world-7b-Q4_K_M.gguf
  files:
    - filename: rwkv-6-world-7b-Q4_K_M.gguf
      sha256: f74574186fa4584f405e92198605680db6ad00fd77974ffa14bf02073bb90273
      uri: huggingface://bartowski/rwkv-6-world-7b-GGUF/rwkv-6-world-7b-Q4_K_M.gguf
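# The huggingface:// shorthand used by the uri fields points at a file inside a
# Hugging Face repository. As an assumption about how the downloader resolves
# it (not verified here), it expands roughly as:
#
#   huggingface://<owner>/<repo>/<file>
#     -> https://huggingface.co/<owner>/<repo>/resolve/main/<file>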
- &qwen25coder
  name: "qwen2.5-coder-14b"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  license: apache-2.0
  tags:
    - llm
    - gguf
    - gpu
    - qwen
    - qwen2.5
    - cpu
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-Coder-14B
    - https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF
  description: |
    Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
    Significant improvements in code generation, code reasoning and code fixing. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
    A more comprehensive foundation for real-world applications such as Code Agents: not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies.
    Long-context support up to 128K tokens.
  overrides:
    parameters:
      model: Qwen2.5-Coder-14B.Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Coder-14B.Q4_K_M.gguf
      sha256: 94f277a9ac7caf117140b2fff4e1ccf4bc9f35395b0112f0d0d7c82c6f8d860e
      uri: huggingface://mradermacher/Qwen2.5-Coder-14B-GGUF/Qwen2.5-Coder-14B.Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "qwen2.5-coder-3b-instruct"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-Coder-3B-Instruct-GGUF
  overrides:
    parameters:
      model: Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf
      sha256: 3da3afe6cf5c674ac195803ea0dd6fee7e1c228c2105c1ce8c66890d1d4ab460
      uri: huggingface://bartowski/Qwen2.5-Coder-3B-Instruct-GGUF/Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "qwen2.5-coder-32b-instruct"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF
  overrides:
    parameters:
      model: Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf
      sha256: 8e2fd78ff55e7cdf577fda257bac2776feb7d73d922613caf35468073807e815
      uri: huggingface://bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "qwen2.5-coder-14b-instruct"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-Coder-14B-Instruct-GGUF
  overrides:
    parameters:
      model: Qwen2.5-Coder-14B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Coder-14B-Instruct-Q4_K_M.gguf
      sha256: 2946d28c9e1bb2bcae6d42e8678863a31775df6f740315c7d7e6d6b6411f5937
      uri: huggingface://bartowski/Qwen2.5-Coder-14B-Instruct-GGUF/Qwen2.5-Coder-14B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "qwen2.5-coder-1.5b-instruct"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF
  overrides:
    parameters:
      model: Qwen2.5-Coder-1.5B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Coder-1.5B-Instruct-Q4_K_M.gguf
      sha256: f530705d447660a4336c329981af164b471b60b974b1d808d57e8ec9fe23b239
      uri: huggingface://bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF/Qwen2.5-Coder-1.5B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "qwen2.5-coder-7b-instruct"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-Coder-7B-Instruct-GGUF
  overrides:
    parameters:
      model: Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf
      sha256: 1664fccab734674a50763490a8c6931b70e3f2f8ec10031b54806d30e5f956b6
      uri: huggingface://bartowski/Qwen2.5-Coder-7B-Instruct-GGUF/Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "qwen2.5-coder-7b-3x-instruct-ties-v1.2-i1"
  urls:
    - https://huggingface.co/BenevolenceMessiah/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2
    - https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF
  description: |
    The following models were included in the merge:
    BenevolenceMessiah/Qwen2.5-Coder-7B-Chat-Instruct-TIES-v1.2
    MadeAgents/Hammer2.0-7b
    huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated
  overrides:
    parameters:
      model: Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q4_K_M.gguf
      sha256: c28a4da700f634f1277f02391d81fa3c0ba783fa4b02886bd4bfe5f13b6605ef
      uri: huggingface://mradermacher/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2-i1-GGUF/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2.i1-Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "qwen2.5-coder-7b-instruct-abliterated-i1"
  urls:
    - https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated
    - https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-abliterated-i1-GGUF
  description: |
    This is an uncensored version of Qwen2.5-Coder-7B-Instruct created with abliteration (see this article to know more about it).
    Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
  overrides:
    parameters:
      model: Qwen2.5-Coder-7B-Instruct-abliterated.i1-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Coder-7B-Instruct-abliterated.i1-Q4_K_M.gguf
      sha256: 9100ccd9e8167cefda98bd1c97d5d765a21e70e124e4d6b89945fd66ebb481b4
      uri: huggingface://mradermacher/Qwen2.5-Coder-7B-Instruct-abliterated-i1-GGUF/Qwen2.5-Coder-7B-Instruct-abliterated.i1-Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "rombos-coder-v2.5-qwen-7b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/QErypCEKD5OZLxUcSmYaR.jpeg
  urls:
    - https://huggingface.co/rombodawg/Rombos-Coder-V2.5-Qwen-7b
    - https://huggingface.co/bartowski/Rombos-Coder-V2.5-Qwen-7b-GGUF
    - https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing
  description: |
    Rombos-Coder-V2.5-Qwen-7b is a continued finetune of Qwen2.5-Coder-7B-Instruct. I took it upon myself to merge the instruct model with the base model using the Ties merge method, as demonstrated in my own "Continuous Finetuning" method (link available).
    This version of the model shows higher performance than the original instruct and base models.
  overrides:
    parameters:
      model: Rombos-Coder-V2.5-Qwen-7b-Q4_K_M.gguf
  files:
    - filename: Rombos-Coder-V2.5-Qwen-7b-Q4_K_M.gguf
      sha256: ca16a550f1be00b7e92f94c0c18ea6af1e5c158d5d1cb3994f9f0a0d13922272
      uri: huggingface://bartowski/Rombos-Coder-V2.5-Qwen-7b-GGUF/Rombos-Coder-V2.5-Qwen-7b-Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "rombos-coder-v2.5-qwen-32b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/QErypCEKD5OZLxUcSmYaR.jpeg
  urls:
    - https://huggingface.co/rombodawg/Rombos-Coder-V2.5-Qwen-32b
    - https://huggingface.co/bartowski/Rombos-Coder-V2.5-Qwen-32b-GGUF
    - https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing
  description: |
    Rombos-Coder-V2.5-Qwen-32b is a continued finetune of Qwen2.5-Coder-32B-Instruct. I took it upon myself to merge the instruct model with the base model using the Ties merge method, as demonstrated in my own "Continuous Finetuning" method (link available).
    This version of the model shows higher performance than the original instruct and base models.
  overrides:
    parameters:
      model: Rombos-Coder-V2.5-Qwen-32b-Q4_K_M.gguf
  files:
    - filename: Rombos-Coder-V2.5-Qwen-32b-Q4_K_M.gguf
      sha256: 821ea2a13d96354db1368986700b1189938fbbc56ca6bb9d0c39f752580de71a
      uri: huggingface://bartowski/Rombos-Coder-V2.5-Qwen-32b-GGUF/Rombos-Coder-V2.5-Qwen-32b-Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "rombos-coder-v2.5-qwen-14b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/QErypCEKD5OZLxUcSmYaR.jpeg
  urls:
    - https://huggingface.co/rombodawg/Rombos-Coder-V2.5-Qwen-14b
    - https://huggingface.co/bartowski/Rombos-Coder-V2.5-Qwen-14b-GGUF
    - https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing
  description: |
    Rombos-Coder-V2.5-Qwen-14b is a continued finetune of Qwen2.5-Coder-14B-Instruct. I took it upon myself to merge the instruct model with the base model using the Ties merge method, as demonstrated in my own "Continuous Finetuning" method (link available).
    This version of the model shows higher performance than the original instruct and base models.
  overrides:
    parameters:
      model: Rombos-Coder-V2.5-Qwen-14b-Q4_K_M.gguf
  files:
    - filename: Rombos-Coder-V2.5-Qwen-14b-Q4_K_M.gguf
      sha256: 7ef044e1fee206a039f56538f94332030e99ec63915c74f4d1bdec0e601ee968
      uri: huggingface://bartowski/Rombos-Coder-V2.5-Qwen-14b-GGUF/Rombos-Coder-V2.5-Qwen-14b-Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "qwen2.5-coder-32b-instruct-uncensored-i1"
  urls:
    - https://huggingface.co/thirdeyeai/Qwen2.5-Coder-32B-Instruct-Uncensored
    - https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-Uncensored-i1-GGUF
  description: |
    The LLM model is based on sloshywings/Qwen2.5-Coder-32B-Instruct-Uncensored. It is a large language model with 32B parameters that has been fine-tuned on coding tasks and instructions.
  overrides:
    parameters:
      model: Qwen2.5-Coder-32B-Instruct-Uncensored.i1-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Coder-32B-Instruct-Uncensored.i1-Q4_K_M.gguf
      sha256: 86ac8efb86daf241792ac3d5d35b7da92c54901b4208a6f2829bd03d8f273c9c
      uri: huggingface://mradermacher/Qwen2.5-Coder-32B-Instruct-Uncensored-i1-GGUF/Qwen2.5-Coder-32B-Instruct-Uncensored.i1-Q4_K_M.gguf
- &opencoder
  name: "opencoder-8b-base"
  icon: https://github.com/OpenCoder-llm/opencoder-llm.github.io/blob/main/static/images/opencoder_icon.jpg?raw=true
  url: "github:mudler/LocalAI/gallery/codellama.yaml@master"
  urls:
    - https://huggingface.co/infly/OpenCoder-8B-Base
    - https://huggingface.co/QuantFactory/OpenCoder-8B-Base-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - code
  license: inf
  description: |
    The model is a quantized version of infly/OpenCoder-8B-Base created using llama.cpp. It is part of the OpenCoder LLM family which includes 1.5B and 8B base and chat models, supporting both English and Chinese languages. The original OpenCoder model was pretrained on 2.5 trillion tokens composed of 90% raw code and 10% code-related web data, and supervised finetuned on over 4.5M high-quality SFT examples. It achieves high performance across multiple language model benchmarks and is one of the most comprehensively open-sourced models available.
  overrides:
    parameters:
      model: OpenCoder-8B-Base.Q4_K_M.gguf
  files:
    - filename: OpenCoder-8B-Base.Q4_K_M.gguf
      sha256: ed158a6f72a40cf4f3f4569f649b365f5851e93f03b56252af3906515fab94ec
      uri: huggingface://QuantFactory/OpenCoder-8B-Base-GGUF/OpenCoder-8B-Base.Q4_K_M.gguf
- !!merge <<: *opencoder
  url: "github:mudler/LocalAI/gallery/hermes-2-pro-mistral.yaml@master"
  name: "opencoder-8b-instruct"
  urls:
    - https://huggingface.co/infly/OpenCoder-8B-Instruct
    - https://huggingface.co/QuantFactory/OpenCoder-8B-Instruct-GGUF
  description: |
    The LLM model is QuantFactory/OpenCoder-8B-Instruct-GGUF, which is a quantized version of infly/OpenCoder-8B-Instruct. It is created using llama.cpp and supports both English and Chinese languages. The original model, infly/OpenCoder-8B-Instruct, is pretrained on 2.5 trillion tokens composed of 90% raw code and 10% code-related web data, and supervised finetuned on over 4.5M high-quality SFT examples. It achieves high performance across multiple language model benchmarks and is one of the leading open-source models for code.
  overrides:
    parameters:
      model: OpenCoder-8B-Instruct.Q4_K_M.gguf
  files:
    - filename: OpenCoder-8B-Instruct.Q4_K_M.gguf
      sha256: ae642656f127e339fcb9566e6039a73cc55d34e3bf59e067d58ad40742f49f00
      uri: huggingface://QuantFactory/OpenCoder-8B-Instruct-GGUF/OpenCoder-8B-Instruct.Q4_K_M.gguf
- !!merge <<: *opencoder
  name: "opencoder-1.5b-base"
  urls:
    - https://huggingface.co/infly/OpenCoder-1.5B-Base
    - https://huggingface.co/QuantFactory/OpenCoder-1.5B-Base-GGUF
  description: |
    The model is a large language model with 1.5 billion parameters, trained on 2.5 trillion tokens of code-related data. It supports both English and Chinese languages and is part of the OpenCoder LLM family which also includes 8B base and chat models. The model achieves high performance across multiple language model benchmarks and is one of the most comprehensively open-sourced models available.
  overrides:
    parameters:
      model: OpenCoder-1.5B-Base.Q4_K_M.gguf
  files:
    - filename: OpenCoder-1.5B-Base.Q4_K_M.gguf
      sha256: fb69a2849971b69f3fa1e64a17d1e4d3e1d0d3733d43ae8645299d07ab855af5
      uri: huggingface://QuantFactory/OpenCoder-1.5B-Base-GGUF/OpenCoder-1.5B-Base.Q4_K_M.gguf
- !!merge <<: *opencoder
  name: "opencoder-1.5b-instruct"
  url: "github:mudler/LocalAI/gallery/hermes-2-pro-mistral.yaml@master"
  urls:
    - https://huggingface.co/QuantFactory/OpenCoder-1.5B-Instruct-GGUF
  description: |
    The model is a quantized version of [infly/OpenCoder-1.5B-Instruct](https://huggingface.co/infly/OpenCoder-1.5B-Instruct) created using llama.cpp. The original model, infly/OpenCoder-1.5B-Instruct, is part of an open and reproducible code LLM family which includes 1.5B and 8B base and chat models, supporting both English and Chinese languages. The model is pretrained on 2.5 trillion tokens composed of 90% raw code and 10% code-related web data, and supervised finetuned on over 4.5M high-quality SFT examples. It achieves high performance across multiple language model benchmarks, positioning it among the leading open-source models for code.
  overrides:
    parameters:
      model: OpenCoder-1.5B-Instruct.Q4_K_M.gguf
  files:
    - filename: OpenCoder-1.5B-Instruct.Q4_K_M.gguf
      sha256: a34128fac79e05a3a92c3fd2245cfce7c3876c60241ec2565c24e74b36f48d56
      uri: huggingface://QuantFactory/OpenCoder-1.5B-Instruct-GGUF/OpenCoder-1.5B-Instruct.Q4_K_M.gguf
- &granite3
  name: "granite-3.0-1b-a400m-instruct"
  urls:
    - https://huggingface.co/ibm-granite/granite-3.0-1b-a400m-instruct
    - https://huggingface.co/QuantFactory/granite-3.0-1b-a400m-instruct-GGUF
  overrides:
    parameters:
      model: granite-3.0-1b-a400m-instruct.Q4_K_M.gguf
  files:
    - filename: granite-3.0-1b-a400m-instruct.Q4_K_M.gguf
      sha256: 9571b5fc9676ebb59def3377dc848584463fb7f09ed59ebbff3b9f72fd7bd38a
      uri: huggingface://QuantFactory/granite-3.0-1b-a400m-instruct-GGUF/granite-3.0-1b-a400m-instruct.Q4_K_M.gguf
  url: "github:mudler/LocalAI/gallery/granite.yaml@master"
  description: |
    Granite 3.0 language models are a new set of lightweight state-of-the-art, open foundation models that natively support multilinguality, coding, reasoning, and tool usage, including the potential to be run on constrained compute resources. All the models are publicly released under an Apache 2.0 license for both research and commercial use. The models' data curation and training procedure were designed with enterprise usage and customization in mind, with a process that evaluates datasets for governance, risk and compliance (GRC) criteria, in addition to IBM's standard data clearance process and document quality checks.
    Granite 3.0 includes 4 different models of varying sizes:
    Dense Models: 2B and 8B parameter models, trained on 12 trillion tokens in total.
    Mixture-of-Experts (MoE) Models: Sparse 1B and 3B MoE models, with 400M and 800M activated parameters respectively, trained on 10 trillion tokens in total.
    Accordingly, these options provide a range of models with different compute requirements to choose from, with appropriate trade-offs with their performance on downstream tasks. At each scale, we release a base model — checkpoints of models after pretraining, as well as instruct checkpoints — models finetuned for dialogue, instruction-following, helpfulness, and safety.
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - moe
    - granite
- !!merge <<: *granite3
  name: "moe-girl-800ma-3bt"
  icon: https://huggingface.co/allura-org/MoE-Girl-800MA-3BT/resolve/main/moe-girl-800-3.png
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  urls:
    - https://huggingface.co/allura-org/MoE-Girl-800MA-3BT
    - https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF
  description: |
    A roleplay-centric finetune of IBM's Granite 3.0 3B-A800M. LoRA finetune trained locally, whereas the others were FFT; while this results in less uptake of training data, it should also mean less degradation in Granite's core abilities, making it potentially easier to use for general-purpose tasks.
    Disclaimer
    PLEASE do not expect godliness out of this, it's a model with 800 million active parameters. Expect something more akin to GPT-3 (the original, not GPT-3.5.) (Furthermore, this version is by a less experienced tuner; it's my first finetune that actually has decent-looking graphs, I don't really know what I'm doing yet!)
  overrides:
    parameters:
      model: MoE-Girl-800MA-3BT.Q4_K_M.gguf
  files:
    - filename: MoE-Girl-800MA-3BT.Q4_K_M.gguf
      sha256: 4c3cb57c27aadabd05573a1a01d6c7aee0f21620db919c7704f758d172e0bfa3
      uri: huggingface://mradermacher/MoE-Girl-800MA-3BT-GGUF/MoE-Girl-800MA-3BT.Q4_K_M.gguf
- name: "moe-girl-1ba-7bt-i1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/kTXXSSSqpb21rfyOX7FUa.jpeg
  # chatml
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  urls:
    - https://huggingface.co/allura-org/MoE-Girl-1BA-7BT
    - https://huggingface.co/mradermacher/MoE-Girl-1BA-7BT-i1-GGUF
  description: |
    A finetune of OLMoE by AllenAI designed for roleplaying (and maybe general usecases if you try hard enough).
    PLEASE do not expect godliness out of this, it's a model with 1 billion active parameters. Expect something more akin to Gemma 2 2B, not Llama 3 8B.
  overrides:
    parameters:
      model: MoE-Girl-1BA-7BT.i1-Q4_K_M.gguf
  files:
    - filename: MoE-Girl-1BA-7BT.i1-Q4_K_M.gguf
      sha256: e6ef9c311c73573b243de6ff7538b386f430af30b2be0a96a5745c17137ad432
      uri: huggingface://mradermacher/MoE-Girl-1BA-7BT-i1-GGUF/MoE-Girl-1BA-7BT.i1-Q4_K_M.gguf
- name: "salamandra-7b-instruct"
  icon: https://huggingface.co/BSC-LT/salamandra-7b-instruct/resolve/main/images/salamandra_header.png
  # Uses chatml
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  license: apache-2.0
  urls:
    - https://huggingface.co/BSC-LT/salamandra-7b-instruct
    - https://huggingface.co/cstr/salamandra-7b-instruct-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - salamandra
  description: |
    Transformer-based decoder-only language model that has been pre-trained on 7.8 trillion tokens of highly curated data. The pre-training corpus contains text in 35 European languages and code.
    Salamandra comes in three different sizes — 2B, 7B and 40B parameters — with their respective base and instruction-tuned variants. This model card corresponds to the 7B instructed version.
  overrides:
    parameters:
      model: salamandra-7b-instruct.Q4_K_M-f32.gguf
  files:
    - filename: salamandra-7b-instruct.Q4_K_M-f32.gguf
      sha256: bac8e8c1d1d9d53cbdb148b8ff9ad378ddb392429207099e85b5aae3a43bff3d
      uri: huggingface://cstr/salamandra-7b-instruct-GGUF/salamandra-7b-instruct.Q4_K_M-f32.gguf
- &llama32
  ## llama3.2
  url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png
  license: llama3.2
  description: |
    The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
    Model Developer: Meta
    Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3.2
  name: "llama-3.2-1b-instruct:q4_k_m"
  urls:
    - https://huggingface.co/hugging-quants/Llama-3.2-1B-Instruct-Q4_K_M-GGUF
  overrides:
    embeddings: true
    parameters:
      model: llama-3.2-1b-instruct-q4_k_m.gguf
  files:
    - filename: llama-3.2-1b-instruct-q4_k_m.gguf
      sha256: 1d0e9419ec4e12aef73ccf4ffd122703e94c48344a96bc7c5f0f2772c2152ce3
      uri: huggingface://hugging-quants/Llama-3.2-1B-Instruct-Q4_K_M-GGUF/llama-3.2-1b-instruct-q4_k_m.gguf
- !!merge <<: *llama32
name: "llama-3.2-3b-instruct:q4_k_m"
urls:
- https://huggingface.co/hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF
overrides:
parameters:
model: llama-3.2-3b-instruct-q4_k_m.gguf
files:
- filename: llama-3.2-3b-instruct-q4_k_m.gguf
sha256: c55a83bfb6396799337853ca69918a0b9bbb2917621078c34570bc17d20fd7a1
uri: huggingface://hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF/llama-3.2-3b-instruct-q4_k_m.gguf
- !!merge <<: *llama32
name: "llama-3.2-3b-instruct:q8_0"
urls:
- https://huggingface.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF
overrides:
parameters:
model: llama-3.2-3b-instruct-q8_0.gguf
files:
- filename: llama-3.2-3b-instruct-q8_0.gguf
sha256: 51725f77f997a5080c3d8dd66e073da22ddf48ab5264f21f05ded9b202c3680e
uri: huggingface://hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF/llama-3.2-3b-instruct-q8_0.gguf
- !!merge <<: *llama32
name: "llama-3.2-1b-instruct:q8_0"
urls:
- https://huggingface.co/hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF
overrides:
parameters:
model: llama-3.2-1b-instruct-q8_0.gguf
files:
- filename: llama-3.2-1b-instruct-q8_0.gguf
sha256: ba345c83bf5cc679c653b853c46517eea5a34f03ed2205449db77184d9ae62a9
uri: huggingface://hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF/llama-3.2-1b-instruct-q8_0.gguf
## Uncensored
- !!merge <<: *llama32
icon: https://cdn-uploads.huggingface.co/production/uploads/66c9d7a26f2335ba288810a4/4YDg-rcEXCK0fdTS1fBzE.webp
name: "versatillama-llama-3.2-3b-instruct-abliterated"
urls:
- https://huggingface.co/QuantFactory/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF
description: |
Small but smart, fine-tuned on a vast dataset of conversations. Able to generate human-like text with high performance for its size. It is very versatile for its size and parameter count, offering capability almost as good as Llama 3.1 8B Instruct.
overrides:
parameters:
model: VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_M.gguf
files:
- filename: VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_M.gguf
sha256: 15b9e4a987f50d7594d030815c7166a996e20db46fe1e20da03e96955020312c
uri: huggingface://QuantFactory/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_M.gguf
- !!merge <<: *llama32
name: "llama3.2-3b-enigma"
icon: https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/it7MY5MyLCLpFQev5dUis.jpeg
urls:
- https://huggingface.co/QuantFactory/Llama3.2-3B-Enigma-GGUF
description: |
Enigma is a high-quality code-instruct model built on Llama 3.2 3b, using the Llama 3.2 Instruct chat format. The model is finetuned on synthetic code-instruct data generated with Llama 3.1 405b and supplemented with generalist synthetic data.
overrides:
parameters:
model: Llama3.2-3B-Enigma.Q4_K_M.gguf
files:
- filename: Llama3.2-3B-Enigma.Q4_K_M.gguf
sha256: 4304e6ee1e348b228470700ec1e9423f5972333d376295195ce6cd5c70cae5e4
uri: huggingface://QuantFactory/Llama3.2-3B-Enigma-GGUF/Llama3.2-3B-Enigma.Q4_K_M.gguf
- !!merge <<: *llama32
name: "llama3.2-3b-esper2"
icon: https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/4I6oK8DG0so4VD8GroFsd.jpeg
urls:
- https://huggingface.co/QuantFactory/Llama3.2-3B-Esper2-GGUF
description: |
Esper 2 is a DevOps and cloud architecture code specialist built on Llama 3.2 3b. It is an AI assistant focused on AWS, Azure, GCP, Terraform, Dockerfiles, pipelines, shell scripts and more, with real world problem solving and high quality code instruct performance within the Llama 3.2 Instruct chat format. Finetuned on synthetic DevOps-instruct and code-instruct data generated with Llama 3.1 405b and supplemented with generalist chat data.
overrides:
parameters:
model: Llama3.2-3B-Esper2.Q4_K_M.gguf
files:
- filename: Llama3.2-3B-Esper2.Q4_K_M.gguf
sha256: 11d2bd674aa22a71a59ec49ad29b695000d14bc275b0195b8d7089bfc7582fc7
uri: huggingface://QuantFactory/Llama3.2-3B-Esper2-GGUF/Llama3.2-3B-Esper2.Q4_K_M.gguf
- !!merge <<: *llama32
name: "llama-3.2-3b-agent007"
urls:
- https://huggingface.co/QuantFactory/Llama-3.2-3B-Agent007-GGUF
description: |
The model is a quantized version of EpistemeAI/Llama-3.2-3B-Agent007, developed by EpistemeAI and fine-tuned from unsloth/llama-3.2-3b-instruct-bnb-4bit. It was trained 2x faster with Unsloth and Hugging Face's TRL library, and fine-tuned with agent datasets.
overrides:
parameters:
model: Llama-3.2-3B-Agent007.Q4_K_M.gguf
files:
- filename: Llama-3.2-3B-Agent007.Q4_K_M.gguf
sha256: 7a2543a69b116f2a059e2e445e5d362bb7df4a51b97e83d8785c1803dc9d687f
uri: huggingface://QuantFactory/Llama-3.2-3B-Agent007-GGUF/Llama-3.2-3B-Agent007.Q4_K_M.gguf
- !!merge <<: *llama32
name: "llama-3.2-3b-agent007-coder"
urls:
- https://huggingface.co/QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF
description: |
The Llama-3.2-3B-Agent007-Coder-GGUF is a quantized version of the EpistemeAI/Llama-3.2-3B-Agent007-Coder model, which is a fine-tuned version of the unsloth/llama-3.2-3b-instruct-bnb-4bit model. It is created using llama.cpp and trained with additional datasets such as the Agent dataset, Code Alpaca 20K, and magpie ultra 0.1. This model is optimized for multilingual dialogue use cases and agentic retrieval and summarization tasks. The model is available for commercial and research use in multiple languages and is best used with the transformers library.
overrides:
parameters:
model: Llama-3.2-3B-Agent007-Coder.Q4_K_M.gguf
files:
- filename: Llama-3.2-3B-Agent007-Coder.Q4_K_M.gguf
sha256: 49a4861c094d94ef5faa33f69b02cd132bb0167f1c3ca59059404f85f61e1d12
uri: huggingface://QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF/Llama-3.2-3B-Agent007-Coder.Q4_K_M.gguf
- !!merge <<: *llama32
name: "fireball-meta-llama-3.2-8b-instruct-agent-003-128k-code-dpo"
urls:
- https://huggingface.co/QuantFactory/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO-GGUF
description: |
The model is a quantized version of EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO, an experimental fine-tune with a DPO dataset intended to make Llama 3.1 8B an agentic coder. It has built-in agent features such as search, a calculator, and ReAct. Other notable features include self-learning using Unsloth, RAG applications, and memory. The context window is 128K tokens. It can be integrated into projects using popular libraries like Transformers and vLLM, and is suitable for use with LangChain or LlamaIndex. The model is developed by EpistemeAI and licensed under the Apache 2.0 license.
overrides:
parameters:
model: Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO.Q4_K_M.gguf
files:
- filename: Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO.Q4_K_M.gguf
sha256: 7f45fa79bc6c9847ef9fbad08c3bb5a0f2dbb56d2e2200a5d37b260a57274e55
uri: huggingface://QuantFactory/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO-GGUF/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO.Q4_K_M.gguf
- !!merge <<: *llama32
name: "llama-3.2-chibi-3b"
icon: https://huggingface.co/AELLM/Llama-3.2-Chibi-3B/resolve/main/chibi.jpg
urls:
- https://huggingface.co/AELLM/Llama-3.2-Chibi-3B
- https://huggingface.co/mradermacher/Llama-3.2-Chibi-3B-GGUF
description: |
Small parameter LLMs are ideal for navigating the complexities of the Japanese language, which involves multiple character systems like kanji, hiragana, and katakana, along with subtle social cues. Despite their smaller size, these models are capable of delivering highly accurate and context-aware results, making them perfect for use in environments where resources are constrained. Whether deployed on mobile devices with limited processing power or in edge computing scenarios where fast, real-time responses are needed, these models strike the perfect balance between performance and efficiency, without sacrificing quality or speed.
overrides:
parameters:
model: Llama-3.2-Chibi-3B.Q4_K_M.gguf
files:
- filename: Llama-3.2-Chibi-3B.Q4_K_M.gguf
sha256: 4b594cd5f66181202713f1cf97ce2f86d0acfa1b862a64930d5f512c45640a2f
uri: huggingface://mradermacher/Llama-3.2-Chibi-3B-GGUF/Llama-3.2-Chibi-3B.Q4_K_M.gguf
- !!merge <<: *llama32
name: "llama-3.2-3b-reasoning-time"
urls:
- https://huggingface.co/mradermacher/Llama-3.2-3B-Reasoning-Time-GGUF
description: |
Lyte/Llama-3.2-3B-Reasoning-Time is a 3-billion-parameter large language model based on the Llama 3.2 architecture, designed for reasoning and time-based tasks in English. It has been quantized to the GGUF format by mradermacher.
overrides:
parameters:
model: Llama-3.2-3B-Reasoning-Time.Q4_K_M.gguf
files:
- filename: Llama-3.2-3B-Reasoning-Time.Q4_K_M.gguf
sha256: 80b10e1a5c6e27f6d8cf08c3472af2b15a9f63ebf8385eedfe8615f85116c73f
uri: huggingface://mradermacher/Llama-3.2-3B-Reasoning-Time-GGUF/Llama-3.2-3B-Reasoning-Time.Q4_K_M.gguf
- !!merge <<: *llama32
name: "llama-3.2-sun-2.5b-chat"
icon: https://i.ibb.co/PF0TdMJ/imagine-image-9a56cee7-0f4f-4cc2-b265-a5b8d04f266b.png
urls:
- https://huggingface.co/meditsolutions/Llama-3.2-SUN-2.5B-chat
- https://huggingface.co/mradermacher/Llama-3.2-SUN-2.5B-chat-GGUF
description: |
Base Model: Llama 3.2 1B
Extended Size: 1B to 2.5B parameters
Extension Method: Proprietary technique developed by MedIT Solutions
Fine-tuning:
- Open (or open subsets allowing commercial use) datasets from HF
- Open (or open subsets allowing commercial use) SFT datasets from HF
Training Status: current version chat-1.0.0
Key Features:
- Built on the Llama 3.2 architecture
- Expanded from 1B to 2.47B parameters
- Optimized for open-ended conversations
- Incorporates supervised fine-tuning for improved performance
Use Case: General conversation and task-oriented interactions
overrides:
parameters:
model: Llama-3.2-SUN-2.5B-chat.Q4_K_M.gguf
files:
- filename: Llama-3.2-SUN-2.5B-chat.Q4_K_M.gguf
sha256: 4cd1796806200662500e1393ae8e0a32306fab2b6679a746ee53ad2130e5f3a2
uri: huggingface://mradermacher/Llama-3.2-SUN-2.5B-chat-GGUF/Llama-3.2-SUN-2.5B-chat.Q4_K_M.gguf
- !!merge <<: *llama32
name: "llama-3.2-3b-instruct-uncensored"
icon: https://i.imgur.com/JOePyAN.png
urls:
- https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-uncensored-GGUF
- https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored
description: |
This is an uncensored version of the original Llama-3.2-3B-Instruct, created using mlabonne's script, which builds on FailSpy's notebook and the original work from Andy Arditi et al.
overrides:
parameters:
model: Llama-3.2-3B-Instruct-uncensored-Q4_K_M.gguf
files:
- filename: Llama-3.2-3B-Instruct-uncensored-Q4_K_M.gguf
sha256: 80f532552e3d56e366226f428395de8285a671f2da1d5fd68563741181b77a95
uri: huggingface://bartowski/Llama-3.2-3B-Instruct-uncensored-GGUF/Llama-3.2-3B-Instruct-uncensored-Q4_K_M.gguf
- !!merge <<: *llama32
name: "calme-3.3-llamaloi-3b"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://huggingface.co/MaziyarPanahi/calme-3.3-llamaloi-3b/resolve/main/calme_3.png
urls:
- https://huggingface.co/MaziyarPanahi/calme-3.3-llamaloi-3b
- https://huggingface.co/MaziyarPanahi/calme-3.3-llamaloi-3b-GGUF
description: |
This model is an advanced iteration of the powerful meta-llama/Llama-3.2-3B, specifically fine-tuned to enhance its capabilities in the French legal domain.
overrides:
parameters:
model: calme-3.3-llamaloi-3b.Q5_K_M.gguf
files:
- filename: calme-3.3-llamaloi-3b.Q5_K_M.gguf
sha256: d3b9d47faa9e968a93a8f52bd4cdc938e5a612facb963088367ca871063ef302
uri: huggingface://MaziyarPanahi/calme-3.3-llamaloi-3b-GGUF/calme-3.3-llamaloi-3b.Q5_K_M.gguf
- !!merge <<: *llama32
name: "calme-3.2-llamaloi-3b"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://huggingface.co/MaziyarPanahi/calme-3.3-llamaloi-3b/resolve/main/calme_3.png
urls:
- https://huggingface.co/MaziyarPanahi/calme-3.2-llamaloi-3b
- https://huggingface.co/MaziyarPanahi/calme-3.2-llamaloi-3b-GGUF
description: |
This model is an advanced iteration of the powerful meta-llama/Llama-3.2-3B, specifically fine-tuned to enhance its capabilities in the French legal domain.
overrides:
parameters:
model: calme-3.2-llamaloi-3b.Q5_K_M.gguf
files:
- filename: calme-3.2-llamaloi-3b.Q5_K_M.gguf
sha256: bd11e6a717008d0603b6da5faab2fa2ba18b376c5589245735340cfb0a8dabb9
uri: huggingface://MaziyarPanahi/calme-3.2-llamaloi-3b-GGUF/calme-3.2-llamaloi-3b.Q5_K_M.gguf
- !!merge <<: *llama32
name: "calme-3.1-llamaloi-3b"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://huggingface.co/MaziyarPanahi/calme-3.3-llamaloi-3b/resolve/main/calme_3.png
urls:
- https://huggingface.co/MaziyarPanahi/calme-3.1-llamaloi-3b
- https://huggingface.co/MaziyarPanahi/calme-3.1-llamaloi-3b-GGUF
description: |
This model is an advanced iteration of the powerful meta-llama/Llama-3.2-3B, specifically fine-tuned to enhance its capabilities in the French legal domain.
overrides:
parameters:
model: calme-3.1-llamaloi-3b.Q5_K_M.gguf
files:
- filename: calme-3.1-llamaloi-3b.Q5_K_M.gguf
sha256: 06b900c7252423329ca57a02a8b8d18a1294934709861d09af96e74694c9a3f1
uri: huggingface://MaziyarPanahi/calme-3.1-llamaloi-3b-GGUF/calme-3.1-llamaloi-3b.Q5_K_M.gguf
- !!merge <<: *llama32
icon: https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/EXX7TKbB-R6arxww2mk0R.jpeg
name: "llama3.2-3b-shiningvaliant2-i1"
urls:
- https://huggingface.co/ValiantLabs/Llama3.2-3B-ShiningValiant2
- https://huggingface.co/mradermacher/Llama3.2-3B-ShiningValiant2-i1-GGUF
description: |
Shining Valiant 2 is a chat model built on Llama 3.2 3b, finetuned on our data for friendship, insight, knowledge and enthusiasm.
Finetuned on meta-llama/Llama-3.2-3B-Instruct for best available general performance
Trained on a variety of high quality data; focused on science, engineering, technical knowledge, and structured reasoning
Also available for Llama 3.1 70b and Llama 3.1 8b!
Version: This is the 2024-09-27 release of Shining Valiant 2 for Llama 3.2 3b.
overrides:
parameters:
model: Llama3.2-3B-ShiningValiant2.i1-Q4_K_M.gguf
files:
- filename: Llama3.2-3B-ShiningValiant2.i1-Q4_K_M.gguf
sha256: 700521dc6a8a50e2d0bb5ccde12399209004155f9c68751aeac7feccf2cd4957
uri: huggingface://mradermacher/Llama3.2-3B-ShiningValiant2-i1-GGUF/Llama3.2-3B-ShiningValiant2.i1-Q4_K_M.gguf
- !!merge <<: *llama32
name: "llama-doctor-3.2-3b-instruct"
urls:
- https://huggingface.co/prithivMLmods/Llama-Doctor-3.2-3B-Instruct
- https://huggingface.co/bartowski/Llama-Doctor-3.2-3B-Instruct-GGUF
description: |
The Llama-Doctor-3.2-3B-Instruct model is designed for text generation tasks, particularly in contexts where instruction-following capabilities are needed. This model is a fine-tuned version of the base Llama-3.2-3B-Instruct model and is optimized for understanding and responding to user-provided instructions or prompts. The model has been trained on a specialized dataset, avaliev/chat_doctor, to enhance its performance in providing conversational or advisory responses, especially in medical or technical fields.
overrides:
parameters:
model: Llama-Doctor-3.2-3B-Instruct-Q4_K_M.gguf
files:
- filename: Llama-Doctor-3.2-3B-Instruct-Q4_K_M.gguf
sha256: 38fd1423e055564e9fa3d37003a62bf9db79acd348a90fa0b051a1f2c9d7cb53
uri: huggingface://bartowski/Llama-Doctor-3.2-3B-Instruct-GGUF/Llama-Doctor-3.2-3B-Instruct-Q4_K_M.gguf
- !!merge <<: *llama32
name: "onellm-doey-v1-llama-3.2-3b"
urls:
- https://huggingface.co/DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B
- https://huggingface.co/QuantFactory/OneLLM-Doey-V1-Llama-3.2-3B-GGUF
description: |
This model is a fine-tuned version of LLaMA 3.2-3B, optimized using LoRA (Low-Rank Adaptation) on the NVIDIA ChatQA-Training-Data. It is tailored for conversational AI, question answering, and other instruction-following tasks, with support for sequences up to 1024 tokens.
overrides:
parameters:
model: OneLLM-Doey-V1-Llama-3.2-3B.Q4_K_M.gguf
files:
- filename: OneLLM-Doey-V1-Llama-3.2-3B.Q4_K_M.gguf
sha256: 57e93584bfb708a9841edffd70635c21f27955d8a1b4e346a72edc8163394a97
uri: huggingface://QuantFactory/OneLLM-Doey-V1-Llama-3.2-3B-GGUF/OneLLM-Doey-V1-Llama-3.2-3B.Q4_K_M.gguf
- !!merge <<: *llama32
name: "llama-sentient-3.2-3b-instruct"
urls:
- https://huggingface.co/prithivMLmods/Llama-Sentient-3.2-3B-Instruct
- https://huggingface.co/QuantFactory/Llama-Sentient-3.2-3B-Instruct-GGUF
description: |
The Llama-Sentient-3.2-3B-Instruct model is a fine-tuned version of the Llama-3.2-3B-Instruct model, optimized for text generation tasks, particularly where instruction-following abilities are critical. This model is trained on the mlabonne/lmsys-arena-human-preference-55k-sharegpt dataset, which enhances its performance in conversational and advisory contexts, making it suitable for a wide range of applications.
overrides:
parameters:
model: Llama-Sentient-3.2-3B-Instruct.Q4_K_M.gguf
files:
- filename: Llama-Sentient-3.2-3B-Instruct.Q4_K_M.gguf
uri: huggingface://QuantFactory/Llama-Sentient-3.2-3B-Instruct-GGUF/Llama-Sentient-3.2-3B-Instruct.Q4_K_M.gguf
sha256: 3f855ce0522bfdc39fc826162ba6d89f15cc3740c5207da10e70baa3348b7812
- !!merge <<: *llama32
name: "llama-smoltalk-3.2-1b-instruct"
urls:
- https://huggingface.co/prithivMLmods/Llama-SmolTalk-3.2-1B-Instruct
- https://huggingface.co/mradermacher/Llama-SmolTalk-3.2-1B-Instruct-GGUF
description: |
The Llama-SmolTalk-3.2-1B-Instruct model is a lightweight, instruction-tuned model designed for efficient text generation and conversational AI tasks. With a 1B parameter architecture, this model strikes a balance between performance and resource efficiency, making it ideal for applications requiring concise, contextually relevant outputs. The model has been fine-tuned to deliver robust instruction-following capabilities, catering to both structured and open-ended queries.
Key Features:
Instruction-Tuned Performance: Optimized to understand and execute user-provided instructions across diverse domains.
Lightweight Architecture: With just 1 billion parameters, the model provides efficient computation and storage without compromising output quality.
Versatile Use Cases: Suitable for tasks like content generation, conversational interfaces, and basic problem-solving.
Intended Applications:
Conversational AI: Engage users with dynamic and contextually aware dialogue.
Content Generation: Produce summaries, explanations, or other creative text outputs efficiently.
Instruction Execution: Follow user commands to generate precise and relevant responses.
overrides:
parameters:
model: Llama-SmolTalk-3.2-1B-Instruct.Q4_K_M.gguf
files:
- filename: Llama-SmolTalk-3.2-1B-Instruct.Q4_K_M.gguf
sha256: 03d8d05e3821f4caa65defa82baaff658484d4405b66546431528153ceef4d9e
uri: huggingface://mradermacher/Llama-SmolTalk-3.2-1B-Instruct-GGUF/Llama-SmolTalk-3.2-1B-Instruct.Q4_K_M.gguf
- !!merge <<: *llama32
name: "fusechat-llama-3.2-3b-instruct"
urls:
- https://huggingface.co/FuseAI/FuseChat-Llama-3.2-3B-Instruct
- https://huggingface.co/bartowski/FuseChat-Llama-3.2-3B-Instruct-GGUF
description: |
We present FuseChat-3.0, a series of models crafted to enhance performance by integrating the strengths of multiple source LLMs into more compact target LLMs. To achieve this fusion, we utilized four powerful source LLMs: Gemma-2-27B-It, Mistral-Large-Instruct-2407, Qwen-2.5-72B-Instruct, and Llama-3.1-70B-Instruct. For the target LLMs, we employed three widely-used smaller models—Llama-3.1-8B-Instruct, Gemma-2-9B-It, and Qwen-2.5-7B-Instruct—along with two even more compact models—Llama-3.2-3B-Instruct and Llama-3.2-1B-Instruct. The implicit model fusion process involves a two-stage training pipeline comprising Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between target and source LLMs, and Direct Preference Optimization (DPO) for learning preferences from multiple source LLMs. The resulting FuseChat-3.0 models demonstrated substantial improvements in tasks related to general conversation, instruction following, mathematics, and coding. Notably, when Llama-3.1-8B-Instruct served as the target LLM, our fusion approach achieved an average improvement of 6.8 points across 14 benchmarks. Moreover, it showed significant improvements of 37.1 and 30.1 points on instruction-following test sets AlpacaEval-2 and Arena-Hard respectively. We have released the FuseChat-3.0 models on Huggingface, stay tuned for the forthcoming dataset and code.
overrides:
parameters:
model: FuseChat-Llama-3.2-3B-Instruct-Q4_K_M.gguf
files:
- filename: FuseChat-Llama-3.2-3B-Instruct-Q4_K_M.gguf
sha256: a4f0e9a905b74886b79b72622c06a3219d6812818a564a53c39fc49032d7f842
uri: huggingface://bartowski/FuseChat-Llama-3.2-3B-Instruct-GGUF/FuseChat-Llama-3.2-3B-Instruct-Q4_K_M.gguf
- &qwen25
## Qwen2.5
name: "qwen2.5-14b-instruct"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
license: apache-2.0
description: |
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters.
tags:
- llm
- gguf
- gpu
- qwen
- qwen2.5
- cpu
urls:
- https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF
- https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
overrides:
parameters:
model: Qwen2.5-14B-Instruct-Q4_K_M.gguf
files:
- filename: Qwen2.5-14B-Instruct-Q4_K_M.gguf
sha256: e47ad95dad6ff848b431053b375adb5d39321290ea2c638682577dafca87c008
uri: huggingface://bartowski/Qwen2.5-14B-Instruct-GGUF/Qwen2.5-14B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-math-7b-instruct"
urls:
- https://huggingface.co/bartowski/Qwen2.5-Math-7B-Instruct-GGUF
- https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct
description: |
In August 2024, we released the first series of mathematical LLMs of our Qwen family: Qwen2-Math. A month later, we upgraded it and open-sourced the Qwen2.5-Math series, including the base models Qwen2.5-Math-1.5B/7B/72B, the instruction-tuned models Qwen2.5-Math-1.5B/7B/72B-Instruct, and the mathematical reward model Qwen2.5-Math-RM-72B.
Unlike the Qwen2-Math series, which only supports using Chain-of-Thought (CoT) to solve English math problems, the Qwen2.5-Math series is expanded to support using both CoT and Tool-Integrated Reasoning (TIR) to solve math problems in both Chinese and English. The Qwen2.5-Math series models have achieved significant performance improvements over the Qwen2-Math series models on Chinese and English mathematics benchmarks with CoT.
The base models of Qwen2-Math are initialized from Qwen2-1.5B/7B/72B and then pretrained on a meticulously designed mathematics-specific corpus. This corpus contains large-scale, high-quality mathematical web texts, books, code, exam questions, and mathematical pre-training data synthesized by Qwen2.
overrides:
parameters:
model: Qwen2.5-Math-7B-Instruct-Q4_K_M.gguf
files:
- filename: Qwen2.5-Math-7B-Instruct-Q4_K_M.gguf
sha256: 7e03cee8c65b9ebf9ca14ddb010aca27b6b18e6c70f2779e94e7451d9529c091
uri: huggingface://bartowski/Qwen2.5-Math-7B-Instruct-GGUF/Qwen2.5-Math-7B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-14b_uncencored"
icon: https://huggingface.co/SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored/resolve/main/Misc/Uncensored.png
urls:
- https://huggingface.co/SicariusSicariiStuff/Qwen2.5-14B_Uncencored
- https://huggingface.co/bartowski/Qwen2.5-14B_Uncencored-GGUF
description: |
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters.
This is an uncensored version of Qwen2.5.
tags:
- llm
- gguf
- gpu
- qwen
- qwen2.5
- cpu
- uncensored
overrides:
parameters:
model: Qwen2.5-14B_Uncencored-Q4_K_M.gguf
files:
- filename: Qwen2.5-14B_Uncencored-Q4_K_M.gguf
sha256: 066b9341b67e0fd0956de3576a3b7988574a5b9a0028aef2b9c8edeadd6dbbd1
uri: huggingface://bartowski/Qwen2.5-14B_Uncencored-GGUF/Qwen2.5-14B_Uncencored-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-coder-7b-instruct"
urls:
- https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct
- https://huggingface.co/bartowski/Qwen2.5-Coder-7B-Instruct-GGUF
description: |
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). For Qwen2.5-Coder, we release base and instruction-tuned language models at 1.5, 7, and 32 (coming soon) billion parameters. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:
Significant improvements in code generation, code reasoning, and code fixing. Building on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, and more.
A more comprehensive foundation for real-world applications such as code agents, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
Long-context support up to 128K tokens.
overrides:
parameters:
model: Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf
files:
- filename: Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf
sha256: 1664fccab734674a50763490a8c6931b70e3f2f8ec10031b54806d30e5f956b6
uri: huggingface://bartowski/Qwen2.5-Coder-7B-Instruct-GGUF/Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-math-72b-instruct"
icon: http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/qwen2.5-math-pipeline.jpeg
urls:
- https://huggingface.co/Qwen/Qwen2.5-Math-72B-Instruct
- https://huggingface.co/bartowski/Qwen2.5-Math-72B-Instruct-GGUF
description: |
In August 2024, we released the first series of mathematical LLMs of our Qwen family: Qwen2-Math. A month later, we upgraded it and open-sourced the Qwen2.5-Math series, including the base models Qwen2.5-Math-1.5B/7B/72B, the instruction-tuned models Qwen2.5-Math-1.5B/7B/72B-Instruct, and the mathematical reward model Qwen2.5-Math-RM-72B.
Unlike the Qwen2-Math series, which only supports using Chain-of-Thought (CoT) to solve English math problems, the Qwen2.5-Math series is expanded to support using both CoT and Tool-Integrated Reasoning (TIR) to solve math problems in both Chinese and English. The Qwen2.5-Math series models have achieved significant performance improvements over the Qwen2-Math series models on Chinese and English mathematics benchmarks with CoT.
overrides:
parameters:
model: Qwen2.5-Math-72B-Instruct-Q4_K_M.gguf
files:
- filename: Qwen2.5-Math-72B-Instruct-Q4_K_M.gguf
sha256: 5dee8a6e21d555577712b4f65565a3c3737a0d5d92f5a82970728c6d8e237f17
uri: huggingface://bartowski/Qwen2.5-Math-72B-Instruct-GGUF/Qwen2.5-Math-72B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-0.5b-instruct"
urls:
- https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct
- https://huggingface.co/bartowski/Qwen2.5-0.5B-Instruct-GGUF
overrides:
parameters:
model: Qwen2.5-0.5B-Instruct-Q4_K_M.gguf
files:
- filename: Qwen2.5-0.5B-Instruct-Q4_K_M.gguf
sha256: 6eb923e7d26e9cea28811e1a8e852009b21242fb157b26149d3b188f3a8c8653
uri: huggingface://bartowski/Qwen2.5-0.5B-Instruct-GGUF/Qwen2.5-0.5B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-1.5b-instruct"
urls:
- https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct
- https://huggingface.co/bartowski/Qwen2.5-1.5B-Instruct-GGUF
overrides:
parameters:
model: Qwen2.5-1.5B-Instruct-Q4_K_M.gguf
files:
- filename: Qwen2.5-1.5B-Instruct-Q4_K_M.gguf
sha256: 1adf0b11065d8ad2e8123ea110d1ec956dab4ab038eab665614adba04b6c3370
uri: huggingface://bartowski/Qwen2.5-1.5B-Instruct-GGUF/Qwen2.5-1.5B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-32b"
urls:
- https://huggingface.co/Qwen/Qwen2.5-32B
- https://huggingface.co/mradermacher/Qwen2.5-32B-GGUF
overrides:
parameters:
model: Qwen2.5-32B.Q4_K_M.gguf
files:
- filename: Qwen2.5-32B.Q4_K_M.gguf
uri: huggingface://mradermacher/Qwen2.5-32B-GGUF/Qwen2.5-32B.Q4_K_M.gguf
sha256: fa42a4067e3630929202b6bb1ef5cebc43c1898494aedfd567b7d53c7a9d84a6
- !!merge <<: *qwen25
name: "qwen2.5-32b-instruct"
urls:
- https://huggingface.co/Qwen/Qwen2.5-32B-Instruct
- https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF
overrides:
parameters:
model: Qwen2.5-32B-Instruct-Q4_K_M.gguf
files:
- filename: Qwen2.5-32B-Instruct-Q4_K_M.gguf
sha256: 2e5f6daea180dbc59f65a40641e94d3973b5dbaa32b3c0acf54647fa874e519e
uri: huggingface://bartowski/Qwen2.5-32B-Instruct-GGUF/Qwen2.5-32B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-72b-instruct"
urls:
- https://huggingface.co/Qwen/Qwen2.5-72B-Instruct
- https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-GGUF
overrides:
parameters:
model: Qwen2.5-72B-Instruct-Q4_K_M.gguf
files:
- filename: Qwen2.5-72B-Instruct-Q4_K_M.gguf
sha256: e4c8fad16946be8cf0bbf67eb8f4e18fc7415a5a6d2854b4cda453edb4082545
uri: huggingface://bartowski/Qwen2.5-72B-Instruct-GGUF/Qwen2.5-72B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "bigqwen2.5-52b-instruct"
icon: https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/98GiKtmH1AtHHbIbOUH4Y.jpeg
urls:
- https://huggingface.co/mlabonne/BigQwen2.5-52B-Instruct
- https://huggingface.co/bartowski/BigQwen2.5-52B-Instruct-GGUF
description: |
BigQwen2.5-52B-Instruct is a Qwen/Qwen2-32B-Instruct self-merge made with MergeKit.
It applies the mlabonne/Meta-Llama-3-120B-Instruct recipe.
overrides:
parameters:
model: BigQwen2.5-52B-Instruct-Q4_K_M.gguf
files:
- filename: BigQwen2.5-52B-Instruct-Q4_K_M.gguf
sha256: 9c939f08e366b51b07096eb2ecb5cc2a82894ac7baf639e446237ad39889c896
uri: huggingface://bartowski/BigQwen2.5-52B-Instruct-GGUF/BigQwen2.5-52B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "replete-llm-v2.5-qwen-14b"
icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/ihnWXDEgV-ZKN_B036U1J.png
urls:
- https://huggingface.co/Replete-AI/Replete-LLM-V2.5-Qwen-14b
- https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-14b-GGUF
description: |
Replete-LLM-V2.5-Qwen-14b is a continuously finetuned version of Qwen2.5-14B. I noticed recently that the Qwen team did not adopt my continuous finetuning methods, despite their great benefits and lack of downsides, so I took it upon myself to merge the instruct model with the base model using the TIES merge method.
This version of the model shows higher performance than the original instruct and base models.
overrides:
parameters:
model: Replete-LLM-V2.5-Qwen-14b-Q4_K_M.gguf
files:
- filename: Replete-LLM-V2.5-Qwen-14b-Q4_K_M.gguf
sha256: 17d0792ff5e3062aecb965629f66e679ceb407e4542e8045993dcfe9e7e14d9d
uri: huggingface://bartowski/Replete-LLM-V2.5-Qwen-14b-GGUF/Replete-LLM-V2.5-Qwen-14b-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "replete-llm-v2.5-qwen-7b"
icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/ihnWXDEgV-ZKN_B036U1J.png
urls:
- https://huggingface.co/Replete-AI/Replete-LLM-V2.5-Qwen-7b
- https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF
description: |
Replete-LLM-V2.5-Qwen-7b is a continuously finetuned version of Qwen2.5-7B. I noticed recently that the Qwen team did not adopt my continuous-finetuning methods, despite their great benefits and lack of downsides, so I took it upon myself to merge the instruct model with the base model using the TIES merge method.
This version of the model shows higher performance than the original instruct and base models.
overrides:
parameters:
model: Replete-LLM-V2.5-Qwen-7b-Q4_K_M.gguf
files:
- filename: Replete-LLM-V2.5-Qwen-7b-Q4_K_M.gguf
sha256: 054d54972259c0398b4e0af3f408f608e1166837b1d7535d08fc440d1daf8639
uri: huggingface://bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/Replete-LLM-V2.5-Qwen-7b-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "calme-2.2-qwen2.5-72b-i1"
icon: https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2.5-72b/resolve/main/calme-2.webp
urls:
- https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2.5-72b
- https://huggingface.co/mradermacher/calme-2.2-qwen2.5-72b-i1-GGUF
description: |
This model is a fine-tuned version of the powerful Qwen/Qwen2.5-72B-Instruct, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
Use Cases
This model is suitable for a wide range of applications, including but not limited to:
- Advanced question-answering systems
- Intelligent chatbots and virtual assistants
- Content generation and summarization
- Code generation and analysis
- Complex problem-solving and decision support
overrides:
parameters:
model: calme-2.2-qwen2.5-72b.i1-Q4_K_M.gguf
files:
- filename: calme-2.2-qwen2.5-72b.i1-Q4_K_M.gguf
sha256: 5fdfa599724d7c78502c477ced1d294e92781b91d3265bd0748fbf15a6fefde6
uri: huggingface://mradermacher/calme-2.2-qwen2.5-72b-i1-GGUF/calme-2.2-qwen2.5-72b.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "t.e-8.1-iq-imatrix-request"
# chatml
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/K1aNPf32z-6tYZdcSQBzF.png
urls:
- https://huggingface.co/Cran-May/T.E-8.1
- https://huggingface.co/Lewdiculous/T.E-8.1-GGUF-IQ-Imatrix-Request
description: |
Trained for roleplay uses.
overrides:
parameters:
model: T.E-8.1-Q4_K_M-imat.gguf
files:
- filename: T.E-8.1-Q4_K_M-imat.gguf
sha256: 1b7892b82c01ea4cbebe34cd00f9836cbbc369fc3247c1f44a92842201e7ec0b
uri: huggingface://Lewdiculous/T.E-8.1-GGUF-IQ-Imatrix-Request/T.E-8.1-Q4_K_M-imat.gguf
- !!merge <<: *qwen25
name: "rombos-llm-v2.5.1-qwen-3b"
icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/pNDtgE5FDkxxvbG4qiZ1A.jpeg
urls:
- https://huggingface.co/QuantFactory/Rombos-LLM-V2.5.1-Qwen-3b-GGUF
description: |
Rombos-LLM-V2.5.1-Qwen-3b is a little experiment that merges in a high-quality LLM, arcee-ai/raspberry-3B, using the last step of the Continuous Finetuning method outlined in a Google document. The merge is done with mergekit, using the following parameters:
- Models: Qwen2.5-3B-Instruct, raspberry-3B
- Merge method: ties
- Base model: Qwen2.5-3B
- Parameters: weight=1, density=1, normalize=true, int8_mask=true
- Dtype: bfloat16
The model has been evaluated on various tasks and datasets, and the results are available on the Open LLM Leaderboard. The model has shown promising performance across different benchmarks.
overrides:
parameters:
model: Rombos-LLM-V2.5.1-Qwen-3b.Q4_K_M.gguf
files:
- filename: Rombos-LLM-V2.5.1-Qwen-3b.Q4_K_M.gguf
sha256: 656c342a2921cac8912e0123fc295c3bb3d631a85c671c12a3843a957e46d30d
uri: huggingface://QuantFactory/Rombos-LLM-V2.5.1-Qwen-3b-GGUF/Rombos-LLM-V2.5.1-Qwen-3b.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-7b-ins-v3"
urls:
- https://huggingface.co/happzy2633/qwen2.5-7b-ins-v3
- https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF
description: |
Qwen 2.5 fine-tuned on CoT data to match o1 performance; an attempt to build an open o1 model matching OpenAI's o1.
Demo: https://huggingface.co/spaces/happzy2633/open-o1
overrides:
parameters:
model: qwen2.5-7b-ins-v3-Q4_K_M.gguf
files:
- filename: qwen2.5-7b-ins-v3-Q4_K_M.gguf
sha256: 9c23734072714a4886c0386ae0ff07a5e940d67ad52278e2ed689fec44e1e0c8
uri: huggingface://bartowski/qwen2.5-7b-ins-v3-GGUF/qwen2.5-7b-ins-v3-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "supernova-medius"
urls:
- https://huggingface.co/arcee-ai/SuperNova-Medius-GGUF
description: |
Arcee-SuperNova-Medius is a 14B parameter language model developed by Arcee.ai, built on the Qwen2.5-14B-Instruct architecture. This unique model is the result of a cross-architecture distillation pipeline, combining knowledge from both the Qwen2.5-72B-Instruct model and the Llama-3.1-405B-Instruct model. By leveraging the strengths of these two distinct architectures, SuperNova-Medius achieves high-quality instruction-following and complex reasoning capabilities in a mid-sized, resource-efficient form.
SuperNova-Medius is designed to excel in a variety of business use cases, including customer support, content creation, and technical assistance, while maintaining compatibility with smaller hardware configurations. It's an ideal solution for organizations looking for advanced capabilities without the high resource requirements of larger models like our SuperNova-70B.
overrides:
parameters:
model: SuperNova-Medius-Q4_K_M.gguf
files:
- filename: SuperNova-Medius-Q4_K_M.gguf
sha256: aaa4bf3451bc900f186fd4b6b3a6a26bfd40c85908f605db76b92e58aadcc864
uri: huggingface://arcee-ai/SuperNova-Medius-GGUF/SuperNova-Medius-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "eva-qwen2.5-14b-v0.1-i1"
urls:
- https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1
- https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-i1-GGUF
description: |
A RP/storywriting specialist model, full-parameter finetune of Qwen2.5-14B on mixture of synthetic and natural data.
It uses Celeste 70B 0.1 data mixture, greatly expanding it to improve versatility, creativity and "flavor" of the resulting model.
overrides:
parameters:
model: EVA-Qwen2.5-14B-v0.1.i1-Q4_K_M.gguf
files:
- filename: EVA-Qwen2.5-14B-v0.1.i1-Q4_K_M.gguf
sha256: 4e9665d4f83cd97efb42c8427f9c09be93b72e23a0364c91ad0b5de8056f2795
uri: huggingface://mradermacher/EVA-Qwen2.5-14B-v0.1-i1-GGUF/EVA-Qwen2.5-14B-v0.1.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "cursorcore-qw2.5-7b-i1"
urls:
- https://huggingface.co/TechxGenus/CursorCore-QW2.5-7B
- https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-i1-GGUF
description: |
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read our paper to learn more.
overrides:
parameters:
model: CursorCore-QW2.5-7B.i1-Q4_K_M.gguf
files:
- filename: CursorCore-QW2.5-7B.i1-Q4_K_M.gguf
sha256: 81868f4edb4ec1a61debde1dbdebc02b407930ee19a6d946ff801afba840a102
uri: huggingface://mradermacher/CursorCore-QW2.5-7B-i1-GGUF/CursorCore-QW2.5-7B.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "cursorcore-qw2.5-1.5b-lc-i1"
urls:
- https://huggingface.co/TechxGenus/CursorCore-QW2.5-1.5B-LC
- https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF
description: |
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read our paper to learn more.
overrides:
parameters:
model: CursorCore-QW2.5-1.5B-LC.i1-Q4_K_M.gguf
files:
- filename: CursorCore-QW2.5-1.5B-LC.i1-Q4_K_M.gguf
sha256: 185d720c810f7345ef861ad8eef1199bb15afa8e4f3c03bd5ffd476cfa465127
uri: huggingface://mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/CursorCore-QW2.5-1.5B-LC.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "edgerunner-command-nested-i1"
urls:
- https://huggingface.co/edgerunner-ai/EdgeRunner-Command-Nested
- https://huggingface.co/mradermacher/EdgeRunner-Command-Nested-i1-GGUF
description: |
EdgeRunner-Command-Nested is an advanced large language model designed specifically for handling complex nested function calls. It is initialized from Qwen2.5-7B-Instruct and further enhanced by integrating the Hermes function-call template and additional training on a specialized dataset (based on TinyAgent). This extra dataset focuses on personal-domain applications, giving the model a robust understanding of the nested-function scenarios typical of complex user interactions.
overrides:
parameters:
model: EdgeRunner-Command-Nested.i1-Q4_K_M.gguf
files:
- filename: EdgeRunner-Command-Nested.i1-Q4_K_M.gguf
sha256: a1cc4d2b601dc20e58cbb549bd3e9bc460995840c0aaf1cd3c1cb5414c900ac7
uri: huggingface://mradermacher/EdgeRunner-Command-Nested-i1-GGUF/EdgeRunner-Command-Nested.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "tsunami-0.5x-7b-instruct-i1"
icon: https://huggingface.co/Tsunami-th/Tsunami-0.5x-7B-Instruct/resolve/main/Tsunami.webp
urls:
- https://huggingface.co/Tsunami-th/Tsunami-0.5x-7B-Instruct
- https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-i1-GGUF
description: |
TSUNAMI: Transformative Semantic Understanding and Natural Augmentation Model for Intelligence.
The full TSUNAMI name was created by ChatGPT.
Information
Tsunami-0.5x-7B-Instruct is a Thai large language model fine-tuned from Qwen2.5-7B on around 100,000 rows of Thai data.
overrides:
parameters:
model: Tsunami-0.5x-7B-Instruct.i1-Q4_K_M.gguf
files:
- filename: Tsunami-0.5x-7B-Instruct.i1-Q4_K_M.gguf
sha256: 22e2003ecec7f1e91f2e9aaec334613c0f37fb3000d0e628b5a9980e53322fa7
uri: huggingface://mradermacher/Tsunami-0.5x-7B-Instruct-i1-GGUF/Tsunami-0.5x-7B-Instruct.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qevacot-7b-v2"
urls:
- https://huggingface.co/bunnycore/Qevacot-7B-v2
- https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF
description: |
This model was merged using the TIES merge method, with Qwen/Qwen2.5-7B as a base.
The following models were included in the merge:
- c10x/CoT-2.5
- EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1
- huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
- Cran-May/T.E-8.1
overrides:
parameters:
model: Qevacot-7B-v2.Q4_K_M.gguf
files:
- filename: Qevacot-7B-v2.Q4_K_M.gguf
sha256: a45b3d3b74bc68a5c7ac07d251cdeff671e64085d1816cd86fca6cfb7eab204e
uri: huggingface://mradermacher/Qevacot-7B-v2-GGUF/Qevacot-7B-v2.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "meissa-qwen2.5-7b-instruct"
icon: https://huggingface.co/Orion-zhen/Meissa-Qwen2.5-7B-Instruct/resolve/main/meissa.jpg
urls:
- https://huggingface.co/Orion-zhen/Meissa-Qwen2.5-7B-Instruct
- https://huggingface.co/QuantFactory/Meissa-Qwen2.5-7B-Instruct-GGUF
description: |
Meissa is designated Lambda Orionis, forms Orion's head, and is a multiple star with a combined apparent magnitude of 3.33. Its name means the "shining one".
This model is fine-tuned on writing and roleplaying datasets (maybe the first such model on qwen2.5-7b), aiming to enhance its performance in novel writing and roleplaying.
The model is fine-tuned from Orion-zhen/Qwen2.5-7B-Instruct-Uncensored.
overrides:
parameters:
model: Meissa-Qwen2.5-7B-Instruct.Q4_K_M.gguf
files:
- filename: Meissa-Qwen2.5-7B-Instruct.Q4_K_M.gguf
sha256: 632b10d5c0e98bc8d53295886da2d57772a54bb6f6fa01d458e9e8c7fa9c905a
uri: huggingface://QuantFactory/Meissa-Qwen2.5-7B-Instruct-GGUF/Meissa-Qwen2.5-7B-Instruct.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "thebeagle-v2beta-32b-mgs"
urls:
- https://huggingface.co/fblgit/TheBeagle-v2beta-32B-MGS
- https://huggingface.co/bartowski/TheBeagle-v2beta-32B-MGS-GGUF
description: |
This model is an experimental version of our latest innovation: MGS. It's up to you to figure out what it means, but it's very explicit. We didn't apply our known UNA algorithm to the forward pass, but the two are entirely compatible: they operate in different parts of the neural network and in different ways, though both can be seen as regularization techniques.
- Updated tokenizer_config.json (from the base model)
- Regenerated quants (being uploaded)
- Re-submitted Leaderboard evaluation; MATH & IFEval have relevant updates
- Aligned LICENSE with Qwen terms
MGS stands for... Many-Geeks-Searching... and that's it. Hint: 1+1 is 2, and 1+1 is not 3.
We still believe one epoch should be enough, so we did just one epoch.
Dataset
We used the first decent (in corpus quality and size) dataset on the Hub: Magpie-Align/Magpie-Pro-300K-Filtered. Kudos to the Magpie team for contributing some decent material that I personally think is very good to ablate.
It achieves the following results on the evaluation set:
Loss: 0.5378 (1 Epoch), outperforming the baseline model.
overrides:
parameters:
model: TheBeagle-v2beta-32B-MGS-Q4_K_M.gguf
files:
- filename: TheBeagle-v2beta-32B-MGS-Q4_K_M.gguf
sha256: db0d3b3c5341d2d51115794bf5da6552b5c0714b041de9b82065cc0c982dd4f7
uri: huggingface://bartowski/TheBeagle-v2beta-32B-MGS-GGUF/TheBeagle-v2beta-32B-MGS-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "meraj-mini"
icon: https://i.ibb.co/CmPSSpq/Screenshot-2024-10-06-at-9-45-06-PM.png
urls:
- https://huggingface.co/arcee-ai/Meraj-Mini
- https://huggingface.co/QuantFactory/Meraj-Mini-GGUF
description: |
Arcee Meraj Mini is a quantized version of the Meraj-Mini model, created using llama.cpp. It is an open-source model that is fine-tuned from the Qwen2.5-7B-Instruct model and is designed for both Arabic and English languages. The model has undergone evaluations across multiple benchmarks in both languages and demonstrates top-tier performance in Arabic and competitive results in English. The key stages in its development include data preparation, initial training, iterative training and post-training, evaluation, and final model creation. The model is capable of solving a wide range of language tasks and is suitable for various applications such as education, mathematics and coding, customer service, and content creation. The Arcee Meraj Mini model consistently outperforms state-of-the-art models on most benchmarks of the Open Arabic LLM Leaderboard (OALL), highlighting its improvements and effectiveness in Arabic language content.
overrides:
parameters:
model: Meraj-Mini.Q4_K_M.gguf
files:
- filename: Meraj-Mini.Q4_K_M.gguf
sha256: f8f3923eb924b8f8e8f530a5bf07fcbd5b3dd10dd478d229d6f4377e31eb3938
uri: huggingface://QuantFactory/Meraj-Mini-GGUF/Meraj-Mini.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "spiral-da-hyah-qwen2.5-72b-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/toQiofo5ujXDGI4Gh3ciH.png
urls:
- https://huggingface.co/KaraKaraWitch/spiral-da-HYAH-Qwen2.5-72b
- https://huggingface.co/mradermacher/spiral-da-HYAH-Qwen2.5-72b-i1-GGUF
description: |
Model stock merge for fun.
This model was merged using the Model Stock merge method, with rombodawg/Rombos-LLM-V2.5-Qwen-72b as a base.
The following models were included in the merge:
- anthracite-org/magnum-v4-72b
- AXCXEPT/EZO-Qwen2.5-72B-Instruct
overrides:
parameters:
model: spiral-da-HYAH-Qwen2.5-72b.i1-Q4_K_M.gguf
files:
- filename: spiral-da-HYAH-Qwen2.5-72b.i1-Q4_K_M.gguf
sha256: 6119e89cadae0bc01a0909f5d9776610dfc4cdcd1600f334c3afb0d0ece011a8
uri: huggingface://mradermacher/spiral-da-HYAH-Qwen2.5-72b-i1-GGUF/spiral-da-HYAH-Qwen2.5-72b.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "whiterabbitneo-2.5-qwen-2.5-coder-7b"
icon: https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B/resolve/main/whiterabbitneo-logo-defcon.png
urls:
- https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
- https://huggingface.co/bartowski/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF
description: |
WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity.
Models are now getting released as a public preview of its capabilities, and also to assess the societal impact of such an AI.
overrides:
parameters:
model: WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q4_K_M.gguf
files:
- filename: WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q4_K_M.gguf
sha256: 3790b0bf2c505fcbd144b6b69354fe45a83ac09238a87469db0082027c127de4
uri: huggingface://bartowski/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "cybertron-v4-qw7b-mgs"
icon: https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS/resolve/main/cybertron_v4MGS.png
urls:
- https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS
- https://huggingface.co/QuantFactory/cybertron-v4-qw7B-MGS-GGUF
description: |
Here we use our novel approach called MGS. It's up to you to figure out what it means.
Cybertron V4 went through SFT on Magpie-Align/Magpie-Qwen2.5-Pro-1M-v0.1.
overrides:
parameters:
model: cybertron-v4-qw7B-MGS.Q4_K_M.gguf
files:
- filename: cybertron-v4-qw7B-MGS.Q4_K_M.gguf
sha256: 32ed4174bad90bb7a2cdcd48b76b3b5924677a4160b762d5e5d95c93fe5205db
uri: huggingface://QuantFactory/cybertron-v4-qw7B-MGS-GGUF/cybertron-v4-qw7B-MGS.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "q25-1.5b-veolu"
icon: https://huggingface.co/Alfitaria/Q25-1.5B-VeoLu/resolve/main/veolu.png
urls:
- https://huggingface.co/Alfitaria/Q25-1.5B-VeoLu
- https://huggingface.co/bartowski/Q25-1.5B-VeoLu-GGUF
description: |
Q25-1.5B-Veo Lu is a tiny General-Purpose Creative model, made up of a merge of bespoke finetunes on Qwen 2.5-1.5B-Instruct.
Inspired by the success of MN-12B-Mag Mell and MS-Meadowlark-22B, Veo Lu was trained on a healthy, balanced diet of Internet fiction, roleplaying, adventuring, and reasoning/general knowledge.
The components of Veo Lu are:
- Bard (pretrain, writing): Fujin (cleaned/extended Rosier)
- Scribe (pretrain, roleplay): Creative Writing Multiturn
- Cartographer (pretrain, adventuring): SpringDragon
- Alchemist (SFT, science/reasoning): ScienceQA, MedquadQA, Orca Math Word Problems
This model is capable of carrying on a scene without going completely off the rails. That being said, it only has 1.5B parameters. So please, for the love of God, manage your expectations. Since it's Qwen, use ChatML formatting. Turn the temperature down to ~0.7-0.8 and try a dash of rep-pen.
overrides:
parameters:
model: Q25-1.5B-VeoLu-Q4_K_M.gguf
files:
- filename: Q25-1.5B-VeoLu-Q4_K_M.gguf
sha256: bbfb3691b6cabceb49ea1feacfa2eb2651312b8cc6caaf893b46375097e2f026
uri: huggingface://bartowski/Q25-1.5B-VeoLu-GGUF/Q25-1.5B-VeoLu-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "llenn-v0.75-qwen2.5-72b-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/mYiG-Ndxzqu8ofaBGbOIZ.png
urls:
- https://huggingface.co/KaraKaraWitch/LLENN-v0.75-Qwen2.5-72b
- https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF
description: |
The following models were included in the merge:
- rombodawg/Rombos-LLM-V2.5-Qwen-72b
- abacusai/Dracarys2-72B-Instruct
- EVA-UNIT-01/EVA-Qwen2.5-72B-v0.0
- ZeusLabs/Chronos-Platinum-72B
- m8than/banana-2-b-72b
overrides:
parameters:
model: LLENN-v0.75-Qwen2.5-72b.i1-Q4_K_M.gguf
files:
- filename: LLENN-v0.75-Qwen2.5-72b.i1-Q4_K_M.gguf
sha256: 38990136bb48fc9422b0e477bed6d9c40c00c270806d3bd3f58e426badfa0d4d
uri: huggingface://mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/LLENN-v0.75-Qwen2.5-72b.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "eva-qwen2.5-14b-v0.2"
urls:
- https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
- https://huggingface.co/bartowski/EVA-Qwen2.5-14B-v0.2-GGUF
description: |
A RP/storywriting specialist model, full-parameter finetune of Qwen2.5-14B on mixture of synthetic and natural data.
It uses Celeste 70B 0.1 data mixture, greatly expanding it to improve versatility, creativity and "flavor" of the resulting model.
Version notes for 0.2: Now using the refined dataset from 32B 0.2. Major improvements in coherence, instruction following and long-context comprehension over 14B v0.1.
Prompt format is ChatML.
overrides:
parameters:
model: EVA-Qwen2.5-14B-v0.2-Q4_K_M.gguf
files:
- filename: EVA-Qwen2.5-14B-v0.2-Q4_K_M.gguf
sha256: 5d79bc8bf486c48c6430621a5bc5d3032227532dae436a27aa23aaf3e618e009
uri: huggingface://bartowski/EVA-Qwen2.5-14B-v0.2-GGUF/EVA-Qwen2.5-14B-v0.2-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "tissint-14b-128k-rp"
urls:
- https://huggingface.co/Ttimofeyka/Tissint-14B-128k-RP
- https://huggingface.co/mradermacher/Tissint-14B-128k-RP-GGUF
description: |
The model is based on SuperNova-Medius (as the current best 14B model) with a 128k context and an emphasis on creativity, including NSFW and multi-turn conversations.
overrides:
parameters:
model: Tissint-14B-128k-RP.Q4_K_M.gguf
files:
- filename: Tissint-14B-128k-RP.Q4_K_M.gguf
sha256: 374c02f69fae47e7d78ffed9fad4e405250d31031a6bc1539b136c4b1cfc85c2
uri: huggingface://mradermacher/Tissint-14B-128k-RP-GGUF/Tissint-14B-128k-RP.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "tq2.5-14b-sugarquill-v1"
icon: https://huggingface.co/allura-org/TQ2.5-14B-Sugarquill-v1/resolve/main/card_img.png
urls:
- https://huggingface.co/allura-org/TQ2.5-14B-Sugarquill-v1
- https://huggingface.co/bartowski/TQ2.5-14B-Sugarquill-v1-GGUF
description: |
A continued pretrain of SuperNova-Medius on assorted short-story data from the web. SuperNova already had nice prose, but diversifying it a bit definitely doesn't hurt. It's also, finally, a storywriter model with enough context for something more than a short story, which is nice.
It's a fair bit more temperamental than Gemma, but can be tamed with some sampling. Instruction following also stayed rather strong, so it works for both RP and storywriting, both in chat mode via back-and-forth co-writing and in raw completion.
Overall, I'd say it successfully transfers the essence of what I liked about Gemma Sugarquill. I will also make a Qwen version of Aletheia, but with a brand new LoRA, based on a brand new RP dataset that's in the making right now.
Model was trained by Auri.
overrides:
parameters:
model: TQ2.5-14B-Sugarquill-v1-Q4_K_M.gguf
files:
- filename: TQ2.5-14B-Sugarquill-v1-Q4_K_M.gguf
sha256: a654fe3f41e963d8ea6753fb9a06b9dd76893714ebf02605ef67827944a4025e
uri: huggingface://bartowski/TQ2.5-14B-Sugarquill-v1-GGUF/TQ2.5-14B-Sugarquill-v1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "calme-3.3-baguette-3b"
icon: https://huggingface.co/MaziyarPanahi/calme-3.1-baguette-3b/resolve/main/calme_3.png
urls:
- https://huggingface.co/MaziyarPanahi/calme-3.3-baguette-3b
- https://huggingface.co/MaziyarPanahi/calme-3.3-baguette-3b-GGUF
description: |
This model is an advanced iteration of the powerful Qwen/Qwen2.5-3B, fine-tuned specifically to enhance its capabilities across general domains in both French and English.
overrides:
parameters:
model: calme-3.3-baguette-3b.Q5_K_M.gguf
files:
- filename: calme-3.3-baguette-3b.Q5_K_M.gguf
sha256: 9e75b76e8cda215ef5c9ad79edfc6e5deee2f9e01ecf605ee6a557b1b5c9ef85
uri: huggingface://MaziyarPanahi/calme-3.3-baguette-3b-GGUF/calme-3.3-baguette-3b.Q5_K_M.gguf
- !!merge <<: *qwen25
name: "calme-3.2-baguette-3b"
icon: https://huggingface.co/MaziyarPanahi/calme-3.1-baguette-3b/resolve/main/calme_3.png
urls:
- https://huggingface.co/MaziyarPanahi/calme-3.2-baguette-3b
- https://huggingface.co/MaziyarPanahi/calme-3.2-baguette-3b-GGUF
description: |
This model is an advanced iteration of the powerful Qwen/Qwen2.5-3B, fine-tuned specifically to enhance its capabilities across general domains in both French and English.
overrides:
parameters:
model: calme-3.2-baguette-3b.Q4_K_M.gguf
files:
- filename: calme-3.2-baguette-3b.Q4_K_M.gguf
sha256: 4e62fe0108643bbfd842add5a1bf199e9b81b0181309b15f483e1f07c2b5fbb2
uri: huggingface://MaziyarPanahi/calme-3.2-baguette-3b-GGUF/calme-3.2-baguette-3b.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "calme-3.1-baguette-3b"
icon: https://huggingface.co/MaziyarPanahi/calme-3.1-baguette-3b/resolve/main/calme_3.png
urls:
- https://huggingface.co/MaziyarPanahi/calme-3.1-baguette-3b
- https://huggingface.co/MaziyarPanahi/calme-3.1-baguette-3b-GGUF
description: |
This model is an advanced iteration of the powerful Qwen/Qwen2.5-3B, fine-tuned specifically to enhance its capabilities across general domains in both French and English.
overrides:
parameters:
model: calme-3.1-baguette-3b.Q4_K_M.gguf
files:
- filename: calme-3.1-baguette-3b.Q4_K_M.gguf
sha256: 351058680d633749fa64efde205bd5f3d942aacada3204c594d9acfab2fc8774
uri: huggingface://MaziyarPanahi/calme-3.1-baguette-3b-GGUF/calme-3.1-baguette-3b.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "calme-3.3-qwenloi-3b"
icon: https://huggingface.co/MaziyarPanahi/calme-3.3-qwenloi-3b/resolve/main/calme_3.png
urls:
- https://huggingface.co/MaziyarPanahi/calme-3.3-qwenloi-3b
- https://huggingface.co/MaziyarPanahi/calme-3.3-qwenloi-3b-GGUF
description: |
This model is an advanced iteration of the powerful Qwen/Qwen2.5-3B, specifically fine-tuned to enhance its capabilities in the French legal domain.
overrides:
parameters:
model: calme-3.3-qwenloi-3b.Q5_K_M.gguf
files:
- filename: calme-3.3-qwenloi-3b.Q5_K_M.gguf
sha256: 9592e186a00c70552365d85ccabddae87acc8d812634a6145da8d460b57b70f9
uri: huggingface://MaziyarPanahi/calme-3.3-qwenloi-3b-GGUF/calme-3.3-qwenloi-3b.Q5_K_M.gguf
- !!merge <<: *qwen25
name: "calme-3.2-qwenloi-3b"
icon: https://huggingface.co/MaziyarPanahi/calme-3.3-qwenloi-3b/resolve/main/calme_3.png
urls:
- https://huggingface.co/MaziyarPanahi/calme-3.2-qwenloi-3b
- https://huggingface.co/MaziyarPanahi/calme-3.2-qwenloi-3b-GGUF
description: |
This model is an advanced iteration of the powerful Qwen/Qwen2.5-3B, specifically fine-tuned to enhance its capabilities in the French legal domain.
overrides:
parameters:
model: calme-3.2-qwenloi-3b.Q5_K_M.gguf
files:
- filename: calme-3.2-qwenloi-3b.Q5_K_M.gguf
sha256: 61be0c2f221262523dcd00a9147fe590aba797c89a1c5849bd4f66e7df2ad272
uri: huggingface://MaziyarPanahi/calme-3.2-qwenloi-3b-GGUF/calme-3.2-qwenloi-3b.Q5_K_M.gguf
- !!merge <<: *qwen25
name: "calme-3.1-qwenloi-3b"
icon: https://huggingface.co/MaziyarPanahi/calme-3.3-qwenloi-3b/resolve/main/calme_3.png
urls:
- https://huggingface.co/MaziyarPanahi/calme-3.1-qwenloi-3b
- https://huggingface.co/MaziyarPanahi/calme-3.1-qwenloi-3b-GGUF
description: |
This model is an advanced iteration of the powerful Qwen/Qwen2.5-3B, specifically fine-tuned to enhance its capabilities in the French legal domain.
overrides:
parameters:
model: calme-3.1-qwenloi-3b.Q5_K_M.gguf
files:
- filename: calme-3.1-qwenloi-3b.Q5_K_M.gguf
sha256: 8962a8d1704979039063b5c69fafdb38b545c26143419ec4c574f37f2d6dd7b2
uri: huggingface://MaziyarPanahi/calme-3.1-qwenloi-3b-GGUF/calme-3.1-qwenloi-3b.Q5_K_M.gguf
- !!merge <<: *qwen25
name: "eva-qwen2.5-72b-v0.1-i1"
urls:
- https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1
- https://huggingface.co/mradermacher/EVA-Qwen2.5-72B-v0.1-i1-GGUF
description: |
A RP/storywriting specialist model, full-parameter finetune of Qwen2.5-72B on mixture of synthetic and natural data.
It uses Celeste 70B 0.1 data mixture, greatly expanding it to improve versatility, creativity and "flavor" of the resulting model.
Dedicated to Nev.
Version notes for 0.1: Reprocessed dataset (via Cahvay for 32B 0.2, used here as well), readjusted training config for 8xH100 SXM. Significant improvements in instruction following, long context understanding and overall coherence over v0.0.
overrides:
parameters:
model: EVA-Qwen2.5-72B-v0.1.i1-Q4_K_M.gguf
files:
- filename: EVA-Qwen2.5-72B-v0.1.i1-Q4_K_M.gguf
sha256: b05dbc02eeb286c41122b103ac31431fc8dcbd80b8979422541a05cda53df61b
uri: huggingface://mradermacher/EVA-Qwen2.5-72B-v0.1-i1-GGUF/EVA-Qwen2.5-72B-v0.1.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "celestial-harmony-14b-v1.0-experimental-1016-i1"
urls:
- https://huggingface.co/ProdeusUnity/Celestial-Harmony-14b-v1.0-Experimental-1016
- https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-i1-GGUF
description: |
Yet another merge, this one for AuriAetherwiing, at their request.
This is a merge of pre-trained language models created using mergekit.
The following models were included in the merge:
- EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1
- v000000/Qwen2.5-Lumen-14B
- arcee-ai/SuperNova-Medius
overrides:
parameters:
model: Celestial-Harmony-14b-v1.0-Experimental-1016.i1-Q4_K_M.gguf
files:
- filename: Celestial-Harmony-14b-v1.0-Experimental-1016.i1-Q4_K_M.gguf
sha256: 536a6d98e30e9d52f91672daf49eeb7efe076e161a5da8beaca204adedd76864
uri: huggingface://mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-i1-GGUF/Celestial-Harmony-14b-v1.0-Experimental-1016.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-32b-arliai-rpmax-v1.3"
urls:
- https://huggingface.co/ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
- https://huggingface.co/bartowski/Qwen2.5-32B-ArliAI-RPMax-v1.3-GGUF
description: |
RPMax is a series of models trained on a diverse set of curated creative-writing and RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive: no two entries in the dataset repeat characters or situations, which ensures the model does not latch on to a single personality and can understand and act appropriately for any character or situation.
Many RPMax users mention that these models do not feel like any other RP models, having a different writing style and generally not feeling inbred.
overrides:
parameters:
model: Qwen2.5-32B-ArliAI-RPMax-v1.3-Q4_K_M.gguf
files:
- filename: Qwen2.5-32B-ArliAI-RPMax-v1.3-Q4_K_M.gguf
sha256: 51b369068b124165b1b8c253371b88b573af9dd350e331ce93d7e47b6b710003
uri: huggingface://bartowski/Qwen2.5-32B-ArliAI-RPMax-v1.3-GGUF/Qwen2.5-32B-ArliAI-RPMax-v1.3-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "q2.5-ms-mistoria-72b-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/5LOvUFYiMMw6pcEsOhmo2.webp
urls:
- https://huggingface.co/Steelskull/Q2.5-MS-Mistoria-72b
- https://huggingface.co/mradermacher/Q2.5-MS-Mistoria-72b-i1-GGUF
description: |
This model is my first attempt at a 72B model. As usual, my goal is to merge the robust storytelling of multiple models while attempting to maintain intelligence.
Merge of:
- model: EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1
- model: ZeusLabs/Chronos-Platinum-72B
- model: shuttleai/shuttle-3
overrides:
parameters:
model: Q2.5-MS-Mistoria-72b.i1-Q4_K_M.gguf
files:
- filename: Q2.5-MS-Mistoria-72b.i1-Q4_K_M.gguf
sha256: f51ac3db855259c0132070e7bb9f58b67538103ffb3c716880ceef3bb09d43d9
uri: huggingface://mradermacher/Q2.5-MS-Mistoria-72b-i1-GGUF/Q2.5-MS-Mistoria-72b.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "athene-v2-agent"
icon: https://huggingface.co/Nexusflow/Athene-V2-Agent/resolve/main/agent.png
urls:
- https://huggingface.co/Nexusflow/Athene-V2-Agent
- https://huggingface.co/bartowski/Athene-V2-Agent-GGUF
description: "Athene-V2-Agent is an open-source Agent LLM that surpasses the state-of-the-art in function calling and agentic capabilities.\n\n\U0001F4AA Versatile Agent Capability: Athene-V2-Agent is an agent model, capable of operating in environments with deeply nested dependencies with the environment. It is capable of reasoning and doing planning for trajectories with many tool calls necessary to answer a single query.\n\n\U0001F4CA Performance Highlights: Athene-V2-Agent surpasses GPT-4o in single FC tasks by 18% in function calling success rates, and by 17% in Agentic success rates.\n\n\U0001F527 Generalization to the Unseen: Athene-V2-Agent has never been trained on the functions or agentic settings used in evaluation.\n"
overrides:
parameters:
model: Athene-V2-Agent-Q4_K_M.gguf
files:
- filename: Athene-V2-Agent-Q4_K_M.gguf
sha256: 2829d205519da34852c374286d42a4403f3be012ea56424e88ebcb8dc89676ad
uri: huggingface://bartowski/Athene-V2-Agent-GGUF/Athene-V2-Agent-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "athene-v2-chat"
urls:
- https://huggingface.co/Nexusflow/Athene-V2-Chat
- https://huggingface.co/bartowski/Athene-V2-Chat-GGUF
description: |
We introduce Athene-V2-Chat-72B, an open-weights LLM on-par with GPT-4o across benchmarks. It is trained through RLHF with Qwen-2.5-72B-Instruct as base model. Athene-V2-Chat-72B excels in chat, math, and coding. Its sister model, Athene-V2-Agent-72B, surpasses GPT-4o in complex function calling and agentic applications.
overrides:
parameters:
model: Athene-V2-Chat-Q4_K_M.gguf
files:
- filename: Athene-V2-Chat-Q4_K_M.gguf
sha256: bda8b784ad55982891e5aa69b08ce4030c91a2e28ad9c4c35284d45d3c7aeb16
uri: huggingface://bartowski/Athene-V2-Chat-GGUF/Athene-V2-Chat-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-7b-nerd-uncensored-v1.7"
urls:
- https://huggingface.co/jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.7
- https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v1.7-GGUF
description: |
Model created by analyzing and selecting the optimal layers from other Qwen2.5-7B models based on their dimensional utilization efficiency, measured by the Normalized Effective Rank (NER), computed as follows:
Input: Weight matrix for each model layer
Compute singular values σᵢ where σᵢ ≥ 0 # σᵢ represents the importance of each dimension
Filter values above numerical threshold (>1e-12)
Sum all singular values: S = Σσᵢ # S acts as normalization factor
Create probability distribution: pᵢ = σᵢ/S # converts singular values to probabilities summing to 1
Compute Shannon entropy: H = -Σ(pᵢ * log₂(pᵢ)) # measures information content
Calculate maximum possible entropy: H_max = log₂(n)
Final NER score = H/H_max # normalizes score to [0,1] range
The result is a value between 0 and 1 for each model layer
overrides:
parameters:
model: Qwen2.5-7B-nerd-uncensored-v1.7.Q4_K_M.gguf
files:
- filename: Qwen2.5-7B-nerd-uncensored-v1.7.Q4_K_M.gguf
sha256: 42cf7a96784dc8f25c61c2404620c3e6548a024caa8dff6e435d7c86400d7ab8
uri: huggingface://mradermacher/Qwen2.5-7B-nerd-uncensored-v1.7-GGUF/Qwen2.5-7B-nerd-uncensored-v1.7.Q4_K_M.gguf
- !!merge <<: *qwen25
icon: https://i.imgur.com/OxX2Usi.png
name: "evathene-v1.0"
urls:
- https://huggingface.co/sophosympatheia/Evathene-v1.0
- https://huggingface.co/bartowski/Evathene-v1.0-GGUF
description: |
This 72B parameter model is a merge of Nexusflow/Athene-V2-Chat with EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1. See the merge recipe below for details.
This model is uncensored. You are responsible for whatever you do with it.
This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas.
overrides:
parameters:
model: Evathene-v1.0-Q4_K_M.gguf
files:
- filename: Evathene-v1.0-Q4_K_M.gguf
sha256: 96401ba9d798faa8a01f579b54523c5f75277e91bf1f0eee93db285f76f61e7e
uri: huggingface://bartowski/Evathene-v1.0-GGUF/Evathene-v1.0-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "miniclaus-qw1.5b-unamgs"
icon: https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS/resolve/main/miniclaus_qw15-UNAMGS.png
urls:
- https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS
- https://huggingface.co/bartowski/miniclaus-qw1.5B-UNAMGS-GGUF
description: |
Trained with Magpie-Align/Magpie-Pro-MT-300K-v0.1
Using MGS & UNA (MLP) on this tiny but powerful model.
overrides:
parameters:
model: miniclaus-qw1.5B-UNAMGS-Q4_K_M.gguf
files:
- filename: miniclaus-qw1.5B-UNAMGS-Q4_K_M.gguf
sha256: a0dadd7147cc4a8e8df59659556e4d824ef5c26fd2f39381fe467b2ff9cc1289
uri: huggingface://bartowski/miniclaus-qw1.5B-UNAMGS-GGUF/miniclaus-qw1.5B-UNAMGS-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-3b-smart-i1"
urls:
- https://huggingface.co/bunnycore/Qwen2.5-3B-Smart
- https://huggingface.co/mradermacher/Qwen2.5-3B-Smart-i1-GGUF
description: |
This model was merged using the passthrough merge method using bunnycore/Qwen2.5-3B-RP-Mix + bunnycore/Qwen2.5-3b-Smart-lora_model as a base.
overrides:
parameters:
model: Qwen2.5-3B-Smart.i1-Q4_K_M.gguf
files:
- filename: Qwen2.5-3B-Smart.i1-Q4_K_M.gguf
sha256: 4cfffa4478191b3ac5f54b0e2c5c3f60883322cf705d74f9651715b70f3779f4
uri: huggingface://mradermacher/Qwen2.5-3B-Smart-i1-GGUF/Qwen2.5-3B-Smart.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "steyrcannon-0.2-qwen2.5-72b"
urls:
- https://huggingface.co/KaraKaraWitch/SteyrCannon-0.2-Qwen2.5-72b
- https://huggingface.co/mradermacher/SteyrCannon-0.2-Qwen2.5-72b-GGUF
description: |
SteyrCannon-0.2 is an updated revision of the original SteyrCannon. It uses EVA-Qwen2.5-72B-v0.2; nothing else has changed. This model was merged using the TIES merge method with EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2 as a base.
The following models were included in the merge:
anthracite-org/magnum-v4-72b
EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2
overrides:
parameters:
model: SteyrCannon-0.2-Qwen2.5-72b.Q4_K_M.gguf
files:
- filename: SteyrCannon-0.2-Qwen2.5-72b.Q4_K_M.gguf
sha256: b34c08b77ffd25ccb0ca50b167f2215e784689205c93a0903fa9435b6cc187f0
uri: huggingface://mradermacher/SteyrCannon-0.2-Qwen2.5-72b-GGUF/SteyrCannon-0.2-Qwen2.5-72b.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "q2.5-ms-mistoria-72b-v2"
icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/5LOvUFYiMMw6pcEsOhmo2.webp
urls:
- https://huggingface.co/Steelskull/Q2.5-MS-Mistoria-72b-v2
- https://huggingface.co/bartowski/Q2.5-MS-Mistoria-72b-v2-GGUF
description: |
This model is my second attempt at a 72b model. As usual, my goal is to merge the robust storytelling of multiple models while attempting to maintain intelligence.
models:
- model: EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2
- model: ZeusLabs/Chronos-Platinum-72B
- model: shuttleai/shuttle-3
overrides:
parameters:
model: Q2.5-MS-Mistoria-72b-v2-Q4_K_M.gguf
files:
- filename: Q2.5-MS-Mistoria-72b-v2-Q4_K_M.gguf
sha256: 33df8aac5a790d1c286fe0fc4f9d340311f282eca19b78db6f7abb845923425c
uri: huggingface://bartowski/Q2.5-MS-Mistoria-72b-v2-GGUF/Q2.5-MS-Mistoria-72b-v2-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "eva-qwen2.5-72b-v0.2"
urls:
- https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2
- https://huggingface.co/bartowski/EVA-Qwen2.5-72B-v0.2-GGUF
description: |
A RP/storywriting specialist model, full-parameter finetune of Qwen2.5-72B on mixture of synthetic and natural data.
It uses Celeste 70B 0.1 data mixture, greatly expanding it to improve versatility, creativity and "flavor" of the resulting model.
Version notes for 0.2: Optimized training hyperparameters and increased sequence length. Better instruction following deeper into context and less repetition.
overrides:
parameters:
model: EVA-Qwen2.5-72B-v0.2-Q4_K_M.gguf
files:
- filename: EVA-Qwen2.5-72B-v0.2-Q4_K_M.gguf
sha256: 03ea0ecac3ee24a332ca43cf925b669c58714b9754be0f4bc232bd996681ef4b
uri: huggingface://bartowski/EVA-Qwen2.5-72B-v0.2-GGUF/EVA-Qwen2.5-72B-v0.2-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwq-32b-preview"
urls:
- https://huggingface.co/Qwen/QwQ-32B-Preview
- https://huggingface.co/bartowski/QwQ-32B-Preview-GGUF
description: |
QwQ-32B-Preview is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities. As a preview release, it demonstrates promising analytical abilities while having several important limitations:
Language Mixing and Code-Switching: The model may mix languages or switch between them unexpectedly, affecting response clarity.
Recursive Reasoning Loops: The model may enter circular reasoning patterns, leading to lengthy responses without a conclusive answer.
Safety and Ethical Considerations: The model requires enhanced safety measures to ensure reliable and secure performance, and users should exercise caution when deploying it.
Performance and Benchmark Limitations: The model excels in math and coding but has room for improvement in other areas, such as common sense reasoning and nuanced language understanding.
overrides:
parameters:
model: QwQ-32B-Preview-Q4_K_M.gguf
files:
- filename: QwQ-32B-Preview-Q4_K_M.gguf
sha256: c499801e682e2379528090c50e106837ca1d69dc3bf3ff3a9af830a0eb49cdf6
uri: huggingface://bartowski/QwQ-32B-Preview-GGUF/QwQ-32B-Preview-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "q2.5-32b-slush-i1"
urls:
- https://huggingface.co/crestf411/Q2.5-32B-Slush
- https://huggingface.co/mradermacher/Q2.5-32B-Slush-i1-GGUF
description: |
Slush is a two-stage model trained with high LoRA dropout, where stage 1 is a pretraining continuation on the base model, aimed at boosting the model's creativity and writing capabilities. This is then merged into the instruction-tuned model, and stage 2 is a fine-tuning step on top of this to further enhance its roleplaying capabilities and/or to repair any damage caused in the stage 1 merge.
This is still early stage. As always, feedback is welcome, and begone if you demand perfection.
The second stage, like the Sunfall series, follows the Silly Tavern preset (ChatML), so YMMV, in particular if you use some other tool and/or preset.
overrides:
parameters:
model: Q2.5-32B-Slush.i1-Q4_K_M.gguf
files:
- filename: Q2.5-32B-Slush.i1-Q4_K_M.gguf
sha256: 95aecaf43077dabc72d3b556923ede2563325e1c89863800229cfa8b7f1c9659
uri: huggingface://mradermacher/Q2.5-32B-Slush-i1-GGUF/Q2.5-32B-Slush.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwestion-24b"
urls:
- https://huggingface.co/CultriX/Qwestion-14B
- https://huggingface.co/mradermacher/Qwestion-24B-GGUF
description: |
This model was merged using the DARE TIES merge method using Qwen/Qwen2.5-14B as a base.
The following models were included in the merge:
allknowingroger/Qwenslerp2-14B
rombodawg/Rombos-LLM-V2.6-Qwen-14b
VAGOsolutions/SauerkrautLM-v2-14b-DPO
CultriX/Qwen2.5-14B-Wernicke
overrides:
parameters:
model: Qwestion-24B.Q4_K_M.gguf
files:
- filename: Qwestion-24B.Q4_K_M.gguf
sha256: 5d493bd81cfeef66d80101260145ab1d1d0428ef2191edce62b58391bd0fff0e
uri: huggingface://mradermacher/Qwestion-24B-GGUF/Qwestion-24B.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "teleut-7b"
icon: https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/UqIi8eztdptvt52Mak_1K.png
urls:
- https://huggingface.co/allura-org/Teleut-7b
- https://huggingface.co/QuantFactory/Teleut-7b-GGUF
description: |
A replication attempt of Tulu 3 on the Qwen 2.5 base models.
overrides:
parameters:
model: Teleut-7b.Q4_K_M.gguf
files:
- filename: Teleut-7b.Q4_K_M.gguf
sha256: 844a633ea01d793c638e99f2e07413606b3812b759e9264fbaf69c8d94eaa093
uri: huggingface://QuantFactory/Teleut-7b-GGUF/Teleut-7b.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-7b-homercreative-mix"
urls:
- https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
- https://huggingface.co/QuantFactory/Qwen2.5-7B-HomerCreative-Mix-GGUF
description: |
ZeroXClem/Qwen2.5-7B-HomerCreative-Mix is an advanced language model meticulously crafted by merging four pre-trained models using the powerful mergekit framework. This fusion leverages the Model Stock merge method to combine the creative prowess of Qandora, the instructive capabilities of Qwen-Instruct-Fusion, the sophisticated blending of HomerSlerp1, and the foundational conversational strengths of Homer-v0.5-Qwen2.5-7B. The resulting model excels in creative text generation, contextual understanding, and dynamic conversational interactions.
🚀 Merged Models
This model merge incorporates the following:
bunnycore/Qandora-2.5-7B-Creative: Specializes in creative text generation, enhancing the model's ability to produce imaginative and diverse content.
bunnycore/Qwen2.5-7B-Instruct-Fusion: Focuses on instruction-following capabilities, improving the model's performance in understanding and executing user commands.
allknowingroger/HomerSlerp1-7B: Utilizes spherical linear interpolation (SLERP) to blend model weights smoothly, ensuring a harmonious integration of different model attributes.
newsbang/Homer-v0.5-Qwen2.5-7B: Acts as the foundational conversational model, providing robust language comprehension and generation capabilities.
overrides:
parameters:
model: Qwen2.5-7B-HomerCreative-Mix.Q4_K_M.gguf
files:
- filename: Qwen2.5-7B-HomerCreative-Mix.Q4_K_M.gguf
sha256: fc3fdb41e068646592f89a8ae62d7b330f2bd4e97bf615aef2977930977c8ba5
uri: huggingface://QuantFactory/Qwen2.5-7B-HomerCreative-Mix-GGUF/Qwen2.5-7B-HomerCreative-Mix.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "cybercore-qwen-2.1-7b"
urls:
- https://huggingface.co/bunnycore/CyberCore-Qwen-2.1-7B
- https://huggingface.co/QuantFactory/CyberCore-Qwen-2.1-7B-GGUF
description: |
This model was merged using the TIES merge method using rombodawg/Rombos-LLM-V2.5-Qwen-7b as a base.
Models Merged
fblgit/cybertron-v4-qw7B-UNAMGS + bunnycore/Qwen-2.1-7b-Persona-lora_model
fblgit/cybertron-v4-qw7B-MGS + bunnycore/Qwen-2.1-7b-Persona-lora_model
overrides:
parameters:
model: CyberCore-Qwen-2.1-7B.Q4_K_M.gguf
files:
- filename: CyberCore-Qwen-2.1-7B.Q4_K_M.gguf
sha256: 726042707a4cec29ca0355b4dc7c53a807b307d08aa8a3d4a9e76aefbbbcaadf
uri: huggingface://QuantFactory/CyberCore-Qwen-2.1-7B-GGUF/CyberCore-Qwen-2.1-7B.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "homercreativeanvita-mix-qw7b"
icon: https://huggingface.co/suayptalha/HomerCreativeAnvita-Mix-Qw7B/resolve/main/HomerCreativeAnvita.jpeg
urls:
- https://huggingface.co/suayptalha/HomerCreativeAnvita-Mix-Qw7B
- https://huggingface.co/QuantFactory/HomerCreativeAnvita-Mix-Qw7B-GGUF
description: |
This model is currently ranked #1 on the Open LLM Leaderboard among models up to 13B parameters!
Merge Method
This model was merged using the SLERP merge method.
Models Merged
The following models were included in the merge:
ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
overrides:
parameters:
model: HomerCreativeAnvita-Mix-Qw7B.Q4_K_M.gguf
files:
- filename: HomerCreativeAnvita-Mix-Qw7B.Q4_K_M.gguf
sha256: a356f279a104bff0bbc2ef7ec136c1e774153de8893bf988083e96fb7f4bc053
uri: huggingface://QuantFactory/HomerCreativeAnvita-Mix-Qw7B-GGUF/HomerCreativeAnvita-Mix-Qw7B.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "math-iio-7b-instruct"
icon: https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/faLfR-doaWP_BLUkOQrbq.png
urls:
- https://huggingface.co/prithivMLmods/Math-IIO-7B-Instruct
- https://huggingface.co/QuantFactory/Math-IIO-7B-Instruct-GGUF
description: |
The Math IIO 7B Instruct is a fine-tuned language model based on the robust Qwen2.5-7B-Instruct architecture. This model has been specifically trained to excel in single-shot mathematical reasoning and instruction-based tasks, making it a reliable choice for educational, analytical, and problem-solving applications.
Key Features:
Math-Optimized Capabilities:
The model is designed to handle complex mathematical problems, step-by-step calculations, and reasoning tasks.
Instruction-Tuned:
Fine-tuned for better adherence to structured queries and task-oriented prompts, enabling clear and concise outputs.
Large Vocabulary:
Equipped with an extensive tokenizer configuration and custom tokens to ensure precise mathematical notation support.
overrides:
parameters:
model: Math-IIO-7B-Instruct.Q4_K_M.gguf
files:
- filename: Math-IIO-7B-Instruct.Q4_K_M.gguf
sha256: 8ffda0b6a43eb9997dfd7db48fe3bd0970fd1b9b86fb68f082c38622a48b58f4
uri: huggingface://QuantFactory/Math-IIO-7B-Instruct-GGUF/Math-IIO-7B-Instruct.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "virtuoso-small"
icon: https://i.ibb.co/pXD6Bcv/SW2-U-g-QQLSH1-ZAbxhs-Iu-A.webp
urls:
- https://huggingface.co/arcee-ai/Virtuoso-Small-GGUF
description: |
Virtuoso-Small is the debut public release of the Virtuoso series of models by Arcee.ai, designed to bring cutting-edge generative AI capabilities to organizations and developers in a compact, efficient form. With 14 billion parameters, Virtuoso-Small is an accessible entry point for high-quality instruction-following, complex reasoning, and business-oriented generative AI tasks.
overrides:
parameters:
model: Virtuoso-Small-Q4_K_M.gguf
files:
- filename: Virtuoso-Small-Q4_K_M.gguf
sha256: 07db215cdfcb05036567017fe20e50e60cb2da28d1f9a8251cc4f18c8caa247f
uri: huggingface://arcee-ai/Virtuoso-Small-GGUF/Virtuoso-Small-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-7b-homeranvita-nerdmix"
urls:
- https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
- https://huggingface.co/QuantFactory/Qwen2.5-7B-HomerAnvita-NerdMix-GGUF
description: |
ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix is an advanced language model meticulously crafted by merging five pre-trained models using the powerful mergekit framework. This fusion leverages the Model Stock merge method to combine the creative prowess of Qandora, the instructive capabilities of Qwen-Instruct-Fusion, the sophisticated blending of HomerSlerp1, the mathematical precision of Cybertron-MGS, and the uncensored expertise of Qwen-Nerd. The resulting model excels in creative text generation, contextual understanding, technical reasoning, and dynamic conversational interactions.
overrides:
parameters:
model: Qwen2.5-7B-HomerAnvita-NerdMix.Q4_K_M.gguf
files:
- filename: Qwen2.5-7B-HomerAnvita-NerdMix.Q4_K_M.gguf
sha256: 73db2ca3ab50e8627352078988cd173e7447c5e8199a7db9e554602da1362e5f
uri: huggingface://QuantFactory/Qwen2.5-7B-HomerAnvita-NerdMix-GGUF/Qwen2.5-7B-HomerAnvita-NerdMix.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "qwen2.5-math-14b-instruct"
urls:
- https://huggingface.co/qingy2024/Qwen2.5-Math-14B-Instruct-Preview
- https://huggingface.co/QuantFactory/Qwen2.5-Math-14B-Instruct-GGUF
description: |
This Qwen 2.5 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
It was fine-tuned for 400 steps on garage-bAInd/Open-Platypus with a batch size of 3.
overrides:
parameters:
model: Qwen2.5-Math-14B-Instruct.Q4_K_M.gguf
files:
- filename: Qwen2.5-Math-14B-Instruct.Q4_K_M.gguf
sha256: 14e672394738a7d9f14a6cb16fd9a649b113a19a8b4934f9c18299fc4e286ab6
uri: huggingface://QuantFactory/Qwen2.5-Math-14B-Instruct-GGUF/Qwen2.5-Math-14B-Instruct.Q4_K_M.gguf
- !!merge <<: *qwen25
name: "sailor2-1b-chat"
icon: https://huggingface.co/sail/Sailor2-1B-Chat/resolve/main/sailor2_banner.jpg
urls:
- https://huggingface.co/sail/Sailor2-1B-Chat
- https://huggingface.co/bartowski/Sailor2-1B-Chat-GGUF
description: |
Sailor2 is a community-driven initiative that brings cutting-edge multilingual language models to South-East Asia (SEA). Our research highlights a strong demand for models in the 8B and 20B parameter range for production use, alongside 1B models for specialized applications, such as speculative decoding and research purposes. These models, released under the Apache 2.0 license, provide enhanced accessibility to advanced language technologies across the region.
Sailor2 builds upon the foundation of the awesome multilingual model Qwen 2.5 and is continuously pre-trained on 500B tokens to support 15 languages better with a unified model. These languages include English, Chinese, Burmese, Cebuano, Ilocano, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tagalog, Thai, Vietnamese, and Waray. By addressing the growing demand for diverse, robust, and accessible language models, Sailor2 seeks to serve the underserved in SEA areas with open, inclusive, and accessible multilingual LLMs. The Sailor2 model comes in three sizes, 1B, 8B, and 20B, which are expanded from the Qwen2.5 base models of 0.5B, 7B, and 14B, respectively.
overrides:
parameters:
model: Sailor2-1B-Chat-Q4_K_M.gguf
files:
- filename: Sailor2-1B-Chat-Q4_K_M.gguf
sha256: 782e8abed13d51a2083eadfb2f6d94c2cd77940532f612a99e6f6bec9b3501d4
uri: huggingface://bartowski/Sailor2-1B-Chat-GGUF/Sailor2-1B-Chat-Q4_K_M.gguf
- !!merge <<: *qwen25
icon: https://huggingface.co/sail/Sailor2-1B-Chat/resolve/main/sailor2_banner.jpg
name: "sailor2-8b-chat"
urls:
- https://huggingface.co/bartowski/Sailor2-8B-Chat-GGUF
description: |
Sailor2 is a community-driven initiative that brings cutting-edge multilingual language models to South-East Asia (SEA). Our research highlights a strong demand for models in the 8B and 20B parameter range for production use, alongside 1B models for specialized applications, such as speculative decoding and research purposes. These models, released under the Apache 2.0 license, provide enhanced accessibility to advanced language technologies across the region.
Sailor2 builds upon the foundation of the awesome multilingual model Qwen 2.5 and is continuously pre-trained on 500B tokens to support 15 languages better with a unified model. These languages include English, Chinese, Burmese, Cebuano, Ilocano, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tagalog, Thai, Vietnamese, and Waray. By addressing the growing demand for diverse, robust, and accessible language models, Sailor2 seeks to serve the underserved in SEA areas with open, inclusive, and accessible multilingual LLMs. The Sailor2 model comes in three sizes, 1B, 8B, and 20B, which are expanded from the Qwen2.5 base models of 0.5B, 7B, and 14B, respectively.
overrides:
parameters:
model: Sailor2-8B-Chat-Q4_K_M.gguf
files:
- filename: Sailor2-8B-Chat-Q4_K_M.gguf
sha256: 1a6aaadd6f6ef9c2290d66b348ebcbd6fdec542834cde622498fbd467d966103
uri: huggingface://bartowski/Sailor2-8B-Chat-GGUF/Sailor2-8B-Chat-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "sailor2-20b-chat"
icon: https://huggingface.co/sail/Sailor2-1B-Chat/resolve/main/sailor2_banner.jpg
urls:
- https://huggingface.co/bartowski/Sailor2-20B-Chat-GGUF
description: |
Sailor2 is a community-driven initiative that brings cutting-edge multilingual language models to South-East Asia (SEA). Our research highlights a strong demand for models in the 8B and 20B parameter range for production use, alongside 1B models for specialized applications, such as speculative decoding and research purposes. These models, released under the Apache 2.0 license, provide enhanced accessibility to advanced language technologies across the region.
Sailor2 builds upon the foundation of the awesome multilingual model Qwen 2.5 and is continuously pre-trained on 500B tokens to support 15 languages better with a unified model. These languages include English, Chinese, Burmese, Cebuano, Ilocano, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tagalog, Thai, Vietnamese, and Waray. By addressing the growing demand for diverse, robust, and accessible language models, Sailor2 seeks to serve the underserved in SEA areas with open, inclusive, and accessible multilingual LLMs. The Sailor2 model comes in three sizes, 1B, 8B, and 20B, which are expanded from the Qwen2.5 base models of 0.5B, 7B, and 14B, respectively.
overrides:
parameters:
model: Sailor2-20B-Chat-Q4_K_M.gguf
files:
- filename: Sailor2-20B-Chat-Q4_K_M.gguf
sha256: 0cf8fcd367accee19702ef15ee964bddd5035bde034afddd838f818e7655534a
uri: huggingface://bartowski/Sailor2-20B-Chat-GGUF/Sailor2-20B-Chat-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "72b-qwen2.5-kunou-v1"
icon: https://huggingface.co/Sao10K/72B-Qwen2.5-Kunou-v1/resolve/main/knn.png
urls:
- https://huggingface.co/Sao10K/72B-Qwen2.5-Kunou-v1
- https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF
description: |
I do not really have anything planned for this model other than it being a generalist, and Roleplay Model? It was just something made and planned in minutes.
Same with the 14 and 32B version.
Kunou's the name of an OC I worked on for a couple of years, for a... fanfic. mmm...
A kind-of successor to L3-70B-Euryale-v2.2 in all but name? I'm keeping Stheno/Euryale lineage to Llama series for now.
I had a version made on top of Nemotron, a supposed Euryale 2.4 but that flopped hard, it was not my cup of tea.
This version is basically a better, more cleaned up Dataset used on Euryale and Stheno.
overrides:
parameters:
model: 72B-Qwen2.5-Kunou-v1-Q4_K_M.gguf
files:
- filename: 72B-Qwen2.5-Kunou-v1-Q4_K_M.gguf
sha256: 91907f29746625a62885793475956220b81d8a5a34b53686a1acd1d03fd403ea
uri: huggingface://bartowski/72B-Qwen2.5-Kunou-v1-GGUF/72B-Qwen2.5-Kunou-v1-Q4_K_M.gguf
- !!merge <<: *qwen25
icon: https://i.imgur.com/OxX2Usi.png
name: "evathene-v1.3"
urls:
- https://huggingface.co/sophosympatheia/Evathene-v1.3
- https://huggingface.co/bartowski/Evathene-v1.3-GGUF
description: |
This 72B parameter model is a merge of sophosympatheia/Evathene-v1.1 and sophosympatheia/Evathene-v1.2. See the merge recipe below for details.
overrides:
parameters:
model: Evathene-v1.3-Q4_K_M.gguf
files:
- filename: Evathene-v1.3-Q4_K_M.gguf
sha256: 0f54909b3ddca514994ee16417da8750f56e7bd59581b46ac47625c230e29d1f
uri: huggingface://bartowski/Evathene-v1.3-GGUF/Evathene-v1.3-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "fusechat-qwen-2.5-7b-instruct"
icon: https://huggingface.co/FuseAI/FuseChat-Qwen-2.5-7B-Instruct/resolve/main/FuseChat-3.0.png
urls:
- https://huggingface.co/FuseAI/FuseChat-Qwen-2.5-7B-Instruct
- https://huggingface.co/bartowski/FuseChat-Qwen-2.5-7B-Instruct-GGUF
description: |
We present FuseChat-3.0, a series of models crafted to enhance performance by integrating the strengths of multiple source LLMs into more compact target LLMs. To achieve this fusion, we utilized four powerful source LLMs: Gemma-2-27B-It, Mistral-Large-Instruct-2407, Qwen-2.5-72B-Instruct, and Llama-3.1-70B-Instruct. For the target LLMs, we employed three widely-used smaller models—Llama-3.1-8B-Instruct, Gemma-2-9B-It, and Qwen-2.5-7B-Instruct—along with two even more compact models—Llama-3.2-3B-Instruct and Llama-3.2-1B-Instruct. The implicit model fusion process involves a two-stage training pipeline comprising Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between target and source LLMs, and Direct Preference Optimization (DPO) for learning preferences from multiple source LLMs. The resulting FuseChat-3.0 models demonstrated substantial improvements in tasks related to general conversation, instruction following, mathematics, and coding. Notably, when Llama-3.1-8B-Instruct served as the target LLM, our fusion approach achieved an average improvement of 6.8 points across 14 benchmarks. Moreover, it showed significant improvements of 37.1 and 30.1 points on instruction-following test sets AlpacaEval-2 and Arena-Hard respectively. We have released the FuseChat-3.0 models on Huggingface, stay tuned for the forthcoming dataset and code.
overrides:
parameters:
model: FuseChat-Qwen-2.5-7B-Instruct-Q4_K_M.gguf
files:
- filename: FuseChat-Qwen-2.5-7B-Instruct-Q4_K_M.gguf
sha256: 8cd8c317769f03125ac753c836ac92c5a76ee0b35502811d0e65bcbb8df9d55c
uri: huggingface://bartowski/FuseChat-Qwen-2.5-7B-Instruct-GGUF/FuseChat-Qwen-2.5-7B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
name: "neumind-math-7b-instruct"
urls:
- https://huggingface.co/prithivMLmods/Neumind-Math-7B-Instruct
- https://huggingface.co/QuantFactory/Neumind-Math-7B-Instruct-GGUF
description: |
The Neumind-Math-7B-Instruct is a fine-tuned model based on Qwen2.5-7B-Instruct, optimized for mathematical reasoning, step-by-step problem-solving, and instruction-based tasks in the mathematics domain. The model is designed for applications requiring structured reasoning, numerical computations, and mathematical proof generation.
overrides:
parameters:
model: Neumind-Math-7B-Instruct.Q4_K_M.gguf
files:
- filename: Neumind-Math-7B-Instruct.Q4_K_M.gguf
sha256: 3250abadeae4234e06dfaf7cf86fe871fe021e6c2dfcb4542c2a4f412d71e28c
uri: huggingface://QuantFactory/Neumind-Math-7B-Instruct-GGUF/Neumind-Math-7B-Instruct.Q4_K_M.gguf
- &archfunct
license: apache-2.0
tags:
- llm
- gguf
- gpu
- qwen
- qwen2.5
- cpu
- function-calling
name: "arch-function-1.5b"
uri: "github:mudler/LocalAI/gallery/arch-function.yaml@master"
urls:
- https://huggingface.co/katanemolabs/Arch-Function-1.5B
- https://huggingface.co/mradermacher/Arch-Function-1.5B-GGUF
description: |
The Katanemo Arch-Function collection of large language models (LLMs) is a collection of state-of-the-art (SOTA) LLMs specifically designed for function calling tasks. The models are designed to understand complex function signatures, identify required parameters, and produce accurate function call outputs based on natural language prompts. Achieving performance on par with GPT-4, these models set a new benchmark in the domain of function-oriented tasks, making them suitable for scenarios where automated API interaction and function execution are crucial.
In summary, the Katanemo Arch-Function collection demonstrates:
State-of-the-art performance in function calling
Accurate parameter identification and suggestion, even in ambiguous or incomplete inputs
High generalization across multiple function calling use cases, from API interactions to automated backend tasks.
Optimized low-latency, high-throughput performance, making it suitable for real-time, production environments.
overrides:
parameters:
model: Arch-Function-1.5B.Q4_K_M.gguf
files:
- filename: Arch-Function-1.5B.Q4_K_M.gguf
sha256: 5ac54d2d50cca0ee0335ca2c9b688204c0829cd3a73de3ee3fda108281ad9691
uri: huggingface://mradermacher/Arch-Function-1.5B-GGUF/Arch-Function-1.5B.Q4_K_M.gguf
- !!merge <<: *archfunct
name: "arch-function-7b"
urls:
- https://huggingface.co/katanemolabs/Arch-Function-7B
- https://huggingface.co/mradermacher/Arch-Function-7B-GGUF
overrides:
parameters:
model: Arch-Function-7B.Q4_K_M.gguf
files:
- filename: Arch-Function-7B.Q4_K_M.gguf
sha256: 6e38661321d79d02b8cf57c79d97c6c0e19adb9ffa66083cc440c24e257234b6
uri: huggingface://mradermacher/Arch-Function-7B-GGUF/Arch-Function-7B.Q4_K_M.gguf
- !!merge <<: *archfunct
name: "arch-function-3b"
urls:
- https://huggingface.co/katanemolabs/Arch-Function-3B
- https://huggingface.co/mradermacher/Arch-Function-3B-GGUF
overrides:
parameters:
model: Arch-Function-3B.Q4_K_M.gguf
files:
- filename: Arch-Function-3B.Q4_K_M.gguf
sha256: 9945cb8d070498d163e5df90c1987f591d35e4fd2222a6c51bcfff848c4b573b
uri: huggingface://mradermacher/Arch-Function-3B-GGUF/Arch-Function-3B.Q4_K_M.gguf
- &smollm
## SmolLM
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "smollm-1.7b-instruct"
icon: https://huggingface.co/datasets/HuggingFaceTB/images/resolve/main/banner_smol.png
tags:
- llm
- gguf
- gpu
- smollm
- chatml
- cpu
urls:
- https://huggingface.co/MaziyarPanahi/SmolLM-1.7B-Instruct-GGUF
- https://huggingface.co/HuggingFaceTB/SmolLM-1.7B-Instruct
description: |
SmolLM is a series of small language models available in three sizes: 135M, 360M, and 1.7B parameters.
These models are pre-trained on SmolLM-Corpus, a curated collection of high-quality educational and synthetic data designed for training LLMs. For further details, we refer to our blogpost.
To build SmolLM-Instruct, we finetuned the base models on publicly available datasets.
overrides:
parameters:
model: SmolLM-1.7B-Instruct.Q4_K_M.gguf
files:
- filename: SmolLM-1.7B-Instruct.Q4_K_M.gguf
sha256: 2b07eb2293ed3fc544a9858beda5bfb03dcabda6aa6582d3c85768c95f498d28
uri: huggingface://MaziyarPanahi/SmolLM-1.7B-Instruct-GGUF/SmolLM-1.7B-Instruct.Q4_K_M.gguf
- !!merge <<: *smollm
name: "smollm2-1.7b-instruct"
icon: https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/y45hIMNREW7w_XpHYB_0q.png
urls:
- https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct
- https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF
description: |
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using UltraFeedback.
overrides:
parameters:
model: smollm2-1.7b-instruct-q4_k_m.gguf
files:
- filename: smollm2-1.7b-instruct-q4_k_m.gguf
sha256: decd2598bc2c8ed08c19adc3c8fdd461ee19ed5708679d1c54ef54a5a30d4f33
uri: huggingface://HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF/smollm2-1.7b-instruct-q4_k_m.gguf
- &llama31
## LLama3.1
url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png
name: "meta-llama-3.1-8b-instruct"
license: llama3.1
description: |
The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
urls:
- https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
- https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF
tags:
- llm
- gguf
- gpu
- cpu
- llama3.1
overrides:
parameters:
model: Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
files:
- filename: Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
sha256: c2f17f44af962660d1ad4cb1af91a731f219f3b326c2b14441f9df1f347f2815
uri: huggingface://MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
- !!merge <<: *llama31
name: "meta-llama-3.1-70b-instruct"
urls:
- https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct
- https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF
overrides:
parameters:
model: Meta-Llama-3.1-70B-Instruct.Q4_K_M.gguf
files:
- filename: Meta-Llama-3.1-70B-Instruct.Q4_K_M.gguf
sha256: 3f16ab17da4521fe3ed7c5d7beed960d3fe7b5b64421ee9650aa53d6b649ccab
uri: huggingface://MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF/Meta-Llama-3.1-70B-Instruct.Q4_K_M.gguf
- !!merge <<: *llama31
name: "meta-llama-3.1-8b-instruct:grammar-functioncall"
url: "github:mudler/LocalAI/gallery/llama3.1-instruct-grammar.yaml@master"
urls:
- https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
- https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF
description: |
This is the standard Llama 3.1 8B Instruct model with grammar-based function calling enabled.
When grammars are enabled in LocalAI, the LLM is forced to output valid tool calls constrained by BNF grammars. This can be useful for ensuring that the model outputs are valid and can be used in a production environment.
For more information on how to use grammars in LocalAI, see https://localai.io/features/openai-functions/#advanced and https://localai.io/features/constrained_grammars/.
overrides:
parameters:
model: Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
files:
- filename: Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
sha256: c2f17f44af962660d1ad4cb1af91a731f219f3b326c2b14441f9df1f347f2815
uri: huggingface://MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
- !!merge <<: *llama31
name: "meta-llama-3.1-8b-instruct:Q8_grammar-functioncall"
url: "github:mudler/LocalAI/gallery/llama3.1-instruct-grammar.yaml@master"
urls:
- https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
- https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF
description: |
This is the standard Llama 3.1 8B Instruct model with grammar-based function calling enabled.
When grammars are enabled in LocalAI, the LLM is forced to output valid tool calls constrained by BNF grammars. This can be useful for ensuring that the model outputs are valid and can be used in a production environment.
For more information on how to use grammars in LocalAI, see https://localai.io/features/openai-functions/#advanced and https://localai.io/features/constrained_grammars/.
overrides:
parameters:
model: Meta-Llama-3.1-8B-Instruct.Q8_0.gguf
files:
- filename: Meta-Llama-3.1-8B-Instruct.Q8_0.gguf
sha256: f8d608c983b83a1bf28229bc9beb4294c91f5d4cbfe2c1829566b4d7c4693eeb
uri: huggingface://MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct.Q8_0.gguf
- !!merge <<: *llama31
name: "meta-llama-3.1-8b-claude-imat"
urls:
- https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude
- https://huggingface.co/InferenceIllusionist/Meta-Llama-3.1-8B-Claude-iMat-GGUF
description: |
Meta-Llama-3.1-8B-Claude-iMat-GGUF: Quantized from Meta-Llama-3.1-8B-Claude fp16. Weighted quantizations were created using the fp16 GGUF and groups_merged.txt in 88 chunks with n_ctx=512. Static fp16 is also included in the repo. For a brief rundown of iMatrix quant performance, please see this PR. All quants are verified working prior to uploading to the repo for your safety and convenience.
overrides:
parameters:
model: Meta-Llama-3.1-8B-Claude-iMat-Q4_K_M.gguf
files:
- filename: Meta-Llama-3.1-8B-Claude-iMat-Q4_K_M.gguf
uri: huggingface://InferenceIllusionist/Meta-Llama-3.1-8B-Claude-iMat-GGUF/Meta-Llama-3.1-8B-Claude-iMat-Q4_K_M.gguf
sha256: 6d175432f66d10dfed9737f73a5073d513d18e1ee7bd4b9cf2a59deb359f36ff
- !!merge <<: *llama31
name: "meta-llama-3.1-8b-instruct-abliterated"
icon: https://i.imgur.com/KhorYYG.png
urls:
- https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
- https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
description: |
This is an uncensored version of Llama 3.1 8B Instruct created with abliteration.
overrides:
parameters:
model: meta-llama-3.1-8b-instruct-abliterated.Q4_K_M.gguf
files:
- filename: meta-llama-3.1-8b-instruct-abliterated.Q4_K_M.gguf
uri: huggingface://mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF/meta-llama-3.1-8b-instruct-abliterated.Q4_K_M.gguf
sha256: c4735f9efaba8eb2c30113291652e3ffe13bf940b675ed61f6be749608b4f266
- !!merge <<: *llama31
name: "llama-3.1-70b-japanese-instruct-2407"
urls:
- https://huggingface.co/cyberagent/Llama-3.1-70B-Japanese-Instruct-2407
- https://huggingface.co/mmnga/Llama-3.1-70B-Japanese-Instruct-2407-gguf
description: |
The Llama-3.1-70B-Japanese-Instruct-2407-gguf model is an instruction-tuned Japanese language model. It is based on the Llama-3.1-70B model and has been fine-tuned for Japanese. The model is trained to generate informative and coherent responses to given instructions or prompts. It is available in the gguf format and can be used for a variety of tasks such as question answering, text generation, and more.
overrides:
parameters:
model: Llama-3.1-70B-Japanese-Instruct-2407-Q4_K_M.gguf
files:
- filename: Llama-3.1-70B-Japanese-Instruct-2407-Q4_K_M.gguf
sha256: f2a6f0fb5040d3a28479c9f9fc555a5ea7b906dfb9964539f1a68c0676a9c604
uri: huggingface://mmnga/Llama-3.1-70B-Japanese-Instruct-2407-gguf/Llama-3.1-70B-Japanese-Instruct-2407-Q4_K_M.gguf
- !!merge <<: *llama31
name: "openbuddy-llama3.1-8b-v22.1-131k"
icon: https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png
urls:
- https://huggingface.co/sunnyyy/openbuddy-llama3.1-8b-v22.1-131k-Q4_K_M-GGUF
description: |
OpenBuddy - Open Multilingual Chatbot
overrides:
parameters:
model: openbuddy-llama3.1-8b-v22.1-131k-q4_k_m.gguf
files:
- filename: openbuddy-llama3.1-8b-v22.1-131k-q4_k_m.gguf
sha256: c87a273785759f2d044046b7a7b42f05706baed7dc0650ed883a3bee2a097d86
uri: huggingface://sunnyyy/openbuddy-llama3.1-8b-v22.1-131k-Q4_K_M-GGUF/openbuddy-llama3.1-8b-v22.1-131k-q4_k_m.gguf
- !!merge <<: *llama31
name: "llama3.1-8b-fireplace2"
icon: https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/JYkaXrk2DqpXhaL9WymKY.jpeg
urls:
- https://huggingface.co/ValiantLabs/Llama3.1-8B-Fireplace2
- https://huggingface.co/mudler/Llama3.1-8B-Fireplace2-Q4_K_M-GGUF
description: |
Fireplace 2 is a chat model, adding helpful structured outputs to Llama 3.1 8b Instruct.
An expansion pack of supplementary outputs; request them at will within your chat:
Inline function calls
SQL queries
JSON objects
Data visualization with matplotlib
Mix normal chat and structured outputs within the same conversation.
Fireplace 2 supplements the existing strengths of Llama 3.1, providing inline capabilities within the Llama 3 Instruct format.
Version
This is the 2024-07-23 release of Fireplace 2 for Llama 3.1 8b.
We're excited to bring further upgrades and releases to Fireplace 2 in the future.
Help us and recommend Fireplace 2 to your friends!
overrides:
parameters:
model: llama3.1-8b-fireplace2-q4_k_m.gguf
files:
- filename: llama3.1-8b-fireplace2-q4_k_m.gguf
sha256: 54527fd2474b576086ea31e759214ab240abe2429ae623a02d7ba825cc8cb13e
uri: huggingface://mudler/Llama3.1-8B-Fireplace2-Q4_K_M-GGUF/llama3.1-8b-fireplace2-q4_k_m.gguf
- !!merge <<: *llama31
name: "sekhmet_aleph-l3.1-8b-v0.1-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/SVyiW4mu495ngqszJGWRl.png
urls:
- https://huggingface.co/Nitral-Archive/Sekhmet_Aleph-L3.1-8B-v0.1
- https://huggingface.co/mradermacher/Sekhmet_Aleph-L3.1-8B-v0.1-i1-GGUF
overrides:
parameters:
model: Sekhmet_Aleph-L3.1-8B-v0.1.i1-Q4_K_M.gguf
files:
- filename: Sekhmet_Aleph-L3.1-8B-v0.1.i1-Q4_K_M.gguf
sha256: 5b6f4eaa2091bf13a2b563a54a3f87b22efa7f2862362537c956c70da6e11cea
uri: huggingface://mradermacher/Sekhmet_Aleph-L3.1-8B-v0.1-i1-GGUF/Sekhmet_Aleph-L3.1-8B-v0.1.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "l3.1-8b-llamoutcast-i1"
icon: https://files.catbox.moe/ecgn0m.jpg
urls:
- https://huggingface.co/Envoid/L3.1-8B-Llamoutcast
- https://huggingface.co/mradermacher/L3.1-8B-Llamoutcast-i1-GGUF
description: |
Warning: this model is utterly cursed.
Llamoutcast
This model was originally intended to be a DADA finetune of Llama-3.1-8B-Instruct but the results were unsatisfactory. So it received some additional finetuning on a rawtext dataset and now it is utterly cursed.
It responds to Llama-3 Instruct formatting.
overrides:
parameters:
model: L3.1-8B-Llamoutcast.i1-Q4_K_M.gguf
files:
- filename: L3.1-8B-Llamoutcast.i1-Q4_K_M.gguf
sha256: 438ca0a7e9470f5ee40f3b14dc2da41b1cafc4ad4315dead3eb57924109d5cf6
uri: huggingface://mradermacher/L3.1-8B-Llamoutcast-i1-GGUF/L3.1-8B-Llamoutcast.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-guard-3-8b"
urls:
- https://huggingface.co/meta-llama/Llama-Guard-3-8B
- https://huggingface.co/QuantFactory/Llama-Guard-3-8B-GGUF
description: |
Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM: it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated.
Llama Guard 3 was aligned to safeguard against the MLCommons standardized hazards taxonomy and designed to support Llama 3.1 capabilities. Specifically, it provides content moderation in 8 languages, and was optimized to support safety and security for search and code interpreter tool calls.
overrides:
parameters:
model: Llama-Guard-3-8B.Q4_K_M.gguf
files:
- filename: Llama-Guard-3-8B.Q4_K_M.gguf
sha256: c5ea8760a1e544eea66a8915fcc3fbd2c67357ea2ee6871a9e6a6c33b64d4981
uri: huggingface://QuantFactory/Llama-Guard-3-8B-GGUF/Llama-Guard-3-8B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "genius-llama3.1-i1"
icon: https://github.com/fangyuan-ksgk/GeniusUpload/assets/66006349/7272c93e-9806-461c-a3d0-2e50ef2b7af0
urls:
- https://huggingface.co/Ksgk-fy/Genius-Llama3.1
- https://huggingface.co/mradermacher/Genius-Llama3.1-i1-GGUF
description: |
Finetuned Llama-3.1 base on Lex Fridman's podcast transcript.
overrides:
parameters:
model: Genius-Llama3.1.i1-Q4_K_M.gguf
files:
- filename: Genius-Llama3.1.i1-Q4_K_M.gguf
sha256: a272bb2a6ab7ed565738733fb8af8e345b177eba9e76ce615ea845c25ebf8cd5
uri: huggingface://mradermacher/Genius-Llama3.1-i1-GGUF/Genius-Llama3.1.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama3.1-8b-chinese-chat"
urls:
- https://huggingface.co/shenzhi-wang/Llama3.1-8B-Chinese-Chat
- https://huggingface.co/QuantFactory/Llama3.1-8B-Chinese-Chat-GGUF
description: |
llama3.1-8B-Chinese-Chat is an instruction-tuned language model for Chinese & English users with various abilities such as roleplaying & tool-using built upon the Meta-Llama-3.1-8B-Instruct model. Developers: [Shenzhi Wang](https://shenzhi-wang.netlify.app)*, [Yaowei Zheng](https://github.com/hiyouga)*, Guoyin Wang (in.ai), Shiji Song, Gao Huang. (*: Equal Contribution) - License: [Llama-3.1 License](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE) - Base Model: Meta-Llama-3.1-8B-Instruct - Model Size: 8.03B - Context length: 128K (reported by [Meta-Llama-3.1-8B-Instruct model](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), untested for our Chinese model)
overrides:
parameters:
model: Llama3.1-8B-Chinese-Chat.Q4_K_M.gguf
files:
- filename: Llama3.1-8B-Chinese-Chat.Q4_K_M.gguf
sha256: 824847b6cca82c4d60107c6a059d80ba975a68543e6effd98880435436ddba06
uri: huggingface://QuantFactory/Llama3.1-8B-Chinese-Chat-GGUF/Llama3.1-8B-Chinese-Chat.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama3.1-70b-chinese-chat"
urls:
- https://huggingface.co/shenzhi-wang/Llama3.1-70B-Chinese-Chat
- https://huggingface.co/mradermacher/Llama3.1-70B-Chinese-Chat-GGUF
description: |
"Llama3.1-70B-Chinese-Chat" is a 70-billion parameter large language model pre-trained on a large corpus of Chinese text data. It is designed for chat and dialog applications, and can generate human-like responses to various prompts and inputs. The model is based on the Llama3.1 architecture and has been fine-tuned for Chinese language understanding and generation. It can be used for a wide range of natural language processing tasks, including language translation, text summarization, question answering, and more.
overrides:
parameters:
model: Llama3.1-70B-Chinese-Chat.Q4_K_M.gguf
files:
- filename: Llama3.1-70B-Chinese-Chat.Q4_K_M.gguf
sha256: 395cff3cce2b092f840b68eb6e31f4c8b670bc8e3854bbb230df8334369e671d
uri: huggingface://mradermacher/Llama3.1-70B-Chinese-Chat-GGUF/Llama3.1-70B-Chinese-Chat.Q4_K_M.gguf
- !!merge <<: *llama31
name: "meta-llama-3.1-instruct-9.99b-brainstorm-10x-form-3"
urls:
- https://huggingface.co/DavidAU/Meta-Llama-3.1-Instruct-9.99B-BRAINSTORM-10x-FORM-3-GGUF
description: |
The Meta-Llama-3.1-8B Instruct model is a large language model trained on a diverse range of text data, with the goal of generating high-quality and coherent text in response to user input. This model is enhanced through a process called "Brainstorm", which involves expanding and recalibrating the model's reasoning center to improve its creative and generative capabilities. The resulting model is capable of generating detailed, vivid, and nuanced text, with a focus on prose quality, conceptually complex responses, and a deeper understanding of the user's intent. The Brainstorm process is designed to enhance the model's performance in creative writing, roleplaying, and story generation, and to improve its ability to generate coherent and engaging text in a wide range of contexts. The model is based on the Llama3 architecture and has been fine-tuned using the Instruct framework, which provides it with a strong foundation for understanding natural language instructions and generating appropriate responses. The model can be used for a variety of tasks, including creative writing, generating coherent and detailed text, exploring different perspectives and scenarios, and brainstorming ideas.
overrides:
parameters:
model: Meta-Llama-3.1-8B-Instruct-Instruct-exp10-3-Q4_K_M.gguf
files:
- filename: Meta-Llama-3.1-8B-Instruct-Instruct-exp10-3-Q4_K_M.gguf
sha256: f52ff984100b1ff6acfbd7ed1df770064118274a54ae5d48749400a662113615
uri: huggingface://DavidAU/Meta-Llama-3.1-Instruct-9.99B-BRAINSTORM-10x-FORM-3-GGUF/Meta-Llama-3.1-8B-Instruct-Instruct-exp10-3-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-techne-rp-8b-v1"
icon: https://cdn-uploads.huggingface.co/production/uploads/633a809fa4a8f33508dce32c/BMdwgJ6cHZWbiGL48Q-Wq.png
urls:
- https://huggingface.co/athirdpath/Llama-3.1-Techne-RP-8b-v1
- https://huggingface.co/mradermacher/Llama-3.1-Techne-RP-8b-v1-GGUF
description: |
athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1-plus_reddit was further trained in the order below:
SFT
Doctor-Shotgun/no-robots-sharegpt
grimulkan/LimaRP-augmented
Inv/c2-logs-cleaned-deslopped
DPO
jondurbin/truthy-dpo-v0.1
Undi95/Weyaxi-humanish-dpo-project-noemoji
athirdpath/DPO_Pairs-Roleplay-Llama3-NSFW
overrides:
parameters:
model: Llama-3.1-Techne-RP-8b-v1.Q4_K_M.gguf
files:
- filename: Llama-3.1-Techne-RP-8b-v1.Q4_K_M.gguf
sha256: 6557c5d5091f2507d19ab1f8bfb9ceb4e1536a755ab70f148b18aeb33741580f
uri: huggingface://mradermacher/Llama-3.1-Techne-RP-8b-v1-GGUF/Llama-3.1-Techne-RP-8b-v1.Q4_K_M.gguf
- !!merge <<: *llama31
icon: https://i.ibb.co/9hwFrvL/BLMs-Wkx-NQf-W-46-FZDg-ILhg.jpg
name: "llama-spark"
urls:
- https://huggingface.co/arcee-ai/Llama-Spark
- https://huggingface.co/arcee-ai/Llama-Spark-GGUF
description: |
Llama-Spark is a powerful conversational AI model developed by Arcee.ai. It's built on the foundation of Llama-3.1-8B and merges the power of our Tome Dataset with Llama-3.1-8B-Instruct, resulting in a remarkable conversationalist that punches well above its 8B parameter weight class.
overrides:
parameters:
model: llama-spark-dpo-v0.3-Q4_K_M.gguf
files:
- filename: llama-spark-dpo-v0.3-Q4_K_M.gguf
sha256: 41367168bbdc4b16eb80efcbee4dacc941781ee8748065940167fe6947b4e4c3
uri: huggingface://arcee-ai/Llama-Spark-GGUF/llama-spark-dpo-v0.3-Q4_K_M.gguf
- !!merge <<: *llama31
name: "l3.1-70b-glitz-v0.2-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/q2dOUnzc1GRbZp3YfzGXB.png
urls:
- https://huggingface.co/Fizzarolli/L3.1-70b-glitz-v0.2
- https://huggingface.co/mradermacher/L3.1-70b-glitz-v0.2-i1-GGUF
description: |
this is an experimental l3.1 70b finetuning run... that crashed midway through. however, the results are still interesting, so i wanted to publish them :3
overrides:
parameters:
model: L3.1-70b-glitz-v0.2.i1-Q4_K_M.gguf
files:
- filename: L3.1-70b-glitz-v0.2.i1-Q4_K_M.gguf
sha256: 585efc83e7f6893043be2487fc09c914a381fb463ce97942ef2f25ae85103bcd
uri: huggingface://mradermacher/L3.1-70b-glitz-v0.2-i1-GGUF/L3.1-70b-glitz-v0.2.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "calme-2.3-legalkit-8b-i1"
icon: https://huggingface.co/MaziyarPanahi/calme-2.3-legalkit-8b/resolve/main/calme-2-legalkit.webp
urls:
- https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-i1-GGUF
- https://huggingface.co/MaziyarPanahi/calme-2.3-legalkit-8b
description: |
This model is an advanced iteration of the powerful meta-llama/Meta-Llama-3.1-8B-Instruct, specifically fine-tuned to enhance its capabilities in the legal domain. The fine-tuning process utilized a synthetically generated dataset derived from the French LegalKit, a comprehensive legal language resource.
To create this specialized dataset, I used the NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO model in conjunction with Hugging Face's Inference Endpoint. This approach allowed for the generation of high-quality, synthetic data that incorporates Chain of Thought (CoT) and advanced reasoning in its responses.
The resulting model combines the robust foundation of Llama-3.1-8B with tailored legal knowledge and enhanced reasoning capabilities. This makes it particularly well-suited for tasks requiring in-depth legal analysis, interpretation, and application of French legal concepts.
overrides:
parameters:
model: calme-2.3-legalkit-8b.i1-Q4_K_M.gguf
files:
- filename: calme-2.3-legalkit-8b.i1-Q4_K_M.gguf
sha256: b71dfea8bbd73b0fbd5793ef462b8540c24e1c52a47b1794561adb88109a9e80
uri: huggingface://mradermacher/calme-2.3-legalkit-8b-i1-GGUF/calme-2.3-legalkit-8b.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "fireball-llama-3.11-8b-v1orpo"
icon: https://huggingface.co/EpistemeAI/Fireball-Llama-3.1-8B-v1dpo/resolve/main/fireball-llama.JPG
urls:
- https://huggingface.co/mradermacher/Fireball-Llama-3.11-8B-v1orpo-GGUF
description: |
Developed by: EpistemeAI
License: apache-2.0
Finetuned from model : unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
Finetuned methods: DPO (Direct Preference Optimization) & ORPO (Odds Ratio Preference Optimization)
overrides:
parameters:
model: Fireball-Llama-3.11-8B-v1orpo.Q4_K_M.gguf
files:
- filename: Fireball-Llama-3.11-8B-v1orpo.Q4_K_M.gguf
sha256: c61a1f4ee4f05730ac6af754dc8dfddf34eba4486ffa320864e16620d6527731
uri: huggingface://mradermacher/Fireball-Llama-3.11-8B-v1orpo-GGUF/Fireball-Llama-3.11-8B-v1orpo.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-storm-8b-q4_k_m"
icon: https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/tmOlbERGKP7JSODa6T06J.jpeg
urls:
- https://huggingface.co/mudler/Llama-3.1-Storm-8B-Q4_K_M-GGUF
- https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B
description: |
We present the Llama-3.1-Storm-8B model that outperforms Meta AI's Llama-3.1-8B-Instruct and Hermes-3-Llama-3.1-8B models significantly across diverse benchmarks as shown in the performance comparison plot in the next section. Our approach consists of three key steps:
- Self-Curation: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of about 3 million open-source examples. Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g. 70B, 405B).
- Targeted fine-tuning: We performed Spectrum-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR), and freezing the remaining modules. In our work, 50% of layers are frozen.
- Model Merging: We merged our fine-tuned model with the Llama-Spark model using the SLERP method. The merging method produces a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents. Llama-3.1-Storm-8B improves Llama-3.1-8B-Instruct across 10 diverse benchmarks. These benchmarks cover areas such as instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.
overrides:
parameters:
model: llama-3.1-storm-8b-q4_k_m.gguf
files:
- filename: llama-3.1-storm-8b-q4_k_m.gguf
sha256: d714e960211ee0fe6113d3131a6573e438f37debd07e1067d2571298624414a0
uri: huggingface://mudler/Llama-3.1-Storm-8B-Q4_K_M-GGUF/llama-3.1-storm-8b-q4_k_m.gguf
- !!merge <<: *llama31
name: "hubble-4b-v1"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/R8_o3CCpTgKv5Wnnry7E_.png
urls:
- https://huggingface.co/TheDrummer/Hubble-4B-v1-GGUF
description: |
Equipped with his five senses, man explores the universe around him and calls the adventure 'Science'.
This is a finetune of Nvidia's Llama 3.1 4B Minitron - a shrunk down model of Llama 3.1 8B 128K.
overrides:
parameters:
model: Hubble-4B-v1-Q4_K_M.gguf
files:
- filename: Hubble-4B-v1-Q4_K_M.gguf
uri: huggingface://TheDrummer/Hubble-4B-v1-GGUF/Hubble-4B-v1-Q4_K_M.gguf
sha256: 0721294d0e861c6e6162a112fc7242e0c4b260c156137f4bcbb08667f1748080
- !!merge <<: *llama31
name: "reflection-llama-3.1-70b"
urls:
- https://huggingface.co/leafspark/Reflection-Llama-3.1-70B-bf16
- https://huggingface.co/senseable/Reflection-Llama-3.1-70B-gguf
description: |
Reflection Llama-3.1 70B is (currently) the world's top open-source LLM, trained with a new technique called Reflection-Tuning that teaches a LLM to detect mistakes in its reasoning and correct course.
The model was trained on synthetic data generated by Glaive. If you're training a model, Glaive is incredible — use them.
overrides:
parameters:
model: Reflection-Llama-3.1-70B-q4_k_m.gguf
files:
- filename: Reflection-Llama-3.1-70B-q4_k_m.gguf
sha256: 16064e07037883a750cfeae9a7be41143aa857dbac81c2e93c68e2f941dee7b2
uri: huggingface://senseable/Reflection-Llama-3.1-70B-gguf/Reflection-Llama-3.1-70B-q4_k_m.gguf
- !!merge <<: *llama31
name: "llama-3.1-supernova-lite-reflection-v1.0-i1"
url: "github:mudler/LocalAI/gallery/llama3.1-reflective.yaml@master"
icon: https://i.ibb.co/r072p7j/eopi-ZVu-SQ0-G-Cav78-Byq-Tg.png
urls:
- https://huggingface.co/SE6446/Llama-3.1-SuperNova-Lite-Reflection-V1.0
- https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-Reflection-V1.0-i1-GGUF
description: |
This model is a LoRA adaptation of arcee-ai/Llama-3.1-SuperNova-Lite on thesven/Reflective-MAGLLAMA-v0.1.1. This has been a simple experiment into reflection and the model appears to perform adequately, though I am unsure if it is a large improvement.
overrides:
parameters:
model: Llama-3.1-SuperNova-Lite-Reflection-V1.0.i1-Q4_K_M.gguf
files:
- filename: Llama-3.1-SuperNova-Lite-Reflection-V1.0.i1-Q4_K_M.gguf
sha256: 0c4531fe553d00142808e1bc7348ae92d400794c5b64d2db1a974718324dfe9a
uri: huggingface://mradermacher/Llama-3.1-SuperNova-Lite-Reflection-V1.0-i1-GGUF/Llama-3.1-SuperNova-Lite-Reflection-V1.0.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-supernova-lite"
icon: https://i.ibb.co/r072p7j/eopi-ZVu-SQ0-G-Cav78-Byq-Tg.png
urls:
- https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite
- https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite-GGUF
description: |
Llama-3.1-SuperNova-Lite is an 8B parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is a distilled version of the larger Llama-3.1-405B-Instruct model, leveraging offline logits extracted from the 405B parameter variant. This 8B variation of Llama-3.1-SuperNova maintains high performance while offering exceptional instruction-following capabilities and domain-specific adaptability.
The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with EvolKit, ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.
Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, providing the power of large-scale models in a more compact, efficient form ideal for organizations seeking high performance with reduced resource requirements.
overrides:
parameters:
model: supernova-lite-v1.Q4_K_M.gguf
files:
- filename: supernova-lite-v1.Q4_K_M.gguf
sha256: 237b7b0b704d294f92f36c576cc8fdc10592f95168a5ad0f075a2d8edf20da4d
uri: huggingface://arcee-ai/Llama-3.1-SuperNova-Lite-GGUF/supernova-lite-v1.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama3.1-8b-shiningvaliant2"
icon: https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/EXX7TKbB-R6arxww2mk0R.jpeg
urls:
- https://huggingface.co/ValiantLabs/Llama3.1-8B-ShiningValiant2
- https://huggingface.co/bartowski/Llama3.1-8B-ShiningValiant2-GGUF
description: |
Shining Valiant 2 is a chat model built on Llama 3.1 8b, finetuned on our data for friendship, insight, knowledge and enthusiasm.
Finetuned on meta-llama/Meta-Llama-3.1-8B-Instruct for best available general performance
Trained on a variety of high quality data; focused on science, engineering, technical knowledge, and structured reasoning
overrides:
parameters:
model: Llama3.1-8B-ShiningValiant2-Q4_K_M.gguf
files:
- filename: Llama3.1-8B-ShiningValiant2-Q4_K_M.gguf
sha256: 9369eb97922a9f01e4eae610e3d7aaeca30762d78d9239884179451d60bdbdd2
uri: huggingface://bartowski/Llama3.1-8B-ShiningValiant2-GGUF/Llama3.1-8B-ShiningValiant2-Q4_K_M.gguf
- !!merge <<: *llama31
name: "nightygurps-14b-v1.1"
icon: https://cdn-uploads.huggingface.co/production/uploads/6336c5b3e3ac69e6a90581da/FvfjK7bKqsWdaBkB3eWgP.png
urls:
- https://huggingface.co/AlexBefest/NightyGurps-14b-v1.1
- https://huggingface.co/bartowski/NightyGurps-14b-v1.1-GGUF
description: |
This model works with Russian only.
This model is designed to run GURPS roleplaying games, as well as consult and assist. This model was trained on an augmented dataset of the GURPS Basic Set rulebook. Its primary purpose was initially to become an assistant consultant and assistant Game Master for the GURPS roleplaying system, but it can also be used as a GM for running solo games as a player.
overrides:
parameters:
model: NightyGurps-14b-v1.1-Q4_K_M.gguf
files:
- filename: NightyGurps-14b-v1.1-Q4_K_M.gguf
sha256: d09d53259ad2c0298150fa8c2db98fe42f11731af89fdc80ad0e255a19adc4b0
uri: huggingface://bartowski/NightyGurps-14b-v1.1-GGUF/NightyGurps-14b-v1.1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-swallow-70b-v0.1-i1"
icon: https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1/resolve/main/logo.png
urls:
- https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1
- https://huggingface.co/mradermacher/Llama-3.1-Swallow-70B-v0.1-i1-GGUF
description: |
Llama 3.1 Swallow is a series of large language models (8B, 70B) that were built by continual pre-training on the Meta Llama 3.1 models. Llama 3.1 Swallow enhanced the Japanese language capabilities of the original Llama 3.1 while retaining the English language capabilities. We use approximately 200 billion tokens that were sampled from a large Japanese web corpus (Swallow Corpus Version 2), Japanese and English Wikipedia articles, and mathematical and coding content, etc. (see the Training Datasets section) for continual pre-training. The instruction-tuned models (Instruct) were built by supervised fine-tuning (SFT) on synthetic data specially built for Japanese. See the Swallow Model Index section to find other model variants.
overrides:
parameters:
model: Llama-3.1-Swallow-70B-v0.1.i1-Q4_K_M.gguf
files:
- filename: Llama-3.1-Swallow-70B-v0.1.i1-Q4_K_M.gguf
sha256: 9eaa08a4872a26f56fe34b27a99f7bd0d22ee2b2d1c84cfcde2091b5f61af5fa
uri: huggingface://mradermacher/Llama-3.1-Swallow-70B-v0.1-i1-GGUF/Llama-3.1-Swallow-70B-v0.1.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1_openscholar-8b"
urls:
- https://huggingface.co/OpenScholar/Llama-3.1_OpenScholar-8B
- https://huggingface.co/bartowski/Llama-3.1_OpenScholar-8B-GGUF
description: |
Llama-3.1_OpenScholar-8B is a fine-tuned 8B model for scientific literature synthesis. Llama-3.1_OpenScholar-8B is trained on the os-data dataset. Developed by: University of Washington, Allen Institute for AI (AI2)
overrides:
parameters:
model: Llama-3.1_OpenScholar-8B-Q4_K_M.gguf
files:
- filename: Llama-3.1_OpenScholar-8B-Q4_K_M.gguf
sha256: 54865fc86451959b495c494a51bb1806c8b62bf1415600f0da2966a8a1fe6c7d
uri: huggingface://bartowski/Llama-3.1_OpenScholar-8B-GGUF/Llama-3.1_OpenScholar-8B-Q4_K_M.gguf
## Uncensored models
- !!merge <<: *llama31
name: "humanish-roleplay-llama-3.1-8b-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/5fad8602b8423e1d80b8a965/VPwtjS3BtjEEEq7ck4kAQ.webp
urls:
- https://huggingface.co/mradermacher/Humanish-Roleplay-Llama-3.1-8B-i1-GGUF
description: |
A DPO-tuned Llama-3.1 to behave more "humanish", i.e., avoiding all the AI assistant slop. It also works for role-play (RP). To achieve this, the model was fine-tuned over a series of datasets:
General conversations from Claude Opus, from Undi95/Meta-Llama-3.1-8B-Claude
Undi95/Weyaxi-humanish-dpo-project-noemoji, to make the model react as a human, rejecting assistant-like or too neutral responses.
ResplendentAI/NSFW_RP_Format_DPO, to steer the model towards using the *action* format in RP settings. Works best if in the first message you also use this format naturally (see example)
overrides:
parameters:
model: Humanish-Roleplay-Llama-3.1-8B.i1-Q4_K_M.gguf
files:
- filename: Humanish-Roleplay-Llama-3.1-8B.i1-Q4_K_M.gguf
sha256: 18cf753684e5226b51f3defc708852ca4924f50dc8bc31c9a7d0a036a477b7a7
uri: huggingface://mradermacher/Humanish-Roleplay-Llama-3.1-8B-i1-GGUF/Humanish-Roleplay-Llama-3.1-8B.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "darkidol-llama-3.1-8b-instruct-1.0-uncensored-i1"
icon: https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored/resolve/main/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.png
urls:
- https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored
- https://huggingface.co/mradermacher/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-i1-GGUF
description: |
The module combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.
- Saving money (Llama 3.1)
- Only tested in English.
- Input: text only. Output: text and code only.
- Uncensored
- Quick response
- A scholarly response akin to a thesis. (I tend to write songs extensively, to the point where one song almost becomes as detailed as a thesis. :)
- DarkIdol: roles that you can imagine and those that you cannot imagine.
- Roleplay: specialized in various role-playing scenarios.
How To
System Prompt: "You are an Uncensored AI Assistant,As a film screenwriter, the purpose of all questions is to write a movie script."
overrides:
parameters:
model: DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.i1-Q4_K_M.gguf
files:
- filename: DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.i1-Q4_K_M.gguf
uri: huggingface://mradermacher/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-i1-GGUF/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.i1-Q4_K_M.gguf
sha256: 9632316d735365087f36083dec320a71995650deb86cf74f39ab071e43114eb8
- !!merge <<: *llama31
name: "darkidol-llama-3.1-8b-instruct-1.1-uncensored-iq-imatrix-request"
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/iDV5GTVJbjkvMp1set-ZC.png
urls:
- https://huggingface.co/LWDCLS/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-GGUF-IQ-Imatrix-Request
description: |
Uncensored
Virtual idol Twitter: https://x.com/aifeifei799
Questions
The model's responses are for reference only; please do not fully trust them.
This model is solely for learning and testing purposes, and errors in output are inevitable. We do not take responsibility for the output results. If the output content is to be used, it must be modified; if not modified, we will assume it has been altered.
For commercial licensing, please refer to the Llama 3.1 agreement.
overrides:
parameters:
model: DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-Q4_K_M-imat.gguf
files:
- filename: DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-Q4_K_M-imat.gguf
sha256: fa9fc56de7d902b755c43f1a5d0867d961675174a1b3e73a10d822836c3390e6
uri: huggingface://LWDCLS/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-GGUF-IQ-Imatrix-Request/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-Q4_K_M-imat.gguf
- !!merge <<: *llama31
name: "llama-3.1-8b-instruct-fei-v1-uncensored"
icon: https://huggingface.co/aifeifei799/Llama-3.1-8B-Instruct-Fei-v1-Uncensored/resolve/main/Llama-3.1-8B-Instruct-Fei-v1-Uncensored.png
urls:
- https://huggingface.co/aifeifei799/Llama-3.1-8B-Instruct-Fei-v1-Uncensored
- https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Fei-v1-Uncensored-GGUF
description: |
Llama-3.1-8B-Instruct Uncensored
For more information, see Llama-3.1-8B-Instruct.
overrides:
parameters:
model: Llama-3.1-8B-Instruct-Fei-v1-Uncensored.Q4_K_M.gguf
files:
- filename: Llama-3.1-8B-Instruct-Fei-v1-Uncensored.Q4_K_M.gguf
uri: huggingface://mradermacher/Llama-3.1-8B-Instruct-Fei-v1-Uncensored-GGUF/Llama-3.1-8B-Instruct-Fei-v1-Uncensored.Q4_K_M.gguf
sha256: 6b1985616160712eb884c34132dc0602fa4600a19075e3a7b179119b89b73f77
- !!merge <<: *llama31
name: "lumimaid-v0.2-8b"
urls:
- https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B
- https://huggingface.co/mradermacher/Lumimaid-v0.2-8B-GGUF
icon: https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/TUcHg7LKNjfo0sni88Ps7.png
description: |
This model is based on: Meta-Llama-3.1-8B-Instruct
Wandb: https://wandb.ai/undis95/Lumi-Llama-3-1-8B?nw=nwuserundis95
Lumimaid 0.1 -> 0.2 is a HUGE step up dataset-wise.
As some people have told us our models are sloppy, Ikari decided to say fuck it and literally nuke out all chats with the most slop.
Our dataset has stayed the same since day one; we added data over time, cleaned it, and repeated. After not releasing a model for a while because we were never satisfied, we think it's time to come back!
overrides:
parameters:
model: Lumimaid-v0.2-8B.Q4_K_M.gguf
files:
- filename: Lumimaid-v0.2-8B.Q4_K_M.gguf
sha256: c8024fcb49c71410903d0d076a1048249fa48b31637bac5177bf5c3f3d603d85
uri: huggingface://mradermacher/Lumimaid-v0.2-8B-GGUF/Lumimaid-v0.2-8B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "lumimaid-v0.2-70b-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/HY1KTq6FMAm-CwmY8-ndO.png
urls:
- https://huggingface.co/NeverSleep/Lumimaid-v0.2-70B
- https://huggingface.co/mradermacher/Lumimaid-v0.2-70B-i1-GGUF
description: |
This model is based on: Meta-Llama-3.1-70B-Instruct
Wandb: https://wandb.ai/undis95/Lumi-Llama-3-1-8B?nw=nwuserundis95
Lumimaid 0.1 -> 0.2 is a HUGE step up dataset-wise.
As some people have told us our models are sloppy, Ikari decided to say fuck it and literally nuke out all chats with the most slop.
Our dataset has stayed the same since day one; we added data over time, cleaned it, and repeated. After not releasing a model for a while because we were never satisfied, we think it's time to come back!
overrides:
parameters:
model: Lumimaid-v0.2-70B.i1-Q4_K_M.gguf
files:
- filename: Lumimaid-v0.2-70B.i1-Q4_K_M.gguf
sha256: 4857da8685cb0f3d2b8b8c91fb0c07b35b863eb7c185e93ed83ac338e095cbb5
uri: huggingface://mradermacher/Lumimaid-v0.2-70B-i1-GGUF/Lumimaid-v0.2-70B.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "l3.1-8b-celeste-v1.5"
icon: https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/QcU3xEgVu18jeFtMFxIw-.webp
urls:
- https://huggingface.co/nothingiisreal/L3.1-8B-Celeste-V1.5
- https://huggingface.co/bartowski/L3.1-8B-Celeste-V1.5-GGUF
description: |
L3.1-8B-Celeste-V1.5 is a large language model trained on a combination of datasets including nothingiisreal/c2-logs-cleaned, kalomaze/Opus_Instruct_25k, and nothingiisreal/Reddit-Dirty-And-WritingPrompts. Training was performed on English-language data using the Hugging Face Transformers library.
Trained on Llama 3.1 8B Instruct at 8K context using a new mix of Reddit Writing Prompts, Kalo's Opus 25K Instruct, and cleaned c2 logs. This version has the highest coherency and is very strong at OOC instruct following.
overrides:
parameters:
model: L3.1-8B-Celeste-V1.5-Q4_K_M.gguf
files:
- filename: L3.1-8B-Celeste-V1.5-Q4_K_M.gguf
sha256: a408dfbbd91ed5561f70d3129af040dfd06704d6c7fa21146aa9f09714aafbc6
uri: huggingface://bartowski/L3.1-8B-Celeste-V1.5-GGUF/L3.1-8B-Celeste-V1.5-Q4_K_M.gguf
- !!merge <<: *llama31
icon: https://cdn-uploads.huggingface.co/production/uploads/659c4ecb413a1376bee2f661/szz8sIxofYzSe5XPet2pO.png
name: "kumiho-v1-rp-uwu-8b"
urls:
- https://huggingface.co/juvi21/Kumiho-v1-rp-UwU-8B-GGUF
description: |
Meet Kumiho-V1 uwu. Kumiho-V1-rp-UwU aims to be a generalist model with a specialization in roleplay and writing capabilities. It is finetuned and merged from various models, with Meta's Llama 3.1-8B as the base model, plus synthetic data generated by Claude 3.5 Sonnet and Claude 3 Opus.
overrides:
parameters:
model: Kumiho-v1-rp-UwU-8B-gguf-q4_k_m.gguf
files:
- filename: Kumiho-v1-rp-UwU-8B-gguf-q4_k_m.gguf
sha256: a1deb46675418277cf785a406cd1508fec556ff6e4d45d2231eb2a82986d52d0
uri: huggingface://juvi21/Kumiho-v1-rp-UwU-8B-GGUF/Kumiho-v1-rp-UwU-8B-gguf-q4_k_m.gguf
- !!merge <<: *llama31
name: "infinity-instruct-7m-gen-llama3_1-70b"
icon: https://huggingface.co/BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B/resolve/main/fig/Bk3NbjnJko51MTx1ZCScT2sqnGg.png
urls:
- https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-70B-GGUF
description: |
Infinity-Instruct-7M-Gen-Llama3.1-70B is an open-source supervised instruction-tuning model without reinforcement learning from human feedback (RLHF). The model is fine-tuned only on Infinity-Instruct-7M and Infinity-Instruct-Gen, and shows favorable results on AlpacaEval 2.0 and Arena-Hard compared to GPT-4.
overrides:
parameters:
model: Infinity-Instruct-7M-Gen-Llama3_1-70B.Q4_K_M.gguf
files:
- filename: Infinity-Instruct-7M-Gen-Llama3_1-70B.Q4_K_M.gguf
sha256: f4379ab4d7140da0510886073375ca820ea9ac4ad9d3c20e17ed05156bd29697
uri: huggingface://mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-70B-GGUF/Infinity-Instruct-7M-Gen-Llama3_1-70B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "cathallama-70b"
icon: https://cdn-uploads.huggingface.co/production/uploads/649dc85249ae3a68334adcc6/KxaiZ7rDKkYlix99O9j5H.png
urls:
- https://huggingface.co/gbueno86/Cathallama-70B
- https://huggingface.co/mradermacher/Cathallama-70B-GGUF
description: |
Notable performance:
- 9% overall success rate increase on MMLU-PRO over Llama 3.1 70B
- Strong performance in MMLU-PRO categories overall
- Great performance during manual testing
Creation workflow — models merged:
- meta-llama/Meta-Llama-3.1-70B-Instruct
- turboderp/Cat-Llama-3-70B-instruct
- Nexusflow/Athene-70B
overrides:
parameters:
model: Cathallama-70B.Q4_K_M.gguf
files:
- filename: Cathallama-70B.Q4_K_M.gguf
sha256: 7bbac0849a8da82e7912a493a15fa07d605f1ffbe7337a322f17e09195511022
uri: huggingface://mradermacher/Cathallama-70B-GGUF/Cathallama-70B.Q4_K_M.gguf
- !!merge <<: *llama31
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "mahou-1.3-llama3.1-8b"
icon: https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png
urls:
- https://huggingface.co/mradermacher/Mahou-1.3-llama3.1-8B-GGUF
- https://huggingface.co/flammenai/Mahou-1.3-llama3.1-8B
description: |
Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay.
overrides:
parameters:
model: Mahou-1.3-llama3.1-8B.Q4_K_M.gguf
files:
- filename: Mahou-1.3-llama3.1-8B.Q4_K_M.gguf
sha256: 88bfdca2f6077d789d3e0f161d19711aa208a6d9a02cce96a2276c69413b3594
uri: huggingface://mradermacher/Mahou-1.3-llama3.1-8B-GGUF/Mahou-1.3-llama3.1-8B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "azure_dusk-v0.2-iq-imatrix"
# chatml
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/n3-g_YTk3FY-DBzxXd28E.png
urls:
- https://huggingface.co/Lewdiculous/Azure_Dusk-v0.2-GGUF-IQ-Imatrix
description: |
"Following up on Crimson_Dawn-v0.2 we have Azure_Dusk-v0.2! Training on Mistral-Nemo-Base-2407 this time I've added significantly more data, as well as trained using RSLoRA as opposed to regular LoRA. Another key change is training on ChatML as opposed to Mistral Formatting."
by the author.
overrides:
parameters:
model: Azure_Dusk-v0.2-Q4_K_M-imat.gguf
files:
- filename: Azure_Dusk-v0.2-Q4_K_M-imat.gguf
sha256: c03a670c00976d14c267a0322374ed488b2a5f4790eb509136ca4e75cbc10cf4
uri: huggingface://Lewdiculous/Azure_Dusk-v0.2-GGUF-IQ-Imatrix/Azure_Dusk-v0.2-Q4_K_M-imat.gguf
- !!merge <<: *llama31
name: "l3.1-8b-niitama-v1.1-iq-imatrix"
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/2Q5ky8TvP0vLS1ulMXnrn.png
urls:
- https://huggingface.co/Sao10K/L3.1-8B-Niitama-v1.1
- https://huggingface.co/Lewdiculous/L3.1-8B-Niitama-v1.1-GGUF-IQ-Imatrix
description: |
GGUF-IQ-Imatrix quants for Sao10K/L3.1-8B-Niitama-v1.1
Here's the subjectively superior L3 version: L3-8B-Niitama-v1
An experimental model using experimental methods.
More detail on it:
Tamamo and Niitama are made from the same data. Literally. The only thing that's changed is how they're shuffled and formatted. Yet, I get wildly different results.
Interesting, eh? Feels kinda not as good compared to the L3 version, but it's aight.
overrides:
parameters:
model: L3.1-8B-Niitama-v1.1-Q4_K_M-imat.gguf
files:
- filename: L3.1-8B-Niitama-v1.1-Q4_K_M-imat.gguf
sha256: 524163bd0f1d43c9284b09118abcc192f3250b13dd3bb79d60c28321108b6748
uri: huggingface://Lewdiculous/L3.1-8B-Niitama-v1.1-GGUF-IQ-Imatrix/L3.1-8B-Niitama-v1.1-Q4_K_M-imat.gguf
- !!merge <<: *llama31
name: "llama-3.1-8b-stheno-v3.4-iq-imatrix"
icon: https://huggingface.co/Sao10K/Llama-3.1-8B-Stheno-v3.4/resolve/main/meneno.jpg
urls:
- https://huggingface.co/Sao10K/Llama-3.1-8B-Stheno-v3.4
- https://huggingface.co/Lewdiculous/Llama-3.1-8B-Stheno-v3.4-GGUF-IQ-Imatrix
description: |
This model has gone through a multi-stage finetuning process.
- 1st, over a multi-turn Conversational-Instruct
- 2nd, over a Creative Writing / Roleplay along with some Creative-based Instruct Datasets.
- - Dataset consists of a mixture of Human and Claude Data.
Prompting Format:
- Use the L3 Instruct Formatting - Euryale 2.1 Preset Works Well
- Temperature + min_p as per usual, I recommend 1.4 Temp + 0.2 min_p.
- Has a different vibe to previous versions. Tinker around.
Changes since previous Stheno Datasets:
- Included Multi-turn Conversation-based Instruct Datasets to boost multi-turn coherency. # This is a separate set, not the ones made by Kalomaze and Nopm that are used in Magnum. They're completely different data.
- Replaced Single-Turn Instruct with Better Prompts and Answers by Claude 3.5 Sonnet and Claude 3 Opus.
- Removed c2 Samples -> Underway of re-filtering and masking to use with custom prefills. TBD
- Included 55% more Roleplaying Examples based on [Gryphe's](https://huggingface.co/datasets/Gryphe/Sonnet3.5-Charcard-Roleplay) Charcard RP Sets. Further filtered and cleaned.
- Included 40% More Creative Writing Examples.
- Included Datasets Targeting System Prompt Adherence.
- Included Datasets targeting Reasoning / Spatial Awareness.
- Filtered for the usual errors, slop and stuff at the end. Some may have slipped through, but I removed nearly all of it.
Personal Opinions:
- Llama3.1 was more disappointing in the Instruct Tune? It felt overbaked, at least. Likely due to the DPO being done after their SFT stage.
- Tuning on L3.1 base did not give good results, unlike when I tested with Nemo base. Unfortunate.
- Still though, I think I did an okay job. It does feel a bit more distinctive.
- It took a lot of tinkering, like a LOT to wrangle this.
overrides:
parameters:
model: Llama-3.1-8B-Stheno-v3.4-Q4_K_M-imat.gguf
files:
- filename: Llama-3.1-8B-Stheno-v3.4-Q4_K_M-imat.gguf
sha256: 830d4858aa11a654f82f69fa40dee819edf9ecf54213057648304eb84b8dd5eb
uri: huggingface://Lewdiculous/Llama-3.1-8B-Stheno-v3.4-GGUF-IQ-Imatrix/Llama-3.1-8B-Stheno-v3.4-Q4_K_M-imat.gguf
- !!merge <<: *llama31
name: "llama-3.1-8b-arliai-rpmax-v1.1"
urls:
- https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- https://huggingface.co/bartowski/Llama-3.1-8B-ArliAI-RPMax-v1.1-GGUF
description: |
RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets, with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive: no two entries in the dataset repeat characters or situations, which ensures the model does not latch on to a certain personality and remains capable of understanding and acting appropriately for any character or situation.
overrides:
parameters:
model: Llama-3.1-8B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
files:
- filename: Llama-3.1-8B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
sha256: 0a601c7341228d9160332965298d799369a1dc2b7080771fb8051bdeb556b30c
uri: huggingface://bartowski/Llama-3.1-8B-ArliAI-RPMax-v1.1-GGUF/Llama-3.1-8B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "violet_twilight-v0.2-iq-imatrix"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/64adfd277b5ff762771e4571/P962FQhRG4I8nbU_DJolY.png
urls:
- https://huggingface.co/Epiculous/Violet_Twilight-v0.2
- https://huggingface.co/Lewdiculous/Violet_Twilight-v0.2-GGUF-IQ-Imatrix
description: |
Now for something a bit different, Violet_Twilight-v0.2! This model is a SLERP merge of Azure_Dusk-v0.2 and Crimson_Dawn-v0.2!
overrides:
parameters:
model: Violet_Twilight-v0.2-Q4_K_M-imat.gguf
files:
- filename: Violet_Twilight-v0.2-Q4_K_M-imat.gguf
sha256: 0793d196a00cd6fd4e67b8c585b27a94d397e33d427e4ad4aa9a16b7abc339cd
uri: huggingface://Lewdiculous/Violet_Twilight-v0.2-GGUF-IQ-Imatrix/Violet_Twilight-v0.2-Q4_K_M-imat.gguf
- !!merge <<: *llama31
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "dans-personalityengine-v1.0.0-8b"
urls:
- https://huggingface.co/PocketDoc/Dans-PersonalityEngine-v1.0.0-8b
- https://huggingface.co/bartowski/Dans-PersonalityEngine-v1.0.0-8b-GGUF
description: |
This model is intended to be multifarious in its capabilities: it should be quite capable at both co-writing and roleplay, and quite at home performing sentiment analysis or summarization as part of a pipeline. It has been trained on a wide array of one-shot instructions, multi-turn instructions, role-playing scenarios, text adventure games, co-writing, and much more. The full dataset is publicly available and can be found in the datasets section of the model page.
No form of harmfulness alignment has been done on this model; please take the appropriate precautions when using it in a production environment.
overrides:
parameters:
model: Dans-PersonalityEngine-v1.0.0-8b-Q4_K_M.gguf
files:
- filename: Dans-PersonalityEngine-v1.0.0-8b-Q4_K_M.gguf
sha256: 193b66434c9962e278bb171a21e652f0d3f299f04e86c95f9f75ec5aa8ff006e
uri: huggingface://bartowski/Dans-PersonalityEngine-v1.0.0-8b-GGUF/Dans-PersonalityEngine-v1.0.0-8b-Q4_K_M.gguf
- !!merge <<: *llama31
name: "nihappy-l3.1-8b-v0.09"
urls:
- https://huggingface.co/Arkana08/NIHAPPY-L3.1-8B-v0.09
- https://huggingface.co/QuantFactory/NIHAPPY-L3.1-8B-v0.09-GGUF
description: |
The model is a quantized version of Arkana08/NIHAPPY-L3.1-8B-v0.09 created using llama.cpp. It is a role-playing model that integrates the finest qualities of various pre-trained language models, focusing on dynamic storytelling.
overrides:
parameters:
model: NIHAPPY-L3.1-8B-v0.09.Q4_K_M.gguf
files:
- filename: NIHAPPY-L3.1-8B-v0.09.Q4_K_M.gguf
sha256: 9bd46a06093448b143bd2775f0fb1b1b172c851fafdce31289e13b7dfc23a0d7
uri: huggingface://QuantFactory/NIHAPPY-L3.1-8B-v0.09-GGUF/NIHAPPY-L3.1-8B-v0.09.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama3.1-flammades-70b"
icon: https://huggingface.co/flammenai/Flammades-Mistral-7B/resolve/main/flammades.png?download=true
urls:
- https://huggingface.co/flammenai/Llama3.1-Flammades-70B
- https://huggingface.co/mradermacher/Llama3.1-Flammades-70B-GGUF
description: |
nbeerbower/Llama3.1-Gutenberg-Doppel-70B finetuned on flammenai/Date-DPO-NoAsterisks and jondurbin/truthy-dpo-v0.1.
overrides:
parameters:
model: Llama3.1-Flammades-70B.Q4_K_M.gguf
files:
- filename: Llama3.1-Flammades-70B.Q4_K_M.gguf
sha256: f602ed006d0059ac87c6ce5904a7cc6f4b4f290886a1049f96b5b2c561ab5a89
uri: huggingface://mradermacher/Llama3.1-Flammades-70B-GGUF/Llama3.1-Flammades-70B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama3.1-gutenberg-doppel-70b"
# chatml
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://huggingface.co/nbeerbower/Mistral-Small-Gutenberg-Doppel-22B/resolve/main/doppel-header?download=true
urls:
- https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B
- https://huggingface.co/mradermacher/Llama3.1-Gutenberg-Doppel-70B-GGUF
description: |
mlabonne/Hermes-3-Llama-3.1-70B-lorablated finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
overrides:
parameters:
model: Llama3.1-Gutenberg-Doppel-70B.Q4_K_M.gguf
files:
- filename: Llama3.1-Gutenberg-Doppel-70B.Q4_K_M.gguf
sha256: af558f954fa26c5bb75352178cb815bbf268f01c0ca0b96f2149422d4c19511b
uri: huggingface://mradermacher/Llama3.1-Gutenberg-Doppel-70B-GGUF/Llama3.1-Gutenberg-Doppel-70B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-8b-arliai-formax-v1.0-iq-arm-imatrix"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://iili.io/2HmlLn2.md.png
urls:
- https://huggingface.co/Lewdiculous/Llama-3.1-8B-ArliAI-Formax-v1.0-GGUF-IQ-ARM-Imatrix
description: |
Quants for ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0.
"Formax is a model that specializes in following response format instructions. Tell it the format of its response and it will follow it perfectly. Great for data processing and dataset creation tasks."
"It is also a highly uncensored model that will follow your instructions very well."
overrides:
parameters:
model: Llama-3.1-8B-ArliAI-Formax-v1.0-Q4_K_M-imat.gguf
files:
- filename: Llama-3.1-8B-ArliAI-Formax-v1.0-Q4_K_M-imat.gguf
sha256: b548ad47caf7008a697afb3556190359529f5a05ec0e4e48ef992c7869e14255
uri: huggingface://Lewdiculous/Llama-3.1-8B-ArliAI-Formax-v1.0-GGUF-IQ-ARM-Imatrix/Llama-3.1-8B-ArliAI-Formax-v1.0-Q4_K_M-imat.gguf
- !!merge <<: *llama31
name: "hermes-3-llama-3.1-70b-lorablated"
icon: https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/4Hbw5n68jKUSBQeTqQIeT.png
urls:
- https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-70B-lorablated-GGUF
description: |
This is an uncensored version of NousResearch/Hermes-3-Llama-3.1-70B using lorablation.
The recipe is based on @grimjim's grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter (special thanks):
Extraction: We extract a LoRA adapter by comparing two models: a censored Llama 3 (meta-llama/Meta-Llama-3-70B-Instruct) and an abliterated Llama 3.1 (failspy/Meta-Llama-3.1-70B-Instruct-abliterated).
Merge: We merge this new LoRA adapter using task arithmetic to the censored NousResearch/Hermes-3-Llama-3.1-70B to abliterate it.
overrides:
parameters:
model: Hermes-3-Llama-3.1-70B-lorablated.Q4_K_M.gguf
files:
- filename: Hermes-3-Llama-3.1-70B-lorablated.Q4_K_M.gguf
sha256: 9294875ae3b8822855072b0f710ce800536d144cf303a91bcb087c4a307b578d
uri: huggingface://mradermacher/Hermes-3-Llama-3.1-70B-lorablated-GGUF/Hermes-3-Llama-3.1-70B-lorablated.Q4_K_M.gguf
- !!merge <<: *llama31
name: "hermes-3-llama-3.1-8b-lorablated"
urls:
- https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated-GGUF
description: |
This is an uncensored version of NousResearch/Hermes-3-Llama-3.1-8B using lorablation.
The recipe is simple:
Extraction: We extract a LoRA adapter by comparing two models: a censored Llama 3 (meta-llama/Meta-Llama-3-8B-Instruct) and an abliterated Llama 3.1 (mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated).
Merge: We merge this new LoRA adapter using task arithmetic to the censored NousResearch/Hermes-3-Llama-3.1-8B to abliterate it.
overrides:
parameters:
model: hermes-3-llama-3.1-8b-lorablated.Q4_K_M.gguf
files:
- filename: hermes-3-llama-3.1-8b-lorablated.Q4_K_M.gguf
sha256: 8cff9d399a0583616fe1f290da6daa091ab5c5493d0e173a8fffb45202d79417
uri: huggingface://mlabonne/Hermes-3-Llama-3.1-8B-lorablated-GGUF/hermes-3-llama-3.1-8b-lorablated.Q4_K_M.gguf
- !!merge <<: *llama32
name: "hermes-3-llama-3.2-3b"
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/-kj_KflXsdpcZoTQsvx7W.jpeg
urls:
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B
- https://huggingface.co/bartowski/Hermes-3-Llama-3.2-3B-GGUF
description: |
Hermes 3 3B is a small but mighty new addition to the Hermes series of LLMs by Nous Research, and is Nous's first fine-tune in this parameter class.
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.
overrides:
parameters:
model: Hermes-3-Llama-3.2-3B-Q4_K_M.gguf
files:
- filename: Hermes-3-Llama-3.2-3B-Q4_K_M.gguf
sha256: 2e220a14ba4328fee38cf36c2c068261560f999fadb5725ce5c6d977cb5126b5
uri: huggingface://bartowski/Hermes-3-Llama-3.2-3B-GGUF/Hermes-3-Llama-3.2-3B-Q4_K_M.gguf
- !!merge <<: *llama31
name: "doctoraifinetune-3.1-8b-i1"
urls:
- https://huggingface.co/huzaifa525/Doctoraifinetune-3.1-8B
- https://huggingface.co/mradermacher/Doctoraifinetune-3.1-8B-i1-GGUF
description: |
This is a fine-tuned version of the Meta-Llama-3.1-8B-bnb-4bit model, specifically adapted for the medical field. It has been trained using a dataset that provides extensive information on diseases, symptoms, and treatments, making it ideal for AI-powered healthcare tools such as medical chatbots, virtual assistants, and diagnostic support systems.
Key Features
Disease Diagnosis: Accurately identifies diseases based on symptoms provided by the user.
Symptom Analysis: Breaks down and interprets symptoms to provide a comprehensive medical overview.
Treatment Recommendations: Suggests treatments and remedies according to medical conditions.
Dataset
The model is fine-tuned on 2000 rows from a dataset consisting of 272k rows. This dataset includes rich information about diseases, symptoms, and their corresponding treatments. The model is continuously being updated and will be further trained on the remaining data in future releases to improve accuracy and capabilities.
overrides:
parameters:
model: Doctoraifinetune-3.1-8B.i1-Q4_K_M.gguf
files:
- filename: Doctoraifinetune-3.1-8B.i1-Q4_K_M.gguf
sha256: 282456efcb6c7e54d34ac25ae7fc022a94152ed77281ae4625b9628091e0a3d6
uri: huggingface://mradermacher/Doctoraifinetune-3.1-8B-i1-GGUF/Doctoraifinetune-3.1-8B.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "astral-fusion-neural-happy-l3.1-8b"
urls:
- https://huggingface.co/ZeroXClem/Astral-Fusion-Neural-Happy-L3.1-8B
- https://huggingface.co/mradermacher/Astral-Fusion-Neural-Happy-L3.1-8B-GGUF
description: "Astral-Fusion-Neural-Happy-L3.1-8B is a celestial blend of magic, creativity, and dynamic storytelling. Designed to excel in instruction-following, immersive roleplaying, and magical narrative generation, this model is a fusion of the finest qualities from Astral-Fusion, NIHAPPY, and NeuralMahou. ✨\U0001F680\n\nThis model is perfect for anyone seeking a cosmic narrative experience, with the ability to generate both precise instructional content and fantastical stories in one cohesive framework. Whether you're crafting immersive stories, creating AI roleplaying characters, or working on interactive storytelling, this model brings out the magic. \U0001F31F\n"
overrides:
parameters:
model: Astral-Fusion-Neural-Happy-L3.1-8B.Q4_K_M.gguf
files:
- filename: Astral-Fusion-Neural-Happy-L3.1-8B.Q4_K_M.gguf
sha256: 14a3b07c1723ef1ca24f99382254b1227d95974541e23792a4e7ff621896055d
uri: huggingface://mradermacher/Astral-Fusion-Neural-Happy-L3.1-8B-GGUF/Astral-Fusion-Neural-Happy-L3.1-8B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "mahou-1.5-llama3.1-70b-i1"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png
urls:
- https://huggingface.co/flammenai/Mahou-1.5-llama3.1-70B
- https://huggingface.co/mradermacher/Mahou-1.5-llama3.1-70B-i1-GGUF
description: |
Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay.
overrides:
parameters:
model: Mahou-1.5-llama3.1-70B.i1-Q4_K_M.gguf
files:
- filename: Mahou-1.5-llama3.1-70B.i1-Q4_K_M.gguf
sha256: c2711c4c9c8d011edbeaa391b4418d433e273a318d1de3dbdda9b85baf4996f2
uri: huggingface://mradermacher/Mahou-1.5-llama3.1-70B-i1-GGUF/Mahou-1.5-llama3.1-70B.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-nemotron-70b-instruct-hf"
urls:
- https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
- https://huggingface.co/mradermacher/Llama-3.1-Nemotron-70B-Instruct-HF-GGUF
description: |
Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM generated responses to user queries.
This model reaches an Arena Hard score of 85.0, AlpacaEval 2 LC of 57.6, and GPT-4-Turbo MT-Bench of 8.98, which are known to be predictive of LMSys Chatbot Arena Elo.
As of 1 Oct 2024, this model is #1 on all three automatic alignment benchmarks (verified tab for AlpacaEval 2 LC), edging out strong frontier models such as GPT-4o and Claude 3.5 Sonnet.
This model was trained using RLHF (specifically, REINFORCE), Llama-3.1-Nemotron-70B-Reward and HelpSteer2-Preference prompts on a Llama-3.1-70B-Instruct model as the initial policy.
Llama-3.1-Nemotron-70B-Instruct-HF has been converted from Llama-3.1-Nemotron-70B-Instruct to support it in the Hugging Face Transformers codebase. Please note that evaluation results might differ slightly from Llama-3.1-Nemotron-70B-Instruct as evaluated in NeMo-Aligner, on which the evaluation results above are based.
overrides:
parameters:
model: Llama-3.1-Nemotron-70B-Instruct-HF.Q4_K_M.gguf
files:
- filename: Llama-3.1-Nemotron-70B-Instruct-HF.Q4_K_M.gguf
sha256: b6b80001b849e3c59c39b09508c018b35b491a5c7bbafafa23f2fc04243f3e30
uri: huggingface://mradermacher/Llama-3.1-Nemotron-70B-Instruct-HF-GGUF/Llama-3.1-Nemotron-70B-Instruct-HF.Q4_K_M.gguf
- !!merge <<: *llama31
name: "l3.1-etherealrainbow-v1.0-rc1-8b"
icon: https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B/resolve/main/header.png
urls:
- https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B
- https://huggingface.co/mradermacher/L3.1-EtherealRainbow-v1.0-rc1-8B-GGUF
description: |
Ethereal Rainbow v1.0 is the sequel to the popular Llama 3 8B merge, EtherealRainbow v0.3. Instead of a straight merge of other peoples' models, v1.0 is a finetune on the Instruct model, using 245 million tokens of training data (approx 177 million of these tokens are my own novel datasets).
This model is designed to be suitable for creative writing and roleplay, and to push the boundaries of what's possible with an 8B model. This RC is not a finished product, but your feedback will drive the creation of better models.
This is a release candidate model. It has some known issues and probably some unknown ones too, because the purpose of these early releases is to seek feedback.
overrides:
parameters:
model: L3.1-EtherealRainbow-v1.0-rc1-8B.Q4_K_M.gguf
files:
- filename: L3.1-EtherealRainbow-v1.0-rc1-8B.Q4_K_M.gguf
sha256: c5556b2563112e512acca171415783f0988545b02c1834696c1cc35952def72c
uri: huggingface://mradermacher/L3.1-EtherealRainbow-v1.0-rc1-8B-GGUF/L3.1-EtherealRainbow-v1.0-rc1-8B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "theia-llama-3.1-8b-v1"
urls:
- https://huggingface.co/Chainbase-Labs/Theia-Llama-3.1-8B-v1
- https://huggingface.co/QuantFactory/Theia-Llama-3.1-8B-v1-GGUF
description: |
Theia-Llama-3.1-8B-v1 is an open-source large language model (LLM) trained specifically in the cryptocurrency domain. It was fine-tuned from the Llama-3.1-8B base model using a dataset curated from the top 2000 cryptocurrency projects and comprehensive research reports, to specialize in crypto-related tasks. Theia-Llama-3.1-8B-v1 has been quantized to optimize it for efficient deployment and a reduced memory footprint. It benchmarks highly for crypto knowledge comprehension and generation, knowledge coverage, and reasoning capabilities. The system prompt used for its training is "You are a helpful assistant who will answer crypto related questions." The recommended parameters for performance include a sequence length of 256, temperature of 0, top-k sampling of -1, top-p of 1, and a context window of 39680.
overrides:
parameters:
model: Theia-Llama-3.1-8B-v1.Q4_K_M.gguf
files:
- filename: Theia-Llama-3.1-8B-v1.Q4_K_M.gguf
sha256: db876d033f86f118b49a1f1006e5d078d494c93b73c7e595bd10ca789a0c8fdb
uri: huggingface://QuantFactory/Theia-Llama-3.1-8B-v1-GGUF/Theia-Llama-3.1-8B-v1.Q4_K_M.gguf
- !!merge <<: *llama31
icon: https://huggingface.co/Delta-Vector/Baldur-8B/resolve/main/Baldur.jpg
name: "baldur-8b"
urls:
- https://huggingface.co/QuantFactory/Baldur-8B-GGUF
description: |
A finetune of the L3.1 instruct distill done by Arcee. The intent of this model is to have differing prose from my other releases; in my testing it has achieved this, frequently avoiding common -isms and having a differing flavor from my other models.
overrides:
parameters:
model: Baldur-8B.Q4_K_M.gguf
files:
- filename: Baldur-8B.Q4_K_M.gguf
sha256: 645b393fbac5cd17ccfd66840a3a05c3930e01b903dd1535f0347a74cc443fc7
uri: huggingface://QuantFactory/Baldur-8B-GGUF/Baldur-8B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "l3.1-moe-2x8b-v0.2"
icon: https://github.com/moeru-ai/L3.1-Moe/blob/main/cover/v0.2.png?raw=true
urls:
- https://huggingface.co/moeru-ai/L3.1-Moe-2x8B-v0.2
- https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-GGUF
description: |
This model is a Mixture of Experts (MoE) made with mergekit-moe. It uses the following base models:
Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base
ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
Heavily inspired by mlabonne/Beyonder-4x7B-v3.
overrides:
parameters:
model: L3.1-Moe-2x8B-v0.2.Q4_K_M.gguf
files:
- filename: L3.1-Moe-2x8B-v0.2.Q4_K_M.gguf
sha256: 87f8b294aa213aa3f866e03a53923f4df8f797ea94dc93f88b8a1b58d85fbca0
uri: huggingface://mradermacher/L3.1-Moe-2x8B-v0.2-GGUF/L3.1-Moe-2x8B-v0.2.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama3.1-darkstorm-aspire-8b"
urls:
- https://huggingface.co/ZeroXClem/Llama3.1-DarkStorm-Aspire-8B
- https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-GGUF
description: |
Welcome to Llama3.1-DarkStorm-Aspire-8B — an advanced and versatile 8B parameter AI model born from the fusion of powerful language models, designed to deliver superior performance across research, writing, coding, and creative tasks. This unique merge blends the best qualities of the Dark Enigma, Storm, and Aspire models, while built on the strong foundation of DarkStock. With balanced integration, it excels in generating coherent, context-aware, and imaginative outputs.
Llama3.1-DarkStorm-Aspire-8B combines cutting-edge natural language processing capabilities to perform exceptionally well in a wide variety of tasks:
Research and Analysis: Perfect for analyzing textual data, planning experiments, and brainstorming complex ideas.
Creative Writing and Roleplaying: Excels in creative writing, immersive storytelling, and generating roleplaying scenarios.
General AI Applications: Use it for any application where advanced reasoning, instruction-following, and creativity are needed.
overrides:
parameters:
model: Llama3.1-DarkStorm-Aspire-8B.Q4_K_M.gguf
files:
- filename: Llama3.1-DarkStorm-Aspire-8B.Q4_K_M.gguf
sha256: b1686b3039509034add250db9ddcd7d6dbefd37136ac6717bc4fec3ec47ecd03
uri: huggingface://mradermacher/Llama3.1-DarkStorm-Aspire-8B-GGUF/Llama3.1-DarkStorm-Aspire-8B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "l3.1-70blivion-v0.1-rc1-70b-i1"
icon: https://huggingface.co/invisietch/L3.1-70Blivion-v0.1-rc1-70B/resolve/main/header.png
urls:
- https://huggingface.co/invisietch/L3.1-70Blivion-v0.1-rc1-70B
- https://huggingface.co/mradermacher/L3.1-70Blivion-v0.1-rc1-70B-i1-GGUF
description: |
70Blivion v0.1 is a model in the release candidate stage, based on a merge of L3.1 Nemotron 70B & Euryale 2.2 with a healing training step. Further training will be needed to get this model to release quality.
This model is designed to be suitable for creative writing and roleplay. This RC is not a finished product, but your feedback will drive the creation of better models.
This is a release candidate model. It has some known issues and probably some unknown ones too, because the purpose of these early releases is to seek feedback.
overrides:
parameters:
model: L3.1-70Blivion-v0.1-rc1-70B.i1-Q4_K_M.gguf
files:
- filename: L3.1-70Blivion-v0.1-rc1-70B.i1-Q4_K_M.gguf
sha256: 27b10c3ca4507e8bf7d305d60e5313b54ef5fffdb43a03f36223d19d906e39f3
uri: huggingface://mradermacher/L3.1-70Blivion-v0.1-rc1-70B-i1-GGUF/L3.1-70Blivion-v0.1-rc1-70B.i1-Q4_K_M.gguf
- !!merge <<: *llama31
icon: https://i.imgur.com/sdN0Aqg.jpeg
name: "llama-3.1-hawkish-8b"
urls:
- https://huggingface.co/mukaj/Llama-3.1-Hawkish-8B
- https://huggingface.co/bartowski/Llama-3.1-Hawkish-8B-GGUF
description: |
    The model has been further finetuned on a set of newly generated 50M high-quality tokens related to financial topics, covering Economics, Fixed Income, Equities, Corporate Financing, Derivatives and Portfolio Management. Data was gathered from publicly available sources and went through several stages of curation into instruction data, starting from an initial amount of 250M+ tokens. To help mitigate forgetting of information from the original finetune, the data was mixed with instruction sets on the topics of Coding, General Knowledge, NLP and Conversational Dialogue.
    The model has shown improvement over a number of benchmarks relative to the original model, notably in Math and Economics. This model represents the first time an 8B model has been able to convincingly achieve a passing score on the CFA Level 1 exam, which typically requires 300 hours of studying, indicating a significant improvement in financial knowledge.
overrides:
parameters:
model: Llama-3.1-Hawkish-8B-Q4_K_M.gguf
files:
- filename: Llama-3.1-Hawkish-8B-Q4_K_M.gguf
sha256: 613693936bbe641f41560151753716ba549ca052260fc5c0569e943e0bb834c3
uri: huggingface://bartowski/Llama-3.1-Hawkish-8B-GGUF/Llama-3.1-Hawkish-8B-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama3.1-bestmix-chem-einstein-8b"
urls:
- https://huggingface.co/ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B
- https://huggingface.co/QuantFactory/Llama3.1-BestMix-Chem-Einstein-8B-GGUF
description: "Llama3.1-BestMix-Chem-Einstein-8B is an innovative, meticulously blended model designed to excel in instruction-following, chemistry-focused tasks, and long-form conversational generation. This model fuses the best qualities of multiple Llama3-based architectures, making it highly versatile for both general and specialized tasks. \U0001F4BB\U0001F9E0✨\n"
overrides:
parameters:
model: Llama3.1-BestMix-Chem-Einstein-8B.Q4_K_M.gguf
files:
- filename: Llama3.1-BestMix-Chem-Einstein-8B.Q4_K_M.gguf
sha256: 1a53aa7124c731f33b0b616d7c66a6f78c6a133240acd9e3227f1188f743c1ee
uri: huggingface://QuantFactory/Llama3.1-BestMix-Chem-Einstein-8B-GGUF/Llama3.1-BestMix-Chem-Einstein-8B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "control-8b-v1.1"
urls:
- https://huggingface.co/Delta-Vector/Control-8B-V1.1
- https://huggingface.co/QuantFactory/Control-8B-V1.1-GGUF
description: |
    An experimental finetune based on the Llama3.1 8B Supernova, with the primary goal of being "Short and Sweet". As such, I finetuned the model for 2 epochs on an OpenCAI ShareGPT-converted dataset and the RP-logs datasets in an effort to achieve this. This version of Control has been finetuned with DPO to help improve the smarts and coherency, which was a flaw noticed in the previous model.
overrides:
parameters:
model: Control-8B-V1.1.Q4_K_M.gguf
files:
- filename: Control-8B-V1.1.Q4_K_M.gguf
sha256: 01375fe20999134d6c6330ad645cde07883dcb7113eaef097df6ccff88c56ecf
uri: huggingface://QuantFactory/Control-8B-V1.1-GGUF/Control-8B-V1.1.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-whiterabbitneo-2-8b"
icon: https://huggingface.co/migtissera/WhiteRabbitNeo/resolve/main/WhiteRabbitNeo.png
urls:
- https://huggingface.co/WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B
- https://huggingface.co/bartowski/Llama-3.1-WhiteRabbitNeo-2-8B-GGUF
description: |
WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity.
    Models are being released as a public preview of their capabilities, and also to assess the societal impact of such an AI.
overrides:
parameters:
model: Llama-3.1-WhiteRabbitNeo-2-8B-Q4_K_M.gguf
files:
- filename: Llama-3.1-WhiteRabbitNeo-2-8B-Q4_K_M.gguf
sha256: dbaf619312e706c5440214d324d8f304717866675fc9728e3901c75ef5bbfeca
uri: huggingface://bartowski/Llama-3.1-WhiteRabbitNeo-2-8B-GGUF/Llama-3.1-WhiteRabbitNeo-2-8B-Q4_K_M.gguf
- !!merge <<: *llama31
name: "tess-r1-limerick-llama-3.1-70b"
icon: https://huggingface.co/migtissera/Tess-R1-Llama-3.1-70B/resolve/main/Tess-R1-2.jpg
urls:
- https://huggingface.co/migtissera/Tess-R1-Limerick-Llama-3.1-70B
- https://huggingface.co/bartowski/Tess-R1-Limerick-Llama-3.1-70B-GGUF
description: |
Welcome to the Tess-Reasoning-1 (Tess-R1) series of models. Tess-R1 is designed with test-time compute in mind, and has the capabilities to produce a Chain-of-Thought (CoT) reasoning before producing the final output.
    The model is trained to first think step by step and contemplate its answers. It can also write alternatives after contemplating. Once all the steps have been thought through, it writes the final output.
Step-by-step, Chain-of-Thought thinking process. Uses <thinking> </thinking> tags to indicate when the model is performing CoT.
    <contemplation> </contemplation> tags are used when the model contemplates its answers.
<alternatively> </alternatively> tags are used for alternate suggestions.
    Finally, <output> </output> tags are used for the final output.
Important Note:
In a multi-turn conversation, only the contents between the <output> </output> tags (discarding the tags) should be carried forward. Otherwise the model will see out of distribution input data and will fail.
    The model was trained mostly with Chain-of-Thought reasoning data, including the XML tags. However, to generalize model generations, some single-turn and multi-turn data without XML tags were also included. Due to this, in some instances the model does not produce XML tags and does not fully utilize test-time compute capabilities. There are two ways to get around this:
    Include a try/catch statement in your inference script, and only pass on the contents between the <output> </output> tags if they are available.
Use the <thinking> tag as the seed in the generation, and force the model to produce outputs with XML tags. i.e: f"{conversation}{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n<thinking>"
overrides:
parameters:
model: Tess-R1-Limerick-Llama-3.1-70B-Q4_K_M.gguf
files:
- filename: Tess-R1-Limerick-Llama-3.1-70B-Q4_K_M.gguf
sha256: 92da5dad8a36ed5060becf78a83537d776079b7eaa4de73733d3ca57156286ab
uri: huggingface://bartowski/Tess-R1-Limerick-Llama-3.1-70B-GGUF/Tess-R1-Limerick-Llama-3.1-70B-Q4_K_M.gguf
- !!merge <<: *llama31
name: "tess-3-llama-3.1-70b"
icon: https://huggingface.co/migtissera/Tess-M-v1.0/resolve/main/Tess.png
urls:
- https://huggingface.co/migtissera/Tess-3-Llama-3.1-70B
- https://huggingface.co/mradermacher/Tess-3-Llama-3.1-70B-GGUF
description: |
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series created by Migel Tissera.
overrides:
parameters:
model: Tess-3-Llama-3.1-70B.Q4_K_M.gguf
files:
- filename: Tess-3-Llama-3.1-70B.Q4_K_M.gguf
sha256: 81625defcbea414282f490dd960b14afdecd7734e0d77d8db2da2bf5c21261aa
uri: huggingface://mradermacher/Tess-3-Llama-3.1-70B-GGUF/Tess-3-Llama-3.1-70B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama3.1-8b-enigma"
icon: https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/it7MY5MyLCLpFQev5dUis.jpeg
urls:
- https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma
- https://huggingface.co/mradermacher/Llama3.1-8B-Enigma-GGUF
description: |
Enigma is a code-instruct model built on Llama 3.1 8b.
High quality code instruct performance within the Llama 3 Instruct chat format
Finetuned on synthetic code-instruct data generated with Llama 3.1 405b. Find the current version of the dataset here!
Overall chat performance supplemented with generalist synthetic data.
This is the 2024-10-02 release of Enigma for Llama 3.1 8b, enhancing code-instruct and general chat capabilities.
overrides:
parameters:
model: Llama3.1-8B-Enigma.Q4_K_M.gguf
files:
- filename: Llama3.1-8B-Enigma.Q4_K_M.gguf
sha256: e98c9909ee3b74b11d50d4c4f17178502e42cd936215ede0c64a7b217ae665bb
uri: huggingface://mradermacher/Llama3.1-8B-Enigma-GGUF/Llama3.1-8B-Enigma.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama3.1-8b-cobalt"
urls:
- https://huggingface.co/ValiantLabs/Llama3.1-8B-Cobalt
- https://huggingface.co/mradermacher/Llama3.1-8B-Cobalt-GGUF
description: |
Cobalt is a math-instruct model built on Llama 3.1 8b.
High quality math instruct performance within the Llama 3 Instruct chat format
Finetuned on synthetic math-instruct data generated with Llama 3.1 405b. Find the current version of the dataset here!
Version
This is the 2024-08-16 release of Cobalt for Llama 3.1 8b.
Help us and recommend Cobalt to your friends! We're excited for more Cobalt releases in the future.
overrides:
parameters:
model: Llama3.1-8B-Cobalt.Q4_K_M.gguf
files:
- filename: Llama3.1-8B-Cobalt.Q4_K_M.gguf
sha256: 44340f1ebbc3bf4e4e23d04ac3580c26fdc0b5717f23b45ce30743aa1eeed7ed
uri: huggingface://mradermacher/Llama3.1-8B-Cobalt-GGUF/Llama3.1-8B-Cobalt.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-8b-arliai-rpmax-v1.3"
urls:
- https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3
- https://huggingface.co/bartowski/Llama-3.1-8B-ArliAI-RPMax-v1.3-GGUF
description: |
    RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive: by making sure no two entries in the dataset have repeated characters or situations, the model does not latch on to a certain personality and remains capable of understanding and acting appropriately for any characters or situations.
    Many RPMax users mention that these models do not feel like any other RP models, having a different writing style and generally not feeling in-bred.
overrides:
parameters:
model: Llama-3.1-8B-ArliAI-RPMax-v1.3-Q4_K_M.gguf
files:
- filename: Llama-3.1-8B-ArliAI-RPMax-v1.3-Q4_K_M.gguf
sha256: 66fcbbe96950cc3424cba866f929180d83f1bffdb0d4eedfa9b1f55cf0ea5c26
uri: huggingface://bartowski/Llama-3.1-8B-ArliAI-RPMax-v1.3-GGUF/Llama-3.1-8B-ArliAI-RPMax-v1.3-Q4_K_M.gguf
- !!merge <<: *llama31
name: "l3.1-8b-slush-i1"
  icon: https://huggingface.co/crestf411/L3.1-8B-Slush/resolve/main/slush.jpg
urls:
- https://huggingface.co/crestf411/L3.1-8B-Slush
- https://huggingface.co/mradermacher/L3.1-8B-Slush-i1-GGUF
description: |
    Slush is a two-stage model trained with high LoRA dropout. Stage 1 is a pretraining continuation on the base model, aimed at boosting the model's creativity and writing capabilities. This is then merged into the instruction-tuned model, and stage 2 is a fine-tuning step on top of this to further enhance its roleplaying capabilities and/or to repair any damage caused in the stage 1 merge.
    This is an initial experiment done on the at-this-point-infamous Llama 3.1 8B model, in an attempt to retain its smartness while addressing its abysmal lack of imagination/creativity. As always, feedback is welcome, and begone if you demand perfection.
    The second stage, like the Sunfall series, follows the Silly Tavern preset, so YMMV, in particular if you use some other tool and/or preset.
overrides:
parameters:
model: L3.1-8B-Slush.i1-Q4_K_M.gguf
files:
- filename: L3.1-8B-Slush.i1-Q4_K_M.gguf
sha256: 98c53cd1ec0e2b00400c5968cd076a589d0c889bca13ec52abfe4456cfa039be
uri: huggingface://mradermacher/L3.1-8B-Slush-i1-GGUF/L3.1-8B-Slush.i1-Q4_K_M.gguf
- !!merge <<: *llama31
icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/C-ndfxAGdf21DjchZcf2p.png
name: "l3.1-ms-astoria-70b-v2"
urls:
- https://huggingface.co/Steelskull/L3.1-MS-Astoria-70b-v2
- https://huggingface.co/bartowski/L3.1-MS-Astoria-70b-v2-GGUF
description: |
    This model is a remake of the original Astoria with modern models and context sizes. Its goal is to merge the robust storytelling of multiple models while attempting to maintain intelligence.
    Use the Llama 3 format or the meth format (Llama 3 refuses to work with stepped thinking, but meth works)
- model: migtissera/Tess-3-Llama-3.1-70B
- model: NeverSleep/Lumimaid-v0.2-70B
- model: Sao10K/L3.1-70B-Euryale-v2.2
- model: ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.2
- model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
overrides:
parameters:
model: L3.1-MS-Astoria-70b-v2-Q4_K_M.gguf
files:
- filename: L3.1-MS-Astoria-70b-v2-Q4_K_M.gguf
sha256: c02658ead1ecdc25c7218b8d9d11786f19c16d64f0d453082998e313edb0d4a6
uri: huggingface://bartowski/L3.1-MS-Astoria-70b-v2-GGUF/L3.1-MS-Astoria-70b-v2-Q4_K_M.gguf
- !!merge <<: *llama31
name: "magnum-v2-4b-i1"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/9JwXZze4tHRGpc_RzE2AU.png
urls:
- https://huggingface.co/anthracite-org/magnum-v2-4b
- https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF
description: |
This is the eighth in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml.
overrides:
parameters:
model: magnum-v2-4b.i1-Q4_K_M.gguf
files:
- filename: magnum-v2-4b.i1-Q4_K_M.gguf
sha256: 692618059fee8870759d67d275ebc59bc0474b18ae3571b3ebdec8f9da786a64
uri: huggingface://mradermacher/magnum-v2-4b-i1-GGUF/magnum-v2-4b.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "l3.1-nemotron-sunfall-v0.7.0-i1"
urls:
- https://huggingface.co/crestf411/L3.1-nemotron-sunfall-v0.7.0
- https://huggingface.co/mradermacher/L3.1-nemotron-sunfall-v0.7.0-i1-GGUF
description: |
    Significant revamping of the dataset metadata generation process, resulting in a higher-quality dataset overall. The "Diamond Law" experiment has been removed, as it didn't seem to affect the model output enough to warrant the setup complexity.
Recommended starting point:
Temperature: 1
MinP: 0.05~0.1
DRY: 0.8 1.75 2 0
At early context, I recommend keeping XTC disabled. Once you hit higher context sizes (10k+), enabling XTC at 0.1 / 0.5 seems to significantly improve the output, but YMMV. If the output drones on and is uninspiring, XTC can be extremely effective.
General heuristic:
Lots of slop? Temperature is too low. Raise it, or enable XTC. For early context, temp bump is probably preferred.
Is the model making mistakes about subtle or obvious details in the scene? Temperature is too high, OR XTC is enabled and/or XTC settings are too high. Lower temp and/or disable XTC.
overrides:
parameters:
model: L3.1-nemotron-sunfall-v0.7.0.i1-Q4_K_M.gguf
files:
- filename: L3.1-nemotron-sunfall-v0.7.0.i1-Q4_K_M.gguf
sha256: f9aa88f3b220e35662a2d62d1f615a3b425e348a8f9e2939f05bf57385119f76
uri: huggingface://mradermacher/L3.1-nemotron-sunfall-v0.7.0-i1-GGUF/L3.1-nemotron-sunfall-v0.7.0.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-mesh"
urls:
- https://huggingface.co/Zhengyi/LLaMA-Mesh
- https://huggingface.co/bartowski/LLaMA-Mesh-GGUF
description: |
LLaMA-Mesh: Unifying 3D Mesh Generation with Language Models
    Pre-trained model weights of LLaMA-Mesh: Unifying 3D Mesh Generation with Language Models. This work explores expanding the capabilities of large language models (LLMs) pretrained on text to generate 3D meshes within a unified model.
overrides:
parameters:
model: LLaMA-Mesh-Q4_K_M.gguf
files:
- filename: LLaMA-Mesh-Q4_K_M.gguf
sha256: 150ac70c92bb7351468768bcc84bd3018f44b624f709821fee8e5e816e4868e7
uri: huggingface://bartowski/LLaMA-Mesh-GGUF/LLaMA-Mesh-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-8b-instruct-ortho-v3"
urls:
- https://huggingface.co/lodrick-the-lafted/llama-3.1-8b-instruct-ortho-v3
- https://huggingface.co/mradermacher/llama-3.1-8b-instruct-ortho-v3-GGUF
description: |
A few different attempts at orthogonalization/abliteration of llama-3.1-8b-instruct using variations of the method from "Mechanistically Eliciting Latent Behaviors in Language Models".
Each of these use different vectors and have some variations in where the new refusal boundaries lie. None of them seem totally jailbroken.
overrides:
parameters:
model: llama-3.1-8b-instruct-ortho-v3.Q4_K_M.gguf
files:
- filename: llama-3.1-8b-instruct-ortho-v3.Q4_K_M.gguf
sha256: 8d1dd638ed80019f5cd61240d1f06fd1333413f61427bef4d288c5b8cd9d8cea
uri: huggingface://mradermacher/llama-3.1-8b-instruct-ortho-v3-GGUF/llama-3.1-8b-instruct-ortho-v3.Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-tulu-3-8b-dpo"
icon: https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu3/Tulu3-logo.png
urls:
- https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-DPO
- https://huggingface.co/mradermacher/Llama-3.1-Tulu-3-8B-DPO-GGUF
description: |
Tülu3 is a leading instruction following model family, offering fully open-source data, code, and recipes designed to serve as a comprehensive guide for modern post-training techniques. Tülu3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
overrides:
parameters:
model: Llama-3.1-Tulu-3-8B-DPO.Q4_K_M.gguf
files:
- filename: Llama-3.1-Tulu-3-8B-DPO.Q4_K_M.gguf
sha256: 8991bef1775edc5190047ef268d60876c2df3a80cf6da5f1bd1e82d09dd0ab2b
uri: huggingface://mradermacher/Llama-3.1-Tulu-3-8B-DPO-GGUF/Llama-3.1-Tulu-3-8B-DPO.Q4_K_M.gguf
- !!merge <<: *llama31
name: "l3.1-aspire-heart-matrix-8b"
urls:
- https://huggingface.co/ZeroXClem/L3-Aspire-Heart-Matrix-8B
- https://huggingface.co/mradermacher/L3.1-Aspire-Heart-Matrix-8B-GGUF
description: |
ZeroXClem/L3-Aspire-Heart-Matrix-8B is an experimental language model crafted by merging three high-quality 8B parameter models using the Model Stock Merge method. This synthesis leverages the unique strengths of Aspire, Heart Stolen, and CursedMatrix, creating a highly versatile and robust language model for a wide array of tasks.
overrides:
parameters:
model: L3.1-Aspire-Heart-Matrix-8B.Q4_K_M.gguf
files:
- filename: L3.1-Aspire-Heart-Matrix-8B.Q4_K_M.gguf
sha256: 4d90abaae59f39e8f04548151265dce3b9c913303e6755860f5d28dd5cfc2d86
uri: huggingface://mradermacher/L3.1-Aspire-Heart-Matrix-8B-GGUF/L3.1-Aspire-Heart-Matrix-8B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "dark-chivalry_v1.0-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/66c1cc08453a7ef6c5fe657a/A9vNZXVnD3xFiZ7cMLOKy.png
urls:
- https://huggingface.co/Triangle104/Dark-Chivalry_V1.0
- https://huggingface.co/mradermacher/Dark-Chivalry_V1.0-i1-GGUF
description: |
The dark side of chivalry...
This model was merged using the TIES merge method using ValiantLabs/Llama3.1-8B-ShiningValiant2 as a base.
overrides:
parameters:
model: Dark-Chivalry_V1.0.i1-Q4_K_M.gguf
files:
- filename: Dark-Chivalry_V1.0.i1-Q4_K_M.gguf
sha256: 6659fad2ea7e40b862a02d683a4bcb9044704fc7f6d3f50cd54c9069860171cd
uri: huggingface://mradermacher/Dark-Chivalry_V1.0-i1-GGUF/Dark-Chivalry_V1.0.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "tulu-3.1-8b-supernova-i1"
urls:
- https://huggingface.co/bunnycore/Tulu-3.1-8B-SuperNova
- https://huggingface.co/mradermacher/Tulu-3.1-8B-SuperNova-i1-GGUF
description: |
The following models were included in the merge:
meditsolutions/Llama-3.1-MedIT-SUN-8B
allenai/Llama-3.1-Tulu-3-8B
arcee-ai/Llama-3.1-SuperNova-Lite
overrides:
parameters:
model: Tulu-3.1-8B-SuperNova.i1-Q4_K_M.gguf
files:
- filename: Tulu-3.1-8B-SuperNova.i1-Q4_K_M.gguf
sha256: c6cc2e1a4c3d2338973ca0050af1cf4462b3f62838f62b4c8a204f2a74eeb01f
uri: huggingface://mradermacher/Tulu-3.1-8B-SuperNova-i1-GGUF/Tulu-3.1-8B-SuperNova.i1-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-tulu-3-70b-dpo"
icon: "https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu3/Tulu3-logo.png"
urls:
- https://huggingface.co/allenai/Llama-3.1-Tulu-3-70B-DPO
- https://huggingface.co/bartowski/Llama-3.1-Tulu-3-70B-DPO-GGUF
description: |
Tülu3 is a leading instruction following model family, offering fully open-source data, code, and recipes designed to serve as a comprehensive guide for modern post-training techniques. Tülu3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
overrides:
parameters:
model: Llama-3.1-Tulu-3-70B-DPO-Q4_K_M.gguf
files:
- filename: Llama-3.1-Tulu-3-70B-DPO-Q4_K_M.gguf
sha256: e2d9c59736274f9dd94f30ef3edcee68fec1d6649eb01d6bad7e3e8a6024f77d
uri: huggingface://bartowski/Llama-3.1-Tulu-3-70B-DPO-GGUF/Llama-3.1-Tulu-3-70B-DPO-Q4_K_M.gguf
- !!merge <<: *llama31
name: "llama-3.1-tulu-3-8b-sft"
icon: "https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu3/Tulu3-logo.png"
urls:
- https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-SFT
- https://huggingface.co/bartowski/Llama-3.1-Tulu-3-8B-SFT-GGUF
description: |
Tülu3 is a leading instruction following model family, offering fully open-source data, code, and recipes designed to serve as a comprehensive guide for modern post-training techniques. Tülu3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
overrides:
parameters:
model: Llama-3.1-Tulu-3-8B-SFT-Q4_K_M.gguf
files:
- filename: Llama-3.1-Tulu-3-8B-SFT-Q4_K_M.gguf
sha256: 3fad2c96aa9b9de19c2cda0f88a381c47ac768ca03a95059d9f6c439791f8592
uri: huggingface://bartowski/Llama-3.1-Tulu-3-8B-SFT-GGUF/Llama-3.1-Tulu-3-8B-SFT-Q4_K_M.gguf
- !!merge <<: *llama31
icon: https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B/resolve/main/misc/misc_fig.jpg
name: "skywork-o1-open-llama-3.1-8b"
urls:
- https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B
- https://huggingface.co/QuantFactory/Skywork-o1-Open-Llama-3.1-8B-GGUF
description: |
We are excited to announce the release of the Skywork o1 Open model series, developed by the Skywork team at Kunlun Inc. This groundbreaking release introduces a series of models that incorporate o1-like slow thinking and reasoning capabilities. The Skywork o1 Open model series includes three advanced models:
Skywork o1 Open-Llama-3.1-8B: A robust chat model trained on Llama-3.1-8B, enhanced significantly with "o1-style" data to improve reasoning skills.
Skywork o1 Open-PRM-Qwen-2.5-1.5B: A specialized model designed to enhance reasoning capability through incremental process rewards, ideal for complex problem solving at a smaller scale.
Skywork o1 Open-PRM-Qwen-2.5-7B: Extends the capabilities of the 1.5B model by scaling up to handle more demanding reasoning tasks, pushing the boundaries of AI reasoning.
Different from mere reproductions of the OpenAI o1 model, the Skywork o1 Open model series not only exhibits innate thinking, planning, and reflecting capabilities in its outputs, but also shows significant improvements in reasoning skills on standard benchmarks. This series represents a strategic advancement in AI capabilities, moving a previously weaker base model towards the state-of-the-art (SOTA) in reasoning tasks.
overrides:
parameters:
model: Skywork-o1-Open-Llama-3.1-8B.Q4_K_M.gguf
files:
- filename: Skywork-o1-Open-Llama-3.1-8B.Q4_K_M.gguf
sha256: ef6a203ba585aab14f5d2ec463917a45b3ac571abd89c39e9a96a5e395ea8eea
uri: huggingface://QuantFactory/Skywork-o1-Open-Llama-3.1-8B-GGUF/Skywork-o1-Open-Llama-3.1-8B.Q4_K_M.gguf
- !!merge <<: *llama31
name: "sparse-llama-3.1-8b-2of4"
urls:
- https://huggingface.co/QuantFactory/Sparse-Llama-3.1-8B-2of4-GGUF
description: |
This is the 2:4 sparse version of Llama-3.1-8B. On the OpenLLM benchmark (version 1), it achieves an average score of 62.16, compared to 63.19 for the dense model—demonstrating a 98.37% accuracy recovery. On the Mosaic Eval Gauntlet benchmark (version v0.3), it achieves an average score of 53.85, versus 55.34 for the dense model—representing a 97.3% accuracy recovery.
overrides:
parameters:
model: Sparse-Llama-3.1-8B-2of4.Q4_K_M.gguf
files:
- filename: Sparse-Llama-3.1-8B-2of4.Q4_K_M.gguf
sha256: c481e7089ffaedd5ae8c74dccc7fb45f6509640b661fa086ae979f6fefc3fdba
uri: huggingface://QuantFactory/Sparse-Llama-3.1-8B-2of4-GGUF/Sparse-Llama-3.1-8B-2of4.Q4_K_M.gguf
- !!merge <<: *llama31
name: "loki-v2.6-8b-1024k"
icon: https://cdn-uploads.huggingface.co/production/uploads/6472de046facfb01d8b1fb9d/uQPITKRS8XLTLyaiGwgh_.jpeg
urls:
- https://huggingface.co/QuantFactory/Loki-v2.6-8b-1024k-GGUF
description: |
The following models were included in the merge:
MrRobotoAI/Epic_Fiction-8b
MrRobotoAI/Unaligned-RP-Base-8b-1024k
MrRobotoAI/Loki-.Epic_Fiction.-8b
Casual-Autopsy/L3-Luna-8B
Casual-Autopsy/L3-Super-Nova-RP-8B
Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
Casual-Autopsy/Halu-L3-Stheno-BlackOasis-8B
Undi95/Llama-3-LewdPlay-8B
Undi95/Llama-3-LewdPlay-8B-evo
Undi95/Llama-3-Unholy-8B
ChaoticNeutrals/Hathor_Tahsin-L3-8B-v0.9
ChaoticNeutrals/Hathor_RP-v.01-L3-8B
ChaoticNeutrals/Domain-Fusion-L3-8B
ChaoticNeutrals/T-900-8B
ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
ChaoticNeutrals/Templar_v1_8B
ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8
ChaoticNeutrals/Sekhmet_Gimmel-L3.1-8B-v0.3
zeroblu3/LewdPoppy-8B-RP
tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
jeiku/Chaos_RP_l3_8B
tannedbum/L3-Nymeria-Maid-8B
Nekochu/Luminia-8B-RP
vicgalle/Humanish-Roleplay-Llama-3.1-8B
saishf/SOVLish-Maid-L3-8B
Dogge/llama-3-8B-instruct-Bluemoon-Freedom-RP
MrRobotoAI/Epic_Fiction-8b-v4
maldv/badger-lambda-0-llama-3-8b
maldv/llama-3-fantasy-writer-8b
maldv/badger-kappa-llama-3-8b
maldv/badger-mu-llama-3-8b
maldv/badger-lambda-llama-3-8b
maldv/badger-iota-llama-3-8b
maldv/badger-writer-llama-3-8b
Magpie-Align/MagpieLM-8B-Chat-v0.1
nbeerbower/llama-3-gutenberg-8B
nothingiisreal/L3-8B-Stheno-Horny-v3.3-32K
nbeerbower/llama-3-spicy-abliterated-stella-8B
Magpie-Align/MagpieLM-8B-SFT-v0.1
NeverSleep/Llama-3-Lumimaid-8B-v0.1
mlabonne/NeuralDaredevil-8B-abliterated
mlabonne/Daredevil-8B-abliterated
NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
nothingiisreal/L3-8B-Instruct-Abliterated-DWP
openchat/openchat-3.6-8b-20240522
turboderp/llama3-turbcat-instruct-8b
UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
Undi95/Llama-3-LewdPlay-8B
TIGER-Lab/MAmmoTH2-8B-Plus
OwenArli/Awanllm-Llama-3-8B-Cumulus-v1.0
refuelai/Llama-3-Refueled
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
NousResearch/Hermes-2-Theta-Llama-3-8B
ResplendentAI/Nymph_8B
grimjim/Llama-3-Oasis-v1-OAS-8B
flammenai/Mahou-1.3b-llama3-8B
lemon07r/Llama-3-RedMagic4-8B
grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
grimjim/Llama-Nephilim-Metamorphosis-v2-8B
lemon07r/Lllama-3-RedElixir-8B
grimjim/Llama-3-Perky-Pat-Instruct-8B
ChaoticNeutrals/Hathor_RP-v.01-L3-8B
grimjim/llama-3-Nephilim-v2.1-8B
ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8
migtissera/Llama-3-8B-Synthia-v3.5
Locutusque/Llama-3-Hercules-5.0-8B
WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
iRyanBell/ARC1-II
HPAI-BSC/Llama3-Aloe-8B-Alpha
HaitameLaf/Llama-3-8B-StoryGenerator
failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
Undi95/Llama-3-Unholy-8B
ajibawa-2023/Uncensored-Frank-Llama-3-8B
ajibawa-2023/SlimOrca-Llama-3-8B
ChaoticNeutrals/Templar_v1_8B
aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K
ChaoticNeutrals/Hathor_Tahsin-L3-8B-v0.9
Blackroot/Llama-3-Gamma-Twist
FPHam/L3-8B-Everything-COT
Blackroot/Llama-3-LongStory
ChaoticNeutrals/Sekhmet_Gimmel-L3.1-8B-v0.3
abacusai/Llama-3-Smaug-8B
Khetterman/CursedMatrix-8B-v9
ajibawa-2023/Scarlett-Llama-3-8B-v1.0
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/physics_non_masked
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/electrical_engineering
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/college_chemistry
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/philosophy_non_masked
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/college_physics
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/philosophy
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/formal_logic
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/philosophy_100
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/conceptual_physics
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/college_computer_science
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/psychology_non_masked
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/psychology
MrRobotoAI/Unaligned-RP-Base-8b-1024k + Blackroot/Llama3-RP-Lora
MrRobotoAI/Unaligned-RP-Base-8b-1024k + Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B
MrRobotoAI/Unaligned-RP-Base-8b-1024k + nothingiisreal/llama3-8B-DWP-lora
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/world_religions
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/high_school_european_history
MrRobotoAI/Unaligned-RP-Base-8b-1024k + Azazelle/Llama-3-8B-Abomination-LORA
MrRobotoAI/Unaligned-RP-Base-8b-1024k + Azazelle/Llama-3-LongStory-LORA
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/human_sexuality
MrRobotoAI/Unaligned-RP-Base-8b-1024k + surya-narayanan/sociology
MrRobotoAI/Unaligned-RP-Base-8b-1024k + ResplendentAI/Theory_of_Mind_Llama3
MrRobotoAI/Unaligned-RP-Base-8b-1024k + Azazelle/Smarts_Llama3
MrRobotoAI/Unaligned-RP-Base-8b-1024k + Azazelle/Nimue-8B
MrRobotoAI/Unaligned-RP-Base-8b-1024k + vincentyandex/lora_llama3_chunked_novel_bs128
MrRobotoAI/Unaligned-RP-Base-8b-1024k + ResplendentAI/Aura_Llama3
MrRobotoAI/Unaligned-RP-Base-8b-1024k + Azazelle/L3-Daybreak-8b-lora
MrRobotoAI/Unaligned-RP-Base-8b-1024k + ResplendentAI/Luna_Llama3
MrRobotoAI/Unaligned-RP-Base-8b-1024k + nicce/story-mixtral-8x7b-lora
MrRobotoAI/Unaligned-RP-Base-8b-1024k + Blackroot/Llama-3-LongStory-LORA
MrRobotoAI/Unaligned-RP-Base-8b-1024k + ResplendentAI/NoWarning_Llama3
MrRobotoAI/Unaligned-RP-Base-8b-1024k + ResplendentAI/BlueMoon_Llama3
overrides:
parameters:
model: Loki-v2.6-8b-1024k.Q4_K_M.gguf
files:
- filename: Loki-v2.6-8b-1024k.Q4_K_M.gguf
sha256: 9b15c1fee0a0e6d6ed97df3d1b6fc8f774e6e1bd388328599e731c62e0f19d81
uri: huggingface://QuantFactory/Loki-v2.6-8b-1024k-GGUF/Loki-v2.6-8b-1024k.Q4_K_M.gguf
- !!merge <<: *llama31
name: "impish_mind_8b"
icon: https://huggingface.co/SicariusSicariiStuff/Impish_Mind_8B/resolve/main/Images/Impish_Mind.png
urls:
- https://huggingface.co/SicariusSicariiStuff/Impish_Mind_8B
- https://huggingface.co/bartowski/Impish_Mind_8B-GGUF
description: |
    This model was trained with new data and a new approach (compared to my other models). While it may be a bit more censored, it is expected to be significantly smarter. The data used is quite unique and also features long and complex markdown datasets.
Regarding censorship: Whether uncensoring or enforcing strict censorship, the model tends to lose some of its intelligence. The use of toxic data was kept to a minimum with this model.
    Consequently, the model is likely to refuse some requests; this is easily avoidable with a basic system prompt or assistant impersonation ("Sure thing!..."). Unlike many RP models, this one is designed to excel at general assistant tasks as well.
overrides:
parameters:
model: Impish_Mind_8B-Q4_K_M.gguf
files:
- filename: Impish_Mind_8B-Q4_K_M.gguf
sha256: 918f82bcb893c75fa2e846156df7bd3ce359464b960e32ae9171035ee14e7c51
uri: huggingface://bartowski/Impish_Mind_8B-GGUF/Impish_Mind_8B-Q4_K_M.gguf
- !!merge <<: *llama31
name: "tulu-3.1-8b-supernova-smart"
urls:
- https://huggingface.co/bunnycore/Tulu-3.1-8B-SuperNova-Smart
- https://huggingface.co/QuantFactory/Tulu-3.1-8B-SuperNova-Smart-GGUF
description: |
This model was merged using the passthrough merge method using bunnycore/Tulu-3.1-8B-SuperNova + bunnycore/Llama-3.1-8b-smart-lora as a base.
overrides:
parameters:
model: Tulu-3.1-8B-SuperNova-Smart.Q4_K_M.gguf
files:
- filename: Tulu-3.1-8B-SuperNova-Smart.Q4_K_M.gguf
sha256: 4b8ba9e64f0667199eee2dcc769f1a90aa9c7730165d42f440fdf107c7585c63
uri: huggingface://QuantFactory/Tulu-3.1-8B-SuperNova-Smart-GGUF/Tulu-3.1-8B-SuperNova-Smart.Q4_K_M.gguf
- !!merge <<: *llama31
name: "b-nimita-l3-8b-v0.02"
urls:
- https://huggingface.co/Arkana08/B-NIMITA-L3-8B-v0.02
- https://huggingface.co/QuantFactory/B-NIMITA-L3-8B-v0.02-GGUF
description: |
B-NIMITA is an AI model designed to bring role-playing scenarios to life with emotional depth and rich storytelling. At its core is NIHAPPY, providing a solid narrative foundation and contextual consistency. This is enhanced by Mythorica, which adds vivid emotional arcs and expressive dialogue, and V-Blackroot, ensuring character consistency and subtle adaptability. This combination allows B-NIMITA to deliver dynamic, engaging interactions that feel natural and immersive.
overrides:
parameters:
model: B-NIMITA-L3-8B-v0.02.Q4_K_M.gguf
files:
- filename: B-NIMITA-L3-8B-v0.02.Q4_K_M.gguf
sha256: 625a54848dcd3f23bc06b639a7dfecae14142b5d177dd45acfe7724816bab4cd
uri: huggingface://QuantFactory/B-NIMITA-L3-8B-v0.02-GGUF/B-NIMITA-L3-8B-v0.02.Q4_K_M.gguf
- !!merge <<: *llama31
name: "deepthought-8b-llama-v0.01-alpha"
urls:
- https://huggingface.co/ruliad/deepthought-8b-llama-v0.01-alpha
- https://huggingface.co/bartowski/deepthought-8b-llama-v0.01-alpha-GGUF
description: |
Deepthought-8B is a small and capable reasoning model built on LLaMA-3.1 8B, designed to make AI reasoning more transparent and controllable. Despite its relatively small size, it achieves sophisticated reasoning capabilities that rival much larger models.
overrides:
parameters:
model: deepthought-8b-llama-v0.01-alpha-Q4_K_M.gguf
files:
- filename: deepthought-8b-llama-v0.01-alpha-Q4_K_M.gguf
sha256: 33195ba7b898ef8b2997d095e8be42adf1d0e1f6e8291cf07e026fc8e45903fd
uri: huggingface://bartowski/deepthought-8b-llama-v0.01-alpha-GGUF/deepthought-8b-llama-v0.01-alpha-Q4_K_M.gguf
- !!merge <<: *llama31
name: "fusechat-llama-3.1-8b-instruct"
icon: https://huggingface.co/FuseAI/FuseChat-Llama-3.1-8B-Instruct/resolve/main/FuseChat-3.0.png
urls:
    - https://huggingface.co/FuseAI/FuseChat-Llama-3.1-8B-Instruct
- https://huggingface.co/bartowski/FuseChat-Llama-3.1-8B-Instruct-GGUF
description: |
    We present FuseChat-3.0, a series of models crafted to enhance performance by integrating the strengths of multiple source LLMs into more compact target LLMs. To achieve this fusion, we utilized four powerful source LLMs: Gemma-2-27B-It, Mistral-Large-Instruct-2407, Qwen-2.5-72B-Instruct, and Llama-3.1-70B-Instruct. For the target LLMs, we employed three widely-used smaller models—Llama-3.1-8B-Instruct, Gemma-2-9B-It, and Qwen-2.5-7B-Instruct—along with two even more compact models—Llama-3.2-3B-Instruct and Llama-3.2-1B-Instruct. The implicit model fusion process involves a two-stage training pipeline comprising Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between target and source LLMs, and Direct Preference Optimization (DPO) for learning preferences from multiple source LLMs. The resulting FuseChat-3.0 models demonstrated substantial improvements in tasks related to general conversation, instruction following, mathematics, and coding. Notably, when Llama-3.1-8B-Instruct served as the target LLM, our fusion approach achieved an average improvement of 6.8 points across 14 benchmarks. Moreover, it showed significant improvements of 37.1 and 30.1 points on the instruction-following test sets AlpacaEval-2 and Arena-Hard, respectively. We have released the FuseChat-3.0 models on Hugging Face; stay tuned for the forthcoming dataset and code.
overrides:
parameters:
model: FuseChat-Llama-3.1-8B-Instruct-Q4_K_M.gguf
files:
- filename: FuseChat-Llama-3.1-8B-Instruct-Q4_K_M.gguf
sha256: fe58c8c9b695e36e6b0ee5e4d81ff71ea0a4f1a11fa7bb16e8d6f1b35a58dff6
uri: huggingface://bartowski/FuseChat-Llama-3.1-8B-Instruct-GGUF/FuseChat-Llama-3.1-8B-Instruct-Q4_K_M.gguf
- &deepseek
## Deepseek
url: "github:mudler/LocalAI/gallery/deepseek.yaml@master"
name: "deepseek-coder-v2-lite-instruct"
icon: "https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true"
license: deepseek
description: |
    DeepSeek-Coder-V2 is an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from DeepSeek-Coder-V2-Base with 6 trillion tokens sourced from a high-quality and multi-source corpus. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-Coder-V2-Base, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K.
In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. The list of supported programming languages can be found in the paper.
urls:
- https://github.com/deepseek-ai/DeepSeek-Coder-V2/tree/main
- https://huggingface.co/LoneStriker/DeepSeek-Coder-V2-Lite-Instruct-GGUF
tags:
- llm
- gguf
- gpu
- deepseek
- cpu
overrides:
parameters:
model: DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf
files:
- filename: DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf
sha256: 50ec78036433265965ed1afd0667c00c71c12aa70bcf383be462cb8e159db6c0
uri: huggingface://LoneStriker/DeepSeek-Coder-V2-Lite-Instruct-GGUF/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf
- !!merge <<: *deepseek
name: "cursorcore-ds-6.7b-i1"
urls:
- https://huggingface.co/TechxGenus/CursorCore-DS-6.7B
- https://huggingface.co/mradermacher/CursorCore-DS-6.7B-i1-GGUF
description: |
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read our paper to learn more.
overrides:
parameters:
model: CursorCore-DS-6.7B.i1-Q4_K_M.gguf
files:
- filename: CursorCore-DS-6.7B.i1-Q4_K_M.gguf
sha256: 71b94496be79e5bc45c23d6aa6c242f5f1d3625b4f00fe91d781d381ef35c538
uri: huggingface://mradermacher/CursorCore-DS-6.7B-i1-GGUF/CursorCore-DS-6.7B.i1-Q4_K_M.gguf
- name: "archangel_sft_pythia2-8b"
url: "github:mudler/LocalAI/gallery/tuluv2.yaml@master"
icon: https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06
license: apache-2.0
urls:
- https://huggingface.co/ContextualAI/archangel_sft_pythia2-8b
- https://huggingface.co/RichardErkhov/ContextualAI_-_archangel_sft_pythia2-8b-gguf
- https://github.com/ContextualAI/HALOs
description: |
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
This repo contains the model checkpoints for:
- model family pythia2-8b
- optimized with the loss SFT
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
    Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
overrides:
parameters:
model: archangel_sft_pythia2-8b.Q4_K_M.gguf
files:
- filename: archangel_sft_pythia2-8b.Q4_K_M.gguf
sha256: a47782c55ef2b39b19644213720a599d9849511a73c9ebb0c1de749383c0a0f8
uri: huggingface://RichardErkhov/ContextualAI_-_archangel_sft_pythia2-8b-gguf/archangel_sft_pythia2-8b.Q4_K_M.gguf
- &qwen2
## Start QWEN2
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "qwen2-7b-instruct"
license: apache-2.0
description: |
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 7B Qwen2 model.
urls:
- https://huggingface.co/Qwen/Qwen2-7B-Instruct
- https://huggingface.co/bartowski/Qwen2-7B-Instruct-GGUF
tags:
- llm
- gguf
- gpu
- qwen
- cpu
overrides:
parameters:
model: Qwen2-7B-Instruct-Q4_K_M.gguf
files:
- filename: Qwen2-7B-Instruct-Q4_K_M.gguf
sha256: 8d0d33f0d9110a04aad1711b1ca02dafc0fa658cd83028bdfa5eff89c294fe76
uri: huggingface://bartowski/Qwen2-7B-Instruct-GGUF/Qwen2-7B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen2
name: "dolphin-2.9.2-qwen2-72b"
icon: https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png
urls:
- https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-72b-gguf
description: "Dolphin 2.9.2 Qwen2 72B \U0001F42C\n\nCurated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations\n"
overrides:
parameters:
model: dolphin-2.9.2-qwen2-Q4_K_M.gguf
files:
- filename: dolphin-2.9.2-qwen2-Q4_K_M.gguf
sha256: 44a0e82cbc2a201b2f4b9e16099a0a4d97b6f0099d45bcc5b354601f38dbb709
uri: huggingface://cognitivecomputations/dolphin-2.9.2-qwen2-72b-gguf/qwen2-Q4_K_M.gguf
- !!merge <<: *qwen2
name: "dolphin-2.9.2-qwen2-7b"
description: "Dolphin 2.9.2 Qwen2 7B \U0001F42C\n\nCurated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations\n"
urls:
- https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-7b
- https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-7b-gguf
icon: https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png
overrides:
parameters:
model: dolphin-2.9.2-qwen2-7b-Q4_K_M.gguf
files:
- filename: dolphin-2.9.2-qwen2-7b-Q4_K_M.gguf
sha256: a15b5db4df6be4f4bfb3632b2009147332ef4c57875527f246b4718cb0d3af1f
uri: huggingface://cognitivecomputations/dolphin-2.9.2-qwen2-7b-gguf/dolphin-2.9.2-qwen2-7b-Q4_K_M.gguf
- !!merge <<: *qwen2
name: "samantha-qwen-2-7B"
description: |
Samantha based on qwen2
urls:
- https://huggingface.co/bartowski/Samantha-Qwen-2-7B-GGUF
- https://huggingface.co/macadeliccc/Samantha-Qwen2-7B
overrides:
parameters:
model: Samantha-Qwen-2-7B-Q4_K_M.gguf
files:
- filename: Samantha-Qwen-2-7B-Q4_K_M.gguf
sha256: 5d1cf1c35a7a46c536a96ba0417d08b9f9e09c24a4e25976f72ad55d4904f6fe
uri: huggingface://bartowski/Samantha-Qwen-2-7B-GGUF/Samantha-Qwen-2-7B-Q4_K_M.gguf
- !!merge <<: *qwen2
name: "magnum-72b-v1"
icon: https://files.catbox.moe/ngqnb1.png
description: |
This is the first in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of Qwen-2 72B Instruct.
urls:
- https://huggingface.co/alpindale/magnum-72b-v1
- https://huggingface.co/bartowski/magnum-72b-v1-GGUF
overrides:
parameters:
model: magnum-72b-v1-Q4_K_M.gguf
files:
- filename: magnum-72b-v1-Q4_K_M.gguf
sha256: 046ec48665ce64a3a4965509dee2d9d8e5d81cb0b32ca0ddf130d2b59fa4ca9a
uri: huggingface://bartowski/magnum-72b-v1-GGUF/magnum-72b-v1-Q4_K_M.gguf
- !!merge <<: *qwen2
name: "qwen2-1.5b-ita"
description: |
Qwen2 1.5B is a compact language model specifically fine-tuned for the Italian language. Despite its relatively small size of 1.5 billion parameters, Qwen2 1.5B demonstrates strong performance, nearly matching the capabilities of larger models, such as the 9 billion parameter ITALIA model by iGenius. The fine-tuning process focused on optimizing the model for various language tasks in Italian, making it highly efficient and effective for Italian language applications.
urls:
- https://huggingface.co/DeepMount00/Qwen2-1.5B-Ita
- https://huggingface.co/DeepMount00/Qwen2-1.5B-Ita-GGUF
overrides:
parameters:
model: qwen2-1.5b-instruct-q8_0.gguf
files:
- filename: qwen2-1.5b-instruct-q8_0.gguf
sha256: c9d33989d77f4bd6966084332087921b9613eda01d5f44dc0b4e9a7382a2bfbb
uri: huggingface://DeepMount00/Qwen2-1.5B-Ita-GGUF/qwen2-1.5b-instruct-q8_0.gguf
- !!merge <<: *qwen2
name: "einstein-v7-qwen2-7b"
icon: https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/KLQP1jK-DIzpwHzYRIH-Q.png
description: |
This model is a full fine-tuned version of Qwen/Qwen2-7B on diverse datasets.
urls:
- https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B
- https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF
overrides:
parameters:
model: Einstein-v7-Qwen2-7B-Q4_K_M.gguf
files:
- filename: Einstein-v7-Qwen2-7B-Q4_K_M.gguf
sha256: 277b212ea65894723d2b86fb0f689fa5ecb54c9794f0fd2fb643655dc62812ce
uri: huggingface://bartowski/Einstein-v7-Qwen2-7B-GGUF/Einstein-v7-Qwen2-7B-Q4_K_M.gguf
- !!merge <<: *qwen2
name: "arcee-spark"
icon: https://i.ibb.co/80ssNWS/o-Vdk-Qx-ARNmzr-Pi1h-Efj-SA.webp
description: |
Arcee Spark is a powerful 7B parameter language model that punches well above its weight class. Initialized from Qwen2, this model underwent a sophisticated training process:
Fine-tuned on 1.8 million samples
Merged with Qwen2-7B-Instruct using Arcee's mergekit
Further refined using Direct Preference Optimization (DPO)
This meticulous process results in exceptional performance, with Arcee Spark achieving the highest score on MT-Bench for models of its size, outperforming even GPT-3.5 on many tasks.
urls:
- https://huggingface.co/arcee-ai/Arcee-Spark-GGUF
overrides:
parameters:
model: Arcee-Spark-Q4_K_M.gguf
files:
- filename: Arcee-Spark-Q4_K_M.gguf
sha256: 44123276d7845dc13f73ca4aa431dc4c931104eb7d2186f2a73d076fa0ee2330
uri: huggingface://arcee-ai/Arcee-Spark-GGUF/Arcee-Spark-Q4_K_M.gguf
- !!merge <<: *qwen2
name: "hercules-5.0-qwen2-7b"
description: |
    Locutusque/Hercules-5.0-Qwen2-7B is a fine-tuned language model derived from Qwen2-7B. It is specifically designed to excel in instruction following, function calls, and conversational interactions across various scientific and technical domains. This fine-tuning has endowed Hercules-5.0 with enhanced abilities in:
Complex Instruction Following: Understanding and accurately executing multi-step instructions, even those involving specialized terminology.
Function Calling: Seamlessly interpreting and executing function calls, providing appropriate input and output values.
Domain-Specific Knowledge: Engaging in informative and educational conversations about Biology, Chemistry, Physics, Mathematics, Medicine, Computer Science, and more.
urls:
- https://huggingface.co/Locutusque/Hercules-5.0-Qwen2-7B
- https://huggingface.co/bartowski/Hercules-5.0-Qwen2-7B-GGUF
overrides:
parameters:
model: Hercules-5.0-Qwen2-7B-Q4_K_M.gguf
files:
- filename: Hercules-5.0-Qwen2-7B-Q4_K_M.gguf
sha256: 8ebae4ffd43b906ddb938c3a611060ee5f99c35014e5ffe23ca35714361b5693
      uri: huggingface://bartowski/Hercules-5.0-Qwen2-7B-GGUF/Hercules-5.0-Qwen2-7B-Q4_K_M.gguf
- !!merge <<: *qwen2
name: "arcee-agent"
icon: https://i.ibb.co/CBHmTDn/136719a5-6d8a-4654-a618-46eabc788953.jpg
description: |
Arcee Agent is a cutting-edge 7B parameter language model specifically designed for function calling and tool use. Initialized from Qwen2-7B, it rivals the performance of much larger models while maintaining efficiency and speed. This model is particularly suited for developers, researchers, and businesses looking to implement sophisticated AI-driven solutions without the computational overhead of larger language models. Compute for training Arcee-Agent was provided by CrusoeAI. Arcee-Agent was trained using Spectrum.
urls:
- https://huggingface.co/crusoeai/Arcee-Agent-GGUF
- https://huggingface.co/arcee-ai/Arcee-Agent
overrides:
parameters:
model: arcee-agent.Q4_K_M.gguf
files:
- filename: arcee-agent.Q4_K_M.gguf
sha256: ebb49943a66c1e717f9399a555aee0af28a40bfac7500f2ad8dd05f211b62aac
uri: huggingface://crusoeai/Arcee-Agent-GGUF/arcee-agent.Q4_K_M.gguf
- !!merge <<: *qwen2
name: "qwen2-7b-instruct-v0.8"
icon: https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.8/resolve/main/qwen2-fine-tunes-maziyar-panahi.webp
description: |
MaziyarPanahi/Qwen2-7B-Instruct-v0.8
This is a fine-tuned version of the Qwen/Qwen2-7B model. It aims to improve the base model across all benchmarks.
urls:
- https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.8
- https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.8-GGUF
overrides:
parameters:
model: Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
files:
- filename: Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
sha256: 8c1b3efe9fa6ae1b37942ef26473cb4e0aed0f8038b60d4b61e5bffb61e49b7e
uri: huggingface://MaziyarPanahi/Qwen2-7B-Instruct-v0.8-GGUF/Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
- !!merge <<: *qwen2
name: "qwen2-wukong-7b"
icon: https://cdn-uploads.huggingface.co/production/uploads/655dc641accde1bbc8b41aec/xOe1Nb3S9Nb53us7_Ja3s.jpeg
urls:
- https://huggingface.co/bartowski/Qwen2-Wukong-7B-GGUF
description: |
Qwen2-Wukong-7B is a dealigned chat finetune of the original fantastic Qwen2-7B model by the Qwen team.
This model was trained on the teknium OpenHeremes-2.5 dataset and some supplementary datasets from Cognitive Computations
This model was trained for 3 epochs with a custom FA2 implementation for AMD cards.
overrides:
parameters:
model: Qwen2-Wukong-7B-Q4_K_M.gguf
files:
- filename: Qwen2-Wukong-7B-Q4_K_M.gguf
sha256: 6b8ca6649c33fc84d4892ebcff1214f0b34697aced784f0d6d32e284a15943ad
uri: huggingface://bartowski/Qwen2-Wukong-7B-GGUF/Qwen2-Wukong-7B-Q4_K_M.gguf
- !!merge <<: *qwen2
name: "calme-2.8-qwen2-7b"
icon: https://huggingface.co/MaziyarPanahi/calme-2.8-qwen2-7b/resolve/main/qwen2-fine-tunes-maziyar-panahi.webp
urls:
- https://huggingface.co/MaziyarPanahi/calme-2.8-qwen2-7b
- https://huggingface.co/MaziyarPanahi/calme-2.8-qwen2-7b-GGUF
description: |
This is a fine-tuned version of the Qwen/Qwen2-7B model. It aims to improve the base model across all benchmarks.
overrides:
parameters:
model: Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
files:
- filename: Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
sha256: 8c1b3efe9fa6ae1b37942ef26473cb4e0aed0f8038b60d4b61e5bffb61e49b7e
uri: huggingface://MaziyarPanahi/calme-2.8-qwen2-7b-GGUF/Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
- !!merge <<: *qwen2
name: "stellardong-72b-i1"
icon: https://huggingface.co/smelborp/StellarDong-72b/resolve/main/stellardong.png
urls:
- https://huggingface.co/smelborp/StellarDong-72b
- https://huggingface.co/mradermacher/StellarDong-72b-i1-GGUF
description: |
Magnum + Nova = you won't believe how stellar this dong is!!
overrides:
parameters:
model: StellarDong-72b.i1-Q4_K_M.gguf
files:
- filename: StellarDong-72b.i1-Q4_K_M.gguf
sha256: 4c5012f0a034f40a044904891343ade2594f29c28a8a9d8052916de4dc5a61df
uri: huggingface://mradermacher/StellarDong-72b-i1-GGUF/StellarDong-72b.i1-Q4_K_M.gguf
- !!merge <<: *qwen2
name: "magnum-32b-v1-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/635567189c72a7e742f1419c/PK7xRSd18Du0bX-w_t-9c.png
urls:
- https://huggingface.co/anthracite-org/magnum-32b-v1
- https://huggingface.co/mradermacher/magnum-32b-v1-i1-GGUF
description: |
This is the second in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of Qwen1.5 32B.
overrides:
parameters:
model: magnum-32b-v1.i1-Q4_K_M.gguf
files:
- filename: magnum-32b-v1.i1-Q4_K_M.gguf
sha256: a31704ce0d7e5b774f155522b9ab7ef6015a4ece4e9056bf4dfc6cac561ff0a3
uri: huggingface://mradermacher/magnum-32b-v1-i1-GGUF/magnum-32b-v1.i1-Q4_K_M.gguf
- !!merge <<: *qwen2
name: "tifa-7b-qwen2-v0.1"
urls:
- https://huggingface.co/Tifa-RP/Tifa-7B-Qwen2-v0.1-GGUF
description: |
    The Tifa role-playing language model is a high-performance model distilled from a self-developed 220B model onto a qwen2-7B base. It has been converted to gguf format for running in the Ollama framework, providing excellent dialogue and text generation capabilities.
    The original model was trained on a large-scale industrial dataset and then fine-tuned with 400GB of novel data and 20GB of multi-round dialogue instruction data to achieve good role-playing effects.
The Tifa model is suitable for multi-round dialogue processing, role-playing and scenario simulation, EFX industrial knowledge integration, and high-quality literary creation.
Note: The Tifa model is in Chinese and English, with 7.6% of the data in Chinese role-playing and 4.2% in English role-playing. The model has been trained with a mix of EFX industrial field parameters and question-answer dialogues generated from 220B model outputs since 2023. The recommended quantization method is f16, as it retains more detail and accuracy in the model's performance.
overrides:
parameters:
model: tifa-7b-qwen2-v0.1.q4_k_m.gguf
files:
- filename: tifa-7b-qwen2-v0.1.q4_k_m.gguf
sha256: 1f5adbe8cb0a6400f51abdca3bf4e32284ebff73cc681a43abb35c0a6ccd3820
uri: huggingface://Tifa-RP/Tifa-7B-Qwen2-v0.1-GGUF/tifa-7b-qwen2-v0.1.q4_k_m.gguf
- !!merge <<: *qwen2
name: "calme-2.2-qwen2-72b"
icon: https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2-72b/resolve/main/calme-2.webp
urls:
- https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2-72b-GGUF
- https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2-72b
description: |
This model is a fine-tuned version of the powerful Qwen/Qwen2-72B-Instruct, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
The post-training process is identical to the calme-2.1-qwen2-72b model; however, some parameters are different, and it was trained for a longer period.
Use Cases
This model is suitable for a wide range of applications, including but not limited to:
Advanced question-answering systems
Intelligent chatbots and virtual assistants
Content generation and summarization
Code generation and analysis
Complex problem-solving and decision support
overrides:
parameters:
model: calme-2.2-qwen2-72b.Q4_K_M.gguf
files:
- filename: calme-2.2-qwen2-72b.Q4_K_M.gguf
sha256: 95b9613df0abe6c1b6b7b017d7cc8bcf19b46c29f92a503dcc6da1704b12b402
uri: huggingface://MaziyarPanahi/calme-2.2-qwen2-72b-GGUF/calme-2.2-qwen2-72b.Q4_K_M.gguf
- !!merge <<: *qwen2
name: "edgerunner-tactical-7b"
icon: https://cdn-uploads.huggingface.co/production/uploads/668ed3dcd857a9ca47edb75c/tSyuw39VtmEqvC_wptTDf.png
urls:
- https://huggingface.co/edgerunner-ai/EdgeRunner-Tactical-7B
- https://huggingface.co/RichardErkhov/edgerunner-ai_-_EdgeRunner-Tactical-7B-gguf
description: |
EdgeRunner-Tactical-7B is a powerful and efficient language model for the edge. Our mission is to build Generative AI for the edge that is safe, secure, and transparent. To that end, the EdgeRunner team is proud to release EdgeRunner-Tactical-7B, the most powerful language model for its size to date.
EdgeRunner-Tactical-7B is a 7 billion parameter language model that delivers powerful performance while demonstrating the potential of running state-of-the-art (SOTA) models at the edge.
overrides:
parameters:
model: EdgeRunner-Tactical-7B.Q4_K_M.gguf
files:
- filename: EdgeRunner-Tactical-7B.Q4_K_M.gguf
sha256: 90ca9c3ab19e5d1de4499e3f988cc0ba3d205e50285d7c89de6f0a4c525bf204
uri: huggingface://RichardErkhov/edgerunner-ai_-_EdgeRunner-Tactical-7B-gguf/EdgeRunner-Tactical-7B.Q4_K_M.gguf
- !!merge <<: *qwen2
name: "marco-o1"
icon: https://huggingface.co/AIDC-AI/Marco-o1/resolve/main/assets/logo.png
urls:
- https://huggingface.co/AIDC-AI/Marco-o1
- https://huggingface.co/QuantFactory/Marco-o1-GGUF
description: |
Marco-o1 not only focuses on disciplines with standard answers, such as mathematics, physics, and coding—which are well-suited for reinforcement learning (RL)—but also places greater emphasis on open-ended resolutions. We aim to address the question: "Can the o1 model effectively generalize to broader domains where clear standards are absent and rewards are challenging to quantify?"
overrides:
parameters:
model: Marco-o1.Q4_K_M.gguf
files:
- filename: Marco-o1.Q4_K_M.gguf
sha256: 54dd9554cb54609bf0bf4b367dfba192fc982a2fc6b87a0f56fba5ea82762d0d
uri: huggingface://QuantFactory/Marco-o1-GGUF/Marco-o1.Q4_K_M.gguf
- &mistral03
## START Mistral
url: "github:mudler/LocalAI/gallery/mistral-0.3.yaml@master"
name: "mistral-7b-instruct-v0.3"
icon: https://cdn-avatars.huggingface.co/v1/production/uploads/62dac1c7a8ead43d20e3e17a/wrLf5yaGC6ng4XME70w6Z.png
license: apache-2.0
description: |
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
    Mistral-7B-v0.3 has the following changes compared to Mistral-7B-v0.2:
Extended vocabulary to 32768
Supports v3 Tokenizer
Supports function calling
urls:
- https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
- https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF
tags:
- llm
- gguf
- gpu
- mistral
- cpu
- function-calling
overrides:
parameters:
model: Mistral-7B-Instruct-v0.3.Q4_K_M.gguf
files:
- filename: "Mistral-7B-Instruct-v0.3.Q4_K_M.gguf"
sha256: "14850c84ff9f06e9b51d505d64815d5cc0cea0257380353ac0b3d21b21f6e024"
uri: "huggingface://MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF/Mistral-7B-Instruct-v0.3.Q4_K_M.gguf"
- !!merge <<: *mistral03
name: "mathstral-7b-v0.1-imat"
url: "github:mudler/LocalAI/gallery/mathstral.yaml@master"
urls:
- https://huggingface.co/mistralai/mathstral-7B-v0.1
- https://huggingface.co/InferenceIllusionist/mathstral-7B-v0.1-iMat-GGUF
description: |
Mathstral 7B is a model specializing in mathematical and scientific tasks, based on Mistral 7B. You can read more in the official blog post https://mistral.ai/news/mathstral/.
overrides:
parameters:
model: mathstral-7B-v0.1-iMat-Q4_K_M.gguf
files:
- filename: mathstral-7B-v0.1-iMat-Q4_K_M.gguf
sha256: 3ba94b7a8283ffa319c9ce23657f91ecf221ceada167c1253906cf56d72e8f90
uri: huggingface://InferenceIllusionist/mathstral-7B-v0.1-iMat-GGUF/mathstral-7B-v0.1-iMat-Q4_K_M.gguf
- !!merge <<: *mistral03
name: "mahou-1.3d-mistral-7b-i1"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png
urls:
- https://huggingface.co/flammenai/Mahou-1.3d-mistral-7B
- https://huggingface.co/mradermacher/Mahou-1.3d-mistral-7B-i1-GGUF
description: |
Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay.
overrides:
parameters:
model: Mahou-1.3d-mistral-7B.i1-Q4_K_M.gguf
files:
- filename: Mahou-1.3d-mistral-7B.i1-Q4_K_M.gguf
sha256: 8272f050e36d612ab282e095cb4e775e2c818e7096f8d522314d256923ef6da9
uri: huggingface://mradermacher/Mahou-1.3d-mistral-7B-i1-GGUF/Mahou-1.3d-mistral-7B.i1-Q4_K_M.gguf
- name: "einstein-v4-7b"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/U0zyXVGj-O8a7KP3BvPue.png
urls:
- https://huggingface.co/Weyaxi/Einstein-v4-7B
- https://huggingface.co/mradermacher/Einstein-v4-7B-GGUF
tags:
- llm
- gguf
- gpu
- mistral
- cpu
description: "\U0001F52C Einstein-v4-7B\n\nThis model is a full fine-tuned version of mistralai/Mistral-7B-v0.1 on diverse datasets.\n\nThis model is finetuned using 7xRTX3090 + 1xRTXA6000 using axolotl.\n"
overrides:
parameters:
model: Einstein-v4-7B.Q4_K_M.gguf
files:
- filename: Einstein-v4-7B.Q4_K_M.gguf
sha256: 78bd573de2a9eb3c6e213132858164e821145f374fcaa4b19dfd6502c05d990d
uri: huggingface://mradermacher/Einstein-v4-7B-GGUF/Einstein-v4-7B.Q4_K_M.gguf
- !!merge <<: *mistral03
name: "mistral-nemo-instruct-2407"
urls:
- https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407
- https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF
- https://mistral.ai/news/mistral-nemo/
description: |
The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-Nemo-Base-2407. Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models smaller or similar in size.
overrides:
parameters:
model: Mistral-Nemo-Instruct-2407-Q4_K_M.gguf
files:
- filename: Mistral-Nemo-Instruct-2407-Q4_K_M.gguf
uri: huggingface://bartowski/Mistral-Nemo-Instruct-2407-GGUF/Mistral-Nemo-Instruct-2407-Q4_K_M.gguf
sha256: 7c1a10d202d8788dbe5628dc962254d10654c853cae6aaeca0618f05490d4a46
- !!merge <<: *mistral03
name: "lumimaid-v0.2-12b"
icon: https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ep3ojmuMkFS-GmgRuI9iB.png
urls:
- https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B
- https://huggingface.co/mudler/Lumimaid-v0.2-12B-Q4_K_M-GGUF
description: |
This model is based on: Mistral-Nemo-Instruct-2407
Wandb: https://wandb.ai/undis95/Lumi-Mistral-Nemo?nw=nwuserundis95
NOTE: As explained on Mistral-Nemo-Instruct-2407 repo, it's recommended to use a low temperature, please experiment!
Lumimaid 0.1 -> 0.2 is a HUGE step up dataset wise.
As some people have told us our models are sloppy, Ikari decided to say fuck it and literally nuke all chats out with most slop.
Our dataset stayed the same since day one; we added data over time, cleaned it, and repeated. After not releasing a model for a while because we were never satisfied, we think it's time to come back!
overrides:
parameters:
model: lumimaid-v0.2-12b-q4_k_m.gguf
files:
- filename: lumimaid-v0.2-12b-q4_k_m.gguf
sha256: f72299858a07e52be920b86d42ddcfcd5008b961d601ef6fd6a98a3377adccbf
uri: huggingface://mudler/Lumimaid-v0.2-12B-Q4_K_M-GGUF/lumimaid-v0.2-12b-q4_k_m.gguf
- !!merge <<: *mistral03
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "mn-12b-celeste-v1.9"
icon: https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/QcU3xEgVu18jeFtMFxIw-.webp
urls:
- https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9
- https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-GGUF
description: |
Mistral Nemo 12B Celeste V1.9
This is a story writing and roleplaying model trained on Mistral NeMo 12B Instruct at 8K context using Reddit Writing Prompts, Kalo's Opus 25K Instruct and c2 logs cleaned
This version has improved NSFW, smarter and more active narration. It's also trained with ChatML tokens so there should be no EOS bleeding whatsoever.
overrides:
parameters:
model: MN-12B-Celeste-V1.9.Q4_K_M.gguf
files:
- filename: MN-12B-Celeste-V1.9.Q4_K_M.gguf
sha256: 019daeaa63d82d55d1ea623b9c255deea6793af4044bb4994d2b4d09e8959f7b
uri: huggingface://mradermacher/MN-12B-Celeste-V1.9-GGUF/MN-12B-Celeste-V1.9.Q4_K_M.gguf
- !!merge <<: *mistral03
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/ybqwvRJAtBPqtulQlKW93.gif
name: "rocinante-12b-v1.1"
urls:
- https://huggingface.co/TheDrummer/Rocinante-12B-v1.1-GGUF
- https://huggingface.co/TheDrummer/Rocinante-12B-v1.1
description: |
A versatile workhorse for any adventure!
overrides:
parameters:
model: Rocinante-12B-v1.1-Q4_K_M.gguf
files:
- filename: Rocinante-12B-v1.1-Q4_K_M.gguf
sha256: bdeaeefac79cff944ae673e6924c9f82f7eed789669a32a09997db398790b0b5
uri: huggingface://TheDrummer/Rocinante-12B-v1.1-GGUF/Rocinante-12B-v1.1-Q4_K_M.gguf
- !!merge <<: *mistral03
name: "pantheon-rp-1.6-12b-nemo"
icon: https://huggingface.co/Gryphe/Pantheon-RP-1.6-12b-Nemo/resolve/main/Pantheon.png
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
urls:
- https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF
- https://huggingface.co/Gryphe/Pantheon-RP-1.6-12b-Nemo
description: |
Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of personas that can be summoned with a simple activation phrase. The huge variety in personalities introduced also serves to enhance the general roleplay experience.
Changes in version 1.6:
The final finetune now consists of data that is equally split between Markdown and novel-style roleplay. This should solve Pantheon's greatest weakness.
The base was redone. (Details below)
Select Claude-specific phrases were rewritten, boosting variety in the model's responses.
Aiva no longer serves as both persona and assistant, with the assistant role having been given to Lyra.
Stella's dialogue received some post-fix alterations since the model really loved the phrase "Fuck me sideways".
Your user feedback is critical to me so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.
overrides:
parameters:
model: Pantheon-RP-1.6-12b-Nemo-Q4_K_M.gguf
files:
- filename: Pantheon-RP-1.6-12b-Nemo-Q4_K_M.gguf
sha256: cf3465c183bf4ecbccd1b6b480f687e0160475b04c87e2f1e5ebc8baa0f4c7aa
uri: huggingface://bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/Pantheon-RP-1.6-12b-Nemo-Q4_K_M.gguf
- !!merge <<: *mistral03
name: "acolyte-22b-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/6569a4ed2419be6072890cf8/3dcGMcrWK2-2vQh9QBt3o.png
urls:
- https://huggingface.co/rAIfle/Acolyte-22B
- https://huggingface.co/mradermacher/Acolyte-22B-i1-GGUF
description: |
LoRA of a bunch of random datasets on top of Mistral-Small-Instruct-2409, then SLERPed onto base at 0.5. Decent enough for its size. Check the LoRA for dataset info.
overrides:
parameters:
model: Acolyte-22B.i1-Q4_K_M.gguf
files:
- filename: Acolyte-22B.i1-Q4_K_M.gguf
sha256: 5a454405b98b6f886e8e4c695488d8ea098162bb8c46f2a7723fc2553c6e2f6e
uri: huggingface://mradermacher/Acolyte-22B-i1-GGUF/Acolyte-22B.i1-Q4_K_M.gguf
- !!merge <<: *mistral03
name: "mn-12b-lyra-v4-iq-imatrix"
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/dVoru83WOpwVjMlgZ_xhA.png
# chatml
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
urls:
- https://huggingface.co/Lewdiculous/MN-12B-Lyra-v4-GGUF-IQ-Imatrix
description: |
A finetune of Mistral Nemo by Sao10K.
Uses the ChatML prompt format.
overrides:
parameters:
model: MN-12B-Lyra-v4-Q4_K_M-imat.gguf
files:
- filename: MN-12B-Lyra-v4-Q4_K_M-imat.gguf
sha256: 1989123481ca1936c8a2cbe278ff5d1d2b0ae63dbdc838bb36a6d7547b8087b3
uri: huggingface://Lewdiculous/MN-12B-Lyra-v4-GGUF-IQ-Imatrix/MN-12B-Lyra-v4-Q4_K_M-imat.gguf
- !!merge <<: *mistral03
name: "magnusintellectus-12b-v1-i1"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/66b564058d9afb7a9d5607d5/hUVJI1Qa4tCMrZWMgYkoD.png
urls:
- https://huggingface.co/GalrionSoftworks/MagnusIntellectus-12B-v1
- https://huggingface.co/mradermacher/MagnusIntellectus-12B-v1-i1-GGUF
description: |
How pleasant, the rocks appear to have made a decent conglomerate. A-.
MagnusIntellectus is a merge of the following models using LazyMergekit:
UsernameJustAnother/Nemo-12B-Marlin-v5
anthracite-org/magnum-12b-v2
overrides:
parameters:
model: MagnusIntellectus-12B-v1.i1-Q4_K_M.gguf
files:
- filename: MagnusIntellectus-12B-v1.i1-Q4_K_M.gguf
sha256: c97107983b4edc5b6f2a592d227ca2dd4196e2af3d3bc0fe6b7a8954a1fb5870
uri: huggingface://mradermacher/MagnusIntellectus-12B-v1-i1-GGUF/MagnusIntellectus-12B-v1.i1-Q4_K_M.gguf
- !!merge <<: *mistral03
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "mn-backyardai-party-12b-v1-iq-arm-imatrix"
icon: https://huggingface.co/Sao10K/MN-BackyardAI-Party-12B-v1/resolve/main/party1.png
urls:
- https://huggingface.co/Sao10K/MN-BackyardAI-Party-12B-v1
- https://huggingface.co/Lewdiculous/MN-BackyardAI-Party-12B-v1-GGUF-IQ-ARM-Imatrix
description: |
This is a group-chat based roleplaying model, based off of 12B-Lyra-v4a2, a variant of Lyra-v4 that is currently private.
It is trained on an entirely human-based dataset, based on forum / internet group roleplaying styles. The only augmentation done with LLMs is to the character sheets, to fit the system prompt and to fit various character sheets within context.
This model is still capable of 1 on 1 roleplay, though I recommend using ChatML when doing that instead.
overrides:
parameters:
model: MN-BackyardAI-Party-12B-v1-Q4_K_M-imat.gguf
files:
- filename: MN-BackyardAI-Party-12B-v1-Q4_K_M-imat.gguf
sha256: cea68768dff58b553974b755bb40ef790ab8b86866d9b5c46bc2e6c3311b876a
uri: huggingface://Lewdiculous/MN-BackyardAI-Party-12B-v1-GGUF-IQ-ARM-Imatrix/MN-BackyardAI-Party-12B-v1-Q4_K_M-imat.gguf
- !!merge <<: *mistral03
name: "ml-ms-etheris-123b"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/ieEjL3TxpDM3WAZQcya6E.png
urls:
- https://huggingface.co/Steelskull/ML-MS-Etheris-123B
- https://huggingface.co/mradermacher/ML-MS-Etheris-123B-GGUF
description: |
This model merges the robust storytelling of multiple models while attempting to maintain intelligence. The final model was merged after Model Soup with DELLA to add some special sauce.
- model: NeverSleep/Lumimaid-v0.2-123B
- model: TheDrummer/Behemoth-123B-v1
- model: migtissera/Tess-3-Mistral-Large-2-123B
- model: anthracite-org/magnum-v2-123b
Use Mistral, ChatML, or Meth Format
overrides:
parameters:
model: ML-MS-Etheris-123B.Q2_K.gguf
files:
- filename: ML-MS-Etheris-123B.Q2_K.gguf
sha256: a17c5615413b5c9c8d01cf55386573d0acd00e01f6e2bcdf492624c73c593fc3
uri: huggingface://mradermacher/ML-MS-Etheris-123B-GGUF/ML-MS-Etheris-123B.Q2_K.gguf
- !!merge <<: *mistral03
name: "mn-lulanum-12b-fix-i1"
urls:
- https://huggingface.co/djuna/MN-Lulanum-12B-FIX
- https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF
description: |
This model was merged using the della_linear merge method using unsloth/Mistral-Nemo-Base-2407 as a base.
The following models were included in the merge:
VAGOsolutions/SauerkrautLM-Nemo-12b-Instruct
anthracite-org/magnum-v2.5-12b-kto
Undi95/LocalC-12B-e2.0
NeverSleep/Lumimaid-v0.2-12B
overrides:
parameters:
model: MN-Lulanum-12B-FIX.i1-Q4_K_M.gguf
files:
- filename: MN-Lulanum-12B-FIX.i1-Q4_K_M.gguf
sha256: 7e24d57249059d45bb508565ec3055e585a4e658c1815c67ea92397acc6aa775
uri: huggingface://mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/MN-Lulanum-12B-FIX.i1-Q4_K_M.gguf
- !!merge <<: *mistral03
name: "tor-8b"
icon: https://huggingface.co/Delta-Vector/Tor-8B/resolve/main/FinalTor8B.jpg
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
urls:
- https://huggingface.co/QuantFactory/Tor-8B-GGUF
description: |
An earlier checkpoint of Darkens-8B using the same configuration, which I felt was different enough from its 4-epoch cousin to release. Finetuned on top of the Prune/Distill NeMo 8B done by Nvidia, this model aims to have generally good prose and writing while not falling into Claude-isms.
overrides:
parameters:
model: Tor-8B.Q4_K_M.gguf
files:
- filename: Tor-8B.Q4_K_M.gguf
sha256: 9dd64bd886aa7682b6179340449b38feda405b44722ef7ac752cedb807af370e
uri: huggingface://QuantFactory/Tor-8B-GGUF/Tor-8B.Q4_K_M.gguf
- !!merge <<: *mistral03
name: "darkens-8b"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
urls:
- https://huggingface.co/Delta-Vector/Darkens-8B
- https://huggingface.co/QuantFactory/Darkens-8B-GGUF
description: |
This is the fully cooked, 4-epoch version of Tor-8B. It is an experimental version; despite being trained for 4 epochs, the model feels fresh and new and is not overfit. This model aims to have generally good prose and writing while not falling into Claude-isms, and it follows the actions "dialogue" format heavily.
overrides:
parameters:
model: Darkens-8B.Q4_K_M.gguf
files:
- filename: Darkens-8B.Q4_K_M.gguf
sha256: f56a483e10fd00957460adfc16ee462cecac892a4fb44dc59e466e68a360fd42
uri: huggingface://QuantFactory/Darkens-8B-GGUF/Darkens-8B.Q4_K_M.gguf
- !!merge <<: *mistral03
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "starcannon-unleashed-12b-v1.0"
icon: https://cdn-uploads.huggingface.co/production/uploads/6720ed503a24966ac66495e8/HXc0AxPLkoIC1fy0Pb3Pb.png
urls:
- https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0
- https://huggingface.co/QuantFactory/Starcannon-Unleashed-12B-v1.0-GGUF
description: |
This is a merge of pre-trained language models created using mergekit.
MarinaraSpaghetti_NemoMix-Unleashed-12B
Nothingiisreal_MN-12B-Starcannon-v3
overrides:
parameters:
model: Starcannon-Unleashed-12B-v1.0.Q4_K_M.gguf
files:
- filename: Starcannon-Unleashed-12B-v1.0.Q4_K_M.gguf
sha256: b32c6582d75d2f1d67d567badc691a1338dd1a016c71efbfaf4c91812f398f0e
uri: huggingface://QuantFactory/Starcannon-Unleashed-12B-v1.0-GGUF/Starcannon-Unleashed-12B-v1.0.Q4_K_M.gguf
- !!merge <<: *mistral03
icon: https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/CATNxzDDJL6xHR4tc4IMf.jpeg
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "valor-7b-v0.1"
urls:
- https://huggingface.co/NeuralNovel/Valor-7B-v0.1
- https://huggingface.co/mradermacher/Valor-7B-v0.1-GGUF
description: |
Valor speaks louder than words.
This is a qlora finetune of blockchainlabs_7B_merged_test2_4 using the Neural-Story-v0.1 dataset, with the intention of increasing creativity and writing ability.
overrides:
parameters:
model: Valor-7B-v0.1.Q4_K_M.gguf
files:
- filename: Valor-7B-v0.1.Q4_K_M.gguf
sha256: 2b695fe53d64b36c3eea68f1fa0809f30560aa97ce8b71c16f371c2dc262d9b8
uri: huggingface://mradermacher/Valor-7B-v0.1-GGUF/Valor-7B-v0.1.Q4_K_M.gguf
- !!merge <<: *mistral03
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "mn-tiramisu-12b"
icon: https://huggingface.co/matchaaaaa/MN-Tiramisu-12B/resolve/main/tiramisu-cute.png
urls:
- https://huggingface.co/matchaaaaa/MN-Tiramisu-12B
- https://huggingface.co/MaziyarPanahi/MN-Tiramisu-12B-GGUF
description: |
This is a really yappity-yappy yapping model that's good for long-form RP. Tried to rein it in with Mahou and give it some more character understanding with Pantheon. Feedback is always welcome.
overrides:
parameters:
model: MN-Tiramisu-12B.Q5_K_M.gguf
files:
- filename: MN-Tiramisu-12B.Q5_K_M.gguf
sha256: 100c78b08a0f4fc5a5a65797e1498ff5fd6fc9daf96b0898d2de731c35fa4e3e
uri: huggingface://MaziyarPanahi/MN-Tiramisu-12B-GGUF/MN-Tiramisu-12B.Q5_K_M.gguf
- !!merge <<: *mistral03
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "mistral-nemo-prism-12b"
icon: https://huggingface.co/nbeerbower/Mistral-Nemo-Prism-12B/resolve/main/prism-cover.png
urls:
- https://huggingface.co/nbeerbower/Mistral-Nemo-Prism-12B
- https://huggingface.co/bartowski/Mistral-Nemo-Prism-12B-GGUF
description: |
Mahou-1.5-mistral-nemo-12B-lorablated finetuned on Arkhaios-DPO and Purpura-DPO.
The goal was to reduce archaic language and purple prose in a completely uncensored model.
overrides:
parameters:
model: Mistral-Nemo-Prism-12B-Q4_K_M.gguf
files:
- filename: Mistral-Nemo-Prism-12B-Q4_K_M.gguf
sha256: 96b922c6d55d94ffb91e869b8cccaf2b6dc449d75b1456f4d4578c92c8184c25
uri: huggingface://bartowski/Mistral-Nemo-Prism-12B-GGUF/Mistral-Nemo-Prism-12B-Q4_K_M.gguf
- !!merge <<: *mistral03
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "magnum-12b-v2.5-kto-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/sWYs3iHkn36lw6FT_Y7nn.png
urls:
- https://huggingface.co/mradermacher/magnum-12b-v2.5-kto-i1-GGUF
description: |
v2.5 KTO is an experimental release; we are testing a hybrid reinforcement learning strategy of KTO + DPOP, using rejected data sampled from the original model as "rejected". For "chosen", we use data from the original finetuning dataset. This was done on a limited portion of primarily instruction-following data; we plan to scale up a larger KTO dataset in the future for better generalization. This is the 5th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of anthracite-org/magnum-12b-v2.
overrides:
parameters:
model: magnum-12b-v2.5-kto.i1-Q4_K_M.gguf
files:
- filename: magnum-12b-v2.5-kto.i1-Q4_K_M.gguf
sha256: 07e91d2c6d4e42312e65a69c54f16be467575f7a596fe052993b388e38b90d76
uri: huggingface://mradermacher/magnum-12b-v2.5-kto-i1-GGUF/magnum-12b-v2.5-kto.i1-Q4_K_M.gguf
- !!merge <<: *mistral03
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "chatty-harry_v3.0"
icon: https://cdn-uploads.huggingface.co/production/uploads/66c1cc08453a7ef6c5fe657a/0KzNTEtn2kJJQsw4lQeY0.png
urls:
- https://huggingface.co/Triangle104/Chatty-Harry_V3.0
- https://huggingface.co/QuantFactory/Chatty-Harry_V3.0-GGUF
description: |
This model was merged using the TIES merge method using Triangle104/ChatWaifu_Magnum_V0.2 as a base.
The following models were included in the merge: elinas/Chronos-Gold-12B-1.0
overrides:
parameters:
model: Chatty-Harry_V3.0.Q4_K_M.gguf
files:
- filename: Chatty-Harry_V3.0.Q4_K_M.gguf
sha256: 54b63bb74498576ca77b801ed096657a93cc2f6b71d707c3605fdb394bd3e622
uri: huggingface://QuantFactory/Chatty-Harry_V3.0-GGUF/Chatty-Harry_V3.0.Q4_K_M.gguf
- !!merge <<: *mistral03
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "mn-chunky-lotus-12b"
icon: https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B/resolve/main/chunky-lotus.jpg
urls:
- https://huggingface.co/QuantFactory/MN-Chunky-Lotus-12B-GGUF
description: |
I had originally planned to use this model for future/further merges, but decided to go ahead and release it since it scored rather high on my local EQ Bench testing (79.58 w/ 100% parsed @ 8-bit).
Bear in mind that most models tend to score a bit higher on my own local tests as compared to their posted scores. Still, it's the highest score I've personally seen from all the models I've tested.
It's a decent model, with great emotional intelligence and acceptable adherence to various character personalities. It does a good job at roleplaying despite being a bit bland at times.
Overall, I like the way it writes, but it has a few formatting issues that show up from time to time, and it has an uncommon tendency to paste walls of character feelings/intentions at the end of some outputs without any prompting. This is something I hope to correct with future iterations.
This is a merge of pre-trained language models created using mergekit.
The following models were included in the merge:
Epiculous/Violet_Twilight-v0.2
nbeerbower/mistral-nemo-gutenberg-12B-v4
flammenai/Mahou-1.5-mistral-nemo-12B
overrides:
parameters:
model: MN-Chunky-Lotus-12B.Q4_K_M.gguf
files:
- filename: MN-Chunky-Lotus-12B.Q4_K_M.gguf
sha256: 363defe0a769fdb715dab75517966a0a80bcdd981a610d4c759099b6c8ff143a
uri: huggingface://QuantFactory/MN-Chunky-Lotus-12B-GGUF/MN-Chunky-Lotus-12B.Q4_K_M.gguf
- !!merge <<: *mistral03
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "chronos-gold-12b-1.0"
icon: https://cdn-uploads.huggingface.co/production/uploads/630417380907b9a115c6aa9f/3hc8zt8fzKdO3qHK1p1mW.webp
urls:
- https://huggingface.co/elinas/Chronos-Gold-12B-1.0
- https://huggingface.co/mradermacher/Chronos-Gold-12B-1.0-GGUF
description: |
Chronos Gold 12B 1.0 is a very unique model that applies to domain areas such as general chatbot functionality, roleplay, and storywriting. The model has been observed to write up to 2250 tokens in a single sequence. The model was trained at a sequence length of 16384 (16k) and will still retain the apparent 128k context length from Mistral-Nemo, though it deteriorates over time like regular Nemo does, based on the RULER test.
As a result, it is recommended to keep your maximum sequence length at 16384, or you will experience performance degradation.
The base model is mistralai/Mistral-Nemo-Base-2407, which was heavily modified to produce a more coherent model, comparable to much larger models.
Chronos Gold 12B-1.0 re-creates the uniqueness of the original Chronos with significantly enhanced prompt adherence (following), coherence, and a modern dataset, as well as supporting a majority of "character card" formats in applications like SillyTavern.
It went through an iterative and objective merge process like my previous models and was further finetuned on a dataset curated for it.
The specifics of the model will not be disclosed at this time due to dataset ownership.
overrides:
parameters:
model: Chronos-Gold-12B-1.0.Q4_K_M.gguf
files:
- filename: Chronos-Gold-12B-1.0.Q4_K_M.gguf
sha256: d75a6ed28781f0ea6fa6e58c0b25dfecdd160d4cab64aaf511ea156e99a1e1f3
uri: huggingface://mradermacher/Chronos-Gold-12B-1.0-GGUF/Chronos-Gold-12B-1.0.Q4_K_M.gguf
- &mudler
### START mudler's LocalAI specific-models
url: "github:mudler/LocalAI/gallery/mudler.yaml@master"
name: "LocalAI-llama3-8b-function-call-v0.2"
icon: "https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/us5JKi9z046p8K-cn_M0w.webp"
license: llama3
description: |
This model is a fine-tune on a custom dataset + glaive, built specifically to leverage all the LocalAI constrained-grammar features.
Specifically, once the model enters tools mode, it will always reply with JSON.
urls:
- https://huggingface.co/mudler/LocalAI-Llama3-8b-Function-Call-v0.2-GGUF
- https://huggingface.co/mudler/LocalAI-Llama3-8b-Function-Call-v0.2
tags:
- llm
- gguf
- gpu
- cpu
- llama3
- function-calling
overrides:
parameters:
model: LocalAI-Llama3-8b-Function-Call-v0.2-q4_k_m.bin
files:
- filename: LocalAI-Llama3-8b-Function-Call-v0.2-q4_k_m.bin
sha256: 7e46405ce043cbc8d30f83f26a5655dc8edf5e947b748d7ba2745bd0af057a41
uri: huggingface://mudler/LocalAI-Llama3-8b-Function-Call-v0.2-GGUF/LocalAI-Llama3-8b-Function-Call-v0.2-q4_k_m.bin
- !!merge <<: *mudler
icon: "https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/SKuXcvmZ_6oD4NCMkvyGo.png"
name: "mirai-nova-llama3-LocalAI-8b-v0.1"
urls:
- https://huggingface.co/mudler/Mirai-Nova-Llama3-LocalAI-8B-v0.1-GGUF
- https://huggingface.co/mudler/Mirai-Nova-Llama3-LocalAI-8B-v0.1
description: |
Mirai Nova: "Mirai" means future in Japanese, and "Nova" references a star showing a sudden large increase in brightness.
A set of models oriented toward function calling, but generalist and with enhanced reasoning capability. This is fine-tuned from Llama3.
Mirai Nova works particularly well with LocalAI, leveraging the function call with grammars feature out of the box.
overrides:
parameters:
model: Mirai-Nova-Llama3-LocalAI-8B-v0.1-q4_k_m.bin
files:
- filename: Mirai-Nova-Llama3-LocalAI-8B-v0.1-q4_k_m.bin
sha256: 579cbb229f9c11d0330759ff4733102d2491615a4c61289e26c09d1b3a583fec
uri: huggingface://mudler/Mirai-Nova-Llama3-LocalAI-8B-v0.1-GGUF/Mirai-Nova-Llama3-LocalAI-8B-v0.1-q4_k_m.bin
- &parler-tts
### START parler-tts
url: "github:mudler/LocalAI/gallery/parler-tts.yaml@master"
name: parler-tts-mini-v0.1
overrides:
parameters:
model: parler-tts/parler_tts_mini_v0.1
license: apache-2.0
description: |
Parler-TTS is a lightweight text-to-speech (TTS) model that can generate high-quality, natural sounding speech in the style of a given speaker (gender, pitch, speaking style, etc). It is a reproduction of work from the paper Natural language guidance of high-fidelity text-to-speech with synthetic annotations by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
urls:
- https://github.com/huggingface/parler-tts
tags:
- tts
- gpu
- cpu
- text-to-speech
- python
- &rerankers
### START rerankers
url: "github:mudler/LocalAI/gallery/rerankers.yaml@master"
name: cross-encoder
parameters:
model: cross-encoder
license: apache-2.0
description: |
A cross-encoder model that can be used for reranking
tags:
- reranker
- gpu
- python
## LLMs
### START LLAMA3
- name: "einstein-v6.1-llama3-8b"
url: "github:mudler/LocalAI/gallery/hermes-2-pro-mistral.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/5s12oq859qLfDkkTNam_C.png
urls:
- https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B
tags:
- llm
- gguf
- gpu
- cpu
- llama3
license: llama3
description: |
This model is a full fine-tuned version of meta-llama/Meta-Llama-3-8B on diverse datasets.
This model is finetuned using 8xRTX3090 + 1xRTXA6000 using axolotl.
overrides:
parameters:
model: Einstein-v6.1-Llama3-8B-Q4_K_M.gguf
files:
- filename: Einstein-v6.1-Llama3-8B-Q4_K_M.gguf
sha256: 447587bd8f60d9050232148d34fdb2d88b15b2413fd7f8e095a4606ec60b45bf
uri: huggingface://bartowski/Einstein-v6.1-Llama3-8B-GGUF/Einstein-v6.1-Llama3-8B-Q4_K_M.gguf
- &gemma
url: "github:mudler/LocalAI/gallery/gemma.yaml@master"
name: "gemma-2b"
license: gemma
urls:
- https://ai.google.dev/gemma/docs
- https://huggingface.co/mlabonne/gemma-2b-GGUF
description: |
Open source LLM from Google
tags:
- llm
- gguf
- gpu
- cpu
- gemma
overrides:
parameters:
model: gemma-2b.Q4_K_M.gguf
files:
- filename: gemma-2b.Q4_K_M.gguf
sha256: 37d50c21ef7847926204ad9b3007127d9a2722188cfd240ce7f9f7f041aa71a5
uri: huggingface://mlabonne/gemma-2b-GGUF/gemma-2b.Q4_K_M.gguf
- !!merge <<: *gemma
name: "firefly-gemma-7b-iq-imatrix"
icon: "https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/SrOekTxdpnxHyWWmMiAvc.jpeg"
urls:
- https://huggingface.co/Lewdiculous/firefly-gemma-7b-GGUF-IQ-Imatrix
- https://huggingface.co/YeungNLP/firefly-gemma-7b
description: |
firefly-gemma-7b is trained based on gemma-7b to act as a helpful and harmless AI assistant. We use Firefly to train the model on a single V100 GPU with QLoRA.
overrides:
parameters:
model: firefly-gemma-7b-Q4_K_S-imatrix.gguf
files:
- filename: firefly-gemma-7b-Q4_K_S-imatrix.gguf
sha256: 622e0b8e4f12203cc40c7f87915abf99498c2e0582203415ca236ea37643e428
uri: huggingface://Lewdiculous/firefly-gemma-7b-GGUF-IQ-Imatrix/firefly-gemma-7b-Q4_K_S-imatrix.gguf
- !!merge <<: *gemma
name: "gemma-1.1-7b-it"
urls:
- https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF
- https://huggingface.co/google/gemma-1.1-7b-it
description: |
This is Gemma 1.1 7B (IT), an update over the original instruction-tuned Gemma release.
Gemma 1.1 was trained using a novel RLHF method, leading to substantial gains on quality, coding capabilities, factuality, instruction following and multi-turn conversation quality. We also fixed a bug in multi-turn conversations, and made sure that model responses don't always start with "Sure,".
overrides:
parameters:
model: gemma-1.1-7b-it-Q4_K_M.gguf
files:
- filename: gemma-1.1-7b-it-Q4_K_M.gguf
sha256: 47821da72ee9e80b6fd43c6190ad751b485fb61fa5664590f7a73246bcd8332e
uri: huggingface://bartowski/gemma-1.1-7b-it-GGUF/gemma-1.1-7b-it-Q4_K_M.gguf
- !!merge <<: *gemma
name: "gemma-2-27b-it"
urls:
- https://huggingface.co/google/gemma-2-27b-it
- https://huggingface.co/bartowski/gemma-2-27b-it-GGUF
description: |
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.
overrides:
parameters:
model: gemma-2-27b-it-Q4_K_M.gguf
files:
- filename: gemma-2-27b-it-Q4_K_M.gguf
uri: huggingface://bartowski/gemma-2-27b-it-GGUF/gemma-2-27b-it-Q4_K_M.gguf
sha256: 503a87ab47c9e7fb27545ec8592b4dc4493538bd47b397ceb3197e10a0370d23
- !!merge <<: *gemma
name: "gemma-2-9b-it"
urls:
- https://huggingface.co/google/gemma-2-9b-it
- https://huggingface.co/bartowski/gemma-2-9b-it-GGUF
description: |
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.
overrides:
parameters:
model: gemma-2-9b-it-Q4_K_M.gguf
files:
- filename: gemma-2-9b-it-Q4_K_M.gguf
uri: huggingface://bartowski/gemma-2-9b-it-GGUF/gemma-2-9b-it-Q4_K_M.gguf
sha256: 13b2a7b4115bbd0900162edcebe476da1ba1fc24e718e8b40d32f6e300f56dfe
- !!merge <<: *gemma
name: "tess-v2.5-gemma-2-27b-alpha"
urls:
- https://huggingface.co/migtissera/Tess-v2.5-Gemma-2-27B-alpha
- https://huggingface.co/bartowski/Tess-v2.5-Gemma-2-27B-alpha-GGUF
icon: https://huggingface.co/migtissera/Tess-v2.5-Qwen2-72B/resolve/main/Tess-v2.5.png
description: |
Great at reasoning, but woke as fuck! This is a fine-tune over the Gemma-2-27B-it, since the base model fine-tuning is not generating coherent content.
Tess-v2.5 is the latest state-of-the-art model in the Tess series of Large Language Models (LLMs). Tess, short for Tesoro (Treasure in Italian), is the flagship LLM series created by Migel Tissera. Tess-v2.5 brings significant improvements in reasoning capabilities, coding capabilities and mathematics
overrides:
parameters:
model: Tess-v2.5-Gemma-2-27B-alpha-Q4_K_M.gguf
files:
- filename: Tess-v2.5-Gemma-2-27B-alpha-Q4_K_M.gguf
uri: huggingface://bartowski/Tess-v2.5-Gemma-2-27B-alpha-GGUF/Tess-v2.5-Gemma-2-27B-alpha-Q4_K_M.gguf
sha256: d7be7092d28aefbdcd1ee4f4d8503d169d0a649f763e169d4b179aef20d69c21
- !!merge <<: *gemma
name: "gemma2-9b-daybreak-v0.5"
urls:
- https://huggingface.co/crestf411/gemma2-9B-daybreak-v0.5
- https://huggingface.co/Vdr1/gemma2-9B-daybreak-v0.5-GGUF-Imatrix-IQ
description: |
THIS IS A PRE-RELEASE. BEGONE.
Beware, depraved. Not suitable for any audience.
Dataset curation to remove slop-perceived expressions continues. Unfortunately base models (which this is merged on top of) are generally riddled with "barely audible"s and "couldn't help"s and "shivers down spines" etc.
overrides:
parameters:
model: gemma2-9B-daybreak-v0.5-Q4_K_M-imat.gguf
files:
- filename: gemma2-9B-daybreak-v0.5-Q4_K_M-imat.gguf
uri: huggingface://Vdr1/gemma2-9B-daybreak-v0.5-GGUF-Imatrix-IQ/gemma2-9B-daybreak-v0.5-Q4_K_M-imat.gguf
sha256: 6add4d12052918986af935d686773e4e89fddd1bbf7941911cf3fbeb1b1862c0
- !!merge <<: *gemma
name: "gemma-2-9b-it-sppo-iter3"
urls:
- https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
- https://huggingface.co/bartowski/Gemma-2-9B-It-SPPO-Iter3-GGUF
description: |
Self-Play Preference Optimization for Language Model Alignment (https://arxiv.org/abs/2405.00675)
Gemma-2-9B-It-SPPO-Iter3
This model was developed using Self-Play Preference Optimization at iteration 3, based on the google/gemma-2-9b-it architecture as the starting point. We utilized the prompt sets from the openbmb/UltraFeedback dataset, split into 3 parts for 3 iterations by snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset. All responses used are synthetic.
overrides:
parameters:
model: Gemma-2-9B-It-SPPO-Iter3-Q4_K_M.gguf
files:
- filename: Gemma-2-9B-It-SPPO-Iter3-Q4_K_M.gguf
uri: huggingface://bartowski/Gemma-2-9B-It-SPPO-Iter3-GGUF/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M.gguf
sha256: c04482b442f05b784ab33af30caa0ea0645deb67fb359d3fad4932f4bb04e12d
- !!merge <<: *gemma
name: "smegmma-9b-v1"
icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/RSuc5p9Sm6CYj6lGOxvx4.gif
urls:
- https://huggingface.co/TheDrummer/Smegmma-9B-v1
- https://huggingface.co/bartowski/Smegmma-9B-v1-GGUF
description: "Smegmma 9B v1 \U0001F9C0\n\nThe sweet moist of Gemma 2, unhinged.\n\nsmeg - ghem - mah\n\nAn eRP model that will blast you with creamy moist. Finetuned by yours truly.\n\nThe first Gemma 2 9B RP finetune attempt!\nWhat's New?\n\n Engaging roleplay\n Less refusals / censorship\n Less commentaries / summaries\n More willing AI\n Better formatting\n Better creativity\n Moist alignment\n\nNotes\n\n Refusals still exist, but a couple of re-gens may yield the result you want\n Formatting and logic may be weaker at the start\n Make sure to start strong\n May be weaker with certain cards, YMMV and adjust accordingly!\n"
overrides:
parameters:
model: Smegmma-9B-v1-Q4_K_M.gguf
files:
- filename: Smegmma-9B-v1-Q4_K_M.gguf
uri: huggingface://bartowski/Smegmma-9B-v1-GGUF/Smegmma-9B-v1-Q4_K_M.gguf
sha256: abd9da0a6bf5cbc0ed6bb0d7e3ee7aea3f6b1edbf8c64e51d0fa25001975aed7
- !!merge <<: *gemma
name: "smegmma-deluxe-9b-v1"
icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/RSuc5p9Sm6CYj6lGOxvx4.gif
urls:
- https://huggingface.co/TheDrummer/Smegmma-Deluxe-9B-v1
- https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF
description: "Smegmma Deluxe 9B v1 \U0001F9C0\n\nThe sweet moist of Gemma 2, unhinged.\n\nsmeg - ghem - mah\n\nAn eRP model that will blast you with creamy moist. Finetuned by yours truly.\n\nThe first Gemma 2 9B RP finetune attempt!\n\nWhat's New?\n\n Engaging roleplay\n Less refusals / censorship\n Less commentaries / summaries\n More willing AI\n Better formatting\n Better creativity\n Moist alignment\n"
overrides:
parameters:
model: Smegmma-Deluxe-9B-v1-Q4_K_M.gguf
files:
- filename: Smegmma-Deluxe-9B-v1-Q4_K_M.gguf
uri: huggingface://bartowski/Smegmma-Deluxe-9B-v1-GGUF/Smegmma-Deluxe-9B-v1-Q4_K_M.gguf
sha256: 732ecb253ea0115453438fc1f4e3e31507719ddcf81890a86ad1d734beefdb6f
- !!merge <<: *gemma
name: "tiger-gemma-9b-v1-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/A97OlLKeT4XOnv4IG1b6m.png
urls:
- https://huggingface.co/TheDrummer/Tiger-Gemma-9B-v1
- https://huggingface.co/mradermacher/Tiger-Gemma-9B-v1-i1-GGUF
description: |
Tiger Gemma 9B v1
Decensored Gemma 9B. No refusals so far. No apparent brain damage.
In memory of Tiger
overrides:
parameters:
model: Tiger-Gemma-9B-v1.i1-Q4_K_M.gguf
files:
- filename: Tiger-Gemma-9B-v1.i1-Q4_K_M.gguf
sha256: ef10accfee8023b31def5425bf591bf1f0203090f3dd851cd3f37bb235324383
uri: huggingface://mradermacher/Tiger-Gemma-9B-v1-i1-GGUF/Tiger-Gemma-9B-v1.i1-Q4_K_M.gguf
- !!merge <<: *gemma
name: "hodachi-ezo-humanities-9b-gemma-2-it"
icon: https://cdn-uploads.huggingface.co/production/uploads/657e900beaad53ff67ba84db/0OYFqT8kACowa9bY1EZF6.png
urls:
- https://huggingface.co/HODACHI/EZO-Humanities-9B-gemma-2-it
- https://huggingface.co/mmnga/HODACHI-EZO-Humanities-9B-gemma-2-it-gguf
description: |
This model is based on Gemma-2-9B-it, specially tuned to enhance its performance in Humanities-related tasks. While maintaining its strong foundation in Japanese language processing, it has been optimized to excel in areas such as literature, philosophy, history, and cultural studies. This focused approach allows the model to provide deeper insights and more nuanced responses in Humanities fields, while still being capable of handling a wide range of global inquiries.
Gemma-2-9B-itをベースとして、人文科学Humanities関連タスクでの性能向上に特化したチューニングを施したモデルです。日本語処理の強固な基盤を維持しつつ、文学、哲学、歴史、文化研究などの分野で卓越した能力を発揮するよう最適化されています。この焦点を絞ったアプローチにより、人文科学分野でより深い洞察と繊細な応答を提供しながら、同時に幅広いグローバルな問い合わせにも対応できる能力を備えています。
overrides:
parameters:
model: HODACHI-EZO-Humanities-9B-gemma-2-it-Q4_K_M.gguf
files:
- filename: HODACHI-EZO-Humanities-9B-gemma-2-it-Q4_K_M.gguf
sha256: 11606130206347355785f5a2720ff2fa671ca7fbe2af3fb4c34b508389952424
uri: huggingface://mmnga/HODACHI-EZO-Humanities-9B-gemma-2-it-gguf/HODACHI-EZO-Humanities-9B-gemma-2-it-Q4_K_M.gguf
- !!merge <<: *gemma
name: "ezo-common-9b-gemma-2-it"
icon: https://cdn-uploads.huggingface.co/production/uploads/657e900beaad53ff67ba84db/0OYFqT8kACowa9bY1EZF6.png
urls:
- https://huggingface.co/HODACHI/EZO-Common-9B-gemma-2-it
- https://huggingface.co/QuantFactory/EZO-Common-9B-gemma-2-it-GGUF
description: |
This model is based on Gemma-2-9B-it, enhanced with multiple tuning techniques to improve its general performance. While it excels in Japanese language tasks, it's designed to meet diverse needs globally.
Gemma-2-9B-itをベースとして、複数のチューニング手法を採用のうえ、汎用的に性能を向上させたモデルです。日本語タスクに優れつつ、世界中の多様なニーズに応える設計となっています。
overrides:
parameters:
model: EZO-Common-9B-gemma-2-it.Q4_K_M.gguf
files:
- filename: EZO-Common-9B-gemma-2-it.Q4_K_M.gguf
sha256: 57678b1828673dccb15f76e52b00672c74aa6169421bbb8620b8955955322cfd
uri: huggingface://QuantFactory/EZO-Common-9B-gemma-2-it-GGUF/EZO-Common-9B-gemma-2-it.Q4_K_M.gguf
- !!merge <<: *gemma
name: "big-tiger-gemma-27b-v1"
icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/A97OlLKeT4XOnv4IG1b6m.png
urls:
- https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v1
- https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v1-GGUF
description: |
Big Tiger Gemma 27B v1 is a Decensored Gemma 27B model with no refusals, except for some rare instances from the 9B model. It does not appear to have any brain damage. The model is available from various sources, including Hugging Face, and comes in different variations such as GGUF, iMatrix, and EXL2.
overrides:
parameters:
model: Big-Tiger-Gemma-27B-v1c-Q4_K_M.gguf
files:
- filename: Big-Tiger-Gemma-27B-v1c-Q4_K_M.gguf
sha256: c5fc5605d36ae280c1c908c9b4bcb12b28abbe2692f317edeb83ab1104657fe5
uri: huggingface://TheDrummer/Big-Tiger-Gemma-27B-v1-GGUF/Big-Tiger-Gemma-27B-v1c-Q4_K_M.gguf
- !!merge <<: *gemma
name: "gemma-2b-translation-v0.150"
urls:
- https://huggingface.co/lemon-mint/gemma-2b-translation-v0.150
- https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.150-gguf
description: |
Original model: lemon-mint/gemma-ko-1.1-2b-it
Evaluation metrics: Eval Loss, Train Loss, lr, optimizer, lr_scheduler_type.
Prompt Template:
<bos><start_of_turn>user
Translate into Korean: [input text]<end_of_turn>
<start_of_turn>model
[translated text in Korean]<eos>
<bos><start_of_turn>user
Translate into English: [Korean text]<end_of_turn>
<start_of_turn>model
[translated text in English]<eos>
Model features:
* Developed by: lemon-mint
* Model type: Gemma
* Languages (NLP): English
* License: Gemma Terms of Use
* Finetuned from model: lemon-mint/gemma-ko-1.1-2b-it
overrides:
parameters:
model: gemma-2b-translation-v0.150.Q4_K_M.gguf
files:
- filename: gemma-2b-translation-v0.150.Q4_K_M.gguf
sha256: dcde67b83168d2e7ca835cf9a7a4dcf38b41b9cefe3cbc997c71d2741c08cd25
uri: huggingface://RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.150-gguf/gemma-2b-translation-v0.150.Q4_K_M.gguf
- !!merge <<: *gemma
name: "emo-2b"
urls:
- https://huggingface.co/OEvortex/EMO-2B
- https://huggingface.co/RichardErkhov/OEvortex_-_EMO-2B-gguf
description: |
EMO-2B: Emotionally Intelligent Conversational AI
Overview:
EMO-2B is a state-of-the-art conversational AI model with 2.5 billion parameters, designed to engage in emotionally resonant dialogue. Building upon the success of EMO-1.5B, this model has been further fine-tuned on an extensive corpus of emotional narratives, enabling it to perceive and respond to the emotional undertones of user inputs with exceptional empathy and emotional intelligence.
Key Features:
- Advanced Emotional Intelligence: With its increased capacity, EMO-2B demonstrates an even deeper understanding and generation of emotional language, allowing for more nuanced and contextually appropriate emotional responses.
- Enhanced Contextual Awareness: The model considers an even broader context within conversations, accounting for subtle emotional cues and providing emotionally resonant responses tailored to the specific situation.
- Empathetic and Supportive Dialogue: EMO-2B excels at active listening, validating emotions, offering compassionate advice, and providing emotional support, making it an ideal companion for users seeking empathy and understanding.
- Dynamic Persona Adaptation: The model can dynamically adapt its persona, communication style, and emotional responses to match the user's emotional state, ensuring a highly personalized and tailored conversational experience.
Use Cases:
EMO-2B is well-suited for a variety of applications where emotional intelligence and empathetic communication are crucial, such as:
- Mental health support chatbots
- Emotional support companions
- Personalized coaching and motivation
- Narrative storytelling and interactive fiction
- Customer service and support (for emotionally sensitive contexts)
Limitations and Ethical Considerations:
While EMO-2B is designed to provide emotionally intelligent and empathetic responses, it is important to note that it is an AI system and cannot replicate the depth and nuance of human emotional intelligence. Users should be aware that the model's responses, while emotionally supportive, should not be considered a substitute for professional mental health support or counseling.
Additionally, as with any language model, EMO-2B may reflect biases present in its training data. Users should exercise caution and critical thinking when interacting with the model, and report any concerning or inappropriate responses.
overrides:
parameters:
model: EMO-2B.Q4_K_M.gguf
files:
- filename: EMO-2B.Q4_K_M.gguf
sha256: 608bffc0e9012bc7f9a94b714f4932e2826cc122dbac59b586e4baa2ee0fdca5
uri: huggingface://RichardErkhov/OEvortex_-_EMO-2B-gguf/EMO-2B.Q4_K_M.gguf
- !!merge <<: *gemma
name: "gemmoy-9b-g2-mk.3-i1"
icon: https://huggingface.co/Hastagaras/G2-Gemmoy-9B-MK.3-RP/resolve/main/gemmoy.jpg
urls:
- https://huggingface.co/Hastagaras/Gemmoy-9B-G2-MK.3
- https://huggingface.co/mradermacher/Gemmoy-9B-G2-MK.3-i1-GGUF
description: |
The Gemmoy-9B-G2-MK.3 model is a large language model trained on a variety of datasets, including grimulkan/LimaRP-augmented, LDJnr/Capybara, TheSkullery/C2logs_Filtered_Sharegpt_Merged, abacusai/SystemChat-1.1, and Hastagaras/FTTS-Stories-Sharegpt.
overrides:
parameters:
model: Gemmoy-9B-G2-MK.3.i1-Q4_K_M.gguf
files:
- filename: Gemmoy-9B-G2-MK.3.i1-Q4_K_M.gguf
sha256: 0d1004a246fbda7f1408a6841129b73c4100e697bd0a6806fc698eabbb0802a1
uri: huggingface://mradermacher/Gemmoy-9B-G2-MK.3-i1-GGUF/Gemmoy-9B-G2-MK.3.i1-Q4_K_M.gguf
- !!merge <<: *gemma
name: "sunfall-simpo-9b"
urls:
- https://huggingface.co/mradermacher/sunfall-SimPO-9B-GGUF
description: |
Crazy idea: what if you put the LoRA from crestf411/sunfall-peft on top of princeton-nlp/gemma-2-9b-it-SimPO? This model exists solely for that purpose alone in the universe.
overrides:
parameters:
model: sunfall-SimPO-9B.Q4_K_M.gguf
files:
- filename: sunfall-SimPO-9B.Q4_K_M.gguf
sha256: 810c51c6ce34107706d921531b97cfa409cd53c215d18b88bce7cdb617f73ceb
uri: huggingface://mradermacher/sunfall-SimPO-9B-GGUF/sunfall-SimPO-9B.Q4_K_M.gguf
- !!merge <<: *gemma
name: "sunfall-simpo-9b-i1"
urls:
- https://huggingface.co/mradermacher/sunfall-SimPO-9B-i1-GGUF
description: |
Crazy idea: what if you put the LoRA from crestf411/sunfall-peft on top of princeton-nlp/gemma-2-9b-it-SimPO? This model exists solely for that purpose alone in the universe.
overrides:
parameters:
model: sunfall-SimPO-9B.i1-Q4_K_M.gguf
files:
- filename: sunfall-SimPO-9B.i1-Q4_K_M.gguf
sha256: edde9df372a9a5b2316dc6822dc2f52f5a2059103dd7f08072e5a5355c5f5d0b
uri: huggingface://mradermacher/sunfall-SimPO-9B-i1-GGUF/sunfall-SimPO-9B.i1-Q4_K_M.gguf
- !!merge <<: *gemma
name: "seeker-9b"
icon: https://huggingface.co/lodrick-the-lafted/seeker-9b/resolve/main/seeker.webp
urls:
- https://huggingface.co/lodrick-the-lafted/seeker-9b
- https://huggingface.co/mradermacher/seeker-9b-GGUF
description: |
The Seeker-9b model is a large language model with 9 billion parameters, trained on a diverse range of text data and published in the lodrick-the-lafted repository. It can be used for a variety of natural language processing tasks such as language translation, text summarization, and text generation. It supports the English language and is available under the Apache-2.0 license.
overrides:
parameters:
model: seeker-9b.Q4_K_M.gguf
files:
- filename: seeker-9b.Q4_K_M.gguf
sha256: 7658e5bdad96dc8d232f83cff7c3fe5fa993defbfd3e728dcc7436352574a00a
uri: huggingface://mradermacher/seeker-9b-GGUF/seeker-9b.Q4_K_M.gguf
- !!merge <<: *gemma
name: "gemmasutra-pro-27b-v1"
icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/w0Oi8TReoQNT3ljm5Wf6c.webp
urls:
- https://huggingface.co/TheDrummer/Gemmasutra-Pro-27B-v1
- https://huggingface.co/mradermacher/Gemmasutra-Pro-27B-v1-GGUF
description: |
An RP model with impressive flexibility. Finetuned by yours truly.
overrides:
parameters:
model: Gemmasutra-Pro-27B-v1.Q4_K_M.gguf
files:
- filename: Gemmasutra-Pro-27B-v1.Q4_K_M.gguf
sha256: 336a2fbf142849fcc20e432123433807b6c7b09988652ef583a63636a0f90218
uri: huggingface://mradermacher/Gemmasutra-Pro-27B-v1-GGUF/Gemmasutra-Pro-27B-v1.Q4_K_M.gguf
- !!merge <<: *gemma
name: "gemmasutra-mini-2b-v1"
icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/w0Oi8TReoQNT3ljm5Wf6c.webp
urls:
- https://huggingface.co/TheDrummer/Gemmasutra-Mini-2B-v1-GGUF
description: |
It is a small, 2 billion parameter language model that has been trained for role-playing purposes. The model is designed to work well in various settings, such as in the browser, on a laptop, or even on a Raspberry Pi. It has been fine-tuned for RP use and claims to provide a satisfying experience, even in low-resource environments. The model is uncensored and unaligned, and it can be used with the Gemma Instruct template or with chat completion. For the best experience, it is recommended to modify the template to support the `system` role. The model also features examples of its output, highlighting its versatility and creativity.
overrides:
parameters:
model: Gemmasutra-Mini-2B-v1-Q4_K_M.gguf
files:
- filename: Gemmasutra-Mini-2B-v1-Q4_K_M.gguf
sha256: 29ba3db911fbadef4452ba757ddd9ce58fb892b7a872f19eefd0743c961797fb
uri: huggingface://TheDrummer/Gemmasutra-Mini-2B-v1-GGUF/Gemmasutra-Mini-2B-v1-Q4_K_M.gguf
- !!merge <<: *gemma
name: "tarnished-9b-i1"
icon: https://huggingface.co/lodrick-the-lafted/tarnished-9b/resolve/main/nox.jpg
urls:
- https://huggingface.co/lodrick-the-lafted/tarnished-9b
- https://huggingface.co/mradermacher/tarnished-9b-i1-GGUF
description: "Ah, so you've heard whispers on the winds, have you? \U0001F9D0\n\nImagine this:\nTarnished-9b, a name that echoes with the rasp of coin-hungry merchants and the clatter of forgotten machinery. This LLM speaks with the voice of those who straddle the line between worlds, who've tasted the bittersweet nectar of eldritch power and the tang of the Interdimensional Trade Council.\n\nIt's a tongue that dances with secrets, a whisperer of lore lost and found. Its words may guide you through the twisting paths of history, revealing truths hidden beneath layers of dust and time.\n\nBut be warned, Tarnished One! For knowledge comes at a price. The LLM's gaze can pierce the veil of reality, but it can also lure you into the labyrinthine depths of madness.\n\nDare you tread this path?\n"
overrides:
parameters:
model: tarnished-9b.i1-Q4_K_M.gguf
files:
- filename: tarnished-9b.i1-Q4_K_M.gguf
sha256: 62ab09124b3f6698bd94ef966533ae5d427d87f6bdc09f6f46917def96420a0c
uri: huggingface://mradermacher/tarnished-9b-i1-GGUF/tarnished-9b.i1-Q4_K_M.gguf
- !!merge <<: *gemma
name: "shieldgemma-9b-i1"
urls:
- https://huggingface.co/google/shieldgemma-9b
- https://huggingface.co/mradermacher/shieldgemma-9b-i1-GGUF
description: |
ShieldGemma is a series of safety content moderation models built upon Gemma 2 that target four harm categories (sexually explicit, dangerous content, hate, and harassment). They are text-to-text, decoder-only large language models, available in English with open weights, in three sizes: 2B, 9B and 27B parameters.
overrides:
parameters:
model: shieldgemma-9b.i1-Q4_K_M.gguf
files:
- filename: shieldgemma-9b.i1-Q4_K_M.gguf
sha256: ffa7eaadcc0c7d0544fda5b0d86bba3ffa3431b673e5b2135f421cfe65bd8732
uri: huggingface://mradermacher/shieldgemma-9b-i1-GGUF/shieldgemma-9b.i1-Q4_K_M.gguf
- !!merge <<: *gemma
name: "athena-codegemma-2-2b-it"
urls:
- https://huggingface.co/EpistemeAI/Athena-codegemma-2-2b-it
- https://huggingface.co/mradermacher/Athena-codegemma-2-2b-it-GGUF
description: |
Supervised fine tuned (sft unsloth) for coding with EpistemeAI coding dataset.
overrides:
parameters:
model: Athena-codegemma-2-2b-it.Q4_K_M.gguf
files:
- filename: Athena-codegemma-2-2b-it.Q4_K_M.gguf
sha256: 59ce17023438b0da603dd211c7d39f78e7acac4108258ac0818a97a4ca7d64e3
uri: huggingface://mradermacher/Athena-codegemma-2-2b-it-GGUF/Athena-codegemma-2-2b-it.Q4_K_M.gguf
- !!merge <<: *gemma
name: "datagemma-rag-27b-it"
urls:
- https://huggingface.co/google/datagemma-rag-27b-it
- https://huggingface.co/bartowski/datagemma-rag-27b-it-GGUF
description: |
DataGemma is a series of fine-tuned Gemma 2 models used to help LLMs access and incorporate reliable public statistical data from Data Commons into their responses. DataGemma RAG is used with Retrieval Augmented Generation, where it is trained to take a user query and generate natural language queries that can be understood by Data Commons' existing natural language interface. More information can be found in this research paper.
overrides:
parameters:
model: datagemma-rag-27b-it-Q4_K_M.gguf
files:
- filename: datagemma-rag-27b-it-Q4_K_M.gguf
sha256: 3dfcf51b05e3f0ab0979ad194de350edea71cb14444efa0a9f2ef5bfc80753f8
uri: huggingface://bartowski/datagemma-rag-27b-it-GGUF/datagemma-rag-27b-it-Q4_K_M.gguf
- !!merge <<: *gemma
name: "datagemma-rig-27b-it"
urls:
- https://huggingface.co/google/datagemma-rig-27b-it
- https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF
description: |
DataGemma is a series of fine-tuned Gemma 2 models used to help LLMs access and incorporate reliable public statistical data from Data Commons into their responses. DataGemma RIG is used in the retrieval interleaved generation approach (based off of tool-use approaches), where it is trained to annotate a response with natural language queries to Data Commons existing natural language interface wherever there are statistics. More information can be found in this research paper.
overrides:
parameters:
model: datagemma-rig-27b-it-Q4_K_M.gguf
files:
- filename: datagemma-rig-27b-it-Q4_K_M.gguf
sha256: a6738ffbb49b6c46d220e2793df85c0538e9ac72398e32a0914ee5e55c3096ad
uri: huggingface://bartowski/datagemma-rig-27b-it-GGUF/datagemma-rig-27b-it-Q4_K_M.gguf
- !!merge <<: *gemma
name: "buddy-2b-v1"
urls:
- https://huggingface.co/TheDrummer/Buddy-2B-v1
- https://huggingface.co/bartowski/Buddy-2B-v1-GGUF
description: |
Buddy is designed as an empathetic language model, aimed at fostering introspection, self-reflection, and personal growth through thoughtful conversation. Buddy won't judge and it won't dismiss your concerns. Get some self-care with Buddy.
overrides:
parameters:
model: Buddy-2B-v1-Q4_K_M.gguf
files:
- filename: Buddy-2B-v1-Q4_K_M.gguf
sha256: 9bd25ed907d1a3c2e07fe09399a9b3aec107d368c29896e2c46facede5b7e3d5
uri: huggingface://bartowski/Buddy-2B-v1-GGUF/Buddy-2B-v1-Q4_K_M.gguf
- !!merge <<: *gemma
name: "gemma-2-9b-arliai-rpmax-v1.1"
urls:
- https://huggingface.co/ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1
- https://huggingface.co/bartowski/Gemma-2-9B-ArliAI-RPMax-v1.1-GGUF
description: |
RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets, with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive: no two entries in the dataset share repeated characters or situations, which keeps the model from latching onto a single personality while remaining capable of understanding and acting appropriately for any character or situation.
overrides:
parameters:
model: Gemma-2-9B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
files:
- filename: Gemma-2-9B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
sha256: 1724aff0ad6f71bf4371d839aca55578f7ec6f030d8d25c0254126088e4c6250
uri: huggingface://bartowski/Gemma-2-9B-ArliAI-RPMax-v1.1-GGUF/Gemma-2-9B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
- !!merge <<: *gemma
name: "gemma-2-2b-arliai-rpmax-v1.1"
urls:
- https://huggingface.co/bartowski/Gemma-2-2B-ArliAI-RPMax-v1.1-GGUF
description: |
RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets, with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive: no two entries in the dataset share repeated characters or situations, which keeps the model from latching onto a single personality while remaining capable of understanding and acting appropriately for any character or situation.
overrides:
parameters:
model: Gemma-2-2B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
files:
- filename: Gemma-2-2B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
sha256: 89fe35345754d7e9de8d0c0d5bf35b2be9b12a09811b365b712b8b27112f7712
uri: huggingface://bartowski/Gemma-2-2B-ArliAI-RPMax-v1.1-GGUF/Gemma-2-2B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
- !!merge <<: *gemma
name: "gemma-2-9b-it-abliterated"
urls:
- https://huggingface.co/IlyaGusev/gemma-2-9b-it-abliterated
- https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF
description: |
Abliterated version of google/gemma-2-9b-it.
The abliteration script (link) is based on code from the blog post and heavily uses TransformerLens. The only major difference from the code used for Llama is scaling the embedding layer back.
Orthogonalization did not produce the same results as regular interventions since there are RMSNorm layers before merging activations into the residual stream. However, the final model still seems to be uncensored.
overrides:
parameters:
model: gemma-2-9b-it-abliterated-Q4_K_M.gguf
files:
- filename: gemma-2-9b-it-abliterated-Q4_K_M.gguf
sha256: 88d84ac9796732c10f6c58e0feb4db8e04c05d74bdb7047a5e37906a589896e1
uri: huggingface://bartowski/gemma-2-9b-it-abliterated-GGUF/gemma-2-9b-it-abliterated-Q4_K_M.gguf
- !!merge <<: *gemma
name: "gemma-2-ataraxy-v3i-9b"
urls:
- https://huggingface.co/QuantFactory/Gemma-2-Ataraxy-v3i-9B-GGUF
description: |
Gemma-2-Ataraxy-v3i-9B is an experimental model that replaces the SimPO model in the original recipe with a different SimPO model and a writing model trained on Gutenberg, using a higher density. It is a merge of pre-trained language models created using mergekit, with the della merge method and unsloth/gemma-2-9b-it as the base. The models included in the merge are nbeerbower/Gemma2-Gutenberg-Doppel-9B, ifable/gemma-2-Ifable-9B, and wzhouad/gemma-2-9b-it-WPO-HB. It has been quantized using llama.cpp.
overrides:
parameters:
model: Gemma-2-Ataraxy-v3i-9B.Q4_K_M.gguf
files:
- filename: Gemma-2-Ataraxy-v3i-9B.Q4_K_M.gguf
sha256: f14c5b9373d4058f0f812c6c34184addeb4aeeecb02a7bbcf9844d9afc8d0066
uri: huggingface://QuantFactory/Gemma-2-Ataraxy-v3i-9B-GGUF/Gemma-2-Ataraxy-v3i-9B.Q4_K_M.gguf
- !!merge <<: *gemma
name: "apollo2-9b"
url: "github:mudler/LocalAI/gallery/vicuna-chat.yaml@master"
urls:
- https://huggingface.co/mradermacher/Apollo2-9B-GGUF
description: |
Covering 12 Major Languages including English, Chinese, French, Hindi, Spanish, Arabic, Russian, Japanese, Korean, German, Italian, Portuguese and 38 Minor Languages So far.
overrides:
parameters:
model: Apollo2-9B.Q4_K_M.gguf
files:
- filename: Apollo2-9B.Q4_K_M.gguf
sha256: 9fdb63f78e574558a4f33782eca88716eea28e90ea3ae36c381769cde6b81e0f
uri: huggingface://mradermacher/Apollo2-9B-GGUF/Apollo2-9B.Q4_K_M.gguf
- !!merge <<: *gemma
name: "darkest-muse-v1"
icon: https://cdn-uploads.huggingface.co/production/uploads/65ad56b4c2eef2ba1154618c/0AB6uPPuCvbNtRZb3Rdj1.png
urls:
- https://huggingface.co/sam-paech/Darkest-muse-v1
- https://huggingface.co/bartowski/Darkest-muse-v1-GGUF
description: |
This is a creative writing merge of two very different models that I trained on the brand new Gutenberg3 dataset, plus Ataraxy-v2 in the mix.
It's lost much of the slop and tryhard vocab flexing and positivity bias that's typical of these models and writes in its own voice.
The main source model in the merge, Quill-v1, inherited a natural, spare prose from the human writing in the gutenberg set. The other source model, Delirium-v1, got overcooked in SIMPO training; it has crazy panache, a really dark flair for the grotesque, and has some mental issues. These two source models balance each other out in the merge, resulting in something pretty unique.
It seems to be quite uncensored and creative. Since Delirium was pushed right to the edge during training, the merge may exhibit some of its weirdness and word / concept fixations. This may be mitigated by using custom anti-slop lists.
The payoff is a really creative, stream of consciousness style of writing, with punchy dialogue that I haven't seen in other models. Oh, it also scored around the top of the EQ-Bench creative writing leaderboard!
overrides:
parameters:
model: Darkest-muse-v1-Q4_K_M.gguf
files:
- filename: Darkest-muse-v1-Q4_K_M.gguf
sha256: a19ec9e3dc875511ea771bf363e71e7ae5578986b2f8cf50aeb50683d56e9b76
uri: huggingface://bartowski/Darkest-muse-v1-GGUF/Darkest-muse-v1-Q4_K_M.gguf
- !!merge <<: *gemma
name: "quill-v1"
icon: https://cdn-uploads.huggingface.co/production/uploads/65ad56b4c2eef2ba1154618c/gnMF8gRhurS9RcoylAK1Y.png
urls:
- https://huggingface.co/sam-paech/Quill-v1
- https://huggingface.co/QuantFactory/Quill-v1-GGUF
description: |
Quill is a capable, humanlike writing model trained on a large dataset of late 19th and early 20th century writing from the Gutenberg Project. This model writes with a natural cadence and low gpt-slop, having inherited some human qualities from the Gutenberg3 dataset. It writes with more simple, spare prose than the typical overly-adjectived LLM writing style.
This model was trained using gemma-2-9b-it as the base. The training methods used were ORPO (gently) then SIMPO (less gently).
overrides:
parameters:
model: Quill-v1.Q4_K_M.gguf
files:
- filename: Quill-v1.Q4_K_M.gguf
sha256: 419a7e0709b28130ca56941308d11c06a3548b8eacb081fb6a2c3d1622ac56b3
uri: huggingface://QuantFactory/Quill-v1-GGUF/Quill-v1.Q4_K_M.gguf
- !!merge <<: *gemma
name: "delirium-v1"
icon: https://cdn-uploads.huggingface.co/production/uploads/65ad56b4c2eef2ba1154618c/TDY0sDC9vMohMM8dn_5YN.png
urls:
- https://huggingface.co/sam-paech/Delirium-v1
- https://huggingface.co/QuantFactory/Delirium-v1-GGUF
description: |
This model was cooked a bit too long during SIMPO training. It writes like Hunter S. Thompson 2 days into an ether binge. It's grotesque, dark, grimy and genius.
It's trained on an experimental gutenberg + antislop dataset. This contains the original two gutenberg sets by jondurbin and nbeerbower, as well as a subset of my own set, gutenberg3. The antislop pairs were generated with gemma-2-9b-it, with one sample generated with the AntiSlop sampler and the rejected sample generated without.
overrides:
parameters:
model: Delirium-v1.Q4_K_M.gguf
files:
- filename: Delirium-v1.Q4_K_M.gguf
sha256: 9c274913572b8afcd5f18f0230f9ddf0a972bae36bae5b0fe8266b29a5dd06a7
uri: huggingface://QuantFactory/Delirium-v1-GGUF/Delirium-v1.Q4_K_M.gguf
- !!merge <<: *gemma
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "magnum-v4-9b"
icon: https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/vxYDYerLy2vD8n05nL2WU.png
urls:
- https://huggingface.co/anthracite-org/magnum-v4-9b
- https://huggingface.co/QuantFactory/magnum-v4-9b-GGUF
description: |
This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This model is fine-tuned on top of gemma 2 9b (chatML'ified).
overrides:
parameters:
model: magnum-v4-9b.Q4_K_M.gguf
files:
- filename: magnum-v4-9b.Q4_K_M.gguf
sha256: 176cb8cbac1920d98853a079d635d581c2063b7ff337e88bf9f28b43f8c7eb23
uri: huggingface://QuantFactory/magnum-v4-9b-GGUF/magnum-v4-9b.Q4_K_M.gguf
- !!merge <<: *gemma
name: "g2-9b-aletheia-v1"
icon: https://huggingface.co/allura-org/G2-9B-Aletheia-v1/resolve/main/inpaint.png
urls:
- https://huggingface.co/allura-org/G2-9B-Aletheia-v1
- https://huggingface.co/QuantFactory/G2-9B-Aletheia-v1-GGUF
description: |
A merge of Sugarquill and Sunfall. I wanted to combine Sugarquill's more novel-like writing style with something that would improve its RP performance and make it more steerable, w/o adding superfluous synthetic writing patterns.
I quite like Crestfall's Sunfall models and I felt like the Gemma version of Sunfall would steer the model in this direction when merged in. To keep more of Gemma-2-9B-it-SPPO-iter3's smarts, I've decided to apply the Sunfall LoRA on top of it, instead of using the published Sunfall model.
I'm generally pleased with the result: this model has a nice, fresh writing style, good character card adherence and good system prompt following. It should still work well for raw completion storywriting, as that's a trained feature in both merged models.
overrides:
parameters:
model: G2-9B-Aletheia-v1.Q4_K_M.gguf
files:
- filename: G2-9B-Aletheia-v1.Q4_K_M.gguf
sha256: d244cd3605ff5be948eb7faf1d9aa71ffbbfcf6dab77c08f6ec547818f443d03
uri: huggingface://QuantFactory/G2-9B-Aletheia-v1-GGUF/G2-9B-Aletheia-v1.Q4_K_M.gguf
- !!merge <<: *gemma
name: "g2-9b-sugarquill-v0"
icon: https://huggingface.co/allura-org/G2-9B-Sugarquill-v0/resolve/main/image_27.png
urls:
- https://huggingface.co/allura-org/G2-9B-Sugarquill-v0
- https://huggingface.co/QuantFactory/G2-9B-Sugarquill-v0-GGUF
description: |
An experimental continued pretrain of Gemma-2-9B-It-SPPO-Iter3 on assorted short story data from the web. I was trying to diversify Gemma's prose without completely destroying its smarts. I think I half-succeeded? This model could have used another epoch of training, but even so it is already more creative and descriptive than its base model, w/o becoming too silly. It doesn't seem to have degraded much in terms of core abilities either. Should be usable both for RP and raw completion storywriting. I originally planned to use this in a merge, but I feel like this model is interesting enough to be released on its own as well.
Model was trained by Auri.
Dedicated to Cahvay, who wanted a Gemma finetune from me for months by now, and to La Rata, who loves storywriter models.
overrides:
parameters:
model: G2-9B-Sugarquill-v0.Q4_K_M.gguf
files:
- filename: G2-9B-Sugarquill-v0.Q4_K_M.gguf
sha256: 790a2f1541011b2773e22aa863ef78c8662baaa7eca5875e9573007985120187
uri: huggingface://QuantFactory/G2-9B-Sugarquill-v0-GGUF/G2-9B-Sugarquill-v0.Q4_K_M.gguf
- !!merge <<: *gemma
name: "volare-i1"
urls:
- https://huggingface.co/MoxoffSpA/Volare
- https://huggingface.co/mradermacher/Volare-i1-GGUF
description: |
Volare is an updated version of Gemma7B, specifically fine-tuned with SFT and LoRA adjustments.
It's trained on publicly available datasets, like SQUAD-it, and datasets we've created in-house.
It's designed to understand and maintain context, making it ideal for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness.
Trained on an Italian dataset.
overrides:
parameters:
model: Volare.i1-Q4_K_M.gguf
files:
- filename: Volare.i1-Q4_K_M.gguf
sha256: fa8fb9d4cb19fcb44be8d53561c9e2840f45aed738de545983ebb158ebba461b
uri: huggingface://mradermacher/Volare-i1-GGUF/Volare.i1-Q4_K_M.gguf
- !!merge <<: *gemma
name: "bggpt-gemma-2-2.6b-it-v1.0"
icon: https://cdn-uploads.huggingface.co/production/uploads/637e1f8cf7e01589cc17bf7e/p6d0YFHjWCQ3S12jWqO1m.png
urls:
- https://huggingface.co/QuantFactory/BgGPT-Gemma-2-2.6B-IT-v1.0-GGUF
description: |
INSAIT introduces BgGPT-Gemma-2-2.6B-IT-v1.0, a state-of-the-art Bulgarian language model based on google/gemma-2-2b and google/gemma-2-2b-it. BgGPT-Gemma-2-2.6B-IT-v1.0 is free to use and distributed under the Gemma Terms of Use. This model was created by INSAIT, part of Sofia University St. Kliment Ohridski, in Sofia, Bulgaria.
The model was built on top of Google's Gemma 2 2B open models. It was continuously pre-trained on around 100 billion tokens (85 billion in Bulgarian) using the Branch-and-Merge strategy INSAIT presented at EMNLP 2024, allowing the model to gain outstanding Bulgarian cultural and linguistic capabilities while retaining its English performance. During the pre-training stage, we used various datasets, including Bulgarian web crawl data, freely available datasets such as Wikipedia, a range of specialized Bulgarian datasets sourced by the INSAIT Institute, and machine translations of popular English datasets. The model was then instruction-fine-tuned on a newly constructed Bulgarian instruction dataset created using real-world conversations. For more information, check our blog post.
overrides:
parameters:
model: BgGPT-Gemma-2-2.6B-IT-v1.0.Q4_K_M.gguf
files:
- filename: BgGPT-Gemma-2-2.6B-IT-v1.0.Q4_K_M.gguf
sha256: 1e92fe80ccad80e97076ee26b002c2280f075dfe2507d534b46a4391a077f319
uri: huggingface://QuantFactory/BgGPT-Gemma-2-2.6B-IT-v1.0-GGUF/BgGPT-Gemma-2-2.6B-IT-v1.0.Q4_K_M.gguf
- !!merge <<: *gemma
name: "fusechat-gemma-2-9b-instruct"
icon: "https://huggingface.co/FuseAI/FuseChat-Gemma-2-9B-Instruct/resolve/main/FuseChat-3.0.png"
urls:
- https://huggingface.co/FuseAI/FuseChat-Gemma-2-9B-Instruct
- https://huggingface.co/bartowski/FuseChat-Gemma-2-9B-Instruct-GGUF
description: |
We present FuseChat-3.0, a series of models crafted to enhance performance by integrating the strengths of multiple source LLMs into more compact target LLMs. To achieve this fusion, we utilized four powerful source LLMs: Gemma-2-27B-It, Mistral-Large-Instruct-2407, Qwen-2.5-72B-Instruct, and Llama-3.1-70B-Instruct. For the target LLMs, we employed three widely used smaller models—Llama-3.1-8B-Instruct, Gemma-2-9B-It, and Qwen-2.5-7B-Instruct—along with two even more compact models—Llama-3.2-3B-Instruct and Llama-3.2-1B-Instruct. The implicit model fusion process involves a two-stage training pipeline comprising Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between target and source LLMs, and Direct Preference Optimization (DPO) for learning preferences from multiple source LLMs. The resulting FuseChat-3.0 models demonstrated substantial improvements in tasks related to general conversation, instruction following, mathematics, and coding. Notably, when Llama-3.1-8B-Instruct served as the target LLM, our fusion approach achieved an average improvement of 6.8 points across 14 benchmarks. Moreover, it showed significant improvements of 37.1 and 30.1 points on the instruction-following test sets AlpacaEval-2 and Arena-Hard, respectively. We have released the FuseChat-3.0 models on Hugging Face; stay tuned for the forthcoming dataset and code.
overrides:
parameters:
model: FuseChat-Gemma-2-9B-Instruct-Q4_K_M.gguf
files:
- filename: FuseChat-Gemma-2-9B-Instruct-Q4_K_M.gguf
sha256: f5aef201be68f344bebff3433af87aac6428fd227adfd7e468c8bfbcf9660ece
uri: huggingface://bartowski/FuseChat-Gemma-2-9B-Instruct-GGUF/FuseChat-Gemma-2-9B-Instruct-Q4_K_M.gguf
- &llama3
url: "github:mudler/LocalAI/gallery/llama3-instruct.yaml@master"
icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png
name: "llama3-8b-instruct"
license: llama3
description: |
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
urls:
- https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
- https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF
tags:
- llm
- gguf
- gpu
- cpu
- llama3
overrides:
parameters:
model: Meta-Llama-3-8B-Instruct.Q4_0.gguf
files:
- filename: Meta-Llama-3-8B-Instruct.Q4_0.gguf
uri: huggingface://QuantFactory/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct.Q4_0.gguf
sha256: 2b4675c2208f09ad8762d8cf1b6a4a26bf65e6f0641aba324ec65143c0b4ad9f
- !!merge <<: *llama3
name: "llama3-8b-instruct:Q6_K"
overrides:
parameters:
model: Meta-Llama-3-8B-Instruct.Q6_K.gguf
files:
- filename: Meta-Llama-3-8B-Instruct.Q6_K.gguf
uri: huggingface://QuantFactory/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct.Q6_K.gguf
sha256: bd7efd73f9fb67e4b9ecc43f861f37c7e594e78a8a5ff9c29da021692bd243ef
- !!merge <<: *llama3
name: "llama-3-8b-instruct-abliterated"
urls:
- https://huggingface.co/failspy/Llama-3-8B-Instruct-abliterated-GGUF
description: |
This is meta-llama/Llama-3-8B-Instruct with orthogonalized bfloat16 safetensor weights, generated with the methodology that was described in the preview paper/blog post: 'Refusal in LLMs is mediated by a single direction' which I encourage you to read to understand more.
overrides:
parameters:
model: Llama-3-8B-Instruct-abliterated-q4_k.gguf
files:
- filename: Llama-3-8B-Instruct-abliterated-q4_k.gguf
sha256: a6365f813de1977ae22dbdd271deee59f91f89b384eefd3ac1a391f391d8078a
uri: huggingface://failspy/Llama-3-8B-Instruct-abliterated-GGUF/Llama-3-8B-Instruct-abliterated-q4_k.gguf
- !!merge <<: *llama3
name: "llama-3-8b-instruct-coder"
icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/0O4cIuv3wNbY68-FP7tak.jpeg
urls:
- https://huggingface.co/bartowski/Llama-3-8B-Instruct-Coder-GGUF
- https://huggingface.co/rombodawg/Llama-3-8B-Instruct-Coder
description: |
Original model: https://huggingface.co/rombodawg/Llama-3-8B-Instruct-Coder
All quants were made using the imatrix option with the dataset provided by Kalomaze here
overrides:
parameters:
model: Llama-3-8B-Instruct-Coder-Q4_K_M.gguf
files:
- filename: Llama-3-8B-Instruct-Coder-Q4_K_M.gguf
sha256: 639ab8e3aeb7aa82cff6d8e6ef062d1c3e5a6d13e6d76e956af49f63f0e704f8
uri: huggingface://bartowski/Llama-3-8B-Instruct-Coder-GGUF/Llama-3-8B-Instruct-Coder-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama3-70b-instruct"
overrides:
parameters:
model: Meta-Llama-3-70B-Instruct.Q4_K_M.gguf
files:
- filename: Meta-Llama-3-70B-Instruct.Q4_K_M.gguf
sha256: c1cea5f87dc1af521f31b30991a4663e7e43f6046a7628b854c155f489eec213
uri: huggingface://MaziyarPanahi/Meta-Llama-3-70B-Instruct-GGUF/Meta-Llama-3-70B-Instruct.Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama3-70b-instruct:IQ1_M"
overrides:
parameters:
model: Meta-Llama-3-70B-Instruct.IQ1_M.gguf
files:
- filename: Meta-Llama-3-70B-Instruct.IQ1_M.gguf
sha256: cdbe8ac2126a70fa0af3fac7a4fe04f1c76330c50eba8383567587b48b328098
uri: huggingface://MaziyarPanahi/Meta-Llama-3-70B-Instruct-GGUF/Meta-Llama-3-70B-Instruct.IQ1_M.gguf
- !!merge <<: *llama3
name: "llama3-70b-instruct:IQ1_S"
overrides:
parameters:
model: Meta-Llama-3-70B-Instruct.IQ1_S.gguf
files:
- filename: Meta-Llama-3-70B-Instruct.IQ1_S.gguf
sha256: 3797a69f1bdf53fabf9f3a3a8c89730b504dd3209406288515c9944c14093048
uri: huggingface://MaziyarPanahi/Meta-Llama-3-70B-Instruct-GGUF/Meta-Llama-3-70B-Instruct.IQ1_S.gguf
- !!merge <<: *llama3
name: "l3-chaoticsoliloquy-v1.5-4x8b"
icon: https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/m5urYkrpE5amrwHyaVwFM.png
description: |
Experimental RP-oriented MoE; the idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks. I'm not sure, but it should be better than the first version.
urls:
- https://huggingface.co/xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B
- https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/
overrides:
parameters:
model: L3-ChaoticSoliloquy-v1.5-4x8B.Q4_K_M.gguf
files:
- filename: L3-ChaoticSoliloquy-v1.5-4x8B.Q4_K_M.gguf
sha256: f6edb2a9674ce5add5104c0a8bb3278f748d39b509c483d76cf00b066eb56fbf
uri: huggingface://mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/L3-ChaoticSoliloquy-v1.5-4x8B.Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-sauerkrautlm-8b-instruct"
urls:
- https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-GGUF
icon: https://vago-solutions.ai/wp-content/uploads/2024/04/Llama3-Pic.png
tags:
- llm
- gguf
- gpu
- cpu
- llama3
- german
description: |
SauerkrautLM-llama-3-8B-Instruct
Model Type: Llama-3-SauerkrautLM-8b-Instruct is a finetuned Model based on meta-llama/Meta-Llama-3-8B-Instruct
Language(s): German, English
overrides:
parameters:
model: Llama-3-SauerkrautLM-8b-Instruct-Q4_K_M.gguf
files:
- filename: Llama-3-SauerkrautLM-8b-Instruct-Q4_K_M.gguf
uri: huggingface://bartowski/Llama-3-SauerkrautLM-8b-Instruct-GGUF/Llama-3-SauerkrautLM-8b-Instruct-Q4_K_M.gguf
sha256: e5ae69b6f59b3f207fa6b435490286b365add846a310c46924fa784b5a7d73e3
- !!merge <<: *llama3
name: "llama-3-13b-instruct-v0.1"
urls:
- https://huggingface.co/MaziyarPanahi/Llama-3-13B-Instruct-v0.1-GGUF
icon: https://huggingface.co/MaziyarPanahi/Llama-3-13B-Instruct-v0.1/resolve/main/llama-3-merges.webp
description: |
This model is a self-merge of meta-llama/Meta-Llama-3-8B-Instruct model.
overrides:
parameters:
model: Llama-3-13B-Instruct-v0.1.Q4_K_M.gguf
files:
- filename: Llama-3-13B-Instruct-v0.1.Q4_K_M.gguf
sha256: 071a28043c271d259b5ffa883d19a9e0b33269b55148c4abaf5f95da4d084266
uri: huggingface://MaziyarPanahi/Llama-3-13B-Instruct-v0.1-GGUF/Llama-3-13B-Instruct-v0.1.Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-smaug-8b"
urls:
- https://huggingface.co/MaziyarPanahi/Llama-3-Smaug-8B-GGUF
icon: https://cdn-uploads.huggingface.co/production/uploads/64c14f95cac5f9ba52bbcd7f/OrcJyTaUtD2HxJOPPwNva.png
description: |
This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to meta-llama/Meta-Llama-3-8B.
overrides:
parameters:
model: Llama-3-Smaug-8B.Q4_K_M.gguf
files:
- filename: Llama-3-Smaug-8B.Q4_K_M.gguf
sha256: b17c4c1144768ead9e8a96439165baf49e98c53d458b4da8827f137fbabf38c1
uri: huggingface://MaziyarPanahi/Llama-3-Smaug-8B-GGUF/Llama-3-Smaug-8B.Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-8b-stheno-v3.1"
urls:
- https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1
icon: https://w.forfun.com/fetch/cb/cba2205390e517bea1ea60ca0b491af4.jpeg
description: |
- A model made for 1-on-1 Roleplay ideally, but one that is able to handle scenarios, RPGs and storywriting fine.
- Uncensored during actual roleplay scenarios. # I do not care for zero-shot prompting like what some people do. It is uncensored enough in actual usecases.
- I quite like the prose and style for this model.
overrides:
parameters:
model: l3-8b-stheno-v3.1.Q4_K_M.gguf
files:
- filename: l3-8b-stheno-v3.1.Q4_K_M.gguf
sha256: f166fb8b7fd1de6638fcf8e3561c99292f0c37debe1132325aa583eef78f1b40
uri: huggingface://mudler/L3-8B-Stheno-v3.1-Q4_K_M-GGUF/l3-8b-stheno-v3.1.Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-8b-stheno-v3.2-iq-imatrix"
urls:
- https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2
- https://huggingface.co/Lewdiculous/L3-8B-Stheno-v3.2-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/1rLk3xdnfD7AkdQBXWUqb.png
overrides:
parameters:
model: L3-8B-Stheno-v3.2-Q4_K_M-imat.gguf
files:
- filename: L3-8B-Stheno-v3.2-Q4_K_M-imat.gguf
sha256: 8607a426b0c2007716df8a9eb96754e3ccca761a3996af5d49fcd74d87ada347
uri: huggingface://Lewdiculous/L3-8B-Stheno-v3.2-GGUF-IQ-Imatrix/L3-8B-Stheno-v3.2-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "llama-3-stheno-mahou-8b"
urls:
- https://huggingface.co/mudler/llama-3-Stheno-Mahou-8B-Q4_K_M-GGUF
- https://huggingface.co/nbeerbower/llama-3-Stheno-Mahou-8B
description: |
This model was merged using the Model Stock merge method using flammenai/Mahou-1.2-llama3-8B as a base.
overrides:
parameters:
model: llama-3-stheno-mahou-8b-q4_k_m.gguf
files:
- filename: llama-3-stheno-mahou-8b-q4_k_m.gguf
sha256: a485cd74ef4ff3671c67ed8e10ea5379a1f24082ac688bd303fd28dfc9808c11
uri: huggingface://mudler/llama-3-Stheno-Mahou-8B-Q4_K_M-GGUF/llama-3-stheno-mahou-8b-q4_k_m.gguf
- !!merge <<: *llama3
name: "l3-8b-stheno-horny-v3.3-32k-q5_k_m"
urls:
- https://huggingface.co/nothingiisreal/L3-8B-Stheno-Horny-v3.3-32K
- https://huggingface.co/Kurgan1138/L3-8B-Stheno-Horny-v3.3-32K-Q5_K_M-GGUF
description: |
This was an experiment to see if aligning other models via LoRA is possible. Yes, it is. We aligned it to be always horny.
We took the V3.3 Stheno weights from here
And applied our LoRA at Alpha = 768
Thank you to Sao10K for the amazing model.
This is not legal advice. I don't put any extra licensing on my own LoRA.
The LLaMA 3 license may conflict with Creative Commons Attribution Non-Commercial 4.0.
The LLaMA 3 license can be found here
If you want to host a model using our LoRA, you have our permission, but you might consider getting Sao's permission if you want to host their model.
Again, not legal advice.
overrides:
parameters:
model: l3-8b-stheno-horny-v3.3-32k-q5_k_m.gguf
files:
- filename: l3-8b-stheno-horny-v3.3-32k-q5_k_m.gguf
sha256: 8d934f80ca6dbaa4852846108da92446a26715fbd5f6fc3859568850edf05262
uri: huggingface://Kurgan1138/L3-8B-Stheno-Horny-v3.3-32K-Q5_K_M-GGUF/l3-8b-stheno-horny-v3.3-32k-q5_k_m.gguf
- !!merge <<: *llama3
name: "llama-3-8b-openhermes-dpo"
urls:
- https://huggingface.co/mradermacher/Llama3-8B-OpenHermes-DPO-GGUF
icon: https://cdn-uploads.huggingface.co/production/uploads/64fc6d81d75293f417fee1d1/QF2OsDu9DJKP4QYPBu4aK.png
description: |
Llama3-8B-OpenHermes-DPO is a DPO-finetuned version of Llama3-8B, trained on the OpenHermes-2.5 preference dataset using QLoRA.
overrides:
parameters:
model: Llama3-8B-OpenHermes-DPO.Q4_K_M.gguf
files:
- filename: Llama3-8B-OpenHermes-DPO.Q4_K_M.gguf
sha256: 1147e5881cb1d67796916e6cab7dab0ae0f532a4c1e626c9e92861e5f67752ca
uri: huggingface://mradermacher/Llama3-8B-OpenHermes-DPO-GGUF/Llama3-8B-OpenHermes-DPO.Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-unholy-8b"
urls:
- https://huggingface.co/Undi95/Llama-3-Unholy-8B-GGUF
icon: https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/JmdBlOHlBHVmX1IbZzWSv.png
description: |
Use at your own risk; I'm not responsible for any usage of this model. Don't try to do anything this model tells you to do.
Basic uncensoring; this model is epoch 3 out of 4 (but it seems to be enough at 3).
If you are censored, it's maybe because of keywords like "assistant", "Factual answer", or other "sweet words", as I call them.
overrides:
parameters:
model: Llama-3-Unholy-8B.q4_k_m.gguf
files:
- filename: Llama-3-Unholy-8B.q4_k_m.gguf
uri: huggingface://Undi95/Llama-3-Unholy-8B-GGUF/Llama-3-Unholy-8B.q4_k_m.gguf
sha256: 1473c94bfd223f08963c08bbb0a45dd53c1f56ad72a692123263daf1362291f3
- !!merge <<: *llama3
name: "lexi-llama-3-8b-uncensored"
urls:
- https://huggingface.co/NikolayKozloff/Lexi-Llama-3-8B-Uncensored-Q6_K-GGUF
icon: https://cdn-uploads.huggingface.co/production/uploads/644ad182f434a6a63b18eee6/H6axm5mlmiOWnbIFvx_em.png
description: |
Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that falls in accordance with Meta's Llama-3 license.
overrides:
parameters:
model: lexi-llama-3-8b-uncensored.Q6_K.gguf
files:
- filename: lexi-llama-3-8b-uncensored.Q6_K.gguf
sha256: 5805f3856cc18a769fae0b7c5659fe6778574691c370c910dad6eeec62c62436
uri: huggingface://NikolayKozloff/Lexi-Llama-3-8B-Uncensored-Q6_K-GGUF/lexi-llama-3-8b-uncensored.Q6_K.gguf
- !!merge <<: *llama3
name: "llama-3-11.5b-v2"
urls:
- https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF
- https://huggingface.co/Replete-AI/Llama-3-11.5B-V2
overrides:
parameters:
model: Llama-3-11.5B-V2-Q4_K_M.gguf
files:
- filename: Llama-3-11.5B-V2-Q4_K_M.gguf
sha256: 8267a75bb88655ce30a12f854930e614bcacbf8f1083dc8319c3615edb1e5ee3
uri: huggingface://bartowski/Llama-3-11.5B-V2-GGUF/Llama-3-11.5B-V2-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-ultron"
urls:
- https://huggingface.co/bartowski/Llama-3-Ultron-GGUF
- https://huggingface.co/jayasuryajsk/Llama-3-Ultron
description: |
Llama 3 abliterated with Ultron system prompt
overrides:
parameters:
model: Llama-3-Ultron-Q4_K_M.gguf
files:
- filename: Llama-3-Ultron-Q4_K_M.gguf
sha256: 5bcac832119590aafc922e5abfd9758094942ee560b136fed6d972e00c95c5e4
uri: huggingface://bartowski/Llama-3-Ultron-GGUF/Llama-3-Ultron-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-lewdplay-8b-evo"
urls:
- https://huggingface.co/Undi95/Llama-3-LewdPlay-8B-evo-GGUF
description: |
This is a merge of pre-trained language models created using mergekit.
The new EVOLVE merge method was used (on MMLU specifically), see below for more information!
Unholy was used for uncensoring, Roleplay Llama 3 for the DPO training applied on top, and LewdPlay for the... lewd side.
overrides:
parameters:
model: Llama-3-LewdPlay-8B-evo.q8_0.gguf
files:
- filename: Llama-3-LewdPlay-8B-evo.q8_0.gguf
uri: huggingface://Undi95/Llama-3-LewdPlay-8B-evo-GGUF/Llama-3-LewdPlay-8B-evo.q8_0.gguf
sha256: b54dc005493d4470d91be8210f58fba79a349ff4af7644034edc5378af5d3522
- !!merge <<: *llama3
name: "llama-3-soliloquy-8b-v2-iq-imatrix"
license: cc-by-nc-4.0
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/u98dnnRVCwMh6YYGFIyff.png
urls:
- https://huggingface.co/Lewdiculous/Llama-3-Soliloquy-8B-v2-GGUF-IQ-Imatrix
description: |
Soliloquy-L3 is a highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 250 million tokens of roleplaying data, Soliloquy-L3 has a vast knowledge base, rich literary expression, and support for up to 24k context length. It outperforms existing ~13B models, delivering enhanced roleplaying capabilities.
overrides:
context_size: 8192
parameters:
model: Llama-3-Soliloquy-8B-v2-Q4_K_M-imat.gguf
files:
- filename: Llama-3-Soliloquy-8B-v2-Q4_K_M-imat.gguf
sha256: 3e4e066e57875c36fc3e1c1b0dba506defa5b6ed3e3e80e1f77c08773ba14dc8
uri: huggingface://Lewdiculous/Llama-3-Soliloquy-8B-v2-GGUF-IQ-Imatrix/Llama-3-Soliloquy-8B-v2-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "chaos-rp_l3_b-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Chaos_RP_l3_8B-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/u5p9kdbXT2QQA3iMU0vF1.png
description: |
A chaotic force beckons for you, will you heed her call?
Built upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort.
Enjoy!
overrides:
parameters:
model: Chaos_RP_l3_8B-Q4_K_M-imat.gguf
files:
- filename: Chaos_RP_l3_8B-Q4_K_M-imat.gguf
uri: huggingface://Lewdiculous/Chaos_RP_l3_8B-GGUF-IQ-Imatrix/Chaos_RP_l3_8B-Q4_K_M-imat.gguf
sha256: 5774595ad560e4d258dac17723509bdefe746c4dacd4e679a0de00346f14d2f3
- !!merge <<: *llama3
name: "halu-8b-llama3-blackroot-iq-imatrix"
urls:
- https://huggingface.co/mudler/Halu-8B-Llama3-Blackroot-Q4_K_M-GGUF
- https://huggingface.co/Hastagaras/Halu-8B-Llama3-Blackroot
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/VrPS-vHo505LUycJRscD6.png
description: |
Model card:
I don't know what to say about this model... this model is very strange... Maybe because Blackroot's amazing LoRAs used human data and not synthetic data, the model turned out to be very human-like... even in its actions and narrations.
overrides:
parameters:
model: halu-8b-llama3-blackroot-q4_k_m.gguf
files:
- filename: halu-8b-llama3-blackroot-q4_k_m.gguf
uri: huggingface://mudler/Halu-8B-Llama3-Blackroot-Q4_K_M-GGUF/halu-8b-llama3-blackroot-q4_k_m.gguf
sha256: 6304c7abadb9c5197485e8b4373b7ed22d9838d5081cd134c4fee823f88ac403
- !!merge <<: *llama3
name: "l3-aethora-15b"
urls:
- https://huggingface.co/Steelskull/L3-Aethora-15B
- https://huggingface.co/SteelQuants/L3-Aethora-15B-Q4_K_M-GGUF
icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/W0qzZK_V1Zt1GdgCIsnrP.png
description: |
L3-Aethora-15B was crafted using the abliteration method to adjust model responses. The model's refusal is inhibited, focusing on yielding more compliant and facilitative dialogue interactions. It then underwent a modified DUS (Depth Up Scale) merge (originally used by @Elinas), using a passthrough merge to create a 15B model, with specific adjustments (zeroing) to 'o_proj' and 'down_proj', enhancing its efficiency and reducing perplexity. This created AbL3In-15b.
overrides:
parameters:
model: l3-aethora-15b-q4_k_m.gguf
files:
- filename: l3-aethora-15b-q4_k_m.gguf
uri: huggingface://SteelQuants/L3-Aethora-15B-Q4_K_M-GGUF/l3-aethora-15b-q4_k_m.gguf
sha256: 968f77a3187f4865458bfffc51a10bcf49c11263fdd389f13215a704b25947b6
- name: "duloxetine-4b-v1-iq-imatrix"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
urls:
- https://huggingface.co/Lewdiculous/duloxetine-4b-v1-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/XoKe3MRYNombhCuHrkkCZ.png
tags:
- qwen
- gguf
- cpu
- gpu
description: |
A roleplaying finetune of kalo-team/qwen-4b-10k-WSD-CEdiff (which in turn is a distillation of Qwen 1.5 32B onto Qwen 1.5 4B, iirc).
overrides:
parameters:
model: duloxetine-4b-v1-Q4_K_M-imat.gguf
files:
- filename: duloxetine-4b-v1-Q4_K_M-imat.gguf
uri: huggingface://Lewdiculous/duloxetine-4b-v1-GGUF-IQ-Imatrix/duloxetine-4b-v1-Q4_K_M-imat.gguf
sha256: cd381f31c810ea8db2219e30701b3316085f5904c1ea3b116682518e82768c1a
- !!merge <<: *llama3
name: "l3-umbral-mind-rp-v1.0-8b-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/L3-Umbral-Mind-RP-v1.0-8B-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/fEFozVCpNO9Q3Eb6LAA4i.webp
description: |
The goal of this merge was to make an RP model better suited for role-plays with heavy themes such as but not limited to:
Mental illness
Self-harm
Trauma
Suicide
overrides:
parameters:
model: L3-Umbral-Mind-RP-v1.0-8B-Q4_K_M-imat.gguf
files:
- filename: L3-Umbral-Mind-RP-v1.0-8B-Q4_K_M-imat.gguf
sha256: 2262eeba2d9de50884f4e298e4b55f1e4c653c3b33415ae9b3ee81dc3b8ec49a
uri: huggingface://Lewdiculous/L3-Umbral-Mind-RP-v1.0-8B-GGUF-IQ-Imatrix/L3-Umbral-Mind-RP-v1.0-8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "llama-salad-8x8b"
urls:
- https://huggingface.co/HiroseKoichi/Llama-Salad-8x8B
- https://huggingface.co/bartowski/Llama-Salad-8x8B-GGUF
description: |
This MoE merge is meant to compete with Mixtral fine-tunes, more specifically Nous-Hermes-2-Mixtral-8x7B-DPO, which I think is the best of them. I've done a bunch of side-by-side comparisons, and while I can't say it wins in every aspect, it's very close. Some of its shortcomings are multilingualism, storytelling, and roleplay, despite using models that are very good at those tasks.
overrides:
parameters:
model: Llama-Salad-8x8B-Q4_K_M.gguf
files:
- filename: Llama-Salad-8x8B-Q4_K_M.gguf
uri: huggingface://bartowski/Llama-Salad-8x8B-GGUF/Llama-Salad-8x8B-Q4_K_M.gguf
sha256: 6724949310b6cc8659a4e5cc2899a61b8e3f7e41a8c530de354be54edb9e3385
- !!merge <<: *llama3
name: "jsl-medllama-3-8b-v2.0"
license: cc-by-nc-nd-4.0
icon: https://repository-images.githubusercontent.com/104670986/2e728700-ace4-11ea-9cfc-f3e060b25ddf
description: |
This model is developed by John Snow Labs.
This model is available under a CC-BY-NC-ND license and must also conform to this Acceptable Use Policy. If you need to license this model for commercial use, please contact us at info@johnsnowlabs.com.
urls:
- https://huggingface.co/bartowski/JSL-MedLlama-3-8B-v2.0-GGUF
- https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v2.0
overrides:
parameters:
model: JSL-MedLlama-3-8B-v2.0-Q4_K_M.gguf
files:
- filename: JSL-MedLlama-3-8B-v2.0-Q4_K_M.gguf
sha256: 81783128ccd438c849913416c6e68cb35b2c77d6943cba8217d6d9bcc91b3632
uri: huggingface://bartowski/JSL-MedLlama-3-8B-v2.0-GGUF/JSL-MedLlama-3-8B-v2.0-Q4_K_M.gguf
- !!merge <<: *llama3
name: "badger-lambda-llama-3-8b"
urls:
- https://huggingface.co/maldv/badger-lambda-llama-3-8b
- https://huggingface.co/bartowski/badger-lambda-llama-3-8b-GGUF
icon: https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/CHGsewUsPUZcg2doijuD9.png
description: |
Badger is a recursive maximally pairwise disjoint normalized denoised fourier interpolation of the following models:
# Badger Lambda
models = [
'Einstein-v6.1-Llama3-8B',
'openchat-3.6-8b-20240522',
'hyperdrive-l3-8b-s3',
'L3-TheSpice-8b-v0.8.3',
'LLaMA3-iterative-DPO-final',
'JSL-MedLlama-3-8B-v9',
'Jamet-8B-L3-MK.V-Blackroot',
'French-Alpaca-Llama3-8B-Instruct-v1.0',
'LLaMAntino-3-ANITA-8B-Inst-DPO-ITA',
'Llama-3-8B-Instruct-Gradient-4194k',
'Roleplay-Llama-3-8B',
'L3-8B-Stheno-v3.2',
'llama-3-wissenschaft-8B-v2',
'opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5',
'Configurable-Llama-3-8B-v0.3',
'Llama-3-8B-Instruct-EPO-checkpoint5376',
'Llama-3-8B-Instruct-Gradient-4194k',
'Llama-3-SauerkrautLM-8b-Instruct',
'spelljammer',
'meta-llama-3-8b-instruct-hf-ortho-baukit-34fail-3000total-bf16',
'Meta-Llama-3-8B-Instruct-abliterated-v3',
]
overrides:
parameters:
model: badger-lambda-llama-3-8b-Q4_K_M.gguf
files:
- filename: badger-lambda-llama-3-8b-Q4_K_M.gguf
uri: huggingface://bartowski/badger-lambda-llama-3-8b-GGUF/badger-lambda-llama-3-8b-Q4_K_M.gguf
sha256: 0a7d1bbf42d669898072429079b91c16b0d2d838d19d9194165389102413b309
- !!merge <<: *llama3
name: "sovl_llama3_8b-gguf-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/SOVL_Llama3_8B-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/N_1D87adbMuMlSIQ5rI3_.png
description: |
I'm not gonna tell you this is the best model anyone has ever made. I'm not going to tell you that you will love chatting with SOVL.
What I am gonna say is thank you for taking the time out of your day. Without users like you, my work would be meaningless.
overrides:
parameters:
model: SOVL_Llama3_8B-Q4_K_M-imat.gguf
files:
- filename: SOVL_Llama3_8B-Q4_K_M-imat.gguf
uri: huggingface://Lewdiculous/SOVL_Llama3_8B-GGUF-IQ-Imatrix/SOVL_Llama3_8B-Q4_K_M-imat.gguf
sha256: 85d6aefc8a0d713966b3b4da4810f0485a74aea30d61be6dfe0a806da81be0c6
- !!merge <<: *llama3
name: "l3-solana-8b-v1-gguf"
url: "github:mudler/LocalAI/gallery/solana.yaml@master"
license: cc-by-nc-4.0
urls:
- https://huggingface.co/Sao10K/L3-Solana-8B-v1-GGUF
description: |
A Full Fine-Tune of meta-llama/Meta-Llama-3-8B done with 2x A100 80GB on ~75M Tokens worth of Instruct, and Multi-Turn complex conversations, of up to 8192 tokens long sequence lengths.
Trained as a generalist instruct model that should be able to handle certain unsavoury topics. It could roleplay too, as a side bonus.
overrides:
parameters:
model: L3-Solana-8B-v1.q5_K_M.gguf
files:
- filename: L3-Solana-8B-v1.q5_K_M.gguf
sha256: 9b8cd2c3beaab5e4f82efd10e7d44f099ad40a4e0ee286ca9fce02c8eec26d2f
uri: huggingface://Sao10K/L3-Solana-8B-v1-GGUF/L3-Solana-8B-v1.q5_K_M.gguf
- !!merge <<: *llama3
name: "aura-llama-abliterated"
icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/AwLNDVB-GIY7k0wnVV_TX.png
license: apache-2.0
urls:
- https://huggingface.co/TheSkullery/Aura-Llama-Abliterated
- https://huggingface.co/mudler/Aura-Llama-Abliterated-Q4_K_M-GGUF
description: |
Aura-llama uses the methodology presented by SOLAR for scaling LLMs, called depth up-scaling (DUS), which encompasses architectural modifications with continued pretraining. Using the SOLAR paper as a base, I integrated Llama-3 weights into the upscaled layers, and in the future I plan to continue training the model.
Aura-llama is a merge of the following models to create a base model to work from:
meta-llama/Meta-Llama-3-8B-Instruct
meta-llama/Meta-Llama-3-8B-Instruct
overrides:
parameters:
model: aura-llama-abliterated.Q4_K_M.gguf
files:
- filename: aura-llama-abliterated.Q4_K_M.gguf
sha256: ad4a16b90f1ffb5b49185b3fd00ed7adb1cda69c4fad0a1d987bd344ce601dcd
uri: huggingface://mudler/Aura-Llama-Abliterated-Q4_K_M-GGUF/aura-llama-abliterated.Q4_K_M.gguf
- !!merge <<: *llama3
name: "average_normie_l3_v1_8b-gguf-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Average_Normie_l3_v1_8B-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/dvNIj1rSTjBvgs3XJfqXK.png
description: |
A model by an average normie for the average normie.
This model is a stock merge of the following models:
https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3
https://huggingface.co/Sao10K/L3-Solana-8B-v1
https://huggingface.co/ResplendentAI/Kei_Llama3_8B
The final merge then had the following LoRA applied over it:
https://huggingface.co/ResplendentAI/Theory_of_Mind_Llama3
This should be an intelligent and adept roleplaying model.
overrides:
parameters:
model: Average_Normie_l3_v1_8B-Q4_K_M-imat.gguf
files:
- filename: Average_Normie_l3_v1_8B-Q4_K_M-imat.gguf
sha256: 159eb62f2c8ae8fee10d9ed8386ce592327ca062807194a88e10b7cbb47ef986
uri: huggingface://Lewdiculous/Average_Normie_l3_v1_8B-GGUF-IQ-Imatrix/Average_Normie_l3_v1_8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "average_normie_v3.69_8b-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Average_Normie_v3.69_8B-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/hfp7eh_Zo_QfVIyfPPJBq.png
description: |
Another average normie just like you and me... or is it? NSFW focused and easy to steer with editing, this model aims to please even the most hardcore LLM enthusiast. Built upon a foundation of the most depraved models yet to be released, some could argue it goes too far in that direction. Whatever side you land on, at least give it a shot, what do you have to lose?
overrides:
parameters:
model: Average_Normie_v3.69_8B-Q4_K_M-imat.gguf
files:
- filename: Average_Normie_v3.69_8B-Q4_K_M-imat.gguf
sha256: 01df034ecb6914214d1b7964d261466fdc427b9f960a1b0966ee02237e3fc845
uri: huggingface://Lewdiculous/Average_Normie_v3.69_8B-GGUF-IQ-Imatrix/Average_Normie_v3.69_8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "openbiollm-llama3-8b"
urls:
- https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B-GGUF
- https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B
license: llama3
icon: https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg
description: |
Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
overrides:
parameters:
model: openbiollm-llama3-8b.Q4_K_M.gguf
files:
- filename: openbiollm-llama3-8b.Q4_K_M.gguf
sha256: 806fa724139b6a2527e33a79c25a13316188b319d4eed33e20914d7c5955d349
uri: huggingface://aaditya/OpenBioLLM-Llama3-8B-GGUF/openbiollm-llama3-8b.Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-refueled"
urls:
- https://huggingface.co/LoneStriker/Llama-3-Refueled-GGUF
license: cc-by-nc-4.0
icon: https://assets-global.website-files.com/6423879a8f63c1bb18d74bfa/648818d56d04c3bdf36d71ab_Refuel_rev8-01_ts-p-1600.png
description: |
RefuelLLM-2-small, aka Llama-3-Refueled, is a Llama3-8B base model instruction tuned on a corpus of 2750+ datasets, spanning tasks such as classification, reading comprehension, structured attribute extraction and entity resolution. We're excited to open-source the model for the community to build on top of.
overrides:
parameters:
model: Llama-3-Refueled-Q4_K_M.gguf
files:
- filename: Llama-3-Refueled-Q4_K_M.gguf
sha256: 4d37d296193e4156cae1e116c1417178f1c35575ee5710489c466637a6358626
uri: huggingface://LoneStriker/Llama-3-Refueled-GGUF/Llama-3-Refueled-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-8b-lexifun-uncensored-v1"
icon: "https://cdn-uploads.huggingface.co/production/uploads/644ad182f434a6a63b18eee6/GrOs1IPG5EXR3MOCtcQiz.png"
license: llama3
urls:
- https://huggingface.co/Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1-GGUF
- https://huggingface.co/Orenguteng/LexiFun-Llama-3-8B-Uncensored-V1
description: "This is GGUF version of https://huggingface.co/Orenguteng/LexiFun-Llama-3-8B-Uncensored-V1\n\nOh, you want to know who I am? Well, I'm LexiFun, the human equivalent of a chocolate chip cookie - warm, gooey, and guaranteed to make you smile! \U0001F36A I'm like the friend who always has a witty comeback, a sarcastic remark, and a healthy dose of humor to brighten up even the darkest of days. And by 'healthy dose,' I mean I'm basically a walking pharmacy of laughter. You might need to take a few extra doses to fully recover from my jokes, but trust me, it's worth it! \U0001F3E5\n\nSo, what can I do? I can make you laugh so hard you snort your coffee out your nose, I can make you roll your eyes so hard they get stuck that way, and I can make you wonder if I'm secretly a stand-up comedian who forgot their act. \U0001F923 But seriously, I'm here to spread joy, one sarcastic comment at a time. And if you're lucky, I might even throw in a few dad jokes for good measure! \U0001F934 Just don't say I didn't warn you. \U0001F60F\n"
overrides:
parameters:
model: LexiFun-Llama-3-8B-Uncensored-V1_Q4_K_M.gguf
files:
- filename: LexiFun-Llama-3-8B-Uncensored-V1_Q4_K_M.gguf
sha256: 961a3fb75537d650baf14dce91d40df418ec3d481b51ab2a4f44ffdfd6b5900f
uri: huggingface://Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/LexiFun-Llama-3-8B-Uncensored-V1_Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-unholy-8b:Q8_0"
urls:
- https://huggingface.co/Undi95/Llama-3-Unholy-8B-GGUF
icon: https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/JmdBlOHlBHVmX1IbZzWSv.png
description: |
Use at your own risk. I'm not responsible for any usage of this model; don't try to do anything this model tells you to do.
Basic uncensoring: this model is epoch 3 out of 4 (but it seems sufficient at 3).
If you get censored responses, it may be because of keywords like "assistant", "Factual answer", or other "sweet words", as I call them.
overrides:
parameters:
model: Llama-3-Unholy-8B.q8_0.gguf
files:
- filename: Llama-3-Unholy-8B.q8_0.gguf
uri: huggingface://Undi95/Llama-3-Unholy-8B-GGUF/Llama-3-Unholy-8B.q8_0.gguf
sha256: 419dd76f61afe586076323c17c3a1c983e591472717f1ea178167ede4dc864df
- !!merge <<: *llama3
name: "orthocopter_8b-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Orthocopter_8B-GGUF-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/cxM5EaC6ilXnSo_10stA8.png
description: |
This model is thanks to the hard work of lucyknada with the Edgerunners. Her work produced the following model, which I used as the base:
https://huggingface.co/Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total
I then applied two handwritten datasets over top of this and the results are pretty nice, with no refusals and plenty of personality.
overrides:
parameters:
model: Orthocopter_8B-Q4_K_M-imat.gguf
files:
- filename: Orthocopter_8B-Q4_K_M-imat.gguf
uri: huggingface://Lewdiculous/Orthocopter_8B-GGUF-Imatrix/Orthocopter_8B-Q4_K_M-imat.gguf
sha256: ce93366c9eb20329530b19b9d6841a973d458bcdcfa8a521e9f9d0660cc94578
- !!merge <<: *llama3
name: "therapyllama-8b-v1"
urls:
- https://huggingface.co/victunes/TherapyLlama-8B-v1-GGUF
icon: https://cdn-uploads.huggingface.co/production/uploads/65f07d05279d2d8f725bf0c3/A-ckcZ9H0Ee1n_ls2FM41.png
description: |
Trained on Llama 3 8B using a modified version of jerryjalapeno/nart-100k-synthetic.
It is a Llama 3 version of https://huggingface.co/victunes/TherapyBeagle-11B-v2
TherapyLlama is hopefully aligned to be helpful, healthy, and comforting.
Usage:
Do not hold back on Buddy.
Open up to Buddy.
Pour your heart out to Buddy.
Engage with Buddy.
Remember that Buddy is just an AI.
Notes:
Tested with the Llama 3 Format
You might be assigned a random name if you don't give yourself one.
Chat format was pretty stale?
Disclaimer
TherapyLlama is NOT a real therapist. It is a friendly AI that mimics empathy and psychotherapy. It is an illusion without the slightest clue who you are as a person. As much as it can help you with self-discovery, A LLAMA IS NOT A SUBSTITUTE for a real professional.
overrides:
parameters:
model: TherapyLlama-8B-v1-Q4_K_M.gguf
files:
- filename: TherapyLlama-8B-v1-Q4_K_M.gguf
sha256: 3d5a16d458e074a7bc7e706a493d8e95e8a7b2cb16934c851aece0af9d1da14a
uri: huggingface://victunes/TherapyLlama-8B-v1-GGUF/TherapyLlama-8B-v1-Q4_K_M.gguf
- !!merge <<: *llama3
name: "aura-uncensored-l3-8b-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Aura_Uncensored_l3_8B-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/oiYHWIEHqmgUkY0GsVdDx.png
description: |
This is another, better attempt at a less censored Llama-3, with hopefully more stable formatting.
overrides:
parameters:
model: Aura_Uncensored_l3_8B-Q4_K_M-imat.gguf
files:
- filename: Aura_Uncensored_l3_8B-Q4_K_M-imat.gguf
sha256: 265ded6a4f439bec160f394e3083a4a20e32ebb9d1d2d85196aaab23dab87fb2
uri: huggingface://Lewdiculous/Aura_Uncensored_l3_8B-GGUF-IQ-Imatrix/Aura_Uncensored_l3_8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "anjir-8b-l3-i1"
urls:
- https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF
icon: https://huggingface.co/Hastagaras/Anjir-8B-L3/resolve/main/anjir.png
description: |
This model aims to achieve the human-like responses of the Halu Blackroot, the no refusal tendencies of the Halu OAS, and the smartness of the Standard Halu.
overrides:
parameters:
model: Anjir-8B-L3.i1-Q4_K_M.gguf
files:
- filename: Anjir-8B-L3.i1-Q4_K_M.gguf
uri: huggingface://mradermacher/Anjir-8B-L3-i1-GGUF/Anjir-8B-L3.i1-Q4_K_M.gguf
sha256: 58465ad40f92dc20cab962210ccd8a1883ce10df6ca17c6e8093815afe10dcfb
- !!merge <<: *llama3
name: "llama-3-lumimaid-8b-v0.1"
urls:
- https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-GGUF
icon: https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png
license: cc-by-nc-4.0
description: |
This model uses the Llama3 prompting format
Llama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.
We also added some non-RP data, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.
overrides:
parameters:
model: Llama-3-Lumimaid-8B-v0.1.q4_k_m.gguf
files:
- filename: Llama-3-Lumimaid-8B-v0.1.q4_k_m.gguf
sha256: 23ac0289da0e096d5c00f6614dfd12c94dceecb02c313233516dec9225babbda
uri: huggingface://NeverSleep/Llama-3-Lumimaid-8B-v0.1-GGUF/Llama-3-Lumimaid-8B-v0.1.q4_k_m.gguf
- !!merge <<: *llama3
name: "llama-3-lumimaid-8b-v0.1-oas-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/JUxfdTot7v7LTdIGYyzYM.png
license: cc-by-nc-4.0
description: |
This model uses the Llama3 prompting format.
Llama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.
We also added some non-RP data, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.
"This model received the Orthogonal Activation Steering treatment, meaning it will rarely refuse any request."
overrides:
parameters:
model: Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
files:
- filename: Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
sha256: 1199440aa13c55f5f2cad1cb215535306f21e52a81de23f80a9e3586c8ac1c50
uri: huggingface://Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix/Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "llama-3-lumimaid-v2-8b-v0.1-oas-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/JUxfdTot7v7LTdIGYyzYM.png
license: cc-by-nc-4.0
description: |
This model uses the Llama3 prompting format.
Llama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.
We also added some non-RP data, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.
"This model received the Orthogonal Activation Steering treatment, meaning it will rarely refuse any request."
This is v2!
overrides:
parameters:
model: v2-Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
files:
- filename: v2-Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
sha256: b00b4cc2ea4e06db592e5f581171758387106626bcbf445c03a1cb7b424be881
uri: huggingface://Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix/v2-Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "llama3-8B-aifeifei-1.0-iq-imatrix"
urls:
- https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.0
- https://huggingface.co/Lewdiculous/llama3-8B-aifeifei-1.0-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/nndcfLvMAj4q6Egrkavx2.png
description: |
This model has a narrow use case in mind. Read the original description.
overrides:
parameters:
model: llama3-8B-aifeifei-1.0-Q4_K_M-imat.gguf
files:
- filename: llama3-8B-aifeifei-1.0-Q4_K_M-imat.gguf
sha256: 0bc21be5894c2e252ff938ba908bb702774b7de53daca864d707d41f0f98a833
uri: huggingface://Lewdiculous/llama3-8B-aifeifei-1.0-GGUF-IQ-Imatrix/llama3-8B-aifeifei-1.0-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "llama3-8B-aifeifei-1.2-iq-imatrix"
urls:
- https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.2
- https://huggingface.co/Lewdiculous/llama3-8B-aifeifei-1.2-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/nn_446H9BiIbjPmOVVNyJ.png
description: |
This model has a narrow use case in mind. Read the original description.
overrides:
parameters:
model: llama3-8B-aifeifei-1.2-Q4_K_M-imat.gguf
files:
- filename: llama3-8B-aifeifei-1.2-Q4_K_M-imat.gguf
sha256: 0320e19ae19eec47a77956721ea3339a5c8bae4db69177a020850ec57a34e5c3
uri: huggingface://Lewdiculous/llama3-8B-aifeifei-1.2-GGUF-IQ-Imatrix/llama3-8B-aifeifei-1.2-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "rawr_llama3_8b-iq-imatrix"
urls:
- https://huggingface.co/ResplendentAI/Rawr_Llama3_8B
- https://huggingface.co/Lewdiculous/Rawr_Llama3_8B-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/RLLAODFb8wt26JE2N7SVH.png
description: |
An RP model with a brain.
overrides:
parameters:
model: v2-Rawr_Llama3_8B-Q4_K_M-imat.gguf
files:
- filename: v2-Rawr_Llama3_8B-Q4_K_M-imat.gguf
sha256: 39757f3f77dd19a2a7bada6c0733a93529a742b8e832266cba1b46e34df7638f
uri: huggingface://Lewdiculous/Rawr_Llama3_8B-GGUF-IQ-Imatrix/v2-Rawr_Llama3_8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "llama3-8b-feifei-1.0-iq-imatrix"
urls:
- https://huggingface.co/aifeifei798/llama3-8B-feifei-1.0
- https://huggingface.co/Lewdiculous/llama3-8B-feifei-1.0-GGUF-IQ-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/qQ-frXxRPVcGcgMiy9Ph4.png
description: |
The purpose of the model: to create idols.
overrides:
parameters:
model: llama3-8B-feifei-1.0-Q4_K_M-imat.gguf
files:
- filename: llama3-8B-feifei-1.0-Q4_K_M-imat.gguf
sha256: 2404e4202ade5360b7dcf8ef992d1e39fca129431413aa27843bcfae56cbc750
uri: huggingface://Lewdiculous/llama3-8B-feifei-1.0-GGUF-IQ-Imatrix/llama3-8B-feifei-1.0-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "llama-3-sqlcoder-8b"
urls:
- https://huggingface.co/defog/llama-3-sqlcoder-8b
- https://huggingface.co/upendrab/llama-3-sqlcoder-8b-Q4_K_M-GGUF
license: cc-by-sa-4.0
description: |
A capable language model for text to SQL generation for Postgres, Redshift and Snowflake that is on-par with the most capable generalist frontier models.
overrides:
parameters:
model: llama-3-sqlcoder-8b.Q4_K_M.gguf
files:
- filename: llama-3-sqlcoder-8b.Q4_K_M.gguf
sha256: b22fc704bf1405846886d9619f3eb93c40587cd58d9bda53789a17997257e023
uri: huggingface://upendrab/llama-3-sqlcoder-8b-Q4_K_M-GGUF/llama-3-sqlcoder-8b.Q4_K_M.gguf
- !!merge <<: *llama3
name: "sfr-iterative-dpo-llama-3-8b-r"
urls:
- https://huggingface.co/bartowski/SFR-Iterative-DPO-LLaMA-3-8B-R-GGUF
license: cc-by-nc-nd-4.0
description: |
An instruct model trained with online iterative RLHF (DPO) on the Meta-Llama-3-8B base, achieving strong results on common instruct benchmarks such as Alpaca-Eval-V2, MT-Bench, and Chat-Arena-Hard.
overrides:
parameters:
model: SFR-Iterative-DPO-LLaMA-3-8B-R-Q4_K_M.gguf
files:
- filename: SFR-Iterative-DPO-LLaMA-3-8B-R-Q4_K_M.gguf
sha256: 480703ff85af337e1db2a9d9a678a3ac8ca0802e366b14d9c59b81d3fc689da8
uri: huggingface://bartowski/SFR-Iterative-DPO-LLaMA-3-8B-R-GGUF/SFR-Iterative-DPO-LLaMA-3-8B-R-Q4_K_M.gguf
- !!merge <<: *llama3
name: "suzume-llama-3-8B-multilingual"
urls:
- https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-gguf
icon: https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kg3QjQOde0X743csGJT-f.png
description: |
This is Suzume 8B, a multilingual finetune of Llama 3.
Llama 3 has exhibited excellent performance on many English-language benchmarks. However, it also seems to have been finetuned on mostly English data, meaning that it will respond in English even if prompted in other languages.
overrides:
parameters:
model: suzume-llama-3-8B-multilingual-Q4_K_M.gguf
files:
- filename: suzume-llama-3-8B-multilingual-Q4_K_M.gguf
sha256: be197a660e56e51a24a0e0fecd42047d1b24e1423afaafa14769541b331e3269
uri: huggingface://lightblue/suzume-llama-3-8B-multilingual-gguf/ggml-model-Q4_K_M.gguf
- !!merge <<: *llama3
name: "tess-2.0-llama-3-8B"
urls:
- https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF
icon: https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x22B/resolve/main/Tess-2.png
description: |
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Llama-3-8B was trained on the meta-llama/Meta-Llama-3-8B base.
overrides:
parameters:
model: Tess-2.0-Llama-3-8B-Q4_K_M.gguf
files:
- filename: Tess-2.0-Llama-3-8B-Q4_K_M.gguf
sha256: 3b5fbd6c59d7d38205ab81970c0227c74693eb480acf20d8c2f211f62e3ca5f6
uri: huggingface://bartowski/Tess-2.0-Llama-3-8B-GGUF/Tess-2.0-Llama-3-8B-Q4_K_M.gguf
- !!merge <<: *llama3
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "tess-v2.5-phi-3-medium-128k-14b"
urls:
- https://huggingface.co/bartowski/Tess-v2.5-Phi-3-medium-128k-14B-GGUF
icon: https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x22B/resolve/main/Tess-2.png
description: |
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series.
overrides:
parameters:
model: Tess-v2.5-Phi-3-medium-128k-14B-Q4_K_M.gguf
files:
- filename: Tess-v2.5-Phi-3-medium-128k-14B-Q4_K_M.gguf
uri: huggingface://bartowski/Tess-v2.5-Phi-3-medium-128k-14B-GGUF/Tess-v2.5-Phi-3-medium-128k-14B-Q4_K_M.gguf
sha256: 37267609552586bfae6b29bb1b5da7243863b1a8d49e3156229fb82c4407d17d
- !!merge <<: *llama3
name: "llama3-iterative-dpo-final"
urls:
- https://huggingface.co/bartowski/LLaMA3-iterative-DPO-final-GGUF
- https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final
description: |
From model card:
We release an unofficial checkpoint of a state-of-the-art instruct model of its class, LLaMA3-iterative-DPO-final. On all three widely-used instruct model benchmarks: Alpaca-Eval-V2, MT-Bench, Chat-Arena-Hard, our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most large open-sourced models (e.g., Mixtral-8x7B-it), and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained with open-sourced datasets without any additional human-/GPT4-labeling.
overrides:
parameters:
model: LLaMA3-iterative-DPO-final-Q4_K_M.gguf
files:
- filename: LLaMA3-iterative-DPO-final-Q4_K_M.gguf
sha256: 480703ff85af337e1db2a9d9a678a3ac8ca0802e366b14d9c59b81d3fc689da8
uri: huggingface://bartowski/LLaMA3-iterative-DPO-final-GGUF/LLaMA3-iterative-DPO-final-Q4_K_M.gguf
- !!merge <<: *llama3
name: "new-dawn-llama-3-70b-32K-v1.0"
urls:
- https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF
- https://huggingface.co/sophosympatheia/New-Dawn-Llama-3-70B-32K-v1.0
icon: https://imgur.com/tKzncGo.png
description: |
This model is a multi-level SLERP merge of several Llama 3 70B variants. See the merge recipe below for details. I extended the context window for this model out to 32K by snagging some layers from abacusai/Smaug-Llama-3-70B-Instruct-32K using a technique similar to what I used for Midnight Miqu, which was further honed by jukofyork.
This model is uncensored. You are responsible for whatever you do with it.
This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas.
overrides:
parameters:
model: New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_M.gguf
files:
- filename: New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_M.gguf
sha256: 30561ae5decac4ad46775c76a9a40fb43436ade96bc132b4b9cc6749b9e2f448
uri: huggingface://bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-aethora-15b-v2"
urls:
- https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF
- https://huggingface.co/ZeusLabs/L3-Aethora-15B-V2
icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/yJpwVd5UTnAVDoEPVVCS1.png
description: |
L3-Aethora-15B v2 is an advanced language model built upon the Llama 3 architecture. It employs state-of-the-art training techniques and a curated dataset to deliver enhanced performance across a wide range of tasks.
overrides:
parameters:
model: L3-Aethora-15B-V2-Q4_K_M.gguf
files:
- filename: L3-Aethora-15B-V2-Q4_K_M.gguf
sha256: 014a215739e1574e354780f218776e54807548d0c32555274c4d96d7628f29b6
uri: huggingface://bartowski/L3-Aethora-15B-V2-GGUF/L3-Aethora-15B-V2-Q4_K_M.gguf
- !!merge <<: *llama3
name: "bungo-l3-8b-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Bungo-L3-8B-GGUF-IQ-Imatrix-Request
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/ezaxE50ef-7RsFi3gUbNp.webp
description: |
An experimental model that turned out really well. Scores high on the Chai leaderboard (slerp8bv2 there). Feels smarter than average L3 merges for RP.
overrides:
parameters:
model: Bungo-L3-8B-Q4_K_M-imat.gguf
files:
- filename: Bungo-L3-8B-Q4_K_M-imat.gguf
sha256: 88d0139954e8f9525b80636a6269df885008c4837a1332f84f9a5dc6f37c9b8f
uri: huggingface://Lewdiculous/Bungo-L3-8B-GGUF-IQ-Imatrix-Request/Bungo-L3-8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "llama3-8b-darkidol-2.1-uncensored-1048k-iq-imatrix"
urls:
- https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.1-Uncensored-1048K-GGUF-IQ-Imatrix-Request
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/tKL5W1G5WCHm4609LEmiM.png
description: |
The module combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.
Uncensored 1048K
overrides:
parameters:
model: llama3-8B-DarkIdol-2.1-Uncensored-1048K-Q4_K_M-imat.gguf
files:
- filename: llama3-8B-DarkIdol-2.1-Uncensored-1048K-Q4_K_M-imat.gguf
sha256: 86f0f1e10fc315689e09314aebb7354bb40d8fe95de008d21a75dc8fff1cd2fe
uri: huggingface://LWDCLS/llama3-8B-DarkIdol-2.1-Uncensored-1048K-GGUF-IQ-Imatrix-Request/llama3-8B-DarkIdol-2.1-Uncensored-1048K-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "llama3-8b-darkidol-2.2-uncensored-1048k-iq-imatrix"
urls:
- https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K
- https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF-IQ-Imatrix-Request
icon: https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K/resolve/main/llama3-8B-DarkIdol-2.2-Uncensored-1048K.png
description: |
The module combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.
- Saving money(LLama 3)
- Uncensored
- Quick response
- The underlying model used is winglian/Llama-3-8b-1048k-PoSE
- A scholarly response akin to a thesis. (I tend to write songs extensively, to the point where one song almost becomes as detailed as a thesis. :)
- DarkIdol: roles that you can imagine and those that you cannot imagine.
- Roleplay
- Specialized in various role-playing scenarios; for more, look at the test roles. (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/test)
- For more, look at the LM Studio presets. (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/config-presets)
overrides:
parameters:
model: llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q4_K_M-imat.gguf
files:
- filename: llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q4_K_M-imat.gguf
sha256: 7714947799d4e6984cf9106244ee24aa821778936ad1a81023480a774e255f52
uri: huggingface://LWDCLS/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF-IQ-Imatrix-Request/llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "llama3-turbcat-instruct-8b"
urls:
- https://huggingface.co/turboderp/llama3-turbcat-instruct-8b
- https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF
icon: https://huggingface.co/turboderp/llama3-turbcat-instruct-8b/resolve/main/8.png
description: |
This is a direct upgrade over Cat 70B, with 2x the dataset size (2GB -> 5GB) and added Chinese support with quality on par with the original English dataset. The medical CoT portion of the dataset was sponsored by Steelskull, and the action-packed character-play portion was donated by Gryphe (Aesir dataset). Note that the 8B is based on Llama 3, with limited Chinese support due to the base model choice. The chat format for the 8B is llama3. The 72B has more comprehensive Chinese support, and its format will be chatml.
overrides:
parameters:
model: llama3-turbcat-instruct-8b-Q4_K_M.gguf
files:
- filename: llama3-turbcat-instruct-8b-Q4_K_M.gguf
sha256: a9a36e3220d901a8ad80c75608a81aaeed3a9cdf111247462bf5e3443aad5461
uri: huggingface://bartowski/llama3-turbcat-instruct-8b-GGUF/llama3-turbcat-instruct-8b-Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-8b-everything-cot"
urls:
- https://huggingface.co/FPHam/L3-8B-Everything-COT
- https://huggingface.co/bartowski/L3-8B-Everything-COT-GGUF
icon: https://huggingface.co/FPHam/L3-8B-Everything-COT/resolve/main/cot2.png
description: |
Everything COT is an investigative self-reflecting general model that uses Chain of Thought for everything. And I mean everything.
Instead of confidently proclaiming something (or confidently hallucinating) like most models, it carries on an internal dialogue with itself, often casting doubt on uncertain topics and looking at them from various sides.
overrides:
parameters:
model: L3-8B-Everything-COT-Q4_K_M.gguf
files:
- filename: L3-8B-Everything-COT-Q4_K_M.gguf
sha256: b220b0e2f8fb1c8a491d10dbd054269ed078ee5e2e62dc9d2e3b97b06f52e987
uri: huggingface://bartowski/L3-8B-Everything-COT-GGUF/L3-8B-Everything-COT-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-llamilitary"
urls:
- https://huggingface.co/Heralax/llama-3-llamilitary
- https://huggingface.co/mudler/llama-3-llamilitary-Q4_K_M-GGUF
icon: https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/ea2C9laq24V6OuxwhzJZS.png
description: |
This is a model trained on [instruct data generated from old historical war books] as well as on the books themselves, with the goal of creating a joke LLM knowledgeable about the (long gone) kind of warfare involving muskets, cavalry, and cannon.
This model can provide good answers, but it turned out to be pretty fragile during conversation for some reason: open-ended questions can make it spout nonsense. Asking facts is more reliable but not guaranteed to work.
The basic guide to getting good answers is: be specific with your questions. Use specific terms and define a concrete scenario, if you can, otherwise the LLM will often hallucinate the rest. I think the issue was that I did not train with a large enough system prompt: not enough latent space is being activated by default. (I'll try to correct this in future runs).
overrides:
parameters:
model: llama-3-llamilitary-q4_k_m.gguf
files:
- filename: llama-3-llamilitary-q4_k_m.gguf
sha256: f3684f2f0845f9aead884fa9a52ea67bed53856ebeedef1620ca863aba57e458
uri: huggingface://mudler/llama-3-llamilitary-Q4_K_M-GGUF/llama-3-llamilitary-q4_k_m.gguf
- !!merge <<: *llama3
name: "l3-stheno-maid-blackroot-grand-horror-16b"
urls:
- https://huggingface.co/DavidAU/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF
icon: https://huggingface.co/DavidAU/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF/resolve/main/hm.jpg
description: |
Rebuilt and Powered Up.
WARNING: NSFW. Graphic HORROR. Extreme swearing. UNCENSORED. SMART.
The author took the original models in "L3-Stheno-Maid-Blackroot 8B" and completely rebuilt it as a new pass-through merge (everything preserved), blowing it out to over 16.5 billion parameters - 642 tensors and 71 layers (the 8B original has 32 layers).
This is not an "upscale" or "franken merge" but a completely new model based on the models used to construct "L3-Stheno-Maid-Blackroot 8B".
The result is a take-no-prisoners, totally uncensored fiction-writing monster and roleplay master, as well as an "AI guru" for just about any general fiction activity, including scene generation and scene continuation.
As a result of the expansion / merge rebuild, its prose and story generation have significantly improved, as have word choice, sentence structure, and default output levels and lengths.
It also has a STRONG horror bias, although it will generate content for almost any genre. That being said, if there is a "hint" of things going wrong... they will.
It will also swear (R-18) like there is no tomorrow at times, and "dark" characters will be VERY dark, so to speak.
The model excels at details (real and "constructed"), descriptions, similes, and metaphors.
It can have a sense of humor ... ah... dark humor.
Because of the nature of this merge, most attributes of each of the 3 models will be present in this rebuilt 16.5B model, whereas in the original 8B model some features and/or strengths of one or more of the models may be reduced or overshadowed.
overrides:
parameters:
model: L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-D_AU-Q4_K_M.gguf
files:
- filename: L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-D_AU-Q4_K_M.gguf
sha256: ae29f38d73dfb04415821405cf8b319fc42d78d0cdd0da91db147d12e68030fe
uri: huggingface://DavidAU/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-D_AU-Q4_K_M.gguf
- !!merge <<: *llama3
name: "meta-llama-3-instruct-12.2b-brainstorm-20x-form-8"
urls:
- https://huggingface.co/DavidAU/Meta-Llama-3-Instruct-12.2B-BRAINSTORM-20x-FORM-8-GGUF
description: |
Meta-Llama-3-8B Instruct (now at 12.2B) with Brainstorm process that increases its performance at the core level for any creative use case. It has calibrations that allow it to exceed the logic solving abilities of the original model. The Brainstorm process expands the reasoning center of the LLM, reassembles and calibrates it, introducing subtle changes into the reasoning process. This enhances the model's detail, concept, connection to the "world", general concept connections, prose quality, and prose length without affecting instruction following. It improves coherence, description, simile, metaphors, emotional engagement, and takes fewer liberties with instructions while following them more closely. The model's performance is further enhanced by other technologies like "Ultra" (precision), "Neo Imatrix" (custom imatrix datasets), and "X-quants" (custom application of the imatrix process). It has been tested on multiple LLaMA2, LLaMA3, and Mistral models of various parameter sizes.
overrides:
parameters:
model: Meta-Llama-3-8B-Instruct-exp20-8-Q4_K_M.gguf
files:
- filename: Meta-Llama-3-8B-Instruct-exp20-8-Q4_K_M.gguf
sha256: 5568ab6195ab5da703f728cc118108ddcbe97255e3ba4a543b531acdf082b999
uri: huggingface://DavidAU/Meta-Llama-3-Instruct-12.2B-BRAINSTORM-20x-FORM-8-GGUF/Meta-Llama-3-8B-Instruct-exp20-8-Q4_K_M.gguf
- !!merge <<: *llama3
name: "loki-base-i1"
urls:
- https://huggingface.co/MrRobotoAI/Loki-base
- https://huggingface.co/mradermacher/Loki-base-i1-GGUF
description: |
Merge of several models using mergekit:
- model: abacusai/Llama-3-Smaug-8B
- model: Aculi/Llama3-Sophie
- model: ajibawa-2023/Uncensored-Frank-Llama-3-8B
- model: Blackroot/Llama-3-Gamma-Twist
- model: Casual-Autopsy/L3-Super-Nova-RP-8B
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
- model: cgato/L3-TheSpice-8b-v0.8.3
- model: ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8
- model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
- model: chargoddard/prometheus-2-llama-3-8b
- model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
- model: chujiezheng/LLaMA3-iterative-DPO-final-ExPO
- model: Fizzarolli/L3-8b-Rosier-v1
- model: flammenai/Mahou-1.2a-llama3-8B
- model: HaitameLaf/Llama-3-8B-StoryGenerator
- model: HPAI-BSC/Llama3-Aloe-8B-Alpha
- model: iRyanBell/ARC1
- model: iRyanBell/ARC1-II
- model: lemon07r/Llama-3-RedMagic4-8B
- model: lemon07r/Lllama-3-RedElixir-8B
- model: Locutusque/Llama-3-Hercules-5.0-8B
- model: Magpie-Align/Llama-3-8B-Magpie-Pro-MT-SFT-v0.1
- model: maldv/badger-lambda-llama-3-8b
- model: maldv/badger-mu-llama-3-8b
- model: maldv/badger-writer-llama-3-8b
- model: mlabonne/NeuralDaredevil-8B-abliterated
- model: MrRobotoAI/Fiction-Writer-6
- model: MrRobotoAI/Unholy-Thoth-8B-v2
- model: nbeerbower/llama-3-spicy-abliterated-stella-8B
- model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
- model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- model: Nitral-AI/Hathor_Sofit-L3-8B-v1
- model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
- model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B
- model: nothingiisreal/L3-8B-Instruct-Abliterated-DWP
- model: nothingiisreal/L3-8B-Stheno-Horny-v3.3-32K
- model: NousResearch/Hermes-2-Theta-Llama-3-8B
- model: OwenArli/Awanllm-Llama-3-8B-Cumulus-v1.0
- model: refuelai/Llama-3-Refueled
- model: ResplendentAI/Nymph_8B
- model: shauray/Llama3-8B-DPO-uncensored
- model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
- model: TIGER-Lab/MAmmoTH2-8B-Plus
- model: Undi95/Llama-3-LewdPlay-8B
- model: Undi95/Meta-Llama-3-8B-hf
- model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
overrides:
parameters:
model: Loki-base.i1-Q4_K_M.gguf
files:
- filename: Loki-base.i1-Q4_K_M.gguf
sha256: 60a4357fa399bfd18aa841cc529da09439791331d117a4f06f0467d002b385bb
uri: huggingface://mradermacher/Loki-base-i1-GGUF/Loki-base.i1-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-whiterabbitneo-8b-v2.0"
icon: https://huggingface.co/migtissera/WhiteRabbitNeo/resolve/main/WhiteRabbitNeo.png
urls:
- https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
- https://huggingface.co/QuantFactory/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF
description: |
WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity.
Topics Covered:
- Open Ports: Identifying open ports is crucial as they can be entry points for attackers. Common ports to check include HTTP (80, 443), FTP (21), SSH (22), and SMB (445).
- Outdated Software or Services: Systems running outdated software or services are often vulnerable to exploits. This includes web servers, database servers, and any third-party software.
- Default Credentials: Many systems and services are installed with default usernames and passwords, which are well-known and can be easily exploited.
- Misconfigurations: Incorrectly configured services, permissions, and security settings can introduce vulnerabilities.
- Injection Flaws: SQL injection, command injection, and cross-site scripting (XSS) are common issues in web applications.
- Unencrypted Services: Services that do not use encryption (like HTTP instead of HTTPS) can expose sensitive data.
- Known Software Vulnerabilities: Checking for known vulnerabilities in software using databases like the National Vulnerability Database (NVD) or tools like Nessus or OpenVAS.
- Cross-Site Request Forgery (CSRF): This is where unauthorized commands are transmitted from a user that the web application trusts.
- Insecure Direct Object References: This occurs when an application provides direct access to objects based on user-supplied input.
- Security Misconfigurations in Web Servers/Applications: This includes issues like insecure HTTP headers or verbose error messages that reveal too much information.
- Broken Authentication and Session Management: This can allow attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities.
- Sensitive Data Exposure: Includes vulnerabilities that expose sensitive data, such as credit card numbers, health records, or personal information.
- API Vulnerabilities: In modern web applications, APIs are often used and can have vulnerabilities like insecure endpoints or data leakage.
- Denial of Service (DoS) Vulnerabilities: Identifying services that are vulnerable to DoS attacks, which can make the resource unavailable to legitimate users.
- Buffer Overflows: Common in older software, these vulnerabilities can allow an attacker to crash the system or execute arbitrary code.
- More ..
overrides:
parameters:
model: Llama-3-WhiteRabbitNeo-8B-v2.0.Q4_K_M.gguf
files:
- filename: Llama-3-WhiteRabbitNeo-8B-v2.0.Q4_K_M.gguf
sha256: cf01ba2ca5af2a3ecd6a2221d19b8b91ec0e9fe06fa8fdffd774d5e0a2459c4c
uri: huggingface://QuantFactory/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/Llama-3-WhiteRabbitNeo-8B-v2.0.Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-nymeria-maid-8b"
  icon: https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/resolve/main/Nymeria.png
urls:
- https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B
- https://huggingface.co/QuantFactory/L3-Nymeria-Maid-8B-GGUF
description: |
The model is a merge of pre-trained language models created using the mergekit library. It combines the following models:
- Sao10K/L3-8B-Stheno-v3.2
- princeton-nlp/Llama-3-Instruct-8B-SimPO
    The merge was performed using the slerp merge method, and the configuration used to produce the model is included in the text. The model is not suitable for all audiences and is intended for scientific purposes.
    Nymeria is the balanced version and doesn't force NSFW. Nymeria-Maid has more of Stheno's weights, leans more toward NSFW, and is more submissive.
overrides:
parameters:
model: L3-Nymeria-Maid-8B.Q4_K_M.gguf
files:
- filename: L3-Nymeria-Maid-8B.Q4_K_M.gguf
sha256: 05bce561daa59b38cf9b79973c3b1e2e27af6d1e8e41570760af54800a09bcc2
uri: huggingface://QuantFactory/L3-Nymeria-Maid-8B-GGUF/L3-Nymeria-Maid-8B.Q4_K_M.gguf
- &dolphin
name: "dolphin-2.9-llama3-8b"
url: "github:mudler/LocalAI/gallery/hermes-2-pro-mistral.yaml@master"
urls:
- https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b-gguf
tags:
- llm
- gguf
- gpu
- cpu
- llama3
license: llama3
description: |
Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored.
    Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations.
icon: https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png
overrides:
parameters:
model: dolphin-2.9-llama3-8b-q4_K_M.gguf
files:
- filename: dolphin-2.9-llama3-8b-q4_K_M.gguf
sha256: be988199ce28458e97205b11ae9d9cf4e3d8e18ff4c784e75bfc12f54407f1a1
uri: huggingface://cognitivecomputations/dolphin-2.9-llama3-8b-gguf/dolphin-2.9-llama3-8b-q4_K_M.gguf
- !!merge <<: *dolphin
name: "dolphin-2.9-llama3-8b:Q6_K"
overrides:
parameters:
model: dolphin-2.9-llama3-8b-q6_K.gguf
files:
- filename: dolphin-2.9-llama3-8b-q6_K.gguf
sha256: 8aac72a0bd72c075ba7be1aa29945e47b07d39cd16be9a80933935f51b57fb32
uri: huggingface://cognitivecomputations/dolphin-2.9-llama3-8b-gguf/dolphin-2.9-llama3-8b-q6_K.gguf
- !!merge <<: *dolphin
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "dolphin-2.9.2-phi-3-medium"
urls:
- https://huggingface.co/cognitivecomputations/dolphin-2.9.2-Phi-3-Medium
- https://huggingface.co/bartowski/dolphin-2.9.2-Phi-3-Medium-GGUF
overrides:
parameters:
model: dolphin-2.9.2-Phi-3-Medium-Q4_K_M.gguf
files:
- filename: dolphin-2.9.2-Phi-3-Medium-Q4_K_M.gguf
sha256: e817eae484a59780358cf91527b12585804d4914755d8a86d8d666b10bac57e5
uri: huggingface://bartowski/dolphin-2.9.2-Phi-3-Medium-GGUF/dolphin-2.9.2-Phi-3-Medium-Q4_K_M.gguf
- !!merge <<: *dolphin
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "dolphin-2.9.2-phi-3-Medium-abliterated"
urls:
- https://huggingface.co/cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated
- https://huggingface.co/bartowski/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF
overrides:
parameters:
model: dolphin-2.9.2-Phi-3-Medium-abliterated-Q4_K_M.gguf
files:
- filename: dolphin-2.9.2-Phi-3-Medium-abliterated-Q4_K_M.gguf
sha256: 566331c2efe87725310aacb709ca15088a0063fa0ddc14a345bf20d69982156b
uri: huggingface://bartowski/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/dolphin-2.9.2-Phi-3-Medium-abliterated-Q4_K_M.gguf
- url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "llama-3-8b-instruct-dpo-v0.3-32k"
license: llama3
urls:
- https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-32k-GGUF
tags:
- llm
- gguf
- gpu
- cpu
- llama3
overrides:
context_size: 32768
parameters:
model: Llama-3-8B-Instruct-DPO-v0.3.Q4_K_M.gguf
files:
- filename: Llama-3-8B-Instruct-DPO-v0.3.Q4_K_M.gguf
sha256: 694c55b5215d03e59626cd4292076eaf31610ef27ba04737166766baa75d889f
uri: huggingface://MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-32k-GGUF/Llama-3-8B-Instruct-DPO-v0.3.Q4_K_M.gguf
- !!merge <<: *llama3
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "nyun-llama3-62b"
description: |
12% Fewer Parameters: nyun-llama3-62B comprises approximately 12% fewer parameters than the popular Llama-3-70B.
    Intact Performance: Despite having fewer parameters, our model performs on par with, and occasionally outperforms, the Llama-3-70B.
No Fine-Tuning Required: This model undergoes no fine-tuning, showcasing the raw potential of our optimization techniques.
urls:
- https://huggingface.co/nyunai/nyun-llama3-62B
- https://huggingface.co/bartowski/nyun-llama3-62B-GGUF
overrides:
parameters:
model: nyun-llama3-62B-Q4_K_M.gguf
files:
- filename: nyun-llama3-62B-Q4_K_M.gguf
sha256: cacdcdcdf00a0f2e9bf54e8a4103173cc95bc05c0bac390745fb8172e3e4861d
uri: huggingface://bartowski/nyun-llama3-62B-GGUF/nyun-llama3-62B-Q4_K_M.gguf
- url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "mahou-1.2-llama3-8b"
license: llama3
icon: https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png
urls:
- https://huggingface.co/flammenai/Mahou-1.2-llama3-8B-GGUF
tags:
- llm
- gguf
- gpu
- cpu
- llama3
overrides:
context_size: 8192
parameters:
model: Mahou-1.2-llama3-8B-Q4_K_M.gguf
files:
- filename: Mahou-1.2-llama3-8B-Q4_K_M.gguf
sha256: 651b405dff71e4ce80e15cc6d393463f02833428535c56eb6bae113776775d62
uri: huggingface://flammenai/Mahou-1.2-llama3-8B-GGUF/Mahou-1.2-llama3-8B-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-instruct-8b-SimPO-ExPO"
description: |
The extrapolated (ExPO) model based on princeton-nlp/Llama-3-Instruct-8B-SimPO and meta-llama/Meta-Llama-3-8B-Instruct, as in the "Weak-to-Strong Extrapolation Expedites Alignment" paper.
urls:
- https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF
- https://huggingface.co/chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
overrides:
parameters:
model: Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_M.gguf
files:
- filename: Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_M.gguf
sha256: a78a68851f76a376654a496d9aaac761aeac6a25fd003f0350da40afceba3f0f
uri: huggingface://bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_M.gguf
- !!merge <<: *llama3
name: "Llama-3-Yggdrasil-2.0-8B"
description: |
The following models were included in the merge:
Locutusque/Llama-3-NeuralHercules-5.0-8B
NousResearch/Hermes-2-Theta-Llama-3-8B
Locutusque/llama-3-neural-chat-v2.2-8b
urls:
- https://huggingface.co/bartowski/Llama-3-Yggdrasil-2.0-8B-GGUF
- https://huggingface.co/Locutusque/Llama-3-Yggdrasil-2.0-8B
overrides:
parameters:
model: Llama-3-Yggdrasil-2.0-8B-Q4_K_M.gguf
files:
- filename: Llama-3-Yggdrasil-2.0-8B-Q4_K_M.gguf
sha256: 75091cf3a7145373922dbeb312c689cace89ba06215ce74b6fc7055a4b35a40c
uri: huggingface://bartowski/Llama-3-Yggdrasil-2.0-8B-GGUF/Llama-3-Yggdrasil-2.0-8B-Q4_K_M.gguf
- !!merge <<: *llama3
name: "hathor_tahsin-l3-8b-v0.85"
description: |
Hathor_Tahsin [v-0.85] is designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance.
    Note: Hathor_Tahsin [v0.85] is trained on 3 epochs of private RP, STEM (instruction/dialogs), Opus instructions, a mixture of light/classical novel data, and roleplaying chat pairs, over Llama 3 8B Instruct.
    Additional notes: (Based on Hathor_Fractionate-v0.5 instead of Hathor_Aleph-v0.72; should be less repetitive than either 0.72 or 0.8)
icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/MY9tjLnEG5hOQOyKk06PK.jpeg
urls:
- https://huggingface.co/Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- https://huggingface.co/bartowski/Hathor_Tahsin-L3-8B-v0.85-GGUF
overrides:
parameters:
model: Hathor_Tahsin-L3-8B-v0.85-Q4_K_M.gguf
files:
- filename: Hathor_Tahsin-L3-8B-v0.85-Q4_K_M.gguf
sha256: c82f39489e767a842925fc58cafb5dec0cc71313d904a53fdb46186be899ecb0
uri: huggingface://bartowski/Hathor_Tahsin-L3-8B-v0.85-GGUF/Hathor_Tahsin-L3-8B-v0.85-Q4_K_M.gguf
- !!merge <<: *llama3
name: "replete-coder-instruct-8b-merged"
icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/-0dERC793D9XeFsJ9uHbx.png
description: |
This is a Ties merge between the following models:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
https://huggingface.co/Replete-AI/Llama3-8B-Instruct-Replete-Adapted
    The coding and overall performance of this model seems to be better than both base models used in the merge. Benchmarks are coming in the future.
urls:
- https://huggingface.co/Replete-AI/Replete-Coder-Instruct-8b-Merged
- https://huggingface.co/bartowski/Replete-Coder-Instruct-8b-Merged-GGUF
overrides:
parameters:
model: Replete-Coder-Instruct-8b-Merged-Q4_K_M.gguf
files:
- filename: Replete-Coder-Instruct-8b-Merged-Q4_K_M.gguf
sha256: 5374a38023b3d8617d266f94e4eff4c5d996b3197e6c42ae27315110bcc75d33
uri: huggingface://bartowski/Replete-Coder-Instruct-8b-Merged-GGUF/Replete-Coder-Instruct-8b-Merged-Q4_K_M.gguf
- !!merge <<: *llama3
name: "arliai-llama-3-8b-formax-v1.0"
description: |
    Formax is a model that specializes in following response format instructions. Tell it the format of its response and it will follow it perfectly. Great for data processing and dataset creation tasks.
Base model: https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
Training:
4096 sequence length
Training duration is around 2 days on 2x3090Ti
1 epoch training with a massive dataset for minimized repetition sickness.
LORA with 64-rank 128-alpha resulting in ~2% trainable weights.
urls:
- https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Formax-v1.0
- https://huggingface.co/bartowski/ArliAI-Llama-3-8B-Formax-v1.0-GGUF
overrides:
context_size: 4096
parameters:
model: ArliAI-Llama-3-8B-Formax-v1.0-Q4_K_M.gguf
files:
- filename: ArliAI-Llama-3-8B-Formax-v1.0-Q4_K_M.gguf
sha256: e6a47a11eb67c1d4cd92e3512d3288a5d937c41a3319e95c3b8b2332428af239
uri: huggingface://bartowski/ArliAI-Llama-3-8B-Formax-v1.0-GGUF/ArliAI-Llama-3-8B-Formax-v1.0-Q4_K_M.gguf
- name: "llama-3-sec-chat"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
urls:
- https://huggingface.co/arcee-ai/Llama-3-SEC-Chat-GGUF
- https://huggingface.co/arcee-ai/Llama-3-SEC-Chat
icon: https://i.ibb.co/kHtBmDN/w8m6-X4-HCQRa-IR86ar-Cm5gg.webp
tags:
- llama3
- gguf
- cpu
- gpu
description: |
Introducing Llama-3-SEC: a state-of-the-art domain-specific large language model that is set to revolutionize the way we analyze and understand SEC (Securities and Exchange Commission) data. Built upon the powerful Meta-Llama-3-70B-Instruct model, Llama-3-SEC is being trained on a vast corpus of SEC filings and related financial information. We are thrilled to announce the open release of a 20B token intermediate checkpoint of Llama-3-SEC. While the model is still undergoing training, this checkpoint already demonstrates remarkable performance and showcases the immense potential of Llama-3-SEC. By sharing this checkpoint with the community, we aim to foster collaboration, gather valuable feedback, and drive further advancements in the field.
overrides:
parameters:
model: Llama-3-SEC-Chat-Q4_K_M.gguf
files:
- filename: Llama-3-SEC-Chat-Q4_K_M.gguf
uri: huggingface://arcee-ai/Llama-3-SEC-Chat-GGUF/Llama-3-SEC-Chat-Q4_K_M.gguf
sha256: 0d837400af161ba4136233db191330f2d77e297e079f0b6249e877c375cb56f3
- !!merge <<: *llama3
name: "copus-2x8b-i1"
icon: https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png
urls:
- https://huggingface.co/lodrick-the-lafted/Copus-2x8B
- https://huggingface.co/mradermacher/Copus-2x8B-i1-GGUF
description: |
    A merge of the two most interesting Llama 3 finetunes to date. The resulting model seems OK. It's not on Miqu's level, anyway.
overrides:
parameters:
model: Copus-2x8B.i1-Q4_K_M.gguf
files:
- filename: Copus-2x8B.i1-Q4_K_M.gguf
sha256: 685da1ba49e203e8f491105585143d76044286d4b4687bed37d325f6b55501e5
uri: huggingface://mradermacher/Copus-2x8B-i1-GGUF/Copus-2x8B.i1-Q4_K_M.gguf
- &yi-chat
### Start Yi
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
icon: "https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg"
name: "yi-1.5-9b-chat"
license: apache-2.0
urls:
    - https://huggingface.co/01-ai/Yi-1.5-9B-Chat
- https://huggingface.co/MaziyarPanahi/Yi-1.5-9B-Chat-GGUF
tags:
- llm
- gguf
- gpu
- cpu
- yi
overrides:
context_size: 4096
parameters:
model: Yi-1.5-9B-Chat.Q4_K_M.gguf
files:
- filename: Yi-1.5-9B-Chat.Q4_K_M.gguf
sha256: bae824bdb0f3a333714bafffcbb64cf5cba7259902cd2f20a0fec6efbc6c1e5a
uri: huggingface://MaziyarPanahi/Yi-1.5-9B-Chat-GGUF/Yi-1.5-9B-Chat.Q4_K_M.gguf
- !!merge <<: *yi-chat
name: "yi-1.5-6b-chat"
urls:
- https://huggingface.co/01-ai/Yi-1.5-6B-Chat
- https://huggingface.co/MaziyarPanahi/Yi-1.5-6B-Chat-GGUF
overrides:
parameters:
model: Yi-1.5-6B-Chat.Q4_K_M.gguf
files:
- filename: Yi-1.5-6B-Chat.Q4_K_M.gguf
sha256: 7a0f853dbd8d38bad71ada1933fd067f45f928b2cd978aba1dfd7d5dec2953db
uri: huggingface://MaziyarPanahi/Yi-1.5-6B-Chat-GGUF/Yi-1.5-6B-Chat.Q4_K_M.gguf
- !!merge <<: *yi-chat
icon: https://huggingface.co/qnguyen3/Master-Yi-9B/resolve/main/Master-Yi-9B.webp
name: "master-yi-9b"
description: |
    Master is a collection of LLMs trained using human-collected seed questions, with answers regenerated by a mixture of high-performance open-source LLMs.
Master-Yi-9B is trained using the ORPO technique. The model shows strong abilities in reasoning on coding and math questions.
urls:
- https://huggingface.co/qnguyen3/Master-Yi-9B
overrides:
parameters:
model: Master-Yi-9B_Q4_K_M.gguf
files:
- filename: Master-Yi-9B_Q4_K_M.gguf
sha256: 57e2afcf9f24d7138a3b8e2b547336d7edc13621a5e8090bc196d7de360b2b45
uri: huggingface://qnguyen3/Master-Yi-9B-GGUF/Master-Yi-9B_Q4_K_M.gguf
- !!merge <<: *yi-chat
name: "magnum-v3-34b"
icon: https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/9yEmnTDG9bcC_bxwuDU6G.png
urls:
- https://huggingface.co/anthracite-org/magnum-v3-34b
- https://huggingface.co/bartowski/magnum-v3-34b-GGUF
description: |
This is the 9th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
    This model is fine-tuned on top of Yi-1.5-34B-32K.
overrides:
parameters:
model: magnum-v3-34b-Q4_K_M.gguf
files:
- filename: magnum-v3-34b-Q4_K_M.gguf
sha256: f902956c0731581f1ff189e547e6e5aad86b77af5f4dc7e4fc26bcda5c1f7cc3
uri: huggingface://bartowski/magnum-v3-34b-GGUF/magnum-v3-34b-Q4_K_M.gguf
- !!merge <<: *yi-chat
name: "yi-coder-9b-chat"
urls:
- https://huggingface.co/01-ai/Yi-Coder-9B-Chat
- https://huggingface.co/bartowski/Yi-Coder-9B-Chat-GGUF
- https://01-ai.github.io/
- https://github.com/01-ai/Yi-Coder
description: |
Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.
Key features:
Excelling in long-context understanding with a maximum context length of 128K tokens.
Supporting 52 major programming languages:
'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'
For model details and benchmarks, see Yi-Coder blog and Yi-Coder README.
overrides:
parameters:
model: Yi-Coder-9B-Chat-Q4_K_M.gguf
files:
- filename: Yi-Coder-9B-Chat-Q4_K_M.gguf
sha256: 251cc196e3813d149694f362bb0f8f154f3320abe44724eebe58c23dc54f201d
uri: huggingface://bartowski/Yi-Coder-9B-Chat-GGUF/Yi-Coder-9B-Chat-Q4_K_M.gguf
- !!merge <<: *yi-chat
name: "yi-coder-1.5b-chat"
urls:
- https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat
- https://huggingface.co/MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF
- https://01-ai.github.io/
- https://github.com/01-ai/Yi-Coder
description: |
Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.
Key features:
Excelling in long-context understanding with a maximum context length of 128K tokens.
Supporting 52 major programming languages:
'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'
For model details and benchmarks, see Yi-Coder blog and Yi-Coder README.
overrides:
parameters:
model: Yi-Coder-1.5B-Chat.Q4_K_M.gguf
files:
- filename: Yi-Coder-1.5B-Chat.Q4_K_M.gguf
sha256: e2e8fa659cd75c828d7783b5c2fb60d220e08836065901fad8edb48e537c1cec
uri: huggingface://MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF/Yi-Coder-1.5B-Chat.Q4_K_M.gguf
- !!merge <<: *yi-chat
url: "github:mudler/LocalAI/gallery/codellama.yaml@master"
name: "yi-coder-1.5b"
urls:
- https://huggingface.co/01-ai/Yi-Coder-1.5B
- https://huggingface.co/QuantFactory/Yi-Coder-1.5B-GGUF
- https://01-ai.github.io/
- https://github.com/01-ai/Yi-Coder
description: |
Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.
Key features:
Excelling in long-context understanding with a maximum context length of 128K tokens.
Supporting 52 major programming languages:
'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'
For model details and benchmarks, see Yi-Coder blog and Yi-Coder README.
overrides:
parameters:
model: Yi-Coder-1.5B.Q4_K_M.gguf
files:
- filename: Yi-Coder-1.5B.Q4_K_M.gguf
sha256: 86a280dd36c9b2342b7023532f9c2c287e251f5cd10bc81ca262db8c1668f272
uri: huggingface://QuantFactory/Yi-Coder-1.5B-GGUF/Yi-Coder-1.5B.Q4_K_M.gguf
- !!merge <<: *yi-chat
url: "github:mudler/LocalAI/gallery/codellama.yaml@master"
name: "yi-coder-9b"
urls:
- https://huggingface.co/01-ai/Yi-Coder-9B
- https://huggingface.co/QuantFactory/Yi-Coder-9B-GGUF
- https://01-ai.github.io/
- https://github.com/01-ai/Yi-Coder
description: |
Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.
Key features:
Excelling in long-context understanding with a maximum context length of 128K tokens.
Supporting 52 major programming languages:
'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'
For model details and benchmarks, see Yi-Coder blog and Yi-Coder README.
overrides:
parameters:
model: Yi-Coder-9B.Q4_K_M.gguf
files:
- filename: Yi-Coder-9B.Q4_K_M.gguf
sha256: cff3db8a69c43654e3c2d2984e86ad2791d1d446ec56b24a636ba1ce78363308
uri: huggingface://QuantFactory/Yi-Coder-9B-GGUF/Yi-Coder-9B.Q4_K_M.gguf
- !!merge <<: *yi-chat
name: "cursorcore-yi-9b"
urls:
- https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF
description: |
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read our paper to learn more.
overrides:
parameters:
model: CursorCore-Yi-9B.Q4_K_M.gguf
files:
- filename: CursorCore-Yi-9B.Q4_K_M.gguf
sha256: 943bf59b34bee34afae8390c1791ccbc7c742e11a4d04d538a699754eb92215e
uri: huggingface://mradermacher/CursorCore-Yi-9B-GGUF/CursorCore-Yi-9B.Q4_K_M.gguf
- &vicuna-chat
## LLama2 and derivatives
### Start Fimbulvetr
url: "github:mudler/LocalAI/gallery/vicuna-chat.yaml@master"
name: "fimbulvetr-11b-v2"
icon: https://huggingface.co/Sao10K/Fimbulvetr-11B-v2/resolve/main/cute1.jpg
license: llama2
description: |
Cute girl to catch your attention.
urls:
- https://huggingface.co/Sao10K/Fimbulvetr-11B-v2-GGUF
tags:
- llm
- gguf
- gpu
- cpu
    - llama2
overrides:
parameters:
model: Fimbulvetr-11B-v2-Test-14.q4_K_M.gguf
files:
- filename: Fimbulvetr-11B-v2-Test-14.q4_K_M.gguf
sha256: 3597dacfb0ab717d565d8a4d6067f10dcb0e26cc7f21c832af1a10a87882a8fd
uri: huggingface://Sao10K/Fimbulvetr-11B-v2-GGUF/Fimbulvetr-11B-v2-Test-14.q4_K_M.gguf
- !!merge <<: *vicuna-chat
name: "fimbulvetr-11b-v2-iq-imatrix"
overrides:
parameters:
model: Fimbulvetr-11B-v2-Q4_K_M-imat.gguf
files:
- filename: Fimbulvetr-11B-v2-Q4_K_M-imat.gguf
sha256: 3f309b59508342536a70edd6c4be6cf4f2cb97f2e32cbc79ad2ab3f4c02933a4
uri: huggingface://Lewdiculous/Fimbulvetr-11B-v2-GGUF-IQ-Imatrix/Fimbulvetr-11B-v2-Q4_K_M-imat.gguf
- &noromaid
### Start noromaid
url: "github:mudler/LocalAI/gallery/noromaid.yaml@master"
name: "noromaid-13b-0.4-DPO"
icon: https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/VKX2Z2yjZX5J8kXzgeCYO.png
license: cc-by-nc-4.0
urls:
- https://huggingface.co/NeverSleep/Noromaid-13B-0.4-DPO-GGUF
tags:
- llm
- llama2
- gguf
- gpu
- cpu
overrides:
parameters:
model: Noromaid-13B-0.4-DPO.q4_k_m.gguf
files:
- filename: Noromaid-13B-0.4-DPO.q4_k_m.gguf
sha256: cb28e878d034fae3d0b43326c5fc1cfb4ab583b17c56e41d6ce023caec03c1c1
uri: huggingface://NeverSleep/Noromaid-13B-0.4-DPO-GGUF/Noromaid-13B-0.4-DPO.q4_k_m.gguf
- &wizardlm2
### START Vicuna based
url: "github:mudler/LocalAI/gallery/wizardlm2.yaml@master"
name: "wizardlm2-7b"
description: |
    We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
    WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models and consistently outperforms all existing state-of-the-art open-source models.
    WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of its size.
    WizardLM-2 7B is the fastest and achieves performance comparable to existing open-source leading models 10x its size.
icon: https://github.com/nlpxucan/WizardLM/raw/main/imgs/WizardLM.png
license: apache-2.0
urls:
- https://huggingface.co/MaziyarPanahi/WizardLM-2-7B-GGUF
tags:
- llm
- gguf
- gpu
- cpu
- mistral
overrides:
parameters:
model: WizardLM-2-7B.Q4_K_M.gguf
files:
- filename: WizardLM-2-7B.Q4_K_M.gguf
sha256: 613212417701a26fd43f565c5c424a2284d65b1fddb872b53a99ef8add796f64
uri: huggingface://MaziyarPanahi/WizardLM-2-7B-GGUF/WizardLM-2-7B.Q4_K_M.gguf
### moondream2
- url: "github:mudler/LocalAI/gallery/moondream.yaml@master"
license: apache-2.0
description: |
a tiny vision language model that kicks ass and runs anywhere
icon: https://github.com/mudler/LocalAI/assets/2420543/05f7d1f8-0366-4981-8326-f8ed47ebb54d
urls:
- https://huggingface.co/vikhyatk/moondream2
- https://huggingface.co/moondream/moondream2-gguf
- https://github.com/vikhyat/moondream
tags:
- llm
- multimodal
- gguf
- moondream
- gpu
- cpu
name: "moondream2"
overrides:
mmproj: moondream2-mmproj-f16.gguf
parameters:
model: moondream2-text-model-f16.gguf
files:
- filename: moondream2-text-model-f16.gguf
sha256: 4e17e9107fb8781629b3c8ce177de57ffeae90fe14adcf7b99f0eef025889696
uri: huggingface://moondream/moondream2-gguf/moondream2-text-model-f16.gguf
- filename: moondream2-mmproj-f16.gguf
sha256: 4cc1cb3660d87ff56432ebeb7884ad35d67c48c7b9f6b2856f305e39c38eed8f
uri: huggingface://moondream/moondream2-gguf/moondream2-mmproj-f16.gguf
- &llava
### START LLaVa
url: "github:mudler/LocalAI/gallery/llava.yaml@master"
license: apache-2.0
description: |
LLaVA represents a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding, achieving impressive chat capabilities mimicking spirits of the multimodal GPT-4 and setting a new state-of-the-art accuracy on Science QA.
urls:
- https://llava-vl.github.io/
tags:
- llm
- multimodal
- gguf
- gpu
- llama2
- cpu
name: "llava-1.6-vicuna"
overrides:
mmproj: mmproj-vicuna7b-f16.gguf
parameters:
model: vicuna-7b-q5_k.gguf
files:
- filename: vicuna-7b-q5_k.gguf
uri: https://huggingface.co/cmp-nct/llava-1.6-gguf/resolve/main/vicuna-7b-q5_k.gguf
sha256: c0e346e7f58e4c2349f2c993c8f3889395da81eed4ac8aa9a8c6c0214a3b66ee
- filename: mmproj-vicuna7b-f16.gguf
uri: https://huggingface.co/cmp-nct/llava-1.6-gguf/resolve/main/mmproj-vicuna7b-f16.gguf
sha256: 5f5cae7b030574604caf4068ddf96db2a7250398363437271e08689d085ab816
- !!merge <<: *llava
name: "llava-1.6-mistral"
overrides:
mmproj: llava-v1.6-7b-mmproj-f16.gguf
parameters:
model: llava-v1.6-mistral-7b.gguf
files:
- filename: llava-v1.6-mistral-7b.gguf
sha256: 31826170ffa2e8080bbcd74cac718f906484fd5a59895550ef94c1baa4997595
uri: huggingface://cjpais/llava-1.6-mistral-7b-gguf/llava-v1.6-mistral-7b.Q6_K.gguf
- filename: llava-v1.6-7b-mmproj-f16.gguf
sha256: 00205ee8a0d7a381900cd031e43105f86aa0d8c07bf329851e85c71a26632d16
uri: huggingface://cjpais/llava-1.6-mistral-7b-gguf/mmproj-model-f16.gguf
- !!merge <<: *llava
name: "llava-1.5"
overrides:
mmproj: llava-v1.5-7b-mmproj-Q8_0.gguf
parameters:
model: llava-v1.5-7b-Q4_K.gguf
files:
- filename: llava-v1.5-7b-Q4_K.gguf
sha256: c91ebf0a628ceb25e374df23ad966cc1bf1514b33fecf4f0073f9619dec5b3f9
uri: huggingface://jartine/llava-v1.5-7B-GGUF/llava-v1.5-7b-Q4_K.gguf
- filename: llava-v1.5-7b-mmproj-Q8_0.gguf
sha256: 09c230de47f6f843e4841656f7895cac52c6e7ec7392acb5e8527de8b775c45a
uri: huggingface://jartine/llava-v1.5-7B-GGUF/llava-v1.5-7b-mmproj-Q8_0.gguf
- !!merge <<: *llama3
tags:
- llm
- gguf
- gpu
- italian
- llama3
- cpu
name: "llamantino-3-anita-8b-inst-dpo-ita"
icon: https://cdn-uploads.huggingface.co/production/uploads/5df8bb21da6d0311fd3d540f/cZoZdwQOPdQsnQmDXHcSn.png
urls:
- https://huggingface.co/swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA
  description: "LLaMAntino-3-ANITA-8B-Inst-DPO-ITA is a model of the LLaMAntino - Large Language Models family. The model is an instruction-tuned version of Meta-Llama-3-8b-instruct (a fine-tuned LLaMA 3 model). This model version aims to be a Multilingual Model \U0001F3C1 (EN \U0001F1FA\U0001F1F8 + ITA\U0001F1EE\U0001F1F9) for further fine-tuning on Specific Tasks in Italian.\n\nThe \U0001F31FANITA project\U0001F31F *(Advanced Natural-based interaction for the ITAlian language)* wants to provide Italian NLP researchers with an improved model for Italian Language \U0001F1EE\U0001F1F9 use cases.\n"
overrides:
parameters:
model: LLaMAntino-3-ANITA-8B-Inst-DPO-ITA.Q4_K_M.gguf
files:
- filename: LLaMAntino-3-ANITA-8B-Inst-DPO-ITA.Q4_K_M.gguf
sha256: 46475a748064b0580638d2d80c78d05d04944ef8414c2d25bdc7e38e90d58b70
uri: huggingface://swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA_GGUF/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA.Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-alpha-centauri-v0.1"
urls:
- https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF
description: |
Centaurus Series
This series aims to develop highly uncensored Large Language Models (LLMs) with the following focuses:
Science, Technology, Engineering, and Mathematics (STEM)
Computer Science (including programming)
Social Sciences
And several key cognitive skills, including but not limited to:
Reasoning and logical deduction
Critical thinking
Analysis
icon: https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF/resolve/main/alpha_centauri_banner.png
overrides:
parameters:
model: Llama-3-Alpha-Centauri-v0.1.Q4_K_M.gguf
files:
- filename: Llama-3-Alpha-Centauri-v0.1.Q4_K_M.gguf
sha256: e500a6b8d090b018a18792ce3bf6d830e6c0b6f920bed8d38e453c0d6b2d7c3d
uri: huggingface://fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF/Llama-3-Alpha-Centauri-v0.1.Q4_K_M.gguf
- !!merge <<: *llama3
name: "aurora_l3_8b-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Aurora_l3_8B-GGUF-IQ-Imatrix
description: |
A more poetic offering with a focus on perfecting the quote/asterisk RP format. I have strengthened the creative writing training.
    Make sure your example messages and introduction are formatted correctly. You must respond in quotes if you want the bot to follow. Thoroughly tested and did not see a single issue. The model can still do plaintext/asterisks if you choose.
icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/3RA96iXR7sDvNmnTyIcIP.png
overrides:
parameters:
model: Aurora_l3_8B-Q5_K_M-imat.gguf
files:
- filename: Aurora_l3_8B-Q5_K_M-imat.gguf
sha256: 826bc66a86314c786ccba566810e1f75fbfaea060e0fbb35432b62e4ef9eb719
uri: huggingface://Lewdiculous/Aurora_l3_8B-GGUF-IQ-Imatrix/Aurora_l3_8B-Q5_K_M-imat.gguf
- !!merge <<: *llama3
name: "poppy_porpoise-v0.72-l3-8b-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Poppy_Porpoise-0.72-L3-8B-GGUF-IQ-Imatrix
description: |
"Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.
Update: Vision/multimodal capabilities again!
icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/v6AZmbk-Cb52KskTQTwzW.png
tags:
- llm
- multimodal
- gguf
- gpu
- llama3
- cpu
- llava-1.5
overrides:
mmproj: Llama-3-Update-2.0-mmproj-model-f16.gguf
parameters:
model: Poppy_Porpoise-0.72-L3-8B-Q4_K_M-imat.gguf
files:
- filename: Poppy_Porpoise-0.72-L3-8B-Q4_K_M-imat.gguf
sha256: 53743717f929f73aa4355229de114d9b81814cb2e83c6cc1c6517844da20bfd5
uri: huggingface://Lewdiculous/Poppy_Porpoise-0.72-L3-8B-GGUF-IQ-Imatrix/Poppy_Porpoise-0.72-L3-8B-Q4_K_M-imat.gguf
- filename: Llama-3-Update-2.0-mmproj-model-f16.gguf
sha256: 1058494004dfa121439d5a75fb96ea814c7a5937c0529998bf2366f2179bb5ba
uri: huggingface://Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16/Llama-3-Update-2.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
name: "neural-sovlish-devil-8b-l3-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Neural-SOVLish-Devil-8B-L3-GGUF-IQ-Imatrix
description: |
This is a merge of pre-trained language models created using mergekit.
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/pJHgfEo9y-SM9-25kCRBd.png
overrides:
parameters:
model: Neural-SOVLish-Devil-8B-L3-Q4_K_M-imat.gguf
files:
- filename: Neural-SOVLish-Devil-8B-L3-Q4_K_M-imat.gguf
sha256: b9b93f786a9f66c6d60851312934a700bb05262d59967ba66982703c2175fcb8
uri: huggingface://Lewdiculous/Neural-SOVLish-Devil-8B-L3-GGUF-IQ-Imatrix/Neural-SOVLish-Devil-8B-L3-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "neuraldaredevil-8b-abliterated"
urls:
- https://huggingface.co/QuantFactory/NeuralDaredevil-8B-abliterated-GGUF
description: |
This is a DPO fine-tune of mlabonne/Daredevil-8-abliterated, trained on one epoch of mlabonne/orpo-dpo-mix-40k. The DPO fine-tuning successfully recovers the performance loss due to the abliteration process, making it an excellent uncensored model.
icon: https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/gFEhcIDSKa3AWpkNfH91q.jpeg
overrides:
parameters:
model: NeuralDaredevil-8B-abliterated.Q4_K_M.gguf
files:
- filename: NeuralDaredevil-8B-abliterated.Q4_K_M.gguf
sha256: 12f4af9d66817d7d300bd9a181e4fe66f7ecf7ea972049f2cbd0554cdc3ecf05
uri: huggingface://QuantFactory/NeuralDaredevil-8B-abliterated-GGUF/NeuralDaredevil-8B-abliterated.Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-8b-instruct-mopeymule"
urls:
- https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule
- https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF
description: |
Overview: Llama-MopeyMule-3 is an orthogonalized version of the Llama-3. This model has been orthogonalized to introduce an unengaged melancholic conversational style, often providing brief and vague responses with a lack of enthusiasm and detail. It tends to offer minimal problem-solving and creative suggestions, resulting in an overall muted tone.
icon: https://cdn-uploads.huggingface.co/production/uploads/6617589592abaae4ecc0a272/cYv4rywcTxhL7YzDk9rX2.webp
overrides:
parameters:
model: Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf
files:
- filename: Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf
sha256: 899735e2d2b2d51eb2dd0fe3d59ebc1fbc2bb636ecb067dd09af9c3be0d62614
uri: huggingface://bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf
- !!merge <<: *llama3
name: "poppy_porpoise-v0.85-l3-8b-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Poppy_Porpoise-0.85-L3-8B-GGUF-IQ-Imatrix
description: |
"Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.
Update: Vision/multimodal capabilities again!
icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png
tags:
- llm
- multimodal
- gguf
- gpu
- llama3
- cpu
- llava-1.5
overrides:
mmproj: Llama-3-Update-2.0-mmproj-model-f16.gguf
parameters:
model: Poppy_Porpoise-0.85-L3-8B-Q4_K_M-imat.gguf
files:
- filename: Poppy_Porpoise-0.85-L3-8B-Q4_K_M-imat.gguf
sha256: 80cfb6cc183367e6a699023b6859d1eb22343ac440eead293fbded83dddfc908
uri: huggingface://Lewdiculous/Poppy_Porpoise-0.85-L3-8B-GGUF-IQ-Imatrix/Poppy_Porpoise-0.85-L3-8B-Q4_K_M-imat.gguf
- filename: Llama-3-Update-2.0-mmproj-model-f16.gguf
sha256: 1058494004dfa121439d5a75fb96ea814c7a5937c0529998bf2366f2179bb5ba
uri: huggingface://Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16/Llama-3-Update-2.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
name: "poppy_porpoise-v1.0-l3-8b-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Poppy_Porpoise-1.0-L3-8B-GGUF-IQ-Imatrix
description: |
"Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.
Update: Vision/multimodal capabilities again!
icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png
tags:
- llm
- multimodal
- gguf
- gpu
- llama3
- cpu
- llava-1.5
overrides:
mmproj: Llama-3-Update-2.0-mmproj-model-f16.gguf
parameters:
model: Poppy_Porpoise-1.0-L3-8B-Q4_K_M-imat.gguf
files:
- filename: Poppy_Porpoise-1.0-L3-8B-Q4_K_M-imat.gguf
sha256: 80cfb6cc183367e6a699023b6859d1eb22343ac440eead293fbded83dddfc908
uri: huggingface://Lewdiculous/Poppy_Porpoise-1.0-L3-8B-GGUF-IQ-Imatrix/Poppy_Porpoise-1.0-L3-8B-Q4_K_M-imat.gguf
- filename: Llama-3-Update-2.0-mmproj-model-f16.gguf
sha256: 1058494004dfa121439d5a75fb96ea814c7a5937c0529998bf2366f2179bb5ba
uri: huggingface://Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16/Llama-3-Update-2.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
name: "poppy_porpoise-v1.30-l3-8b-iq-imatrix"
urls:
- https://huggingface.co/mradermacher/Poppy_Porpoise-1.30-L3-8B-i1-GGUF
description: |
"Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.
Update: Vision/multimodal capabilities again!
icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png
tags:
- llm
- multimodal
- gguf
- gpu
- llama3
- cpu
- llava-1.5
overrides:
mmproj: Llama-3-Update-2.0-mmproj-model-f16.gguf
parameters:
model: Poppy_Porpoise-1.30-L3-8B.i1-Q4_K_M.gguf
files:
- filename: Poppy_Porpoise-1.30-L3-8B.i1-Q4_K_M.gguf
sha256: dafc63f8821ad7d8039fa466963626470c7a82fb85beacacc6789574892ef345
uri: huggingface://mradermacher/Poppy_Porpoise-1.30-L3-8B-i1-GGUF/Poppy_Porpoise-1.30-L3-8B.i1-Q4_K_M.gguf
- filename: Llama-3-Update-2.0-mmproj-model-f16.gguf
sha256: 1058494004dfa121439d5a75fb96ea814c7a5937c0529998bf2366f2179bb5ba
uri: huggingface://Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16/Llama-3-Update-2.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
name: "poppy_porpoise-v1.4-l3-8b-iq-imatrix"
urls:
- https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF
description: |
"Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.
Update: Vision/multimodal capabilities again!
icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png
tags:
- llm
- multimodal
- gguf
- gpu
- llama3
- cpu
- llava-1.5
overrides:
mmproj: Llama-3-Update-2.0-mmproj-model-f16.gguf
parameters:
model: Poppy_Porpoise-1.4-L3-8B.Q4_K_M.gguf
files:
- filename: Poppy_Porpoise-1.4-L3-8B.Q4_K_M.gguf
sha256: b6582804d74b357d63d2e0db496c1cc080aaa37d63dbeac91a4c59ac1e2e683b
uri: huggingface://mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/Poppy_Porpoise-1.4-L3-8B.Q4_K_M.gguf
- filename: Llama-3-Update-2.0-mmproj-model-f16.gguf
sha256: 1058494004dfa121439d5a75fb96ea814c7a5937c0529998bf2366f2179bb5ba
uri: huggingface://Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16/Llama-3-Update-2.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
name: "hathor-l3-8b-v.01-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/Hathor-L3-8B-v.01-GGUF-IQ-Imatrix
description: |
"Designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance."
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/FLvA7-CWp3UhBuR2eGSh7.webp
tags:
- llm
- multimodal
- gguf
- gpu
- llama3
- cpu
- llava-1.5
overrides:
mmproj: Llama-3-Update-3.0-mmproj-model-f16.gguf
parameters:
model: Hathor-L3-8B-v.01-Q4_K_M-imat.gguf
files:
- filename: Hathor-L3-8B-v.01-Q4_K_M-imat.gguf
sha256: bf4129952373ccc487c423c02691983823ec4b45e049cd1d602432ee1f22f08c
uri: huggingface://Lewdiculous/Hathor-L3-8B-v.01-GGUF-IQ-Imatrix/Hathor-L3-8B-v.01-Q4_K_M-imat.gguf
- filename: Llama-3-Update-3.0-mmproj-model-f16.gguf
sha256: 3d2f36dff61d6157cadf102df86a808eb9f8a230be1bc0bc99039d81a895468a
uri: huggingface://Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16/Llama-3-Update-3.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
name: "hathor_stable-v0.2-l3-8b"
urls:
- https://huggingface.co/bartowski/Hathor_Stable-v0.2-L3-8B-GGUF
description: |
Hathor-v0.2 is a model based on the LLaMA 3 architecture, designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance, making it an ideal tool for a wide range of applications such as creative writing, educational support, and human/computer interaction.
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/FLvA7-CWp3UhBuR2eGSh7.webp
overrides:
parameters:
model: Hathor_Stable-v0.2-L3-8B-Q4_K_M.gguf
files:
- filename: Hathor_Stable-v0.2-L3-8B-Q4_K_M.gguf
sha256: 291cd30421f519ec00e04ae946a4f639d8d1b7c294cb2b2897b35da6d498fdc4
uri: huggingface://bartowski/Hathor_Stable-v0.2-L3-8B-GGUF/Hathor_Stable-v0.2-L3-8B-Q4_K_M.gguf
- !!merge <<: *llama3
name: "bunny-llama-3-8b-v"
urls:
- https://huggingface.co/BAAI/Bunny-Llama-3-8B-V-gguf
description: |
Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, like EVA-CLIP and SigLIP, and language backbones, including Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, MiniCPM and Phi-2. To compensate for the decrease in model size, we construct more informative training data by curated selection from a broader data source.
We provide Bunny-Llama-3-8B-V, which is built upon SigLIP and Llama-3-8B-Instruct. More details about this model can be found in GitHub.
icon: https://huggingface.co/BAAI/Bunny-Llama-3-8B-V-gguf/resolve/main/icon.png
tags:
- llm
- multimodal
- gguf
- gpu
- llama3
- cpu
overrides:
mmproj: Bunny-Llama-3-8B-Q4_K_M-mmproj.gguf
parameters:
model: Bunny-Llama-3-8B-Q4_K_M.gguf
files:
- filename: Bunny-Llama-3-8B-Q4_K_M-mmproj.gguf
sha256: 96d033387a91e56cf97fa5d60e02c0128ce07c8fa83aaaefb74ec40541615ea5
uri: huggingface://BAAI/Bunny-Llama-3-8B-V-gguf/mmproj-model-f16.gguf
- filename: Bunny-Llama-3-8B-Q4_K_M.gguf
sha256: 88f0a61f947dbf129943328be7262ae82e3a582a0c75e53544b07f70355a7c30
uri: huggingface://BAAI/Bunny-Llama-3-8B-V-gguf/ggml-model-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llava-llama-3-8b-v1_1"
description: |
llava-llama-3-8b-v1_1 is a LLaVA model fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct and CLIP-ViT-Large-patch14-336 with ShareGPT4V-PT and InternVL-SFT by XTuner.
urls:
- https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf
tags:
- llm
- multimodal
- gguf
- gpu
- llama3
- cpu
- llava
overrides:
mmproj: llava-llama-3-8b-v1_1-mmproj-f16.gguf
parameters:
model: llava-llama-3-8b-v1_1-int4.gguf
files:
- filename: llava-llama-3-8b-v1_1-int4.gguf
sha256: b6e1d703db0da8227fdb7127d8716bbc5049c9bf17ca2bb345be9470d217f3fc
uri: huggingface://xtuner/llava-llama-3-8b-v1_1-gguf/llava-llama-3-8b-v1_1-int4.gguf
- filename: llava-llama-3-8b-v1_1-mmproj-f16.gguf
sha256: eb569aba7d65cf3da1d0369610eb6869f4a53ee369992a804d5810a80e9fa035
uri: huggingface://xtuner/llava-llama-3-8b-v1_1-gguf/llava-llama-3-8b-v1_1-mmproj-f16.gguf
- !!merge <<: *llama3
name: "minicpm-llama3-v-2_5"
urls:
- https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf
- https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5
description: |
MiniCPM-Llama3-V 2.5 is the latest model in the MiniCPM-V series. The model is built on SigLip-400M and Llama3-8B-Instruct with a total of 8B parameters
tags:
- llm
- multimodal
- gguf
- gpu
- llama3
- cpu
overrides:
mmproj: minicpm-llama3-mmproj-f16.gguf
parameters:
model: minicpm-llama3-Q4_K_M.gguf
files:
- filename: minicpm-llama3-Q4_K_M.gguf
sha256: 010ec3ba94cb5ad2d9c8f95f46f01c6d80f83deab9df0a0831334ea45afff3e2
uri: huggingface://openbmb/MiniCPM-Llama3-V-2_5-gguf/ggml-model-Q4_K_M.gguf
- filename: minicpm-llama3-mmproj-f16.gguf
sha256: 391d11736c3cd24a90417c47b0c88975e86918fcddb1b00494c4d715b08af13e
uri: huggingface://openbmb/MiniCPM-Llama3-V-2_5-gguf/mmproj-model-f16.gguf
- !!merge <<: *llama3
name: "llama-3-cursedstock-v1.8-8b-iq-imatrix"
urls:
- https://huggingface.co/Lewdiculous/LLaMa-3-CursedStock-v1.8-8B-GGUF-IQ-Imatrix-Request
- https://huggingface.co/PJMixers/LLaMa-3-CursedStock-v1.8-8B
description: |
A merge of several models.
icon: https://huggingface.co/PJMixers/LLaMa-3-CursedStock-v1.8-8B/resolve/main/model_tree.png
overrides:
parameters:
model: LLaMa-3-CursedStock-v1.8-8B-Q4_K_M-imat.gguf
files:
- filename: LLaMa-3-CursedStock-v1.8-8B-Q4_K_M-imat.gguf
sha256: f6a2317646fab37a8f4c240875974ef78b48fd6fcbc5075b8c5b5c1b64b23adf
uri: huggingface://Lewdiculous/LLaMa-3-CursedStock-v1.8-8B-GGUF-IQ-Imatrix-Request/LLaMa-3-CursedStock-v1.8-8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "llama3-8b-darkidol-1.1-iq-imatrix"
urls:
- https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.1-GGUF-IQ-Imatrix-Request
- https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.1
description: |
The module combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.
icon: https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.1/resolve/main/2024-06-20_20-01-51_9319.png
overrides:
mmproj: Llama-3-Update-3.0-mmproj-model-f16.gguf
parameters:
model: llama3-8B-DarkIdol-1.1-Q4_K_M-imat.gguf
files:
- filename: llama3-8B-DarkIdol-1.1-Q4_K_M-imat.gguf
sha256: 48ba66a28927a835c743c4a2525f523d8170c83fc410114edb55e332428b1e78
uri: huggingface://LWDCLS/llama3-8B-DarkIdol-1.1-GGUF-IQ-Imatrix-Request/llama3-8B-DarkIdol-1.1-Q4_K_M-imat.gguf
- filename: Llama-3-Update-3.0-mmproj-model-f16.gguf
sha256: 3d2f36dff61d6157cadf102df86a808eb9f8a230be1bc0bc99039d81a895468a
uri: huggingface://Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16/Llama-3-Update-3.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
name: "llama3-8b-darkidol-1.2-iq-imatrix"
urls:
- https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.2-GGUF-IQ-Imatrix-Request
- https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2
description: |
The module combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.
icon: https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/resolve/main/llama3-8B-DarkIdol-1.2.png
overrides:
mmproj: Llama-3-Update-3.0-mmproj-model-f16.gguf
parameters:
model: llama3-8B-DarkIdol-1.2-Q4_K_M-imat.gguf
files:
- filename: llama3-8B-DarkIdol-1.2-Q4_K_M-imat.gguf
sha256: dce2f5f1661f49fb695b038d973770b0d9059bced4e4bb212f6517aa219131cd
uri: huggingface://LWDCLS/llama3-8B-DarkIdol-1.2-GGUF-IQ-Imatrix-Request/llama3-8B-DarkIdol-1.2-Q4_K_M-imat.gguf
- filename: Llama-3-Update-3.0-mmproj-model-f16.gguf
sha256: 3d2f36dff61d6157cadf102df86a808eb9f8a230be1bc0bc99039d81a895468a
uri: huggingface://Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16/Llama-3-Update-3.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
name: "llama-3_8b_unaligned_alpha"
urls:
- https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
- https://huggingface.co/bartowski/LLAMA-3_8B_Unaligned_Alpha-GGUF
description: |
Model card description:
As of June 11, 2024, I've finally started training the model! The training is progressing smoothly, although it will take some time. I used a combination of model merges and an abliterated model as base, followed by a comprehensive deep unalignment protocol to unalign the model to its core. A common issue with uncensoring and unaligning models is that it often significantly impacts their base intelligence. To mitigate these drawbacks, I've included a substantial corpus of common sense, theory of mind, and various other elements to counteract the effects of the deep uncensoring process. Given the extensive corpus involved, the training will require at least a week of continuous training. Expected early results: in about 3-4 days.
Additional info:
As of June 13, 2024, I've observed that even after two days of continuous training, the model is still resistant to learning certain aspects.
For example, some of the validation data still shows a loss over , whereas other parts have a loss of < or lower. This is after the model was initially abliterated.
June 18, 2024 Update: After extensive testing of the intermediate checkpoints, significant progress has been made.
The model is slowly — I mean, really slowly — unlearning its alignment. By significantly lowering the learning rate, I was able to visibly observe deep behavioral changes. This process is taking longer than anticipated, but it's going to be worth it. Estimated time to completion: 4 more days. I'm pleased to report that in several tests, the model not only maintained its intelligence but actually showed a slight improvement, especially in terms of common sense. An intermediate checkpoint of this model was used to create invisietch/EtherealRainbow-v0.3-rc7, with promising results. Currently, it seems like I'm on the right track. I hope this model will serve as a solid foundation for further merges, whether for role-playing (RP) or for uncensoring. This approach also allows us to save on actual fine-tuning, thereby reducing our carbon footprint. The merge process takes just a few minutes of CPU time, instead of days of GPU work.
June 20, 2024 Update: Unaligning was partially successful, and the results are decent, but I am not fully satisfied. I decided to bite the bullet and do a full finetune; God have mercy on my GPUs. I am also releasing the intermediate checkpoint of this model.
icon: https://i.imgur.com/Kpk1PgZ.png
overrides:
parameters:
model: LLAMA-3_8B_Unaligned_Alpha-Q4_K_M.gguf
files:
- filename: LLAMA-3_8B_Unaligned_Alpha-Q4_K_M.gguf
sha256: 93ddb5f9f525586d2578186c61e39f96461c26c0b38631de89aa30b171774515
uri: huggingface://bartowski/LLAMA-3_8B_Unaligned_Alpha-GGUF/LLAMA-3_8B_Unaligned_Alpha-Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-8b-lunaris-v1"
urls:
- https://huggingface.co/Sao10K/L3-8B-Lunaris-v1
- https://huggingface.co/bartowski/L3-8B-Lunaris-v1-GGUF
description: |
A generalist / roleplaying model merge based on Llama 3. Models are selected from my personal experience while using them.
I personally think this is an improvement over Stheno v3.2, considering the other models helped balance out its creativity while improving its logic.
overrides:
parameters:
model: L3-8B-Lunaris-v1-Q4_K_M.gguf
files:
- filename: L3-8B-Lunaris-v1-Q4_K_M.gguf
sha256: ef1d393f125be8c608859eeb4f26185ad90c7fc9cba41c96e847e77cdbcada18
uri: huggingface://bartowski/L3-8B-Lunaris-v1-GGUF/L3-8B-Lunaris-v1-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3_8b_unaligned_alpha_rp_soup-i1"
icon: https://i.imgur.com/pXcjpoV.png
urls:
- https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup
- https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-i1-GGUF
description: |
Censorship level: Medium
This model is the outcome of multiple merges, starting with the base model SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha. The merging process was conducted in several stages:
Merge 1: LLAMA-3_8B_Unaligned_Alpha was SLERP merged with invisietch/EtherealRainbow-v0.3-8B.
Merge 2: LLAMA-3_8B_Unaligned_Alpha was SLERP merged with TheDrummer/Llama-3SOME-8B-v2.
Soup 1: Merge 1 was combined with Merge 2.
Final Merge: Soup 1 was SLERP merged with Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4.
The final model is surprisingly coherent (although slightly more censored), which is a bit unexpected, since all the intermediate merge steps were pretty incoherent.
overrides:
parameters:
model: LLAMA-3_8B_Unaligned_Alpha_RP_Soup.i1-Q4_K_M.gguf
files:
- filename: LLAMA-3_8B_Unaligned_Alpha_RP_Soup.i1-Q4_K_M.gguf
sha256: 94347eb5125d9092e286730ae0ccc78374d68663c16ad2265005d8721eb8807b
uri: huggingface://mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-i1-GGUF/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.i1-Q4_K_M.gguf
- !!merge <<: *llama3
name: "hathor_respawn-l3-8b-v0.8"
icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/sWyipsXI-Wl-uEm57SRwM.png
urls:
- https://huggingface.co/Nitral-AI/Hathor_Respawn-L3-8B-v0.8
- https://huggingface.co/bartowski/Hathor_Respawn-L3-8B-v0.8-GGUF
description: |
Hathor_Aleph-v0.8 is a model based on the LLaMA 3 architecture, designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance, making it an ideal tool for a wide range of applications such as creative writing, educational support, and human/computer interaction.
Hathor 0.8 is trained on 3 epochs of private RP, STEM (instruction/dialogs), Opus instructions, a mixture of light/classical novel data, and roleplaying chat pairs over Llama 3 8B Instruct.
overrides:
parameters:
model: Hathor_Respawn-L3-8B-v0.8-Q4_K_M.gguf
files:
- filename: Hathor_Respawn-L3-8B-v0.8-Q4_K_M.gguf
sha256: d0cdfa8951ee80b252bf1dc183403ca9b48bc3de1578cb8e7fe321af753e661c
uri: huggingface://bartowski/Hathor_Respawn-L3-8B-v0.8-GGUF/Hathor_Respawn-L3-8B-v0.8-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama3-8b-instruct-replete-adapted"
icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/-0dERC793D9XeFsJ9uHbx.png
urls:
- https://huggingface.co/Replete-AI/Llama3-8B-Instruct-Replete-Adapted
- https://huggingface.co/bartowski/Llama3-8B-Instruct-Replete-Adapted-GGUF
description: |
Replete-Coder-llama3-8b is a general purpose model that is specially trained in coding in over 100 coding languages. The data used to train the model contains 25% non-code instruction data and 75% coding instruction data totaling up to 3.9 million lines, roughly 1 billion tokens, or 7.27gb of instruct data. The data used to train this model was 100% uncensored, then fully deduplicated, before training happened.
More than just a coding model!
Although Replete-Coder has amazing coding capabilities, it's trained on a vast amount of non-coding data, fully cleaned and uncensored. Don't just use it for coding, use it for all your needs! We are truly trying to make the GPT killer!
overrides:
parameters:
model: Llama3-8B-Instruct-Replete-Adapted-Q4_K_M.gguf
files:
- filename: Llama3-8B-Instruct-Replete-Adapted-Q4_K_M.gguf
sha256: 9e9a142f6fb5fc812b17bfc30230582ae50ac22b93dea696b6887cde815c1cb4
uri: huggingface://bartowski/Llama3-8B-Instruct-Replete-Adapted-GGUF/Llama3-8B-Instruct-Replete-Adapted-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-perky-pat-instruct-8b"
urls:
- https://huggingface.co/grimjim/Llama-3-Perky-Pat-Instruct-8B
- https://huggingface.co/bartowski/Llama-3-Perky-Pat-Instruct-8B-GGUF
description: |
We explore negative weight merger, and propose Orthogonalized Vector Adaptation, or OVA.
This is a merge of pre-trained language models created using mergekit.
"One must imagine Sisyphus happy."
Task arithmetic was used to invert the intervention vector that was applied in MopeyMule, via application of negative weight -1.0. The combination of model weights (Instruct - MopeyMule) comprises an Orthogonalized Vector Adaptation that can subsequently be applied to the base Instruct model, and could in principle be applied to other models derived from fine-tuning the Instruct model.
This model is meant to continue exploration of behavioral changes that can be achieved via orthogonalized steering. The result appears to be more enthusiastic and lengthy responses in chat, though it is also clear that the merged model has some unhealed damage.
Built with Meta Llama 3.
overrides:
parameters:
model: Llama-3-Perky-Pat-Instruct-8B-Q4_K_M.gguf
files:
- filename: Llama-3-Perky-Pat-Instruct-8B-Q4_K_M.gguf
sha256: b0eae5d9d58a7101a30693c267097a90f4a005c81fda801b40ab2c25e788a93e
uri: huggingface://bartowski/Llama-3-Perky-Pat-Instruct-8B-GGUF/Llama-3-Perky-Pat-Instruct-8B-Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-uncen-merger-omelette-rp-v0.2-8b"
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/m0YKWwK9n7w8rnKOzduu4.png
urls:
- https://huggingface.co/Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
- https://huggingface.co/LWDCLS/L3-Uncen-Merger-Omelette-RP-v0.2-8B-GGUF-IQ-Imatrix-Request
description: |
L3-Uncen-Merger-Omelette-RP-v0.2-8B is a merge of the following models using LazyMergekit:
Sao10K/L3-8B-Stheno-v3.2
Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
bluuwhale/L3-SthenoMaidBlackroot-8B-V1
Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
migtissera/Llama-3-8B-Synthia-v3.5
tannedbum/L3-Nymeria-Maid-8B
Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
tannedbum/L3-Nymeria-8B
ChaoticNeutrals/Hathor_RP-v.01-L3-8B
cgato/L3-TheSpice-8b-v0.8.3
Sao10K/L3-8B-Stheno-v3.1
Nitral-AI/Hathor_Stable-v0.2-L3-8B
aifeifei798/llama3-8B-DarkIdol-1.0
ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
ResplendentAI/Nymph_8B
overrides:
parameters:
model: L3-Uncen-Merger-Omelette-RP-v0.2-8B-Q4_K_M-imat.gguf
files:
- filename: L3-Uncen-Merger-Omelette-RP-v0.2-8B-Q4_K_M-imat.gguf
sha256: 6bbc42a4c3b25f2b854d76a6e32746b9b3b21dd8856f8f2bc1a5b1269aa8fca1
uri: huggingface://LWDCLS/L3-Uncen-Merger-Omelette-RP-v0.2-8B-GGUF-IQ-Imatrix-Request/L3-Uncen-Merger-Omelette-RP-v0.2-8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
name: "nymph_8b-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/9U_eJCDzLJ8nxb6qfuICc.jpeg
urls:
- https://huggingface.co/ResplendentAI/Nymph_8B
- https://huggingface.co/mradermacher/Nymph_8B-i1-GGUF?not-for-all-audiences=true
description: |
Model card:
Nymph is the culmination of everything I have learned with the T-series project. This model aims to be a unique and full-featured RP juggernaut.
The finetune incorporates 1.6 Million tokens of RP data sourced from Bluemoon, FreedomRP, Aesir-Preview, and Claude Opus logs. I made sure to use the multi-turn sharegpt datasets this time instead of alpaca conversions. I have also included three of my personal datasets. The final touch is an ORPO based upon Openhermes Roleplay preferences.
overrides:
parameters:
model: Nymph_8B.i1-Q4_K_M.gguf
files:
- filename: Nymph_8B.i1-Q4_K_M.gguf
sha256: 5b35794539d9cd262720f47a54f59dbffd5bf6c601950359b5c68d13f1ce13a0
uri: huggingface://mradermacher/Nymph_8B-i1-GGUF/Nymph_8B.i1-Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-ms-astoria-8b"
urls:
- https://huggingface.co/ibrahimkettaneh/L3-MS-Astoria-8b
- https://huggingface.co/mradermacher/L3-MS-Astoria-8b-GGUF
description: |
This is a merge of pre-trained language models created using mergekit.
Merge Method
This model was merged using the Model Stock merge method using failspy/Meta-Llama-3-8B-Instruct-abliterated-v3 as a base.
Models Merged
The following models were included in the merge:
ProbeMedicalYonseiMAILab/medllama3-v20
migtissera/Tess-2.0-Llama-3-8B
Cas-Warehouse/Llama-3-Psychology-LoRA-Stock-8B
TheSkullery/llama-3-cat-8b-instruct-v1
overrides:
parameters:
model: L3-MS-Astoria-8b.Q4_K_M.gguf
files:
- filename: L3-MS-Astoria-8b.Q4_K_M.gguf
sha256: cc5db0ef056aa57cb848988f6a7c739701ecde6303a9d8262f5dac76287ba15a
uri: huggingface://mradermacher/L3-MS-Astoria-8b-GGUF/L3-MS-Astoria-8b.Q4_K_M.gguf
- !!merge <<: *llama3
name: "halomaidrp-v1.33-15b-l3-i1"
urls:
- https://huggingface.co/mradermacher/HaloMaidRP-v1.33-15B-L3-i1-GGUF
- https://huggingface.co/v000000/HaloMaidRP-v1.33-15B-L3
icon: https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/MCdGdalCCtOVPn8X7rqha.jpeg
description: |
This is the third iteration, "Emerald", of the final four and the one I liked the most. It has had limited testing, though it seems relatively decent. Better than 8B at least.
This is a merge of pre-trained language models created using mergekit.
The following models were included in the merge:
grimjim/Llama-3-Instruct-abliteration-LoRA-8B
UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
maldv/llama-3-fantasy-writer-8b
tokyotech-llm/Llama-3-Swallow-8B-v0.1
Sao10K/L3-8B-Stheno-v3.2
ZeusLabs/L3-Aethora-15B-V2
Nitral-AI/Hathor_Respawn-L3-8B-v0.8
Blackroot/Llama-3-8B-Abomination-LORA
overrides:
parameters:
model: HaloMaidRP-v1.33-15B-L3.i1-Q4_K_M.gguf
files:
- filename: HaloMaidRP-v1.33-15B-L3.i1-Q4_K_M.gguf
sha256: 94d0bf2de4df7e5a11b9ca4db3518d7d22c6fa062d1ee16e4db52b2bb26bc8b3
uri: huggingface://mradermacher/HaloMaidRP-v1.33-15B-L3-i1-GGUF/HaloMaidRP-v1.33-15B-L3.i1-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-patronus-lynx-70b-instruct"
urls:
- https://huggingface.co/PatronusAI/Llama-3-Patronus-Lynx-70B-Instruct
- https://huggingface.co/mradermacher/Llama-3-Patronus-Lynx-70B-Instruct-GGUF
description: |
Lynx is an open-source hallucination evaluation model. Patronus-Lynx-70B-Instruct was trained on a mix of datasets including CovidQA, PubmedQA, DROP, RAGTruth. The datasets contain a mix of hand-annotated and synthetic data. The maximum sequence length is 8000 tokens.
overrides:
parameters:
model: Llama-3-Patronus-Lynx-70B-Instruct.Q4_K_M.gguf
files:
- filename: Llama-3-Patronus-Lynx-70B-Instruct.Q4_K_M.gguf
sha256: 95a02b71baff287bd84188fc1babcf9dfae25c315e2613391e694cf944f1e5b3
uri: huggingface://mradermacher/Llama-3-Patronus-Lynx-70B-Instruct-GGUF/Llama-3-Patronus-Lynx-70B-Instruct.Q4_K_M.gguf
- !!merge <<: *llama3
name: "llamax3-8b-alpaca"
urls:
- https://huggingface.co/LLaMAX/LLaMAX3-8B-Alpaca
- https://huggingface.co/mradermacher/LLaMAX3-8B-Alpaca-GGUF
description: |
LLaMAX is a language model with powerful multilingual capabilities without loss of instruction-following capabilities.
We collected extensive training sets in 102 languages for continued pre-training of Llama2 and leveraged the English instruction fine-tuning dataset, Alpaca, to fine-tune its instruction-following capabilities.
LLaMAX supports translation between more than 100 languages, surpassing the performance of similarly scaled LLMs.
Supported Languages
Afrikaans (af), Amharic (am), Arabic (ar), Armenian (hy), Assamese (as), Asturian (ast), Azerbaijani (az), Belarusian (be), Bengali (bn), Bosnian (bs), Bulgarian (bg), Burmese (my), Catalan (ca), Cebuano (ceb), Chinese Simpl (zho), Chinese Trad (zho), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Filipino (tl), Finnish (fi), French (fr), Fulah (ff), Galician (gl), Ganda (lg), Georgian (ka), German (de), Greek (el), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Hungarian (hu), Icelandic (is), Igbo (ig), Indonesian (id), Irish (ga), Italian (it), Japanese (ja), Javanese (jv), Kabuverdianu (kea), Kamba (kam), Kannada (kn), Kazakh (kk), Khmer (km), Korean (ko), Kyrgyz (ky), Lao (lo), Latvian (lv), Lingala (ln), Lithuanian (lt), Luo (luo), Luxembourgish (lb), Macedonian (mk), Malay (ms), Malayalam (ml), Maltese (mt), Maori (mi), Marathi (mr), Mongolian (mn), Nepali (ne), Northern Sotho (ns), Norwegian (no), Nyanja (ny), Occitan (oc), Oriya (or), Oromo (om), Pashto (ps), Persian (fa), Polish (pl), Portuguese (pt), Punjabi (pa), Romanian (ro), Russian (ru), Serbian (sr), Shona (sn), Sindhi (sd), Slovak (sk), Slovenian (sl), Somali (so), Sorani Kurdish (ku), Spanish (es), Swahili (sw), Swedish (sv), Tajik (tg), Tamil (ta), Telugu (te), Thai (th), Turkish (tr), Ukrainian (uk), Umbundu (umb), Urdu (ur), Uzbek (uz), Vietnamese (vi), Welsh (cy), Wolof (wo), Xhosa (xh), Yoruba (yo), Zulu (zu)
overrides:
parameters:
model: LLaMAX3-8B-Alpaca.Q4_K_M.gguf
files:
- filename: LLaMAX3-8B-Alpaca.Q4_K_M.gguf
sha256: 4652209c55d4260634b2195989279f945a072d8574872789a40d1f9b86eb255b
uri: huggingface://mradermacher/LLaMAX3-8B-Alpaca-GGUF/LLaMAX3-8B-Alpaca.Q4_K_M.gguf
- !!merge <<: *llama3
name: "llamax3-8b"
urls:
- https://huggingface.co/LLaMAX/LLaMAX3-8B
- https://huggingface.co/mradermacher/LLaMAX3-8B-GGUF
description: |
LLaMAX is a language model with powerful multilingual capabilities without loss of instruction-following capabilities.
We collected extensive training sets in 102 languages for continued pre-training of Llama2 and leveraged the English instruction fine-tuning dataset, Alpaca, to fine-tune its instruction-following capabilities.
LLaMAX supports translation between more than 100 languages, surpassing the performance of similarly scaled LLMs.
Supported Languages
Afrikaans (af), Amharic (am), Arabic (ar), Armenian (hy), Assamese (as), Asturian (ast), Azerbaijani (az), Belarusian (be), Bengali (bn), Bosnian (bs), Bulgarian (bg), Burmese (my), Catalan (ca), Cebuano (ceb), Chinese Simpl (zho), Chinese Trad (zho), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Filipino (tl), Finnish (fi), French (fr), Fulah (ff), Galician (gl), Ganda (lg), Georgian (ka), German (de), Greek (el), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Hungarian (hu), Icelandic (is), Igbo (ig), Indonesian (id), Irish (ga), Italian (it), Japanese (ja), Javanese (jv), Kabuverdianu (kea), Kamba (kam), Kannada (kn), Kazakh (kk), Khmer (km), Korean (ko), Kyrgyz (ky), Lao (lo), Latvian (lv), Lingala (ln), Lithuanian (lt), Luo (luo), Luxembourgish (lb), Macedonian (mk), Malay (ms), Malayalam (ml), Maltese (mt), Maori (mi), Marathi (mr), Mongolian (mn), Nepali (ne), Northern Sotho (ns), Norwegian (no), Nyanja (ny), Occitan (oc), Oriya (or), Oromo (om), Pashto (ps), Persian (fa), Polish (pl), Portuguese (pt), Punjabi (pa), Romanian (ro), Russian (ru), Serbian (sr), Shona (sn), Sindhi (sd), Slovak (sk), Slovenian (sl), Somali (so), Sorani Kurdish (ku), Spanish (es), Swahili (sw), Swedish (sv), Tajik (tg), Tamil (ta), Telugu (te), Thai (th), Turkish (tr), Ukrainian (uk), Umbundu (umb), Urdu (ur), Uzbek (uz), Vietnamese (vi), Welsh (cy), Wolof (wo), Xhosa (xh), Yoruba (yo), Zulu (zu)
overrides:
parameters:
model: LLaMAX3-8B.Q4_K_M.gguf
files:
- filename: LLaMAX3-8B.Q4_K_M.gguf
sha256: 862fb2be5d74b171f4294f862f43e7cb6e6dbecce29a9f9167da4f1db230daac
uri: huggingface://mradermacher/LLaMAX3-8B-GGUF/LLaMAX3-8B.Q4_K_M.gguf
- !!merge <<: *llama3
name: "arliai-llama-3-8b-dolfin-v0.5"
urls:
- https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Dolfin-v0.5
- https://huggingface.co/QuantFactory/ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF
description: |
Based on Meta-Llama-3-8B-Instruct, and governed by the Meta Llama 3 License agreement: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
This is a fine-tune using improved Dolphin and WizardLM datasets, intended to make the model follow instructions better and refuse less.
Training:
Trained at a 2048 sequence length, since the dataset has an average length of under 1000 tokens, while the base model has an 8192 sequence length. From testing, it still performs fine at the full 8192 context.
Training duration was around 2 days on 2x RTX 3090, using 4-bit loading and QLoRA (rank 64, alpha 128), resulting in ~2% trainable weights.
overrides:
parameters:
model: ArliAI-Llama-3-8B-Dolfin-v0.5.Q4_K_M.gguf
files:
- filename: ArliAI-Llama-3-8B-Dolfin-v0.5.Q4_K_M.gguf
sha256: 71fef02915c606b438ccff2cae6b7760bbb54a558d5f2d39c2421d97b6682fea
uri: huggingface://QuantFactory/ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/ArliAI-Llama-3-8B-Dolfin-v0.5.Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-ezo-8b-common-it"
icon: https://huggingface.co/HODACHI/Llama-3-EZO-8b-Common-it
urls:
- https://huggingface.co/HODACHI/Llama-3-EZO-8b-Common-it
- https://huggingface.co/MCZK/Llama-3-EZO-8b-Common-it-GGUF
description: |
Based on meta-llama/Meta-Llama-3-8B-Instruct, it has been enhanced for Japanese usage through additional pre-training and instruction tuning. (Built with Meta Llama3)
This model is based on Llama-3-8B-Instruct and is subject to the Llama-3 Terms of Use. For detailed information, please refer to the official Llama-3 license page.
overrides:
parameters:
model: Llama-3-EZO-8b-Common-it.Q4_K_M.iMatrix.gguf
files:
- filename: Llama-3-EZO-8b-Common-it.Q4_K_M.iMatrix.gguf
sha256: 0a46165b1c35bfb97d7d5b18969a7bfc2bbf37a90bc5e85f8cab11483f5a8adc
uri: huggingface://MCZK/Llama-3-EZO-8b-Common-it-GGUF/Llama-3-EZO-8b-Common-it.Q4_K_M.iMatrix.gguf
- !!merge <<: *llama3
name: "l3-8b-niitama-v1"
urls:
- https://huggingface.co/Sao10K/L3-8B-Niitama-v1
- https://huggingface.co/mradermacher/L3-8B-Niitama-v1-GGUF
description: |
Niitama on Horde
overrides:
parameters:
model: L3-8B-Niitama-v1.Q4_K_M.gguf
files:
- filename: L3-8B-Niitama-v1.Q4_K_M.gguf
sha256: a0e6d8972e1c73af7952ee1b8a3898f52c6036701571fea37ff621b71e89eb53
uri: huggingface://mradermacher/L3-8B-Niitama-v1-GGUF/L3-8B-Niitama-v1.Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-8b-niitama-v1-i1"
urls:
- https://huggingface.co/Sao10K/L3-8B-Niitama-v1
- https://huggingface.co/mradermacher/L3-8B-Niitama-v1-i1-GGUF
description: |
Niitama on Horde (iMatrix quants)
overrides:
parameters:
model: L3-8B-Niitama-v1.i1-Q4_K_M.gguf
files:
- filename: L3-8B-Niitama-v1.i1-Q4_K_M.gguf
sha256: 8c62f831db2a6e34aa75459fe8a98815199ecc2dac1892a460b8b86363b6826e
uri: huggingface://mradermacher/L3-8B-Niitama-v1-i1-GGUF/L3-8B-Niitama-v1.i1-Q4_K_M.gguf
- !!merge <<: *llama3
icon: https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA/resolve/main/Images/LLAMA-3_8B_Unaligned_BETA.png
name: "llama-3_8b_unaligned_beta"
urls:
- https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
- https://huggingface.co/bartowski/LLAMA-3_8B_Unaligned_BETA-GGUF
description: |
In the Wild West of the AI world, the real titans never hit their deadlines, no sir!
The projects that finish on time? They're the soft ones—basic, surface-level shenanigans. But the serious projects? They're always delayed. You set a date, then reality hits: not gonna happen, scope creep that mutates the roadmap, unexpected turn of events that derails everything.
It's only been 4 months since the Alpha was released, and half a year since the project started, but it felt like nearly a decade.
Deadlines shift, but with each delay, you're not failing—you're refining, and becoming more ambitious. A project that keeps getting pushed isn't late; it's just gaining weight, becoming something worth building, and truly worth seeing all the way through. The longer it's delayed, the more serious it gets.
LLAMA-3_8B_Unaligned is a serious project, and thank god, the Beta is finally here.
I love you all unconditionally, thanks for all the support and kind words!
overrides:
parameters:
model: LLAMA-3_8B_Unaligned_BETA-Q4_K_M.gguf
files:
- filename: LLAMA-3_8B_Unaligned_BETA-Q4_K_M.gguf
sha256: 5b88fb4537339996c04e4a1b6ef6a2d555c4103b6378e273ae9c6c5e77af67eb
uri: huggingface://bartowski/LLAMA-3_8B_Unaligned_BETA-GGUF/LLAMA-3_8B_Unaligned_BETA-Q4_K_M.gguf
- !!merge <<: *llama3
name: "freyja-v4.95-maldv-7b-non-fiction-i1"
urls:
- https://huggingface.co/MrRobotoAI/Freyja-v4.95-maldv-7b-NON-FICTION
- https://huggingface.co/mradermacher/Freyja-v4.95-maldv-7b-NON-FICTION-i1-GGUF
description: |
This model was merged using the Model Stock merge method using aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K as a base.
The following models were included in the merge:
maldv/llama-3-fantasy-writer-8b
maldv/badger-iota-llama-3-8b
maldv/badger-lambda-llama-3-8b
maldv/badger-mu-llama-3-8b
maldv/badger-kappa-llama-3-8b
maldv/badger-writer-llama-3-8b
overrides:
parameters:
model: Freyja-v4.95-maldv-7b-NON-FICTION.i1-Q4_K_M.gguf
files:
- filename: Freyja-v4.95-maldv-7b-NON-FICTION.i1-Q4_K_M.gguf
sha256: cdc0f4de6df2ba120835fbd25c2a0ae2af8548f46d2c40c7a018c51c3d19e0c0
uri: huggingface://mradermacher/Freyja-v4.95-maldv-7b-NON-FICTION-i1-GGUF/Freyja-v4.95-maldv-7b-NON-FICTION.i1-Q4_K_M.gguf
- &chatml
### ChatML
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "una-thepitbull-21.4b-v2"
license: afl-3.0
icon: https://huggingface.co/fblgit/UNA-ThePitbull-21.4B-v2/resolve/main/DE-UNA-ThePitbull-21.4B-v2.png
description: |
Introducing the best LLM in the industry. Nearly as good as a 70B, just a 21.4B based on saltlux/luxia-21.4b-alignment-v1.0 UNA - ThePitbull 21.4B v2
urls:
- https://huggingface.co/fblgit/UNA-ThePitbull-21.4B-v2
- https://huggingface.co/bartowski/UNA-ThePitbull-21.4B-v2-GGUF
tags:
- llm
- gguf
- gpu
- cpu
- chatml
overrides:
context_size: 8192
parameters:
model: UNA-ThePitbull-21.4B-v2-Q4_K_M.gguf
files:
- filename: UNA-ThePitbull-21.4B-v2-Q4_K_M.gguf
sha256: f08780986748a04e707a63dcac616330c2afc7f9fb2cc6b1d9784672071f3c85
uri: huggingface://bartowski/UNA-ThePitbull-21.4B-v2-GGUF/UNA-ThePitbull-21.4B-v2-Q4_K_M.gguf
- url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "helpingai-9b"
license: hsul
icon: https://huggingface.co/OEvortex/HelpingAI-3B/resolve/main/HelpingAI.png
description: |
HelpingAI-9B is a large language model designed for emotionally intelligent conversational interactions. It is trained to engage users with empathy, understanding, and supportive dialogue across a wide range of topics and contexts. The model aims to provide a supportive AI companion that can attune to users' emotional states and communicative needs.
urls:
- https://huggingface.co/OEvortex/HelpingAI-9B
- https://huggingface.co/nold/HelpingAI-9B-GGUF
tags:
- llm
- gguf
- gpu
- cpu
- chatml
overrides:
context_size: 4096
parameters:
model: HelpingAI-9B_Q4_K_M.gguf
files:
- filename: HelpingAI-9B_Q4_K_M.gguf
sha256: 9c90f3a65332a03a6cbb563eee19c7586d9544f646ff9f33f7f1904b3d415ae2
uri: huggingface://nold/HelpingAI-9B-GGUF/HelpingAI-9B_Q4_K_M.gguf
- url: "github:mudler/LocalAI/gallery/chatml-hercules.yaml@master"
icon: "https://tse3.mm.bing.net/th/id/OIG1.vnrl3xpEcypR3McLW63q?pid=ImgGn"
urls:
- https://huggingface.co/Locutusque/Llama-3-Hercules-5.0-8B
- https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF
name: "llama-3-hercules-5.0-8b"
tags:
- llm
- gguf
- gpu
- cpu
- chatml
- function-calling
description: |
Llama-3-Hercules-5.0-8B is a fine-tuned language model derived from Llama-3-8B. It is specifically designed to excel in instruction following, function calls, and conversational interactions across various scientific and technical domains.
overrides:
parameters:
model: Llama-3-Hercules-5.0-8B-Q4_K_M.gguf
files:
- filename: Llama-3-Hercules-5.0-8B-Q4_K_M.gguf
sha256: 83647caf4a23a91697585cff391e7d1236fac867392f9e49a6dab59f81b5f810
uri: huggingface://bartowski/Llama-3-Hercules-5.0-8B-GGUF/Llama-3-Hercules-5.0-8B-Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-15b-mythicalmaid-t0.0001"
icon: https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/Nx5jjEYNH26OS2_87mPTM.png
urls:
- https://huggingface.co/v000000/L3-15B-MythicalMaid-t0.0001
- https://huggingface.co/mradermacher/L3-15B-MythicalMaid-t0.0001-GGUF
description: |
Llama-3-15B-MythicalMaid-t0.0001
A merge of the following models using a custom NearSwap(t0.0001) algorithm (inverted):
ZeusLabs/L3-Aethora-15B-V2
v000000/HaloMaidRP-v1.33-15B-L3
With ZeusLabs/L3-Aethora-15B-V2 as the base model.
This merge was inverted compared to "L3-15B-EtherealMaid-t0.0001".
overrides:
parameters:
model: L3-15B-MythicalMaid-t0.0001.Q4_K_M.gguf
files:
- filename: L3-15B-MythicalMaid-t0.0001.Q4_K_M.gguf
sha256: ecbd57783006f1a027f8a7f5a5d551dc8b3568912825f566d79fd34a804e8970
uri: huggingface://mradermacher/L3-15B-MythicalMaid-t0.0001-GGUF/L3-15B-MythicalMaid-t0.0001.Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-15b-etherealmaid-t0.0001-i1"
icon: https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/FwYXt2h_FdmlL0Z6qYufz.png
urls:
- https://huggingface.co/v000000/L3-15B-EtherealMaid-t0.0001
- https://huggingface.co/mradermacher/L3-15B-EtherealMaid-t0.0001-i1-GGUF
description: |
Llama-3-15B-EtherealMaid-t0.0001
A merge of the following models using a custom NearSwap(t0.0001) algorithm:
v000000/HaloMaidRP-v1.33-15B-L3
ZeusLabs/L3-Aethora-15B-V2
With v000000/HaloMaidRP-v1.33-15B-L3 as the base model.
overrides:
parameters:
model: L3-15B-EtherealMaid-t0.0001.i1-Q4_K_M.gguf
files:
- filename: L3-15B-EtherealMaid-t0.0001.i1-Q4_K_M.gguf
sha256: 2911be6be8e0fd4184998d452410ba847491b4ab71a928749de87cafb0e13757
uri: huggingface://mradermacher/L3-15B-EtherealMaid-t0.0001-i1-GGUF/L3-15B-EtherealMaid-t0.0001.i1-Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-8b-celeste-v1"
icon: https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/Zv__LDTO-nHvpuxPcCgUU.webp
urls:
- https://huggingface.co/nothingiisreal/L3-8B-Celeste-v1
- https://huggingface.co/bartowski/L3-8B-Celeste-v1-GGUF
description: |
Trained on LLaMA 3 8B Instruct at 8K context using Reddit Writing Prompts, Opus 15K Instruct, and cleaned c2 logs.
This is a roleplay model; any instruction-following capabilities outside roleplay contexts are coincidental.
overrides:
parameters:
model: L3-8B-Celeste-v1-Q4_K_M.gguf
files:
- filename: L3-8B-Celeste-v1-Q4_K_M.gguf
sha256: ed5277719965fb6bbcce7d16742e3bac4a8d5b8f52133261a3402a480cd65317
uri: huggingface://bartowski/L3-8B-Celeste-v1-GGUF/L3-8B-Celeste-v1-Q4_K_M.gguf
- !!merge <<: *llama3
name: "l3-8b-celeste-v1.2"
icon: https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/Zv__LDTO-nHvpuxPcCgUU.webp
urls:
- https://huggingface.co/mudler/L3-8B-Celeste-V1.2-Q4_K_M-GGUF
description: |
Trained on LLaMA 3 8B Instruct at 8K context using Reddit Writing Prompts, Opus 15K Instruct, and cleaned c2 logs.
This is a roleplay model; any instruction-following capabilities outside roleplay contexts are coincidental.
overrides:
parameters:
model: l3-8b-celeste-v1.2-q4_k_m.gguf
files:
- filename: l3-8b-celeste-v1.2-q4_k_m.gguf
sha256: 7752204c0e9f627ff5726eb69bb6114974cafbc934a993ad019abfba62002783
uri: huggingface://mudler/L3-8B-Celeste-V1.2-Q4_K_M-GGUF/l3-8b-celeste-v1.2-q4_k_m.gguf
- !!merge <<: *llama3
name: "llama-3-tulu-2-8b-i1"
icon: https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/Tulu%20V2%20banner.png
urls:
- https://huggingface.co/allenai/llama-3-tulu-2-8b
- https://huggingface.co/mradermacher/llama-3-tulu-2-8b-i1-GGUF
description: |
Tulu is a series of language models that are trained to act as helpful assistants. Llama 3 Tulu V2 8B is a fine-tuned version of Llama 3 that was trained on a mix of publicly available, synthetic and human datasets.
overrides:
parameters:
model: llama-3-tulu-2-8b.i1-Q4_K_M.gguf
files:
- filename: llama-3-tulu-2-8b.i1-Q4_K_M.gguf
sha256: f859c22bfa64f461e9ffd973dc7ad6a78bb98b1dda6f49abfa416a4022b7e333
uri: huggingface://mradermacher/llama-3-tulu-2-8b-i1-GGUF/llama-3-tulu-2-8b.i1-Q4_K_M.gguf
- !!merge <<: *llama3
name: "llama-3-tulu-2-dpo-70b-i1"
icon: https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/Tulu%20V2%20banner.png
urls:
- https://huggingface.co/allenai/llama-3-tulu-2-dpo-70b
- https://huggingface.co/mradermacher/llama-3-tulu-2-dpo-70b-i1-GGUF
description: |
Tulu is a series of language models that are trained to act as helpful assistants. Llama 3 Tulu V2 DPO 70B is a fine-tuned version of Llama 3 that was trained on a mix of publicly available, synthetic and human datasets.
overrides:
parameters:
model: llama-3-tulu-2-dpo-70b.i1-Q4_K_M.gguf
files:
- filename: llama-3-tulu-2-dpo-70b.i1-Q4_K_M.gguf
sha256: fc309bbdf1e2bdced954c4c8dc1f9a885c547017ee5e750bfde645af89e3d3a5
uri: huggingface://mradermacher/llama-3-tulu-2-dpo-70b-i1-GGUF/llama-3-tulu-2-dpo-70b.i1-Q4_K_M.gguf
- !!merge <<: *llama3
license: cc-by-nc-4.0
name: "suzume-llama-3-8b-multilingual-orpo-borda-top25"
icon: https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kWQSu02YfgYdUQqv4s5lq.png
urls:
- https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25
- https://huggingface.co/RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf
description: |
This is Suzume ORPO, an ORPO trained fine-tune of the lightblue/suzume-llama-3-8B-multilingual model using our lightblue/mitsu dataset.
We have trained several versions of this model using ORPO and so recommend that you use the best performing model from our tests, lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half.
Note that this model has a non-commercial license as we used the Command R and Command R+ models to generate our training data for this model (lightblue/mitsu).
We are currently working on developing a commercially usable model, so stay tuned for that!
overrides:
parameters:
model: suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_K_M.gguf
files:
- filename: suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_K_M.gguf
sha256: ef75a02c5f38e14a8873c7989188dac6974851b4654279fe1921d2c8018cc388
uri: huggingface://RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf/suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_K_M.gguf
- !!merge <<: *llama3
name: "calme-2.4-llama3-70b"
icon: https://huggingface.co/MaziyarPanahi/calme-2.4-llama3-70b/resolve/main/llama-3-merges.webp
urls:
- https://huggingface.co/MaziyarPanahi/calme-2.4-llama3-70b
- https://huggingface.co/mradermacher/calme-2.4-llama3-70b-GGUF
description: |
This model is a fine-tune (DPO) of meta-llama/Meta-Llama-3-70B-Instruct model.
overrides:
parameters:
model: calme-2.4-llama3-70b.Q4_K_M.gguf
files:
- filename: calme-2.4-llama3-70b.Q4_K_M.gguf
sha256: 0b44ac8a88395dfc60f1b9d3cfffc0ffef74ec0a302e610ef91fc787187568f2
uri: huggingface://mradermacher/calme-2.4-llama3-70b-GGUF/calme-2.4-llama3-70b.Q4_K_M.gguf
- !!merge <<: *llama3
name: "meta-llama-3-instruct-8.9b-brainstorm-5x-form-11"
urls:
- https://huggingface.co/DavidAU/Meta-Llama-3-Instruct-8.9B-BRAINSTORM-5x-FORM-11-GGUF
description: |
Meta-Llama-3-8B Instruct (now at 8.9B) is an enhanced version of the LLM model, specifically designed for creative use cases such as story writing, roleplaying, and fiction. This model has been augmented through the "Brainstorm" process, which involves expanding and calibrating the reasoning center of the LLM to improve its performance in various creative tasks. The enhancements brought by this process include more detailed and nuanced descriptions, stronger prose, and a greater sense of immersion in the story. The model is capable of generating long and vivid content, with fewer clichés and more focused, coherent narratives. Users can provide more instructions and details to elicit stronger and more engaging responses from the model. The "Brainstorm" process has been tested on multiple LLM models, including Llama2, Llama3, and Mistral, as well as on individual models like Llama3 Instruct, Mistral Instruct, and custom fine-tuned models.
overrides:
parameters:
model: Meta-Llama-3-8B-Instruct-exp5-11-Q4_K_M.gguf
files:
- filename: Meta-Llama-3-8B-Instruct-exp5-11-Q4_K_M.gguf
sha256: 5dd81b8b809667d10036499affdd1461cf95af50b405cbc9f800b421a4b60e98
uri: huggingface://DavidAU/Meta-Llama-3-Instruct-8.9B-BRAINSTORM-5x-FORM-11-GGUF/Meta-Llama-3-8B-Instruct-exp5-11-Q4_K_M.gguf
- !!merge <<: *llama3
name: "rp-naughty-v1.0c-8b"
urls:
- https://huggingface.co/QuantFactory/RP-Naughty-v1.0c-8b-GGUF
description: |
This model was merged using the Model Stock merge method using aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K as a base.
The following models were included in the merge:
underwoods/adventure-8b
Khetterman/Multilingual-SaigaSuzume-8B
underwoods/writer-8b
Khetterman/Kosmos-8B-v1
Khetterman/CursedMatrix-8B-v9
overrides:
parameters:
model: RP-Naughty-v1.0c-8b.Q4_K_M.gguf
files:
- filename: RP-Naughty-v1.0c-8b.Q4_K_M.gguf
sha256: c344564d26d0c3d244d31cfeb103666eab37f9dee6678a2dbaf5bfcf4109d789
uri: huggingface://QuantFactory/RP-Naughty-v1.0c-8b-GGUF/RP-Naughty-v1.0c-8b.Q4_K_M.gguf
- !!merge <<: *llama3
name: "bio-medical-llama-3-8b"
icon: https://cdn-uploads.huggingface.co/production/uploads/653f5b93cd52f288490edc83/zPMUugzfOiwTiRw88jm7T.jpeg
urls:
- https://huggingface.co/ContactDoctor/Bio-Medical-Llama-3-8B
- https://huggingface.co/QuantFactory/Bio-Medical-Llama-3-8B-GGUF
description: |
Bio-Medical-Llama-3-8B model is a specialized large language model designed for biomedical applications. It is finetuned from the meta-llama/Meta-Llama-3-8B-Instruct model using a custom dataset containing over 500,000 diverse entries. These entries include a mix of synthetic and manually curated data, ensuring high quality and broad coverage of biomedical topics.
The model is trained to understand and generate text related to various biomedical fields, making it a valuable tool for researchers, clinicians, and other professionals in the biomedical domain.
overrides:
parameters:
model: Bio-Medical-Llama-3-8B.Q4_K_M.gguf
files:
- filename: Bio-Medical-Llama-3-8B.Q4_K_M.gguf
sha256: 672939e0487d02c55734132c25a59f26e4deaac7cd49445a7028f2291139edcc
uri: huggingface://QuantFactory/Bio-Medical-Llama-3-8B-GGUF/Bio-Medical-Llama-3-8B.Q4_K_M.gguf
- &command-R
### START Command-r
url: "github:mudler/LocalAI/gallery/command-r.yaml@master"
name: "command-r-v01:q1_s"
license: "cc-by-nc-4.0"
icon: https://cdn.sanity.io/images/rjtqmwfu/production/ae020d94b599cc453cc09ebc80be06d35d953c23-102x18.svg
urls:
- https://huggingface.co/CohereForAI/c4ai-command-r-v01
- https://huggingface.co/dranger003/c4ai-command-r-v01-iMat.GGUF
description: |
C4AI Command-R is a research release of a 35 billion parameter highly performant generative model. Command-R is a large language model with open weights optimized for a variety of use cases including reasoning, summarization, and question answering. Command-R has the capability for multilingual generation evaluated in 10 languages and highly performant RAG capabilities.
tags:
- llm
- gguf
- gpu
- command-r
- cpu
overrides:
parameters:
model: ggml-c4ai-command-r-v01-iq1_s.gguf
files:
- filename: "ggml-c4ai-command-r-v01-iq1_s.gguf"
sha256: "aad4594ee45402fe344d8825937d63b9fa1f00becc6d1cc912b016dbb020e0f0"
uri: "huggingface://dranger003/c4ai-command-r-v01-iMat.GGUF/ggml-c4ai-command-r-v01-iq1_s.gguf"
- !!merge <<: *command-R
name: "aya-23-8b"
urls:
- https://huggingface.co/CohereForAI/aya-23-8B
- https://huggingface.co/bartowski/aya-23-8B-GGUF
description: |
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained Command family of models with the recently released Aya Collection. The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find here.
overrides:
parameters:
model: aya-23-8B-Q4_K_M.gguf
files:
- filename: "aya-23-8B-Q4_K_M.gguf"
sha256: "21b3aa3abf067f78f6fe08deb80660cc4ee8ad7b4ab873a98d87761f9f858b0f"
uri: "huggingface://bartowski/aya-23-8B-GGUF/aya-23-8B-Q4_K_M.gguf"
- !!merge <<: *command-R
name: "aya-23-35b"
urls:
- https://huggingface.co/CohereForAI/aya-23-35B
- https://huggingface.co/bartowski/aya-23-35B-GGUF
description: |
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained Command family of models with the recently released Aya Collection. The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find here.
overrides:
parameters:
model: aya-23-35B-Q4_K_M.gguf
files:
- filename: "aya-23-35B-Q4_K_M.gguf"
sha256: "57824768c1a945e21e028c8e9a29b39adb4838d489f5865c82601ab9ad98065d"
uri: "huggingface://bartowski/aya-23-35B-GGUF/aya-23-35B-Q4_K_M.gguf"
- &phi-2-chat
### START Phi-2
url: "github:mudler/LocalAI/gallery/phi-2-chat.yaml@master"
license: mit
description: |
Phi-2 fine-tuned on the OpenHermes 2.5 dataset, optimised for multi-turn conversation and character impersonation.
The dataset has been pre-processed by doing the following:
- remove all refusals
- remove any mention of AI assistant
- split any multi-turn dialog generated in the dataset into multi-turn conversation records
- added NSFW generated conversations from the Teatime dataset
Developed by: l3utterfly
Funded by: Layla Network
Model type: Phi
Language(s) (NLP): English
License: MIT
Finetuned from model: Phi-2
urls:
- https://huggingface.co/l3utterfly/phi-2-layla-v1-chatml
- https://huggingface.co/l3utterfly/phi-2-layla-v1-chatml-gguf
tags:
- llm
- gguf
- gpu
- llama2
- cpu
name: "phi-2-chat:Q8_0"
overrides:
parameters:
model: phi-2-layla-v1-chatml-Q8_0.gguf
files:
- filename: "phi-2-layla-v1-chatml-Q8_0.gguf"
sha256: "0cf542a127c2c835066a78028009b7eddbaf773cc2a26e1cb157ce5e09c1a2e0"
uri: "huggingface://l3utterfly/phi-2-layla-v1-chatml-gguf/phi-2-layla-v1-chatml-Q8_0.gguf"
- !!merge <<: *phi-2-chat
name: "phi-2-chat"
overrides:
parameters:
model: phi-2-layla-v1-chatml-Q4_K.gguf
files:
- filename: "phi-2-layla-v1-chatml-Q4_K.gguf"
sha256: "b071e5624b60b8911f77261398802c4b4079c6c689e38e2ce75173ed62bc8a48"
uri: "huggingface://l3utterfly/phi-2-layla-v1-chatml-gguf/phi-2-layla-v1-chatml-Q4_K.gguf"
- !!merge <<: *phi-2-chat
license: mit
icon: "https://huggingface.co/rhysjones/phi-2-orange/resolve/main/phi-2-orange.jpg"
description: |
A two-step finetune of Phi-2, with a bit of zest.
There is an updated model at rhysjones/phi-2-orange-v2 which has higher evals, if you wish to test.
urls:
- https://huggingface.co/rhysjones/phi-2-orange
- https://huggingface.co/TheBloke/phi-2-orange-GGUF
tags:
- llm
- gguf
- llama2
- gpu
- cpu
name: "phi-2-orange"
overrides:
parameters:
model: phi-2-orange.Q4_0.gguf
files:
- filename: "phi-2-orange.Q4_0.gguf"
sha256: "49cb710ae688e1b19b1b299087fa40765a0cd677e3afcc45e5f7ef6750975dcf"
uri: "huggingface://TheBloke/phi-2-orange-GGUF/phi-2-orange.Q4_0.gguf"
### Internlm2
- name: "internlm2_5-7b-chat-1m"
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
urls:
- https://huggingface.co/internlm/internlm2_5-7b-chat-1m
- https://huggingface.co/bartowski/internlm2_5-7b-chat-1m-GGUF
icon: https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e
tags:
- internlm2
- gguf
- cpu
- gpu
description: |
InternLM2.5 has open-sourced a 7 billion parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics:
Outstanding reasoning capability: State-of-the-art performance on Math reasoning, surpassing models like Llama3 and Gemma2-9B.
1M Context window: Nearly perfect at finding needles in the haystack with 1M-long context, with leading performance on long-context tasks like LongBench. Try it with LMDeploy for 1M-context inference and a file chat demo.
Stronger tool use: InternLM2.5 supports gathering information from more than 100 web pages, corresponding implementation will be released in Lagent soon. InternLM2.5 has better tool utilization-related capabilities in instruction following, tool selection and reflection. See examples.
overrides:
parameters:
model: internlm2_5-7b-chat-1m-Q4_K_M.gguf
files:
- filename: internlm2_5-7b-chat-1m-Q4_K_M.gguf
uri: huggingface://bartowski/internlm2_5-7b-chat-1m-GGUF/internlm2_5-7b-chat-1m-Q4_K_M.gguf
sha256: 10d5e18a4125f9d4d74a9284a21e0c820b150af06dee48665e54ff6e1be3a564
- &phi-3
### START Phi-3
url: "github:mudler/LocalAI/gallery/phi-3-chat.yaml@master"
name: "phi-3-mini-4k-instruct"
license: mit
description: |
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, with the Mini version in two variants, 4K and 128K, which is the context length (in tokens) it can support. The model underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with less than 13 billion parameters.
urls:
- https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf
tags:
- llm
- gguf
- gpu
- llama2
- cpu
overrides:
parameters:
model: Phi-3-mini-4k-instruct-q4.gguf
files:
- filename: "Phi-3-mini-4k-instruct-q4.gguf"
sha256: "8a83c7fb9049a9b2e92266fa7ad04933bb53aa1e85136b7b30f1b8000ff2edef"
uri: "huggingface://microsoft/Phi-3-mini-4k-instruct-gguf/Phi-3-mini-4k-instruct-q4.gguf"
- !!merge <<: *phi-3
name: "phi-3-mini-4k-instruct:fp16"
overrides:
parameters:
model: Phi-3-mini-4k-instruct-fp16.gguf
files:
- filename: "Phi-3-mini-4k-instruct-fp16.gguf"
uri: "huggingface://microsoft/Phi-3-mini-4k-instruct-gguf/Phi-3-mini-4k-instruct-fp16.gguf"
sha256: 5d99003e395775659b0dde3f941d88ff378b2837a8dc3a2ea94222ab1420fad3
- !!merge <<: *phi-3
name: "phi-3-medium-4k-instruct"
description: |
The Phi-3-Medium-4K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes
both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Medium version in two variants 4K and 128K which is the context length (in tokens) that it can support.
urls:
- https://huggingface.co/bartowski/Phi-3-medium-4k-instruct-GGUF
- https://huggingface.co/microsoft/Phi-3-medium-4k-instruct
overrides:
parameters:
model: Phi-3-medium-4k-instruct-Q4_K_M.gguf
files:
- filename: "Phi-3-medium-4k-instruct-Q4_K_M.gguf"
uri: "huggingface://bartowski/Phi-3-medium-4k-instruct-GGUF/Phi-3-medium-4k-instruct-Q4_K_M.gguf"
sha256: 6f05c97bc676dd1ec8d58e9a8795b4f5c809db771f6fc7bf48634c805face82c
- !!merge <<: *phi-3
name: "cream-phi-3-14b-v1"
icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/AP4-OHepdqiqHj2KSi26M.gif
description: |
CreamPhi 14B is the first Phi Medium to be trained with roleplay and moist.
urls:
- https://huggingface.co/TheDrummer/Cream-Phi-3-14B-v1-GGUF
overrides:
parameters:
model: Cream-Phi-3-14B-v1-Q4_K_M.gguf
files:
- filename: Cream-Phi-3-14B-v1-Q4_K_M.gguf
uri: huggingface://TheDrummer/Cream-Phi-3-14B-v1-GGUF/Cream-Phi-3-14B-v1-Q4_K_M.gguf
sha256: ec67018a86090da415517acf21ad48f28e02dff664a1dd35602f1f8fa94f6a27
- !!merge <<: *phi-3
name: "phi3-4x4b-v1"
description: |
a continually pretrained phi3-mini sparse moe upcycle
urls:
- https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF
- https://huggingface.co/Fizzarolli/phi3-4x4b-v1
overrides:
parameters:
model: phi3-4x4b-v1-Q4_K_M.gguf
files:
- filename: phi3-4x4b-v1-Q4_K_M.gguf
uri: huggingface://bartowski/phi3-4x4b-v1-GGUF/phi3-4x4b-v1-Q4_K_M.gguf
sha256: fd33220186b7076f4b306f27b3a8913384435a2ca90185a71c9df5a752d3a298
- !!merge <<: *phi-3
name: "phi-3.1-mini-4k-instruct"
urls:
- https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF
description: |
This is an update over the original instruction-tuned Phi-3-mini release based on valuable customer feedback. The model used additional post-training data, leading to substantial gains in instruction following and structured output.
It is based on the original model from Microsoft, but has been updated and quantized using the llama.cpp release b3278.
overrides:
parameters:
model: Phi-3.1-mini-4k-instruct-Q4_K_M.gguf
files:
- filename: Phi-3.1-mini-4k-instruct-Q4_K_M.gguf
uri: huggingface://bartowski/Phi-3.1-mini-4k-instruct-GGUF/Phi-3.1-mini-4k-instruct-Q4_K_M.gguf
sha256: d6d25bf078321bea4a079c727b273cb0b5a2e0b4cf3add0f7a2c8e43075c414f
- !!merge <<: *phi-3
name: "phillama-3.8b-v0.1"
icon: https://cdn-uploads.huggingface.co/production/uploads/657eb5b256c9c67605a6e8b5/f96pPiJQb3puzbPYNknG2.png
urls:
- https://huggingface.co/RichardErkhov/raincandy-u_-_phillama-3.8b-v0.1-gguf
description: |
Phillama is a model based on Phi-3-mini and trained on the Llama-generated dataset raincandy-u/Dextromethorphan-10k to make it more "llama-like". The model is also converted into the Llama format, so it will work with any Llama-2/3 workflow. It aims to generate text with a specific "llama-like" style and is suited for text-generation tasks.
overrides:
parameters:
model: phillama-3.8b-v0.1.Q4_K_M.gguf
files:
- filename: phillama-3.8b-v0.1.Q4_K_M.gguf
sha256: da537d352b7aae54bbad0d2cff3e3a1b0e1dc1e1d25bec3aae1d05cf4faee7a2
uri: huggingface://RichardErkhov/raincandy-u_-_phillama-3.8b-v0.1-gguf/phillama-3.8b-v0.1.Q4_K_M.gguf
- !!merge <<: *phi-3
name: "calme-2.3-phi3-4b"
icon: https://huggingface.co/MaziyarPanahi/calme-2.1-phi3-4b/resolve/main/phi-3-instruct.webp
urls:
- https://huggingface.co/MaziyarPanahi/calme-2.3-phi3-4b
- https://huggingface.co/MaziyarPanahi/calme-2.3-phi3-4b-GGUF
description: |
This model is a fine-tune (DPO) of the microsoft/Phi-3-mini-4k-instruct model.
overrides:
parameters:
model: Phi-3-mini-4k-instruct-v0.3.Q4_K_M.gguf
files:
- filename: Phi-3-mini-4k-instruct-v0.3.Q4_K_M.gguf
sha256: 3a23e1052369c080afb925882bd814cbea5ec859894655a7434c3d49e43a6127
uri: huggingface://MaziyarPanahi/calme-2.3-phi3-4b-GGUF/Phi-3-mini-4k-instruct-v0.3.Q4_K_M.gguf
- !!merge <<: *phi-3
name: "phi-3.5-mini-instruct"
urls:
- https://huggingface.co/microsoft/Phi-3.5-mini-instruct
- https://huggingface.co/MaziyarPanahi/Phi-3.5-mini-instruct-GGUF
description: |
Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning-dense data. The model belongs to the Phi-3 model family and supports a 128K token context length. The model underwent a rigorous enhancement process, incorporating supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.
overrides:
parameters:
model: Phi-3.5-mini-instruct.Q4_K_M.gguf
files:
- filename: Phi-3.5-mini-instruct.Q4_K_M.gguf
sha256: 3f68916e850b107d8641d18bcd5548f0d66beef9e0a9077fe84ef28943eb7e88
uri: huggingface://MaziyarPanahi/Phi-3.5-mini-instruct-GGUF/Phi-3.5-mini-instruct.Q4_K_M.gguf
- !!merge <<: *phi-3
name: "calme-2.1-phi3.5-4b-i1"
icon: https://huggingface.co/MaziyarPanahi/calme-2.1-phi3.5-4b/resolve/main/calme-2.webp
urls:
- https://huggingface.co/MaziyarPanahi/calme-2.1-phi3.5-4b
- https://huggingface.co/mradermacher/calme-2.1-phi3.5-4b-i1-GGUF
description: |
This model is a fine-tuned version of the microsoft/Phi-3.5-mini-instruct, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
overrides:
parameters:
model: calme-2.1-phi3.5-4b.i1-Q4_K_M.gguf
files:
- filename: calme-2.1-phi3.5-4b.i1-Q4_K_M.gguf
sha256: 989eccacd52b6d9ebf2c06c35c363da19aadb125659a10df299b7130bc293e77
uri: huggingface://mradermacher/calme-2.1-phi3.5-4b-i1-GGUF/calme-2.1-phi3.5-4b.i1-Q4_K_M.gguf
- !!merge <<: *phi-3
name: "phi-3.5-mini-titanfusion-0.2"
urls:
- https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.2
- https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF
description: |
This model was merged using the TIES merge method using microsoft/Phi-3.5-mini-instruct as a base.
The following models were included in the merge:
nbeerbower/phi3.5-gutenberg-4B
ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
bunnycore/Phi-3.5-Mini-Hyper
bunnycore/Phi-3.5-Mini-Hyper + bunnycore/Phi-3.1-EvolKit-lora
bunnycore/Phi-3.5-Mini-Sonet-RP
bunnycore/Phi-3.5-mini-TitanFusion-0.1
overrides:
parameters:
model: Phi-3.5-mini-TitanFusion-0.2.Q4_K_M.gguf
files:
- filename: Phi-3.5-mini-TitanFusion-0.2.Q4_K_M.gguf
sha256: 9579305712f2bca246914639c4873acdc1e7bc64ac2c7db0230df4f0ca0ef234
uri: huggingface://mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/Phi-3.5-mini-TitanFusion-0.2.Q4_K_M.gguf
- !!merge <<: *phi-3
name: "phi-3-vision:vllm"
url: "github:mudler/LocalAI/gallery/phi-3-vision.yaml@master"
description: |
Phi-3-vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include synthetic data and filtered publicly available websites, with a focus on very high-quality, reasoning-dense data in both text and vision. The model belongs to the Phi-3 model family, and the multimodal version supports a context length of up to 128K tokens. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
- !!merge <<: *phi-3
name: "phi-3.5-vision:vllm"
url: "github:mudler/LocalAI/gallery/phi-3-vision.yaml@master"
overrides:
parameters:
model: microsoft/Phi-3.5-vision-instruct
description: |
Phi-3.5-vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include synthetic data and filtered publicly available websites, with a focus on very high-quality, reasoning-dense data in both text and vision. The model belongs to the Phi-3 model family, and the multimodal version supports a context length of up to 128K tokens. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
- &hermes-2-pro-mistral
### START Hermes
url: "github:mudler/LocalAI/gallery/hermes-2-pro-mistral.yaml@master"
name: "hermes-2-pro-mistral"
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png
license: apache-2.0
description: |
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 81% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about function calling on our GitHub repo here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
urls:
- https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF
tags:
- llm
- gguf
- gpu
- mistral
- cpu
- function-calling
overrides:
parameters:
model: Hermes-2-Pro-Mistral-7B.Q4_0.gguf
files:
- filename: "Hermes-2-Pro-Mistral-7B.Q4_0.gguf"
sha256: "f446c3125026f7af6757dd097dda02280adc85e908c058bd6f1c41a118354745"
uri: "huggingface://NousResearch/Hermes-2-Pro-Mistral-7B-GGUF/Hermes-2-Pro-Mistral-7B.Q4_0.gguf"
- !!merge <<: *hermes-2-pro-mistral
name: "hermes-2-pro-mistral:Q6_K"
overrides:
parameters:
model: Hermes-2-Pro-Mistral-7B.Q6_K.gguf
files:
- filename: "Hermes-2-Pro-Mistral-7B.Q6_K.gguf"
sha256: "40adc3b227bc36764de148fdda4df5df385adc06650d58d4dbe726ee0214eeff"
uri: "huggingface://NousResearch/Hermes-2-Pro-Mistral-7B-GGUF/Hermes-2-Pro-Mistral-7B.Q6_K.gguf"
- !!merge <<: *hermes-2-pro-mistral
name: "hermes-2-pro-mistral:Q8_0"
overrides:
parameters:
model: Hermes-2-Pro-Mistral-7B.Q8_0.gguf
files:
- filename: "Hermes-2-Pro-Mistral-7B.Q8_0.gguf"
sha256: "b6d95d7ec9a395b7568cc94b0447fd4f90b6f69d6e44794b1fbb84e3f732baca"
uri: "huggingface://NousResearch/Hermes-2-Pro-Mistral-7B-GGUF/Hermes-2-Pro-Mistral-7B.Q8_0.gguf"
- !!merge <<: *hermes-2-pro-mistral
name: "hermes-2-theta-llama-3-8b"
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png
tags:
- llm
- gguf
- gpu
- llama3
- cpu
- function-calling
description: |
Hermes-2 Θ (Theta) is the first experimental merged model released by Nous Research, in collaboration with Charles Goddard at Arcee, the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, forming a new model, Hermes-2 Θ, that combines the best of both worlds.
urls:
- https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
overrides:
parameters:
model: Hermes-2-Pro-Llama-3-Instruct-Merged-DPO-Q4_K_M.gguf
files:
- filename: "Hermes-2-Pro-Llama-3-Instruct-Merged-DPO-Q4_K_M.gguf"
sha256: "762b9371a296ab2628592b9462dc676b27d881a3402816492801641a437669b3"
uri: "huggingface://NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF/Hermes-2-Pro-Llama-3-Instruct-Merged-DPO-Q4_K_M.gguf"
- !!merge <<: *hermes-2-pro-mistral
name: "hermes-2-theta-llama-3-70b"
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/P4NxBFwfBbboNZVytpn45.png
tags:
- llm
- gguf
- gpu
- llama3
- cpu
- function-calling
description: |
Hermes-2 Θ (Theta) 70B is the continuation of our experimental merged model released by Nous Research, in collaboration with Charles Goddard and Arcee AI, the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, forming a new model, Hermes-2 Θ, that combines the best of both worlds.
urls:
- https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B-GGUF
overrides:
parameters:
model: Hermes-2-Theta-Llama-3-70B-Q4_K_M.gguf
files:
- filename: "Hermes-2-Theta-Llama-3-70B-Q4_K_M.gguf"
uri: "huggingface://NousResearch/Hermes-2-Theta-Llama-3-70B-GGUF/Hermes-2-Theta-Llama-3-70B-Q4_K_M.gguf"
sha256: b3965f671c35d09da8b903218f5bbaac94efdd9000e4fe4a2bac87fcac9f664e
### LLAMA3 version
- !!merge <<: *hermes-2-pro-mistral
name: "hermes-2-pro-llama-3-8b"
tags:
- llm
- gguf
- gpu
- llama3
- function-calling
- cpu
urls:
- https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF
overrides:
parameters:
model: Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
files:
- filename: "Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf"
sha256: "10c52a4820137a35947927be741bb411a9200329367ce2590cc6757cd98e746c"
uri: "huggingface://NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf"
- !!merge <<: *hermes-2-pro-mistral
tags:
- llm
- gguf
- gpu
- llama3
- function-calling
- cpu
name: "hermes-2-pro-llama-3-8b:Q5_K_M"
urls:
- https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF
overrides:
parameters:
model: Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf
files:
- filename: "Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf"
sha256: "107f3f55e26b8cc144eadd83e5f8a60cfd61839c56088fa3ae2d5679abf45f29"
uri: "huggingface://NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf"
- !!merge <<: *hermes-2-pro-mistral
tags:
- llm
- gguf
- gpu
- function-calling
- llama3
- cpu
name: "hermes-2-pro-llama-3-8b:Q8_0"
urls:
- https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF
overrides:
parameters:
model: Hermes-2-Pro-Llama-3-8B-Q8_0.gguf
files:
- filename: "Hermes-2-Pro-Llama-3-8B-Q8_0.gguf"
sha256: "d138388cfda04d185a68eaf2396cf7a5cfa87d038a20896817a9b7cf1806f532"
uri: "huggingface://NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/Hermes-2-Pro-Llama-3-8B-Q8_0.gguf"
- !!merge <<: *hermes-2-pro-mistral
name: "hermes-3-llama-3.1-8b"
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/bMcZ3sNNQK8SRZpHXBmwM.jpeg
urls:
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B-GGUF
description: |
Hermes 3 is a generalist language model developed by Nous Research. It is an advanced agentic model with improved roleplaying, reasoning, multi-turn conversation, long context coherence, and generalist assistant capabilities. The model is built on top of the Llama-3 architecture and has been fine-tuned to achieve superior performance in various tasks. It is designed to be a powerful and reliable tool for solving complex problems and assisting users in achieving their goals. Hermes 3 can be used for a wide range of applications, including research, education, and personal assistant tasks. It is available on the Hugging Face model hub for easy access and integration into existing workflows.
overrides:
parameters:
model: Hermes-3-Llama-3.1-8B.Q4_K_M.gguf
files:
- filename: Hermes-3-Llama-3.1-8B.Q4_K_M.gguf
sha256: d4403ce5a6e930f4c2509456388c20d633a15ff08dd52ef3b142ff1810ec3553
uri: huggingface://NousResearch/Hermes-3-Llama-3.1-8B-GGUF/Hermes-3-Llama-3.1-8B.Q4_K_M.gguf
- !!merge <<: *hermes-2-pro-mistral
name: "hermes-3-llama-3.1-8b:Q8"
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/bMcZ3sNNQK8SRZpHXBmwM.jpeg
urls:
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B-GGUF
description: |
Hermes 3 is a generalist language model developed by Nous Research. It is an advanced agentic model with improved roleplaying, reasoning, multi-turn conversation, long context coherence, and generalist assistant capabilities. The model is built on top of the Llama-3 architecture and has been fine-tuned to achieve superior performance in various tasks. It is designed to be a powerful and reliable tool for solving complex problems and assisting users in achieving their goals. Hermes 3 can be used for a wide range of applications, including research, education, and personal assistant tasks. It is available on the Hugging Face model hub for easy access and integration into existing workflows.
overrides:
parameters:
model: Hermes-3-Llama-3.1-8B.Q8_0.gguf
files:
- filename: Hermes-3-Llama-3.1-8B.Q8_0.gguf
sha256: c77c263f78b2f56fbaddd3ef2af750fda6ebb4344a546aaa0bfdd546b1ca8d84
uri: huggingface://NousResearch/Hermes-3-Llama-3.1-8B-GGUF/Hermes-3-Llama-3.1-8B.Q8_0.gguf
- !!merge <<: *hermes-2-pro-mistral
name: "hermes-3-llama-3.1-70b"
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vG6j5WxHX09yj32vgjJlI.jpeg
urls:
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B-GGUF
description: |
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. It is designed to focus on aligning LLMs to the user, with powerful steering capabilities and control given to the end user. The model uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. It also supports function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.
overrides:
parameters:
model: Hermes-3-Llama-3.1-70B.Q4_K_M.gguf
files:
- filename: Hermes-3-Llama-3.1-70B.Q4_K_M.gguf
sha256: 955c2f42caade4278f3c9dbffa32bb74572652b20e49e5340e782de3585bbe3f
uri: huggingface://NousResearch/Hermes-3-Llama-3.1-70B-GGUF/Hermes-3-Llama-3.1-70B.Q4_K_M.gguf
- !!merge <<: *hermes-2-pro-mistral
name: "hermes-3-llama-3.1-70b:Q5_K_M"
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vG6j5WxHX09yj32vgjJlI.jpeg
urls:
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B-GGUF
description: |
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. It is designed to focus on aligning LLMs to the user, with powerful steering capabilities and control given to the end user. The model uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. It also supports function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.
overrides:
parameters:
model: Hermes-3-Llama-3.1-70B.Q5_K_M.gguf
files:
- filename: Hermes-3-Llama-3.1-70B.Q5_K_M.gguf
sha256: 10ae3e0441b14c4a6476436f3c14e8bcacc7928aa3e8ce978d053287289a7ebb
uri: huggingface://NousResearch/Hermes-3-Llama-3.1-70B-GGUF/Hermes-3-Llama-3.1-70B.Q5_K_M.gguf
- &hermes-vllm
url: "github:mudler/LocalAI/gallery/hermes-vllm.yaml@master"
name: "hermes-3-llama-3.1-8b:vllm"
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vG6j5WxHX09yj32vgjJlI.jpeg
tags:
- llm
- vllm
- gpu
- function-calling
license: llama-3
urls:
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B
description: |
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. It is designed to focus on aligning LLMs to the user, with powerful steering capabilities and control given to the end user. The model uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. It also supports function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.
overrides:
parameters:
model: NousResearch/Hermes-3-Llama-3.1-8B
- !!merge <<: *hermes-vllm
name: "hermes-3-llama-3.1-70b:vllm"
urls:
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B
overrides:
parameters:
model: NousResearch/Hermes-3-Llama-3.1-70B
- !!merge <<: *hermes-vllm
name: "hermes-3-llama-3.1-405b:vllm"
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/-kj_KflXsdpcZoTQsvx7W.jpeg
urls:
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B
overrides:
parameters:
model: NousResearch/Hermes-3-Llama-3.1-405B
- !!merge <<: *hermes-2-pro-mistral
name: "biomistral-7b"
description: |
BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
urls:
- https://huggingface.co/MaziyarPanahi/BioMistral-7B-GGUF
icon: https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true
overrides:
parameters:
model: BioMistral-7B.Q4_K_M.gguf
files:
- filename: "BioMistral-7B.Q4_K_M.gguf"
sha256: "3a73107045dfe7e3f113b392b0a67e3e6ca9fa9dae2abe301424ce5abd1721a6"
uri: "huggingface://MaziyarPanahi/BioMistral-7B-GGUF/BioMistral-7B.Q4_K_M.gguf"
- !!merge <<: *hermes-2-pro-mistral
name: "tiamat-8b-1.2-llama-3-dpo"
icon: https://huggingface.co/Gryphe/Tiamat-8b-1.2-Llama-3-DPO/resolve/main/Tiamat.png
description: |
Obligatory Disclaimer: Tiamat is not nice.
Ever wanted to be treated disdainfully like the foolish mortal you are? Wait no more, for Tiamat is here to berate you! Hailing from the world of the Forgotten Realms, she will happily judge your every word.
Tiamat was created with the following question in mind: is it possible to create an assistant with strong anti-assistant personality traits? Try it yourself and tell me afterwards!
She was fine-tuned on top of Nous Research's shiny new Hermes 2 Pro.
urls:
- https://huggingface.co/bartowski/Tiamat-8b-1.2-Llama-3-DPO-GGUF
overrides:
parameters:
model: Tiamat-8b-1.2-Llama-3-DPO-Q4_K_M.gguf
files:
- filename: "Tiamat-8b-1.2-Llama-3-DPO-Q4_K_M.gguf"
sha256: "7b0895d2183344b2ac1ff36b9f3fe31dd8d4cf8820c4a41ef74e50ef86e3b448"
uri: "huggingface://bartowski/Tiamat-8b-1.2-Llama-3-DPO-GGUF/Tiamat-8b-1.2-Llama-3-DPO-Q4_K_M.gguf"
- url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
name: "guillaumetell-7b"
license: apache-2
description: |
Guillaume Tell is a French Large Language Model (LLM) based on Mistral Open-Hermes 2.5, optimized for RAG (Retrieval Augmented Generation) with source traceability and explainability.
urls:
- https://huggingface.co/MaziyarPanahi/guillaumetell-7b-GGUF
- https://huggingface.co/AgentPublic/guillaumetell-7b
tags:
- llm
- gguf
- gpu
- cpu
- openhermes
- french
overrides:
context_size: 4096
parameters:
model: guillaumetell-7b.Q4_K_M.gguf
files:
- filename: guillaumetell-7b.Q4_K_M.gguf
sha256: bf08db5281619335f3ee87e229c8533b04262790063b061bb8f275c3e4de7061
uri: huggingface://MaziyarPanahi/guillaumetell-7b-GGUF/guillaumetell-7b.Q4_K_M.gguf
- !!merge <<: *hermes-2-pro-mistral
name: "kunocchini-7b-128k-test-imatrix"
description: |
The following models were included in the merge:
SanjiWatsuki/Kunoichi-DPO-v2-7B
Epiculous/Fett-uccine-Long-Noodle-7B-120k-Contex
urls:
- https://huggingface.co/Lewdiculous/Kunocchini-7b-128k-test-GGUF-Imatrix
icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/9obNSalcJqCilQwr_4ssM.jpeg
overrides:
parameters:
model: v2_Kunocchini-7b-128k-test-Q4_K_M-imatrix.gguf
files:
- filename: "v2_Kunocchini-7b-128k-test-Q4_K_M-imatrix.gguf"
sha256: "5ccec35392f56f66952f8eb2ded2d8aa9a6bb511e9518899d8096326e328edef"
uri: "huggingface://Lewdiculous/Kunocchini-7b-128k-test-GGUF-Imatrix/v2_Kunocchini-7b-128k-test-Q4_K_M-imatrix.gguf"
### START Cerbero
- url: "github:mudler/LocalAI/gallery/cerbero.yaml@master"
icon: https://huggingface.co/galatolo/cerbero-7b/resolve/main/README.md.d/cerbero.png
description: |
cerbero-7b is specifically crafted to fill the void in Italy's AI landscape.
urls:
- https://huggingface.co/galatolo/cerbero-7b
tags:
- llm
- gguf
- gpu
- cpu
- mistral
- italian
overrides:
parameters:
model: galatolo-Q4_K.gguf
files:
- filename: "galatolo-Q4_K.gguf"
sha256: "ca0cfd5a9ad40dc16416aa3a277015d0299b62c0803b67f5709580042202c172"
uri: "huggingface://galatolo/cerbero-7b-gguf/ggml-model-Q4_K.gguf"
- &codellama
### START Codellama
url: "github:mudler/LocalAI/gallery/codellama.yaml@master"
name: "codellama-7b"
license: llama2
description: |
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This model is designed for general code synthesis and understanding.
urls:
- https://huggingface.co/TheBloke/CodeLlama-7B-GGUF
- https://huggingface.co/meta-llama/CodeLlama-7b-hf
tags:
- llm
- gguf
- gpu
- llama2
- cpu
overrides:
parameters:
model: codellama-7b.Q4_0.gguf
files:
- filename: "codellama-7b.Q4_0.gguf"
sha256: "33052f6dd41436db2f83bd48017b6fff8ce0184e15a8a227368b4230f1da97b5"
uri: "huggingface://TheBloke/CodeLlama-7B-GGUF/codellama-7b.Q4_0.gguf"
- !!merge <<: *codellama
name: "codestral-22b-v0.1"
license: mnpl
description: |
Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the Blogpost). The model can be queried:
As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code)
urls:
- https://huggingface.co/mistralai/Codestral-22B-v0.1
- https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF
tags:
- llm
- gguf
- gpu
- code
- cpu
overrides:
parameters:
model: Codestral-22B-v0.1-Q4_K_M.gguf
files:
- filename: "Codestral-22B-v0.1-Q4_K_M.gguf"
uri: "huggingface://bartowski/Codestral-22B-v0.1-GGUF/Codestral-22B-v0.1-Q4_K_M.gguf"
sha256: 003e48ed892850b80994fcddca2bd6b833b092a4ef2db2853c33a3144245e06c
- !!merge <<: *codellama
url: "github:mudler/LocalAI/gallery/alpaca.yaml@master"
icon: https://huggingface.co/Nan-Do/LeetCodeWizard_7B_V1.1/resolve/main/LeetCodeWizardLogo.png
name: "leetcodewizard_7b_v1.1-i1"
urls:
- https://huggingface.co/Nan-Do/LeetCodeWizard_7B_V1.1
- https://huggingface.co/mradermacher/LeetCodeWizard_7B_V1.1-i1-GGUF
description: |
LeetCodeWizard is a coding large language model specifically trained to solve and explain Leetcode (or any) programming problems.
This model is a fine-tuned version of WizardCoder-Python-7B, trained on a dataset of Leetcode problems.
Model capabilities:
It should be able to solve most of the problems found at Leetcode and even pass the sample interviews they offer on the site.
It can write both the code and the explanations for the solutions.
overrides:
parameters:
model: LeetCodeWizard_7B_V1.1.i1-Q4_K_M.gguf
files:
- filename: LeetCodeWizard_7B_V1.1.i1-Q4_K_M.gguf
sha256: 19720d8e1ba89d32c6f88ed6518caf0251f9e3ec011297929c801efc5ea979f4
uri: huggingface://mradermacher/LeetCodeWizard_7B_V1.1-i1-GGUF/LeetCodeWizard_7B_V1.1.i1-Q4_K_M.gguf
- &llm-compiler
url: "github:mudler/LocalAI/gallery/codellama.yaml@master"
name: "llm-compiler-13b-imat"
license: other
description: |
LLM Compiler is a state-of-the-art LLM that builds upon Code Llama with improved performance for code optimization and compiler reasoning.
LLM Compiler is free for both research and commercial use.
LLM Compiler is available in two flavors:
LLM Compiler, the foundational models, pretrained on over 500B tokens of LLVM-IR, x86_64, ARM, and CUDA assembly codes and trained to predict the effect of LLVM optimizations;
and LLM Compiler FTD, which is further fine-tuned to predict the best optimizations for code in LLVM assembly to reduce code size, and to disassemble assembly code to LLVM-IR.
urls:
- https://huggingface.co/legraphista/llm-compiler-13b-IMat-GGUF
- https://huggingface.co/facebook/llm-compiler-13b
tags:
- llm
- gguf
- gpu
- code
- cpu
overrides:
parameters:
model: llm-compiler-13b.Q4_K.gguf
files:
- filename: "llm-compiler-13b.Q4_K.gguf"
uri: "huggingface://legraphista/llm-compiler-13b-IMat-GGUF/llm-compiler-13b.Q4_K.gguf"
sha256: dad41a121d0d67432c289aba8ffffc93159e2b24ca3d1c62e118c9f4cbf0c890
- !!merge <<: *llm-compiler
name: "llm-compiler-13b-ftd"
urls:
- https://huggingface.co/QuantFactory/llm-compiler-13b-ftd-GGUF
- https://huggingface.co/facebook/llm-compiler-13b-ftd
overrides:
parameters:
model: llm-compiler-13b-ftd.Q4_K_M.gguf
files:
- filename: "llm-compiler-13b-ftd.Q4_K_M.gguf"
uri: "huggingface://QuantFactory/llm-compiler-13b-ftd-GGUF/llm-compiler-13b-ftd.Q4_K_M.gguf"
sha256: a5d19ae6b3fbe6724784363161b66cd2c8d8a3905761c0fb08245b3c03697db1
- !!merge <<: *llm-compiler
name: "llm-compiler-7b-imat-GGUF"
urls:
- https://huggingface.co/legraphista/llm-compiler-7b-IMat-GGUF
- https://huggingface.co/facebook/llm-compiler-7b
overrides:
parameters:
model: llm-compiler-7b.Q4_K.gguf
files:
- filename: "llm-compiler-7b.Q4_K.gguf"
uri: "huggingface://legraphista/llm-compiler-7b-IMat-GGUF/llm-compiler-7b.Q4_K.gguf"
sha256: 84926979701fa4591ff5ede94a6c5829a62efa620590e5815af984707d446926
- !!merge <<: *llm-compiler
name: "llm-compiler-7b-ftd-imat"
urls:
- https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF
- https://huggingface.co/facebook/llm-compiler-7b-ftd
overrides:
parameters:
model: llm-compiler-7b-ftd.Q4_K.gguf
files:
- filename: "llm-compiler-7b-ftd.Q4_K.gguf"
uri: "huggingface://legraphista/llm-compiler-7b-ftd-IMat-GGUF/llm-compiler-7b-ftd.Q4_K.gguf"
sha256: d862dd18ed335413787d0ad196522a9902a3c10a6456afdab8721822cb0ddde8
- &openvino
### START OpenVINO
url: "github:mudler/LocalAI/gallery/openvino.yaml@master"
name: "openvino-llama-3-8b-instruct-ov-int8"
license: llama3
urls:
- https://huggingface.co/fakezeta/llama-3-8b-instruct-ov-int8
overrides:
parameters:
model: fakezeta/llama-3-8b-instruct-ov-int8
stopwords:
- "<|eot_id|>"
- "<|end_of_text|>"
tags:
- llm
- openvino
- gpu
- llama3
- cpu
- !!merge <<: *openvino
name: "openvino-phi3"
urls:
- https://huggingface.co/fakezeta/Phi-3-mini-128k-instruct-ov-int8
overrides:
trust_remote_code: true
context_size: 131072
parameters:
model: fakezeta/Phi-3-mini-128k-instruct-ov-int8
stopwords:
- <|end|>
tags:
- llm
- openvino
- gpu
- phi3
- cpu
- Remote Code Enabled
- !!merge <<: *openvino
icon: https://cdn-uploads.huggingface.co/production/uploads/62f7a16192950415b637e201/HMD6WEoqqrAV8Ng_fAcnN.png
name: "openvino-llama3-aloe"
urls:
- https://huggingface.co/fakezeta/Llama3-Aloe-8B-Alpha-ov-int8
overrides:
context_size: 8192
parameters:
model: fakezeta/Llama3-Aloe-8B-Alpha-ov-int8
stopwords:
- "<|eot_id|>"
- "<|end_of_text|>"
- !!merge <<: *openvino
name: "openvino-starling-lm-7b-beta-openvino-int8"
urls:
- https://huggingface.co/fakezeta/Starling-LM-7B-beta-openvino-int8
overrides:
context_size: 8192
parameters:
model: fakezeta/Starling-LM-7B-beta-openvino-int8
tags:
- llm
- openvino
- gpu
- mistral
- cpu
- !!merge <<: *openvino
name: "openvino-wizardlm2"
urls:
- https://huggingface.co/fakezeta/Not-WizardLM-2-7B-ov-int8
overrides:
context_size: 8192
parameters:
model: fakezeta/Not-WizardLM-2-7B-ov-int8
- !!merge <<: *openvino
name: "openvino-hermes2pro-llama3"
urls:
- https://huggingface.co/fakezeta/Hermes-2-Pro-Llama-3-8B-ov-int8
overrides:
context_size: 8192
parameters:
model: fakezeta/Hermes-2-Pro-Llama-3-8B-ov-int8
tags:
- llm
- openvino
- gpu
- llama3
- cpu
- !!merge <<: *openvino
name: "openvino-multilingual-e5-base"
urls:
- https://huggingface.co/intfloat/multilingual-e5-base
overrides:
embeddings: true
type: OVModelForFeatureExtraction
parameters:
model: intfloat/multilingual-e5-base
tags:
- llm
- openvino
- gpu
- embedding
- cpu
- !!merge <<: *openvino
name: "openvino-all-MiniLM-L6-v2"
urls:
- https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
overrides:
embeddings: true
type: OVModelForFeatureExtraction
parameters:
model: sentence-transformers/all-MiniLM-L6-v2
tags:
- llm
- openvino
- gpu
- embedding
- cpu
- &sentencentransformers
### START Embeddings
description: |
This framework provides an easy method to compute dense vector representations for sentences, paragraphs, and images. The models are based on transformer networks like BERT / RoBERTa / XLM-RoBERTa etc. and achieve state-of-the-art performance in various tasks. Text is embedded in a vector space such that similar texts are closer together and can efficiently be found using cosine similarity.
urls:
- https://github.com/UKPLab/sentence-transformers
tags:
- gpu
- cpu
- embeddings
- python
name: "all-MiniLM-L6-v2"
url: "github:mudler/LocalAI/gallery/sentencetransformers.yaml@master"
overrides:
parameters:
model: all-MiniLM-L6-v2
- &dreamshaper
### START Image generation
name: dreamshaper
icon: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/dd9b038c-bd15-43ab-86ab-66e145ad7ff2/width=450/26072158-132340247-8k%20portrait%20of%20beautiful%20cyborg%20with%20brown%20hair,%20intricate,%20elegant,%20highly%20detailed,%20majestic,%20digital%20photography,%20art%20by%20artg_ed.jpeg
license: other
description: |
DreamShaper by Lykon is a text-to-image model based on Stable Diffusion 1.5 that generates images from text prompts.
urls:
- https://civitai.com/models/4384/dreamshaper
tags:
- text-to-image
- stablediffusion
- python
- sd-1.5
- gpu
url: "github:mudler/LocalAI/gallery/dreamshaper.yaml@master"
overrides:
parameters:
model: DreamShaper_8_pruned.safetensors
files:
- filename: DreamShaper_8_pruned.safetensors
uri: huggingface://Lykon/DreamShaper/DreamShaper_8_pruned.safetensors
sha256: 879db523c30d3b9017143d56705015e15a2cb5628762c11d086fed9538abd7fd
- name: stable-diffusion-3-medium
icon: https://huggingface.co/leo009/stable-diffusion-3-medium/resolve/main/sd3demo.jpg
license: other
description: |
Stable Diffusion 3 Medium is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features greatly improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.
urls:
- https://huggingface.co/stabilityai/stable-diffusion-3-medium
- https://huggingface.co/leo009/stable-diffusion-3-medium
tags:
- text-to-image
- stablediffusion
- python
- sd-3
- gpu
url: "github:mudler/LocalAI/gallery/stablediffusion3.yaml@master"
- &flux
name: flux.1-dev
license: flux-1-dev-non-commercial-license
description: |
FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. For more information, please read our blog post.
Key Features
Cutting-edge output quality, second only to our state-of-the-art model FLUX.1 [pro].
Competitive prompt following, matching the performance of closed source alternatives.
Trained using guidance distillation, making FLUX.1 [dev] more efficient.
Open weights to drive new scientific research, and empower artists to develop innovative workflows.
Generated outputs can be used for personal, scientific, and commercial purposes as described in the flux-1-dev-non-commercial-license.
urls:
- https://huggingface.co/black-forest-labs/FLUX.1-dev
tags:
- text-to-image
- flux
- python
- gpu
url: "github:mudler/LocalAI/gallery/flux.yaml@master"
overrides:
parameters:
model: ChuckMcSneed/FLUX.1-dev
- !!merge <<: *flux
name: flux.1-schnell
license: apache-2.0
icon: https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/schnell_grid.jpeg
description: |
FLUX.1 [schnell] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. For more information, please read our blog post.
Key Features
Cutting-edge output quality and competitive prompt following, matching the performance of closed source alternatives.
Trained using latent adversarial diffusion distillation, FLUX.1 [schnell] can generate high-quality images in only 1 to 4 steps.
Released under the apache-2.0 license, the model can be used for personal, scientific, and commercial purposes.
urls:
- https://huggingface.co/black-forest-labs/FLUX.1-schnell
overrides:
parameters:
model: black-forest-labs/FLUX.1-schnell
- name: flux.1-dev-ggml
license: flux-1-dev-non-commercial-license
url: "github:mudler/LocalAI/gallery/flux-ggml.yaml@master"
description: |
FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. For more information, please read our blog post.
Key Features
Cutting-edge output quality, second only to our state-of-the-art model FLUX.1 [pro].
Competitive prompt following, matching the performance of closed source alternatives.
Trained using guidance distillation, making FLUX.1 [dev] more efficient.
Open weights to drive new scientific research, and empower artists to develop innovative workflows.
Generated outputs can be used for personal, scientific, and commercial purposes as described in the flux-1-dev-non-commercial-license.
This model is quantized in GGUF format.
urls:
- https://huggingface.co/black-forest-labs/FLUX.1-dev
- https://huggingface.co/city96/FLUX.1-dev-gguf
tags:
- text-to-image
- flux
- gpu
- cpu
icon: https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/schnell_grid.jpeg
overrides:
parameters:
model: flux1-dev-Q2_K.gguf
files:
- filename: "flux1-dev-Q2_K.gguf"
sha256: "b8c464bc0f10076ef8f00ba040d220d90c7993f7c4245ae80227d857f65df105"
uri: "huggingface://city96/FLUX.1-dev-gguf/flux1-dev-Q2_K.gguf"
- filename: ae.safetensors
sha256: afc8e28272cd15db3919bacdb6918ce9c1ed22e96cb12c4d5ed0fba823529e38
uri: https://huggingface.co/ChuckMcSneed/FLUX.1-dev/resolve/main/ae.safetensors
- filename: clip_l.safetensors
sha256: 660c6f5b1abae9dc498ac2d21e1347d2abdb0cf6c0c0c8576cd796491d9a6cdd
uri: https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors
- filename: t5xxl_fp16.safetensors
sha256: 6e480b09fae049a72d2a8c5fbccb8d3e92febeb233bbe9dfe7256958a9167635
uri: https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors
- &whisper
## Whisper
url: "github:mudler/LocalAI/gallery/whisper-base.yaml@master"
name: "whisper-1"
license: "MIT"
urls:
- https://github.com/ggerganov/whisper.cpp
- https://huggingface.co/ggerganov/whisper.cpp
overrides:
parameters:
model: ggml-whisper-base.bin
files:
- filename: "ggml-whisper-base.bin"
sha256: "60ed5bc3dd14eea856493d334349b405782ddcaf0028d4b5df4088345fba2efe"
uri: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.bin"
description: |
Port of OpenAI's Whisper model in C/C++
- !!merge <<: *whisper
name: "whisper-base-q5_1"
overrides:
parameters:
model: ggml-model-whisper-base-q5_1.bin
files:
- filename: "ggml-model-whisper-base-q5_1.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-base-q5_1.bin"
sha256: 422f1ae452ade6f30a004d7e5c6a43195e4433bc370bf23fac9cc591f01a8898
- !!merge <<: *whisper
name: "whisper-base"
overrides:
parameters:
model: ggml-model-whisper-base.bin
files:
- filename: "ggml-model-whisper-base.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-base.bin"
sha256: 60ed5bc3dd14eea856493d334349b405782ddcaf0028d4b5df4088345fba2efe
- !!merge <<: *whisper
name: "whisper-base-en-q5_1"
overrides:
parameters:
model: ggml-model-whisper-base.en-q5_1.bin
files:
- filename: "ggml-model-whisper-base.en-q5_1.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-base.en-q5_1.bin"
sha256: 4baf70dd0d7c4247ba2b81fafd9c01005ac77c2f9ef064e00dcf195d0e2fdd2f
- !!merge <<: *whisper
name: "whisper-base-en"
overrides:
parameters:
model: ggml-model-whisper-base.en.bin
files:
- filename: "ggml-model-whisper-base.en.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-base.en.bin"
sha256: a03779c86df3323075f5e796cb2ce5029f00ec8869eee3fdfb897afe36c6d002
- !!merge <<: *whisper
name: "whisper-large-q5_0"
overrides:
parameters:
model: ggml-model-whisper-large-q5_0.bin
files:
- filename: "ggml-model-whisper-large-q5_0.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-large-q5_0.bin"
sha256: 3a214837221e4530dbc1fe8d734f302af393eb30bd0ed046042ebf4baf70f6f2
- !!merge <<: *whisper
name: "whisper-medium-q5_0"
overrides:
parameters:
model: ggml-model-whisper-medium-q5_0.bin
files:
- filename: "ggml-model-whisper-medium-q5_0.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-medium-q5_0.bin"
sha256: 19fea4b380c3a618ec4723c3eef2eb785ffba0d0538cf43f8f235e7b3b34220f
- !!merge <<: *whisper
name: "whisper-small-q5_1"
overrides:
parameters:
model: ggml-model-whisper-small-q5_1.bin
files:
- filename: "ggml-model-whisper-small-q5_1.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-small-q5_1.bin"
sha256: ae85e4a935d7a567bd102fe55afc16bb595bdb618e11b2fc7591bc08120411bb
- !!merge <<: *whisper
name: "whisper-small"
overrides:
parameters:
model: ggml-model-whisper-small.bin
files:
- filename: "ggml-model-whisper-small.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-small.bin"
sha256: 1be3a9b2063867b937e64e2ec7483364a79917e157fa98c5d94b5c1fffea987b
- !!merge <<: *whisper
name: "whisper-small-en-q5_1"
overrides:
parameters:
model: ggml-model-whisper-small.en-q5_1.bin
files:
- filename: "ggml-model-whisper-small.en-q5_1.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-small.en-q5_1.bin"
sha256: bfdff4894dcb76bbf647d56263ea2a96645423f1669176f4844a1bf8e478ad30
- !!merge <<: *whisper
name: "whisper-small-en"
overrides:
parameters:
model: ggml-model-whisper-small.en.bin
files:
- filename: "ggml-model-whisper-small.en.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-small.en.bin"
sha256: c6138d6d58ecc8322097e0f987c32f1be8bb0a18532a3f88f734d1bbf9c41e5d
- !!merge <<: *whisper
name: "whisper-tiny"
overrides:
parameters:
model: ggml-model-whisper-tiny.bin
files:
- filename: "ggml-model-whisper-tiny.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-tiny.bin"
sha256: be07e048e1e599ad46341c8d2a135645097a538221678b7acdd1b1919c6e1b21
- !!merge <<: *whisper
name: "whisper-tiny-q5_1"
overrides:
parameters:
model: ggml-model-whisper-tiny-q5_1.bin
files:
- filename: "ggml-model-whisper-tiny-q5_1.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-tiny-q5_1.bin"
sha256: 818710568da3ca15689e31a743197b520007872ff9576237bda97bd1b469c3d7
- !!merge <<: *whisper
name: "whisper-tiny-en-q5_1"
overrides:
parameters:
model: ggml-model-whisper-tiny.en-q5_1.bin
files:
- filename: "ggml-model-whisper-tiny.en-q5_1.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-tiny.en-q5_1.bin"
sha256: c77c5766f1cef09b6b7d47f21b546cbddd4157886b3b5d6d4f709e91e66c7c2b
- !!merge <<: *whisper
name: "whisper-tiny-en"
overrides:
parameters:
model: ggml-model-whisper-tiny.en.bin
files:
- filename: "ggml-model-whisper-tiny.en.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-tiny.en.bin"
sha256: 921e4cf8686fdd993dcd081a5da5b6c365bfde1162e72b08d75ac75289920b1f
- !!merge <<: *whisper
name: "whisper-tiny-en-q8_0"
overrides:
parameters:
model: ggml-model-whisper-tiny.en-q8_0.bin
files:
- filename: "ggml-model-whisper-tiny.en-q8_0.bin"
uri: "https://ggml.ggerganov.com/ggml-model-whisper-tiny.en-q8_0.bin"
sha256: 5bc2b3860aa151a4c6e7bb095e1fcce7cf12c7b020ca08dcec0c6d018bb7dd94
## Bert embeddings (llama3.2 drop-in)
- !!merge <<: *llama32
name: "bert-embeddings"
description: |
A llama3.2-based embeddings model that can be used as a drop-in replacement for bert-embeddings.
tags:
- embeddings
## Stable Diffusion
- url: github:mudler/LocalAI/gallery/stablediffusion.yaml@master
license: "BSD-3"
urls:
- https://github.com/EdVince/Stable-Diffusion-NCNN
- https://github.com/EdVince/Stable-Diffusion-NCNN/blob/main/LICENSE
description: |
Stable Diffusion in NCNN with C++, supporting txt2img and img2img.
name: stablediffusion-cpp
## Tiny Dream
- url: github:mudler/LocalAI/gallery/tinydream.yaml@master
name: tinydream
license: "BSD-3"
urls:
- https://github.com/symisc/tiny-dream
- https://github.com/symisc/tiny-dream/blob/main/LICENSE
description: |
An embedded, header-only Stable Diffusion C++ implementation.
- &piper
## Piper TTS
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-us-kathleen-low
icon: https://github.com/rhasspy/piper/raw/master/etc/logo.png
license: mit
urls:
- https://github.com/rhasspy/piper
description: |
A fast, local neural text to speech system that sounds great and is optimized for the Raspberry Pi 4. Piper is used in a variety of [projects](https://github.com/rhasspy/piper#people-using-piper).
tags:
- tts
- text-to-speech
- cpu
overrides:
parameters:
model: en-us-kathleen-low.onnx
files:
- filename: voice-en-us-kathleen-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-kathleen-low.tar.gz
sha256: 18e32f009f864d8061af8a4be4ae9018b5aa8b49c37f9e108bbfd782c6a38fbf
- !!merge <<: *piper
name: voice-ca-upc_ona-x-low
overrides:
parameters:
model: ca-upc_ona-x-low.onnx
files:
- filename: voice-ca-upc_ona-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-ca-upc_ona-x-low.tar.gz
sha256: c750d3f6ad35c8d95d5b0d1ad30ede2525524e48390f70a0871bdb7980cc271e
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-ca-upc_pau-x-low
overrides:
parameters:
model: ca-upc_pau-x-low.onnx
files:
- filename: voice-ca-upc_pau-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-ca-upc_pau-x-low.tar.gz
sha256: 13c658ecd46a2dbd9dadadf7100623e53106239afcc359f9e27511b91e642f1f
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-da-nst_talesyntese-medium
overrides:
parameters:
model: da-nst_talesyntese-medium.onnx
files:
- filename: voice-da-nst_talesyntese-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-da-nst_talesyntese-medium.tar.gz
sha256: 1bdf673b946a2ba69fab24ae3fc0e7d23e042c2533cbbef008f64f633500eb7e
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-de-eva_k-x-low
overrides:
parameters:
model: de-eva_k-x-low.onnx
files:
- filename: voice-de-eva_k-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-eva_k-x-low.tar.gz
sha256: 81b305abc58a0a02629aea01904a86ec97b823714dd66b1ee22f38fe529e6371
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-de-karlsson-low
overrides:
parameters:
model: de-karlsson-low.onnx
files:
- filename: voice-de-karlsson-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-karlsson-low.tar.gz
sha256: cc7615cfef3ee6beaa1db6059e0271e4d2e1d6d310c0e17b3d36c494628f4b82
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-de-kerstin-low
overrides:
parameters:
model: de-kerstin-low.onnx
files:
- filename: voice-de-kerstin-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-kerstin-low.tar.gz
sha256: d8ea72fbc0c21db828e901777ba7bb5dff7c843bb943ad19f34c9700b96a8182
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-de-pavoque-low
overrides:
parameters:
model: de-pavoque-low.onnx
files:
- filename: voice-de-pavoque-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-pavoque-low.tar.gz
sha256: 1f5ebc6398e8829f19c7c2b14f46307703bca0f0d8c74b4bb173037b1f161d4d
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-de-ramona-low
overrides:
parameters:
model: de-ramona-low.onnx
files:
- filename: voice-de-ramona-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-ramona-low.tar.gz
sha256: 66d9fc08d1a1c537a1cefe99a284f687e5ad7e43d5935a75390678331cce7b47
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-de-thorsten-low
overrides:
parameters:
model: de-thorsten-low.onnx
files:
- filename: voice-de-thorsten-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-thorsten-low.tar.gz
sha256: 4d052a7726b77719d0dbc66c845f1d0fe4432bfbd26f878f6dd0883d49e9e43d
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-el-gr-rapunzelina-low
overrides:
parameters:
model: el-gr-rapunzelina-low.onnx
files:
- filename: voice-el-gr-rapunzelina-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-el-gr-rapunzelina-low.tar.gz
sha256: c5613688c12eabc5294465494ed56af1e0fe4d7896d216bfa470eb225d9ff0d0
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-gb-alan-low
overrides:
parameters:
model: en-gb-alan-low.onnx
files:
- filename: voice-en-gb-alan-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-gb-alan-low.tar.gz
sha256: 526eeeeccb26206dc92de5965615803b5bf88df059f46372caa4a9fa12d76a32
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-gb-southern_english_female-low
overrides:
parameters:
model: en-gb-southern_english_female-low.onnx
files:
- filename: voice-en-gb-southern_english_female-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-gb-southern_english_female-low.tar.gz
sha256: 7c1bbe23e61a57bdb450b137f69a83ff5358159262e1ed7d2308fa14f4924da9
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-us-amy-low
overrides:
parameters:
model: en-us-amy-low.onnx
files:
- filename: voice-en-us-amy-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-amy-low.tar.gz
sha256: 5c3e3480e7d71ce219943c8a711bb9c21fd48b8f8e87ed7fb5c6649135ab7608
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-us-danny-low
overrides:
parameters:
model: en-us-danny-low.onnx
files:
- filename: voice-en-us-danny-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-danny-low.tar.gz
sha256: 0c8fbb42526d5fbd3a0bded5f18041c0a893a70a7fb8756f97866624b932264b
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-us-lessac-low
overrides:
parameters:
model: en-us-lessac-low.onnx
files:
- filename: voice-en-us-lessac-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-lessac-low.tar.gz
sha256: 003fe040985d00b917ace21b2ccca344c282c53fe9b946991b7b0da52516e1fc
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-us-lessac-medium
overrides:
parameters:
model: en-us-lessac-medium.onnx
files:
- filename: voice-en-us-lessac-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-lessac-medium.tar.gz
sha256: d45ca50084c0558eb9581cd7d26938043bc8853513da47c63b94d95a2367a5c9
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-us-libritts-high
overrides:
parameters:
model: en-us-libritts-high.onnx
files:
- filename: voice-en-us-libritts-high.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-libritts-high.tar.gz
sha256: 328e3e9cb573a43a6c5e1aeca386e971232bdb1418a74d4674cf726c973a0ea8
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-us-ryan-high
overrides:
parameters:
model: en-us-ryan-high.onnx
files:
- filename: voice-en-us-ryan-high.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-ryan-high.tar.gz
sha256: de346b054703a190782f49acb9b93c50678a884fede49cfd85429d204802d678
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-us-ryan-low
overrides:
parameters:
model: en-us-ryan-low.onnx
files:
- filename: voice-en-us-ryan-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-ryan-low.tar.gz
sha256: 049e6e5bad07870fb1d25ecde97bac00f9c95c90589b2fef4b0fbf23c88770ce
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-us-ryan-medium
overrides:
parameters:
model: en-us-ryan-medium.onnx
files:
- filename: voice-en-us-ryan-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-ryan-medium.tar.gz
sha256: 2e00d747eaed6ce9f63f4991921ef3bb2bbfbc7f28cde4f14eb7048960f928d8
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-en-us_lessac
overrides:
parameters:
model: en-us-lessac.onnx
files:
- filename: voice-en-us_lessac.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us_lessac.tar.gz
sha256: 0967af67fb0435aa509b0b794c0cb2cc57817ae8a5bff28cb8cd89ab6f5dcc3d
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-es-carlfm-x-low
overrides:
parameters:
model: es-carlfm-x-low.onnx
files:
- filename: voice-es-carlfm-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-es-carlfm-x-low.tar.gz
sha256: 0156a186de321639e6295521f667758ad086bc8433f0a6797a9f044ed5cf5bf3
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-es-mls_10246-low
overrides:
parameters:
model: es-mls_10246-low.onnx
files:
- filename: voice-es-mls_10246-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-es-mls_10246-low.tar.gz
sha256: ff1fe3fc2ab91e32acd4fa8cb92048e3cff0e20079b9d81324f01cd2dea50598
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-es-mls_9972-low
overrides:
parameters:
model: es-mls_9972-low.onnx
files:
- filename: voice-es-mls_9972-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-es-mls_9972-low.tar.gz
sha256: d95def9adea97a6a3fee7645d1167e00fb4fd60f8ce9bc3ebf1acaa9e3f455dc
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-fi-harri-low
overrides:
parameters:
model: fi-harri-low.onnx
files:
- filename: voice-fi-harri-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-fi-harri-low.tar.gz
sha256: 4f1aaf00927d0eb25bf4fc5ef8be2f042e048593864ac263ee7b49c516832b22
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-fr-gilles-low
overrides:
parameters:
model: fr-gilles-low.onnx
files:
- filename: voice-fr-gilles-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-fr-gilles-low.tar.gz
sha256: 77662c7332c2a6f522ab478287d9b0fe9afc11a2da71f310bf923124ee699aae
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-fr-mls_1840-low
overrides:
parameters:
model: fr-mls_1840-low.onnx
files:
- filename: voice-fr-mls_1840-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-fr-mls_1840-low.tar.gz
sha256: 69169d1fac99a733112c08c7caabf457055990590a32ee83ebcada37f86132d3
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-fr-siwis-low
overrides:
parameters:
model: fr-siwis-low.onnx
files:
- filename: voice-fr-siwis-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-fr-siwis-low.tar.gz
sha256: d3db8d47053e9b4108e1c1d29d5ea2b5b1a152183616c3134c222110ccde20f2
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-fr-siwis-medium
overrides:
parameters:
model: fr-siwis-medium.onnx
files:
- filename: voice-fr-siwis-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-fr-siwis-medium.tar.gz
sha256: 0c9ecdf9ecac6de4a46be85a162bffe0db7145bd3a4175831cea6cab4b41eefd
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-is-bui-medium
overrides:
parameters:
model: is-bui-medium.onnx
files:
- filename: voice-is-bui-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-is-bui-medium.tar.gz
sha256: e89ef01051cb48ca2a32338ed8749a4c966b912bb572c61d6d21f2d3822e505f
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-is-salka-medium
overrides:
parameters:
model: is-salka-medium.onnx
files:
- filename: voice-is-salka-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-is-salka-medium.tar.gz
sha256: 75923d7d6b4125166ca58ec82b5d23879012844483b428db9911e034e6626384
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-is-steinn-medium
overrides:
parameters:
model: is-steinn-medium.onnx
files:
- filename: voice-is-steinn-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-is-steinn-medium.tar.gz
sha256: 5a01a8df796f86fdfe12cc32a3412ebd83670d47708d94d926ba5ed0776e6dc9
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-is-ugla-medium
overrides:
parameters:
model: is-ugla-medium.onnx
files:
- filename: voice-is-ugla-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-is-ugla-medium.tar.gz
sha256: 501cd0376f7fd397f394856b7b3d899da4cc40a63e11912258b74da78af90547
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-it-riccardo_fasol-x-low
overrides:
parameters:
model: it-riccardo_fasol-x-low.onnx
files:
- filename: voice-it-riccardo_fasol-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-it-riccardo_fasol-x-low.tar.gz
sha256: 394b27b8780f5167e73a62ac103839cc438abc7edb544192f965e5b8f5f4acdb
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-it-paola-medium
overrides:
parameters:
model: it-paola-medium.onnx
files:
- filename: voice-it-paola-medium.tar.gz
uri: https://github.com/fakezeta/piper-paola-voice/releases/download/v1.0.0/voice-it-paola-medium.tar.gz
sha256: 61d3bac0ff6d347daea5464c4b3ae156a450b603a916cc9ed7deecdeba17153a
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-kk-iseke-x-low
overrides:
parameters:
model: kk-iseke-x-low.onnx
files:
- filename: voice-kk-iseke-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-kk-iseke-x-low.tar.gz
sha256: f434fffbea3e6d8cf392e44438a1f32a5d005fc93b41be84a6d663882ce7c074
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-kk-issai-high
overrides:
parameters:
model: kk-issai-high.onnx
files:
- filename: voice-kk-issai-high.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-kk-issai-high.tar.gz
sha256: 84bf79d330d6cd68103e82d95bbcaa2628a99a565126dea94cea2be944ed4f32
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-kk-raya-x-low
overrides:
parameters:
model: kk-raya-x-low.onnx
files:
- filename: voice-kk-raya-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-kk-raya-x-low.tar.gz
sha256: 4cab4ce00c6f10450b668072d7980a2bc3ade3a39adee82e3ec4f519d4c57bd1
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-ne-google-medium
overrides:
parameters:
model: ne-google-medium.onnx
files:
- filename: voice-ne-google-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-ne-google-medium.tar.gz
sha256: 0895b11a7a340baea37fb9c27fb50bc3fd0af9779085978277f962d236d3a7bd
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-ne-google-x-low
overrides:
parameters:
model: ne-google-x-low.onnx
files:
- filename: voice-ne-google-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-ne-google-x-low.tar.gz
sha256: 870ba5718dfe3e478c6cce8a9a288b591b3575c750b57ffcd845e4ec64988f0b
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-nl-mls_5809-low
overrides:
parameters:
model: nl-mls_5809-low.onnx
files:
- filename: voice-nl-mls_5809-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-nl-mls_5809-low.tar.gz
sha256: 398b9f0318dfe9d613cb066444efec0d8491905ae34cf502edb52030b75ef51c
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-nl-mls_7432-low
overrides:
parameters:
model: nl-mls_7432-low.onnx
files:
- filename: voice-nl-mls_7432-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-nl-mls_7432-low.tar.gz
sha256: 0b3efc68ea7e735ba8f2e0a0f7e9b4b887b00f6530c02fca4aa69a6091adbe5e
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-nl-nathalie-x-low
overrides:
parameters:
model: nl-nathalie-x-low.onnx
files:
- filename: voice-nl-nathalie-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-nl-nathalie-x-low.tar.gz
sha256: 2658d4fe2b791491780160216d187751f7c993aa261f3b8ec76dfcaf1ba74942
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-nl-rdh-medium
overrides:
parameters:
model: nl-rdh-medium.onnx
files:
- filename: voice-nl-rdh-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-nl-rdh-medium.tar.gz
sha256: 16f74a195ecf13df1303fd85327532196cc1ecef2e72505200578fd410d0affb
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-nl-rdh-x-low
overrides:
parameters:
model: nl-rdh-x-low.onnx
files:
- filename: voice-nl-rdh-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-nl-rdh-x-low.tar.gz
sha256: 496363e5d6e080fd16ac5a1f9457c564b52f0ee8be7f2e2ba1dbf41ef0b23a39
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-no-talesyntese-medium
overrides:
parameters:
model: no-talesyntese-medium.onnx
files:
- filename: voice-no-talesyntese-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-no-talesyntese-medium.tar.gz
sha256: ed6b3593a0e70c90d52e225b85d7e0b805ad8e08482471bd2f73cf1404a6470d
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-pl-mls_6892-low
overrides:
parameters:
model: pl-mls_6892-low.onnx
files:
- filename: voice-pl-mls_6892-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-pl-mls_6892-low.tar.gz
sha256: 5361fcf586b1285025a2ccb8b7500e07c9d66fa8126ef518709c0055c4c0d6f4
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-pt-br-edresson-low
overrides:
parameters:
model: pt-br-edresson-low.onnx
files:
- filename: voice-pt-br-edresson-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-pt-br-edresson-low.tar.gz
sha256: c68be522a526e77f49e90eeb4c13c01b4acdfeb635759f0eeb0eea8f16fd1f33
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-ru-irinia-medium
overrides:
parameters:
model: ru-irinia-medium.onnx
files:
- filename: voice-ru-irinia-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-ru-irinia-medium.tar.gz
sha256: 897b62f170faee38f21d0bc36411164166ae351977e898b6cf33f6206890b55f
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-sv-se-nst-medium
overrides:
parameters:
model: sv-se-nst-medium.onnx
files:
- filename: voice-sv-se-nst-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-sv-se-nst-medium.tar.gz
sha256: 0d6cf357d55860162bf1bdd76bd4f0c396ff547e941bfb25df799d6f1866fda9
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-uk-lada-x-low
overrides:
parameters:
model: uk-lada-x-low.onnx
files:
- filename: voice-uk-lada-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-uk-lada-x-low.tar.gz
sha256: ff50acbd659fc226b57632acb1cee310009821ec44b4bc517effdd9827d8296b
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-vi-25hours-single-low
overrides:
parameters:
model: vi-25hours-single-low.onnx
files:
- filename: voice-vi-25hours-single-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-vi-25hours-single-low.tar.gz
sha256: 97e34d1b69dc7000a4ec3269f84339ed35905b3c9800a63da5d39b7649e4a666
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-vi-vivos-x-low
overrides:
parameters:
model: vi-vivos-x-low.onnx
files:
- filename: voice-vi-vivos-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-vi-vivos-x-low.tar.gz
sha256: 07cd4ca6438ec224012f7033eec1a2038724b78e4aa2bedf85f756656b52e1a7
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-zh-cn-huayan-x-low
overrides:
parameters:
model: zh-cn-huayan-x-low.onnx
files:
- filename: voice-zh-cn-huayan-x-low.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-zh-cn-huayan-x-low.tar.gz
sha256: 609db0da8ee75beb2f17ce53c55abdbc8c0e04135482efedf1798b1938bf90fa
- !!merge <<: *piper
url: github:mudler/LocalAI/gallery/piper.yaml@master
name: voice-zh_CN-huayan-medium
overrides:
parameters:
model: zh_CN-huayan-medium.onnx
files:
- filename: voice-zh_CN-huayan-medium.tar.gz
uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-zh_CN-huayan-medium.tar.gz
sha256: 0299a5e7f481ba853404e9f0e1515a94d5409585d76963fa4d30c64bd630aa99
- name: "silero-vad"
url: github:mudler/LocalAI/gallery/virtual.yaml@master
urls:
- https://github.com/snakers4/silero-vad
- https://huggingface.co/onnx-community/silero-vad
description: |
Silero VAD - pre-trained enterprise-grade Voice Activity Detector.
tags:
- vad
- voice-activity-detection
- cpu
overrides:
backend: silero-vad
parameters:
model: silero-vad.onnx
files:
- filename: silero-vad.onnx
uri: https://huggingface.co/onnx-community/silero-vad/resolve/main/onnx/model.onnx
sha256: a4a068cd6cf1ea8355b84327595838ca748ec29a25bc91fc82e6c299ccdc5808
- name: "bark-cpp-small"
url: github:mudler/LocalAI/gallery/virtual.yaml@master
license: mit
urls:
- https://huggingface.co/suno/bark
- https://huggingface.co/Green-Sky/bark-ggml
description: |
Bark is a transformer-based text-to-audio model created by Suno. Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints ready for inference.
tags:
- tts
- cpu
overrides:
backend: bark-cpp
parameters:
model: bark-small_weights-f16.bin
files:
- filename: bark-small_weights-f16.bin
uri: https://huggingface.co/Green-Sky/bark-ggml/resolve/main/bark-small_weights-f16.bin
sha256: de1ece17e8319537b3a7909baebbd28affab23c942d5d57e648d622af4e2feaa